
Microeconomics and Application Performance By @Ruxit | @DevOpsSummit [#DevOps]

Supply shouldn’t be viewed simply as a measure of hardware capacity

How Microeconomics Help Boost Application Performance - Part II

In my last post, I talked about how I keep my approach to application performance simple – I use my one semester’s worth of microeconomics knowledge to continuously evaluate the supply and demand sides of my application architecture. What can go wrong, right? :-)

One of the points we’ll explore further in this post is that supply shouldn’t be viewed simply as a measure of hardware capacity. Better to view supply as that which is demanded (see what I did there…?). In a complex environment, supply can be measured as the number of connections, available SOA services, process space, and hardware. Here’s a good visualization of what I’m talking about:


This application (i.e., website) makes demands on two services. One service is a webserver powered by Apache. The other is an application front-end powered by Tomcat. Each service has its own job to do. In turn, the application components make their demands on each subsequent asset in the chain, all the way down to the network infrastructure.
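The chain of demands described above can be sketched as a simple dependency graph. This is a hypothetical illustration (the asset names and structure are made up, not taken from the article's environment): each layer lists what it demands, and we can total how many demand paths ultimately land on a shared asset like the network.

```python
# Hypothetical sketch: each layer's demands on the next, as a graph.
# Asset names and edges are illustrative, not the article's real topology.
DEPENDS_ON = {
    "website": ["apache", "tomcat"],
    "apache": ["network"],
    "tomcat": ["jvm", "network"],
    "jvm": ["host"],
    "host": ["network"],
}

def demand_on(asset, root="website"):
    """Count how many demand paths from the root reach this asset."""
    if root == asset:
        return 1
    return sum(demand_on(asset, dep) for dep in DEPENDS_ON.get(root, []))

print(demand_on("network"))  # the network is demanded along 3 paths
```

Even in this toy graph, the bottom of the stack absorbs demand from every path above it, which is why supply has to be evaluated at each layer rather than only at the hardware.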

So instead of thinking of supply simply as available cycles and memory, I think of supply as what’s demanded at each layer. I then refine supply requirements based on these demands.

In the example below you see a list of the web requests issued by a browser, along with the response time of each request. Viewing this from an inventory standpoint: can I supply the web requests the customer is asking for? Do I have the right web requests to deliver? Can I get the requests to the customer in time?


Looking at this example, I better have a lot of instances of /orange.jsf available and I’d better be able to deliver them quickly.
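The inventory check above amounts to timing each request against a response-time budget. Here is a minimal, self-contained sketch of that idea; the handler, paths, and budget are all invented stand-ins (no real HTTP is involved), not the article's actual measurements.

```python
import time

# Hypothetical response-time budget; real SLAs vary per request type.
BUDGET_MS = 200.0

def handle(path):
    """Stand-in for a real request handler; the delay is made up."""
    time.sleep(0.01)
    return f"<html>{path}</html>"

def timed_ms(path):
    start = time.perf_counter()
    handle(path)
    return (time.perf_counter() - start) * 1000.0

timings = {p: timed_ms(p) for p in ["/orange.jsf", "/special.jsf"]}
slow = [p for p, ms in timings.items() if ms > BUDGET_MS]
for p, ms in sorted(timings.items()):
    print(f"{p}: {ms:.1f} ms {'OVER BUDGET' if p in slow else 'ok'}")
```

In a real environment a monitoring agent would do this per request, but the supply question is the same: can each item of "inventory" be delivered within its budget?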

My analogy begins to stretch a little thin at this point. I obviously don’t have a stockroom with a bucket of web requests. Instead, I have a program that runs in a container. The program is the code (i.e., inventory) that is executed, while the container controls (via logistics) how I get to that inventory. Knowing this, I now need to view my supply through a higher level of metrics:


My code is really made up of requests to other services. Here the evaluation process begins anew: do I have an adequate supply of JourneyService available? What’s most interesting here is CPU time. Remember that the Tomcats were hosted across 4 machines? This means that the supply is robust and scalable. What would happen if we hosted the cluster across 6 or 8 machines?

To answer this question we must finally address the supply = infrastructure part of the conversation. Looking at the processes that support this service, you can see that there is plenty of supply, indicated by low processor utilization.


In each instance, there is plenty of capacity supply, so adding more won’t help. Clustering on faster boxes will improve speed, though only marginally, because this service spends more of its time demanding other services than using its own hosts.
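Why faster boxes only help marginally here comes down to simple arithmetic: if most of a request's time is spent waiting on downstream services, speeding up the local host only shaves the local share. The numbers below are invented for illustration, not taken from the article's screenshots.

```python
# Hypothetical split of one request's response time.
cpu_ms = 20.0          # time this service spends on its own host
downstream_ms = 180.0  # time spent waiting on the services it calls

def response_ms(speedup):
    """Response time if the local host gets `speedup`x faster."""
    return cpu_ms / speedup + downstream_ms

print(response_ms(1.0))  # 200.0 ms on today's hardware
print(response_ms(2.0))  # 190.0 ms on boxes twice as fast: ~5% better
```

Doubling host speed buys only a 5% improvement in this example, which is the "more of other services than hosts" point in concrete terms.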

Returning to my inventory analogy, having something in stock is of no use if you can’t put your hands on it. That’s what the container does; it manages access to resources.

I’ve talked a lot about the supply of cycles and services, but memory supply is important, too. Assuming that supply is finite, the conversation eventually reverts back to using available supply as wisely as possible. For example, garbage collection is a process that helps manage memory supply. Continuing with the stockroom and inventory analogy, GC is like putting a customer on hold while you check whether you carry the requested product and, if so, where it’s stored. In application-performance terms, this hold time is the negative impact of garbage collection, commonly known as suspension time.
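Suspension time can be observed directly. The article's stack is JVM-based (Tomcat), where you would read GC pause times from the JVM's GC logs; as a language-agnostic illustration of the same idea, here is a sketch that times collector pauses in CPython using the standard `gc.callbacks` hook.

```python
import gc
import time

# Sketch: time each garbage collection via gc.callbacks (CPython 3.3+).
# This illustrates the concept of suspension time; JVM pauses would be
# read from GC logs instead.
pauses_ms = []
_start = [0.0]

def _gc_timer(phase, info):
    if phase == "start":
        _start[0] = time.perf_counter()
    else:  # phase == "stop"
        pauses_ms.append((time.perf_counter() - _start[0]) * 1000.0)

gc.callbacks.append(_gc_timer)
garbage = [[i] * 100 for i in range(50_000)]  # churn some objects
del garbage
gc.collect()  # force a collection so at least one pause is recorded
gc.callbacks.remove(_gc_timer)

print(f"collections observed: {len(pauses_ms)}")
```

Every millisecond recorded here is time the collector spent managing the memory supply instead of serving requests, which is exactly the "customer on hold" cost.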


Talking about complex applications and environments using technology-specific jargon can create confusion. Even in this short article, it was an easy walk from simple concepts to complicated metrics. It can be easy to lose sight of the simplicity of your goals, too, but keep the faith and remember these guidelines:

  • View supply in the context of what is being demanded.
  • Each supplier in a stack either does its own work or calls on something else to do work. The work it does itself can be considered infrastructure supply.

Infrastructure supply comes in two forms – more and faster. Running out of capacity? Add more. Running slow? Add speed. Just watch out for the cardinal sin of adding more when the application will only benefit from faster.

The post How microeconomics help boost application performance, Part II appeared first on The ruxit blog.


More Stories By Dynatrace Blog

Building a revolutionary approach to software performance monitoring takes an extraordinary team. With decades of combined experience and an impressive history of disruptive innovation, that’s exactly what ruxit has.

Get to know ruxit, and get to know the future of data analytics.
