
Microeconomics and Application Performance By @Ruxit | @DevOpsSummit [#DevOps]

Supply shouldn’t be viewed simply as a measure of hardware capacity

How Microeconomics Help Boost Application Performance - Part II

In my last post, I talked about how I keep my approach to application performance simple – I use my one semester’s worth of microeconomics knowledge to continuously evaluate the supply and demand sides of my application architecture. What can go wrong, right? :-)

One of the points we’ll explore further in this post is that supply shouldn’t be viewed simply as a measure of hardware capacity. Better to view supply as that which is demanded (see what I did there…?). In a complex environment, supply can be measured as the number of connections, available SOA services, process space, and hardware. Here’s a good visualization of what I’m talking about:


This application (i.e., website) makes demands on two services. One service is a webserver powered by Apache. The other is an application front-end powered by Tomcat. Each service has its own job to do. In turn, the application components make their demands on each subsequent asset in the chain, all the way down to the network infrastructure.
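To make that chain concrete, here's a minimal sketch in Python. The `Service` class and the topology below are purely illustrative (nothing here is a real monitoring API): each layer "demands" work from the layers beneath it, all the way down to the network infrastructure.

```python
class Service:
    """A node in the demand chain: a service plus what it depends on."""

    def __init__(self, name, dependencies=None):
        self.name = name
        self.dependencies = dependencies or []

    def demand_chain(self, depth=0):
        """Return every (depth, name) pair this service's demand touches."""
        chain = [(depth, self.name)]
        for dep in self.dependencies:
            chain.extend(dep.demand_chain(depth + 1))
        return chain


# Illustrative topology matching the example above: a website demanding
# an Apache webserver and a Tomcat front-end, both demanding the network.
network = Service("network infrastructure")
apache = Service("Apache webserver", [network])
tomcat = Service("Tomcat front-end", [network])
website = Service("website", [apache, tomcat])

for depth, name in website.demand_chain():
    print("  " * depth + name)
```

Walking the chain this way makes the point visually: supply has to be evaluated at every depth, not just at the hardware leaf.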

So instead of thinking of supply simply as available cycles and memory, I think of supply as what’s demanded at each layer. I then refine supply requirements based on these demands.

In the example below you see a list of the web requests issued by a browser, along with the response time of each request. Viewed from an inventory standpoint: can I supply the web requests the customer is asking for? Do I have the right web requests to deliver? Can I get them to the customer in time?


Looking at this example, I better have a lot of instances of /orange.jsf available and I’d better be able to deliver them quickly.
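As a hedged sketch of this inventory view, the snippet below splits requests into those that can be "delivered in time" and those that can't. The request names other than /orange.jsf, the timings, and the 500 ms budget are all made-up numbers for illustration.

```python
# Illustrative "inventory" of web requests and their response times.
response_times_ms = {
    "/orange.jsf": 120,
    "/special-offers.jsf": 860,
    "/login.jsf": 95,
}

BUDGET_MS = 500  # assumed response-time budget, not from the article


def supply_report(times, budget):
    """Split requests into those deliverable within budget and those not."""
    ok = {req: t for req, t in times.items() if t <= budget}
    slow = {req: t for req, t in times.items() if t > budget}
    return ok, slow


ok, slow = supply_report(response_times_ms, BUDGET_MS)
```

Anything landing in `slow` is inventory I technically "carry" but can't put on the shelf fast enough.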

My analogy begins to stretch a little thin at this point. I obviously don’t have a stockroom with a bucket of web requests – instead I have a program that runs in a container. The program is the code (i.e., the inventory) that is executed, while the container controls (via logistics) how I get to that inventory. Knowing this, I now need to view my supply through a higher-level set of metrics:


My code is really made up of requests to other services. Here the evaluation process begins anew: do I have an adequate supply of JourneyService available? What’s most interesting here is CPU time. Remember that the Tomcats were hosted across 4 machines? This means that the supply is robust and scalable. What would happen if we hosted the cluster across 6 or 8 machines?

To answer this question we must finally address the supply = infrastructure part of the conversation. Looking at the processes that support this service, you can see that there is plenty of supply, indicated by low processor utilization.


In each instance there is plenty of capacity supply, so adding more machines won’t help. Clustering on faster boxes will improve speed, though only marginally, because this service demands more of other services than it does of its hosts.

Returning to my inventory analogy, having something in stock is of no use if you can’t put your hands on it. That’s what the container does; it manages access to resources.

I’ve talked a lot about the supply of cycles and services, but memory supply is important, too. Assuming that supply is finite, the conversation eventually comes back to using available supply as wisely as possible. Garbage collection, for example, is a process that helps manage memory supply. Continuing with the stockroom and inventory analogy, GC is the equivalent of putting a customer on hold while you research whether you carry the requested product and, if so, where it is stored. In terms of application performance, this is the negative impact of garbage collection, commonly known as suspension time.
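A rough way to quantify that "on hold" time: subtract total GC pause time from the wall-clock window to get the fraction of supply actually available for serving requests. The numbers below are illustrative, not measurements from any real JVM.

```python
def effective_capacity(wall_clock_ms, gc_pause_ms):
    """Fraction of the time window the runtime can actually do useful work."""
    return (wall_clock_ms - gc_pause_ms) / wall_clock_ms


# e.g. a 60-second window with 3 seconds of total GC pauses
# leaves 95% of the window as effective supply.
print(effective_capacity(60_000, 3_000))
```

The point of the analogy holds: memory supply you can't reach during a pause might as well not be in stock.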


Talking about complex applications and environments using technology-specific jargon can create confusion. Even in this short article, it was an easy walk from simple concepts to complicated metrics. It can be easy to lose sight of the simplicity of your goals, too, but keep the faith and remember these guidelines:

  • View supply in the context of what is being demanded.
  • Each supplier in a stack either does its own work or calls on something else to do work. The work it does itself can be considered infrastructure supply.

Infrastructure supply comes in two forms – more and faster. Running out of capacity? Add more. Running slow? Add speed. Just watch out for the cardinal sin of adding more when the application will only benefit from faster.
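The more-vs-faster rule can be sketched as a tiny decision helper. The 75% utilization threshold and the 500 ms target below are assumptions for illustration, not values prescribed in this post.

```python
def scaling_advice(cpu_utilization, response_time_ms,
                   target_ms=500, high_util=0.75):
    """Apply the 'more vs. faster' rule to one service's measurements."""
    if response_time_ms <= target_ms:
        return "no change needed"
    if cpu_utilization >= high_util:
        return "add more capacity"    # supply of cycles is exhausted
    return "add faster hardware"      # plenty of idle supply; speed is the limit


# Slow responses but mostly idle CPUs: more boxes would be the cardinal sin.
print(scaling_advice(0.20, 900))
```

This is exactly the trap in the cluster example above: low processor utilization means "more" buys nothing, and only "faster" (if anything) moves the needle.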

The post How microeconomics help boost application performance, Part II appeared first on The ruxit blog.


