4 Reasons Why You Should Use APM When You Load Test Your Website


I wouldn’t do website load/performance testing any more without having an APM tool in place. Period. Full stop. End of story.

I’ve been involved in website load testing for over 10 years: as an “end user” when I was web operations manager for an online job board, as a team leader for a company providing cloud load-testing services, and as a consultant on web performance with my own company, DevOpsGuys. The difference in the value you get from load/performance testing with and without APM tools is enormous.

We’ve probably all seen those testing reports full of graphs of response time versus requests/sec, CPU utilisation curves, disk I/O throughput, error rates ad nauseam. I, to my eternal shame, have even written them… and whilst they are useful for answering the (very simplistic) question of “how many simulated requests/users can my website support before it falls over?”, generating any real application insight from what are essentially infrastructure metrics is difficult. This type of test report rarely results in any corrective action other than (1) “let’s throw more hardware at it” or (2) “let’s shout at the devs that they have to fix something because the application is slow”. Quite often the report gets circular-filed because no one knows how to derive application insight from it and hence generate meaningful corrective actions at the code, application-stack-configuration or infrastructure level. All that effort and expense is wasted.
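To make the contrast concrete, here’s a minimal sketch of that “simplistic question” style of test, written with the Locust load-testing tool (the host, endpoints and user mix are hypothetical, purely for illustration). It will happily tell you requests/sec, response-time percentiles and error rates, but nothing about where inside the application the time is actually going.

```python
# Minimal illustrative load test (Locust). Host and endpoints are made up.
# The output answers "how many simulated users before it falls over?" --
# requests/sec, response-time percentiles and error rates per endpoint --
# and nothing more.
from locust import HttpUser, task, between

class JobBoardUser(HttpUser):
    wait_time = between(1, 3)  # each simulated user pauses 1-3s between actions

    @task(3)  # weighted: searching is three times more common than viewing
    def search_jobs(self):
        self.client.get("/jobs?query=devops")

    @task(1)
    def view_job(self):
        self.client.get("/jobs/12345")

# Run with something like:
#   locust -f loadtest.py --host https://www.example-jobboard.com --users 200
```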

So how are things different when using APM tools (like my preferred tool, AppDynamics)? Here are my top 4 reasons:

1. See the Big Picture (Systems Thinking)

“Systems thinking is a framework for seeing interrelationships rather than things, for seeing patterns rather than static snapshots.”  - Peter Senge, “The Fifth Discipline”.

The “first way of DevOps” is systems thinking, and APM tools reinforce the systems-thinking perspective by helping you see the big picture very clearly. You can see the interrelationships between the web tier, application tier, database servers, message queues, external cloud services etc. in real time while you’re testing, rather than focussing on the metrics for each tier individually. You can instantly see where the bottlenecks in your application are: in the example below, the 4306ms calls to Navision stand out!

[Figure: application flow map]
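The flow map itself is AppDynamics’ visualisation, but the underlying idea is simple: time every outbound call by downstream tier and aggregate, so the slow dependency stands out. Below is a deliberately simplified, hypothetical sketch of that idea (the tier names and the usage shown in comments are invented); it is not how any APM agent is actually implemented.

```python
# Simplified sketch: record the duration of every call to a downstream tier,
# then aggregate per tier so the bottleneck (e.g. a 4306ms ERP call) stands out.
# Tier names and the usage shown in the comments are hypothetical.
import time
from collections import defaultdict
from contextlib import contextmanager

tier_timings = defaultdict(list)  # tier name -> list of call durations in ms

@contextmanager
def timed_call(tier):
    start = time.perf_counter()
    try:
        yield
    finally:
        tier_timings[tier].append((time.perf_counter() - start) * 1000)

# Inside a request handler you might wrap each outbound call, e.g.:
#   with timed_call("Navision"):
#       erp_client.get_stock_levels(sku)
#   with timed_call("SQL Server"):
#       cursor.execute("SELECT ...")

def print_tier_summary():
    # Slowest tiers (by total time spent) first.
    for tier, durations in sorted(tier_timings.items(), key=lambda kv: -sum(kv[1])):
        print(f"{tier}: {len(durations)} calls, "
              f"avg {sum(durations) / len(durations):.0f} ms, "
              f"total {sum(durations):.0f} ms")
```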

2. Drill Down to the Code Level

One of my favourite things when load testing with APM tools is being able to drill down to the stack-trace level and identify the calls that are the most problematic. Suddenly, instead of talking about infrastructure metrics like CPU, RAM and disk, we are talking about application metrics: this business transaction (e.g. a web page or API request) generates this flow across the application; 75% of the time is spent in this method call, which makes 3 database calls and 2 web service calls; it’s this database call that’s slow, and here’s the exact SQL statement that was executed. The difference in the response you get from the developers when you give them this level of detail, compared with “your application is slow when we hit 200 users”, is fantastic: now you are giving them real, pinpoint, actionable intelligence on how their application responds under load.

[Figure: code-level drill-down into a transaction snapshot]
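What a snapshot like this boils down to is a tree of timed spans: the business transaction at the root, method calls beneath it, and the database or web-service calls (with the exact SQL) as leaves. Here’s a bare-bones, hypothetical sketch of such a structure; the class and field names are invented for illustration and are not an AppDynamics API.

```python
# Bare-bones sketch of a "transaction snapshot": a tree of timed spans showing
# where the time in one business transaction went, down to the SQL statement.
# Class and field names are invented for this illustration.
import time
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Span:
    name: str                      # e.g. "JobSearchService.findJobs"
    detail: Optional[str] = None   # e.g. the exact SQL statement executed
    started: float = field(default_factory=time.perf_counter)
    duration_ms: float = 0.0
    children: List["Span"] = field(default_factory=list)

    def child(self, name, detail=None):
        span = Span(name, detail)
        self.children.append(span)
        return span

    def finish(self):
        self.duration_ms = (time.perf_counter() - self.started) * 1000

def print_tree(span, indent=0):
    label = f"{span.name}: {span.duration_ms:.0f} ms"
    if span.detail:
        label += f"  [{span.detail}]"
    print("  " * indent + label)
    for child in span.children:
        print_tree(child, indent + 1)
```

In the real tool the agent builds this tree automatically by instrumenting the runtime; the point is simply that what the developers receive is a navigable call tree rather than a CPU graph.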

3. Iterate Faster

“the application was made 56x faster during a 12hr testing window”

Because you can move quickly to the code level in real time while you test, and because this facilitates better communication with the development team, your load testing suddenly becomes a lot more collaborative, even if the testing is being performed by an external third party.

We generally have all the relevant parties on a conference call or HipChat session while we test, constantly exchanging information, screenshots and links to APM snapshots, and the developers are often able to code fixes there and then because we can rapidly pinpoint the pain points.

If you’ve got a customer with an Agile mindset and a continuous delivery capability, this enables rapid test-and-fix cycles during testing, often multiple times a day. In one notable example, the application was made 56x faster during a 12-hour testing window thanks to 4 application releases during that period.

[Figure: 56x performance improvement during a 12-hour testing window]

4. Stop the “Blame Game”

“make the enemy poor performance, not each other…”

In the old-school (pre-APM) days, load tests were often conducted by external load-testing consultancies who would come in, do the testing, and then deliver a big report on how things went.

The customer would assemble their team in a conference room to go through the report, which often triggered the “blame game”: Ops blaming Dev, Dev blaming QA, QA blaming Ops, Ops blaming the hosting provider, the hosting provider blaming the customer’s code, and around and around it would go.

But with the right APM tools in place we’ve found this negative team dynamic can be avoided.

As mentioned earlier, testing tends to become more collaborative because it’s easier to share the performance data in real time via the APM tool, and discussions become more evidence-based. It’s more about “what are we going to do about this problem we can see here in the APM tool” and less about trying to allocate blame when no one really knows where the problem actually resides and nobody wants to be left carrying the can. The systems-thinking, holistic view of the application’s performance promoted by the APM tool makes poor performance the enemy, not each other. And that means the performance issues are likely to be fixed faster, not ignored due to politics and infighting.

There are probably plenty more reasons you can come up with for why load testing with APM tools is awesome (and I’d love to hear your thoughts in the comments), but I will leave you with one more bonus reason: it’s fun. For me, using AppDynamics when I’m doing load testing and performance tuning has really brought the fun factor back into the work. It’s fun to see the load being applied to the system and to see (via AppDynamics) the effect across the entire application. It’s fun to work more closely with the Dev and Ops teams (dare I say, “DevOps”!) and to share meaningful, actionable insights on where the problems lie, and it’s fun to be able to rapidly iterate and show the performance improvements in real time.

The post 4 Reasons Why You Should Use APM When You Load Test Your Website appeared first on the Application Performance Monitoring Blog from AppDynamics.
