
4 Reasons Why You Should Use APM When You Load Test Your Website

I wouldn’t do website load/performance testing any more without having an APM tool in place. Period. Full stop. End of story.

I’ve been involved in website load testing for over 10 years: as an “end user” when I was web operations manager for an online job board, as a team leader for a company providing cloud load-testing services, and as a consultant on web performance with my own company, DevOpsGuys. The difference in the value you get from load/performance testing with and without APM tools is enormous.

We’ve probably all seen those testing reports that are full of graphs of response time versus req/sec, CPU utilisation curves, disk IO throughput, error rates ad nauseam. I, to my eternal shame, have even written them… and whilst they are useful for answering the (very simplistic) question of “how many simulated requests/users can my website support before it falls over?”, generating any real application insight from what are essentially infrastructure metrics is difficult. This type of test report rarely results in any corrective actions other than (1) “let’s throw more hardware at it” or (2) “let’s shout at the devs that they have to fix something because the application is slow”. Quite often the report gets circular-filed because no-one knows how to derive application insight, and hence meaningful corrective actions, at the code, application-stack-configuration or infrastructure level. All that effort & expense is wasted.
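To make that concrete, here is roughly what that style of test boils down to: a minimal sketch using only the Python standard library. The target URL, concurrency and request count are placeholders invented for illustration, not anything from a real engagement.

```python
# A minimal sketch of the classic "requests/sec until it falls over" load test.
# TARGET, CONCURRENCY and REQUESTS are hypothetical values, not from the article.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://localhost:8080/"   # hypothetical endpoint under test
CONCURRENCY = 20                    # simulated concurrent users
REQUESTS = 200                      # total requests to fire

def timed_request(_):
    """Issue one GET and return (latency_seconds, success)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET, timeout=10) as resp:
            resp.read()
            ok = resp.status == 200
    except Exception:
        ok = False
    return time.perf_counter() - start, ok

wall_start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(timed_request, range(REQUESTS)))
elapsed = time.perf_counter() - wall_start

latencies = sorted(lat for lat, _ in results)
errors = sum(1 for _, ok in results if not ok)
print(f"throughput:  {REQUESTS / elapsed:.1f} req/sec")
print(f"p95 latency: {latencies[int(0.95 * len(latencies))] * 1000:.0f} ms")
print(f"error rate:  {errors / REQUESTS:.1%}")
```

Notice that everything this can tell you is throughput, a latency percentile and an error rate. Nothing in it says where inside the application the time went, which is exactly the gap APM fills.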

So how are things different when using APM tools (like my preferred tool, AppDynamics)? Here are my top 4 reasons:

1. See the Big Picture (Systems Thinking)

“Systems thinking is a framework for seeing interrelationships rather than things, for seeing patterns rather than static snapshots.” – Peter Senge, “The Fifth Discipline”

The “first way of DevOps” is systems thinking, and APM tools reinforce the systems-thinking perspective by helping you see the big picture very clearly. You can see the interrelationships between the web tier, application tier, database servers, message queues, external cloud services etc. in real time while you’re testing, rather than being focussed on the metrics for each tier individually. You can instantly see where the bottlenecks in your application are: in the example below, the 4306ms calls to Navision stand out!

[Figure: AppDynamics flow map, with the slow calls to Navision standing out]
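As a rough illustration of what a flow map condenses, the sketch below aggregates call timings per tier-to-tier edge so the slow dependency surfaces immediately. The span data and tier names are invented for this example; this is not AppDynamics output or its API.

```python
# Illustrative only: how a flow map surfaces a slow downstream dependency.
# The spans are invented sample data: (caller tier, callee tier, duration in ms).
from collections import defaultdict

spans = [
    ("web", "app", 120), ("app", "database", 35),
    ("app", "navision", 4306), ("app", "message-queue", 12),
    ("web", "app", 140), ("app", "database", 41),
    ("app", "navision", 4120),
]

edges = defaultdict(list)
for caller, callee, ms in spans:
    edges[(caller, callee)].append(ms)

# Print each edge of the "flow map", slowest average first: the outlier
# external call jumps straight out, just as it does on the real flow map.
for (caller, callee), times in sorted(
        edges.items(), key=lambda kv: -(sum(kv[1]) / len(kv[1]))):
    avg = sum(times) / len(times)
    print(f"{caller} -> {callee}: avg {avg:.0f} ms over {len(times)} call(s)")
```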

2. Drill Down to the Code Level

One of my favourite things when load testing with APM tools is being able to drill down to the stack-trace level and identify the calls that are the most problematic. Suddenly, instead of talking about infrastructure metrics like CPU, RAM and disk, we are talking about application metrics — this business transaction (e.g. a web page or API request) generates this flow across the application; 75% of the time is spent in this method call, which makes 3 database calls and 2 web service calls; it’s this database call that’s slow; and here’s the exact SQL statement that was executed. The difference in the response you get from the developers when you give them this level of detail, compared to “your application is slow when we hit 200 users”, is fantastic — now you are giving them real, pinpoint, actionable intelligence on how their application responds under load.

[Figure: AppDynamics drill-down into a call graph, down to the SQL statement]
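To give a feel for the kind of record behind that drill-down, here is a toy Python sketch of method-level timing capture. A real APM agent instruments the runtime automatically with no code changes; the decorator, transaction names and SQL text below are all hypothetical.

```python
# Toy illustration of the code-level data an APM agent captures automatically.
# Everything here (names, SQL, sleeps) is invented for the example.
import time
from functools import wraps

call_log = []  # (name, duration_ms, detail) records for one business transaction

def traced(name, detail=""):
    """Record wall-clock time for a call, the way an APM snapshot would."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                ms = (time.perf_counter() - start) * 1000
                call_log.append((name, ms, detail))
        return wrapper
    return decorator

@traced("SearchJobs.query", detail="SELECT * FROM jobs WHERE title LIKE ?")
def slow_query():
    time.sleep(0.3)   # stand-in for a slow database call

@traced("SearchJobs.render")
def render_page():
    slow_query()
    time.sleep(0.05)  # stand-in for template rendering

render_page()
# Slowest calls first, with the "exact SQL" attached to the offender
for name, ms, detail in sorted(call_log, key=lambda rec: -rec[1]):
    print(f"{name}: {ms:.0f} ms  {detail}".rstrip())
```

Sorting the log slowest-first is the whole trick: the conversation shifts from “the site is slow” to “this query in this method is slow”.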

3. Iterate Faster

“the application was made 56x faster during a 12hr testing window”

Because you can move quickly to the code level in real time while you test, and because this facilitates better communication with the development team, your load testing suddenly becomes a lot more collaborative, even if the testing is being performed by an external third party.

We generally have all the relevant parties on a conference call or HipChat session while we test, constantly exchanging information, screenshots and links to APM snapshots, and the developers are often able to code fixes there and then because we can rapidly pinpoint the pain points.

If you’ve got a customer with an Agile mindset and a continuous delivery capability, this lets you run rapid test-and-fix cycles, often multiple times in a day. In one notable example, the application was made 56x faster during a 12-hour testing window thanks to 4 application releases during that period.

[Figure: the 56x response-time improvement over the testing window]

4. Stop the “Blame Game”

“make the enemy poor performance, not each other…”

Traditionally, in the old-school (pre-APM tools) days, load tests were often conducted by external load-testing consultancies who would come in, do the testing, and then deliver some big report on how things went.

The customer would assemble their team in a conference room to go through the report, which often triggered the “blame game” – Ops blaming Dev, Dev blaming QA, QA blaming Ops, Ops blaming the hosting provider, the hosting provider blaming the customer’s code, and around and around it would go.

But with the right APM tools in place we’ve found this negative team dynamic can be avoided.

As mentioned earlier, testing tends to become more collaborative because it’s easier to share the performance data in real time via the APM tool, and discussions become more evidence-based. It’s more about “what are we going to do about this problem we can see here in the APM tool” and less about trying to allocate blame when no-one really knows where the problem actually resides and they don’t want to be left holding the can. The systems-thinking, holistic view of the application’s performance promulgated by the APM tool makes poor performance the enemy, not each other. And that means the performance issues are likely to be fixed faster, and not ignored due to politics and infighting.

There are probably loads more reasons you can come up with for why load testing with APM tools is awesome (and I’d love to hear your thoughts in the comments), but I will leave you with one more bonus reason – because it’s fun. For me, using AppDynamics when I’m doing load testing and performance tuning has really brought the fun factor back into the work. It’s fun to see the load being applied to the system and to see (via AppDynamics) the effect across the entire application. It’s fun to work more closely with the Dev & Ops teams (dare I say, “DevOps”!) and to share meaningful, actionable insights on where the problems lie, and it’s fun to be able to rapidly iterate and show the performance improvements in real time.

