The MacVittie-Roberts Wall of DOOM | @DevOpsSummit [#DevOps]

DevOps isn't just about getting an application to production faster

Performance. Speed. Velocity. Quality of experience.

No matter what particular turn of phrase we use to describe it, the reality is that we'll try a whole lot of things if they promise to improve application performance. Entire markets have been dedicated to this overriding, unqualified principle: faster is better.

That implies, however, that we know what faster means. Faster is relative to some baseline; some measurement that's been taken either on our applications or our competitors'. Faster means improving existing performance, which necessarily implies that at some point we've actually measured that performance.


Unfortunately, as much as we chide developers for just throwing applications over the wall to operations, all too often operations just tosses that same application environment over another wall to the end user. And leaves it there.

This is something that fostered conversation a few months ago on Twitter, thanks to the revelation from a study sponsored by Germain Software, LLC, indicating that "43% of mission-critical apps experience performance issues once a week."


Now while we were kidding around about naming this wall the "MacVittie-Roberts wall of DOOM", we weren't kidding about the need for better performance monitoring in production.

After all, to definitively say that application performance is faster requires that we've measured performance after some change.

You might notice that various forms of "measure" are bold. I could italicize them as well, if that would more clearly emphasize that measuring performance is critical to both improving and maintaining it.

That means measuring both before and after changes are put in place to improve performance, to ensure you didn't, oh I don't know, possibly degrade performance instead.
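That before-and-after discipline can be sketched in a few lines. This is a minimal, hypothetical example, not a real monitoring tool: the two workload functions are stand-ins for the application before and after a tuning change, and the choice of p95 over the mean is just one common way to make regressions visible.

```python
import statistics
import time

def measure(workload, samples=50):
    """Time `samples` runs of a callable; return seconds per run."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    return timings

def p95(timings):
    """95th-percentile latency -- less noisy than the mean for spotting regressions."""
    return statistics.quantiles(timings, n=20)[-1]

# Illustrative stand-ins for the app before and after a tuning change.
def before_change():
    sum(i * i for i in range(20000))

def after_change():
    sum(i * i for i in range(10000))

baseline = p95(measure(before_change))   # measure FIRST: this is the baseline
candidate = p95(measure(after_change))   # measure AGAIN after the change

print(f"baseline p95: {baseline:.6f}s, candidate p95: {candidate:.6f}s")
print("regression" if candidate > baseline else "no regression")
```

The point isn't the specific percentile or sample count; it's that "faster" only has meaning because the baseline was captured before the change was made.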

DevOps isn't just about getting an application to production faster. Oh, that's a big driver right now and the value inherent in doing so can absolutely justify an investment in going "DevOps" on your operational status quo. But just as important is the ability for DevOps to adjust, to tweak, to tune, to fix issues that arise in production in a more efficient manner.

Doing that requires awareness that a problem exists, and when it comes to performance that means monitoring (measuring) performance in post-deployment (production) environments.

Performance remains a top line concern, and therefore we should be mindful of ensuring that the applications which we are tasked with deploying are as fast as they possibly can be. But we can't do that unless we know how different settings, devices and services are impacting performance. Which means continuous measurement.
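Continuous measurement can be as simple as a rolling window of production latency samples checked against the baseline. A hypothetical sketch, where the window size and the 20% tolerance are illustrative choices, not recommendations:

```python
from collections import deque
from statistics import mean

class LatencyMonitor:
    """Flags when the rolling mean latency drifts past a baseline threshold."""

    def __init__(self, baseline_s, window=100, tolerance=0.20):
        self.baseline_s = baseline_s      # measured baseline, in seconds
        self.tolerance = tolerance        # allowed drift before flagging
        self.samples = deque(maxlen=window)  # rolling window of recent samples

    def record(self, latency_s):
        """Record one sample; return True if performance has degraded."""
        self.samples.append(latency_s)
        return mean(self.samples) > self.baseline_s * (1 + self.tolerance)

# Feed it samples as requests complete; here the numbers are made up.
monitor = LatencyMonitor(baseline_s=0.100, window=3)
for s in [0.09, 0.11, 0.10, 0.25, 0.30]:
    if monitor.record(s):
        print(f"degradation detected at sample {s}s")
```

Real deployments would feed this from an APM agent or load-balancer logs rather than in-process timing, but the shape is the same: a baseline, a stream of measurements, and an alert when the two diverge.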


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
