Four Key Takeaways for Application Performance and Monitoring | @DevOpsSummit #APM #DevOps

The latest Guide to Performance & Monitoring covers the verifiable & unknowable sides of building & maintaining performant apps

Designing for performance is absolutely essential, but runtime behavior is so variable that we can reasonably blame too-early optimization for a non-negligible chunk of lousy UX and unmaintainable code.

The latest Guide to Performance and Monitoring covers both the static and dynamic, the verifiable and the unknowable sides of building and maintaining performant applications.

As Tony Hoare famously observed, "Premature optimization is the root of all evil": that is, the benefits of absolutely maximal optimization are usually much lower than the added cost of maintenance and debugging that results from the brittleness such optimization introduces. On the other hand, the natural tendency of OOP to prioritize form over performance can produce a codebase that is highly readable but partitioned in ways that make performance-oriented refactoring extremely difficult. To help you steer between the Scylla of overeager optimization and the Charybdis of runtime-indifferent code structure, we've split this publication between ways to design performant systems and ways to monitor performance in the real world. To shed light on how developers approach application performance, and what performance problems they encounter (and where, and how often), we present the following summary of the four most important takeaways of our research.

1) Application code is the most frequent source of performance problems; database performance problems are the most challenging to fix:

DATA: Frequent performance issues appear most commonly in application code (43% of respondents) and in databases second most commonly (27%). Challenging performance issues are most likely to appear in the database (51%) and second in application code (47%).

IMPLICATIONS: Enterprise application performance is most likely to suffer from higher-level, relatively shallow inefficiencies. Deep understanding of system architecture, network topology, or even pure algorithm design is not required to address most performance issues.

RECOMMENDATIONS: Optimize application code first and databases second (all other things being equal). On first optimization pass, assume that performance problems can be addressed without investing in superior infrastructure.
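To make the "fix the application code first" advice concrete, here is a minimal sketch (not drawn from the survey itself) of one of the most common application-level culprits: issuing one query per item in a loop instead of a single grouped query. The orders table, its customer_id and amount columns, and both method names are hypothetical; the point is that the fix lives entirely in application code and requires no database or infrastructure changes.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class OrderTotals {

    // Anti-pattern: one database round trip per customer (the classic N+1 shape).
    static Map<Long, Double> totalsPerCustomerNaive(Connection conn, List<Long> customerIds)
            throws SQLException {
        Map<Long, Double> totals = new HashMap<>();
        for (Long id : customerIds) {
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT SUM(amount) FROM orders WHERE customer_id = ?")) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        totals.put(id, rs.getDouble(1));
                    }
                }
            }
        }
        return totals;
    }

    // Application-level fix: one grouped query replaces N round trips.
    static Map<Long, Double> totalsPerCustomerBatched(Connection conn) throws SQLException {
        Map<Long, Double> totals = new HashMap<>();
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                totals.put(rs.getLong(1), rs.getDouble(2));
            }
        }
        return totals;
    }
}
```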

2) Parallelization is regularly built into program design by a large minority (but still a minority) of enterprise developers:

DATA: 43% of developers regularly design programs for parallel execution. Java 8 Parallel Streams are often used (18%), slightly more frequently than ForkJoin (16%). ExecutorService is by far the most popular, with 47% using it often. Race conditions and thread locks are encountered monthly by roughly one fifth of developers (21% and 19%, respectively). Of the major parallel programming models, only multithreading is used often by more than 30% of developers (81%).

IMPLICATIONS: Enterprise developers do not manage parallelization aggressively. Simple thread pool management (ExecutorService) is much more commonly used for concurrency than upfront work splitting (ForkJoin), which suggests that optimization for multicore processors can be improved.

RECOMMENDATIONS: More deliberately model task and data parallelization, and consider hardware threading more explicitly (and without relying excessively on synchronization wrappers) when designing for concurrency.
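A minimal sketch of what that might look like in practice, contrasting the thread-pool style most respondents report (ExecutorService) with data parallelism expressed as a Java 8 parallel stream (which runs on the common ForkJoinPool). The slowComputation method is a hypothetical stand-in for CPU-bound work; the example is illustrative, not a prescription.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.IntStream;

public class ParallelSketch {

    public static void main(String[] args) throws Exception {
        // Task parallelism: independent units of work submitted to a bounded pool
        // (the ExecutorService style most respondents report using).
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        List<Callable<Long>> tasks = Arrays.asList(
                () -> slowComputation(1),
                () -> slowComputation(2),
                () -> slowComputation(3));
        long taskSum = 0;
        for (Future<Long> f : pool.invokeAll(tasks)) {
            taskSum += f.get();
        }
        pool.shutdown();

        // Data parallelism: the same work expressed as a Java 8 parallel stream,
        // which splits the range across the common ForkJoinPool.
        long streamSum = IntStream.rangeClosed(1, 3)
                .parallel()
                .mapToLong(ParallelSketch::slowComputation)
                .sum();

        System.out.println("task sum = " + taskSum + ", stream sum = " + streamSum);
    }

    // Stand-in for a CPU-bound computation; purely illustrative.
    static long slowComputation(int n) {
        long acc = 0;
        for (int i = 0; i < 10_000_000; i++) {
            acc += (long) n * i % 7;
        }
        return acc;
    }
}
```

The explicit thread pool makes it obvious how many hardware threads are in play, while the parallel stream leaves the splitting strategy to the runtime; choosing deliberately between the two is exactly the kind of upfront modeling the data suggests is underused.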

3) Performance is still a second-stage design consideration, but not by much:

DATA: 56% of developers build application functionality first, then worry about performance.

IMPLICATIONS: Extremely premature optimization is generally recognized as poor design, but performance considerations are serious enough that almost half of developers do think about performance while building functionality.

RECOMMENDATIONS: Distinguish architectural from code-level performance optimizations. Set clear performance targets (preferably cascading from UX tolerance levels) and meet them. Optimize for user value, not for the sake of optimization.
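One lightweight way to make such targets concrete, sketched below under assumed numbers: derive a per-operation budget from a UX tolerance and check elapsed time against it. The 200 ms budget, the handleRequest method, and the console reporting are all hypothetical; in a real project the check might live in a CI performance test or emit a metric instead of printing.

```java
import java.time.Duration;

public class LatencyBudget {

    // Hypothetical budget cascaded from a UX tolerance: if the page must render
    // in 1 second, the service call behind it might be allotted 200 ms.
    private static final Duration SERVICE_BUDGET = Duration.ofMillis(200);

    public static void main(String[] args) {
        long start = System.nanoTime();
        handleRequest();                       // the code path under a budget
        Duration elapsed = Duration.ofNanos(System.nanoTime() - start);

        if (elapsed.compareTo(SERVICE_BUDGET) > 0) {
            // In a real build this might fail a CI check or emit a metric;
            // here it just reports the overrun.
            System.err.printf("Budget exceeded: %d ms > %d ms%n",
                    elapsed.toMillis(), SERVICE_BUDGET.toMillis());
        } else {
            System.out.printf("Within budget: %d ms%n", elapsed.toMillis());
        }
    }

    // Stand-in for the operation being budgeted.
    static void handleRequest() {
        try {
            Thread.sleep(50);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```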

4) Manual firefighting, lack of actionable insights, and heterogeneous IT environments are the top three monitoring challenges:

DATA: 58% of respondents count firefighting and manual processes among their top three performance management challenges. 49% count a lack of actionable insights to proactively solve issues. 47% count the rising cost and complexity of managing heterogeneous IT environments.

IMPLICATIONS: Performance management is far from a solved problem. Monitoring tools and response methods are not delivering insights and solutions effectively, whether because they are not used adequately or because they need feature refinement.

RECOMMENDATIONS: Measure problem location, frequency, and cost, and compare with the cost (both monetary and performance overhead) of an additional management layer. Consider tuning existing monitoring systems or adopting new systems (e.g. something more proactive than logs).
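As one illustration of "more proactive than logs," the sketch below raises an alert the moment a rolling error-rate threshold is crossed, instead of waiting for post-hoc log analysis. The class name, the 5% threshold, and the console alert are hypothetical stand-ins for whatever alerting channel a real monitoring system would use.

```java
import java.util.concurrent.atomic.AtomicLong;

public class ErrorRateMonitor {

    private final AtomicLong requests = new AtomicLong();
    private final AtomicLong errors = new AtomicLong();
    private final double alertThreshold;      // e.g. 0.05 = 5% of requests failing

    public ErrorRateMonitor(double alertThreshold) {
        this.alertThreshold = alertThreshold;
    }

    // Call once per request; checks the threshold whenever a request fails.
    public void record(boolean failed) {
        long total = requests.incrementAndGet();
        if (failed && errors.incrementAndGet() > total * alertThreshold) {
            alert(errors.get(), total);
        }
    }

    // In production this would page someone or push to an alerting system;
    // printing to stderr stands in for that here.
    private void alert(long errorCount, long total) {
        System.err.printf("ALERT: %d/%d requests failed (threshold %.0f%%)%n",
                errorCount, total, alertThreshold * 100);
    }

    public static void main(String[] args) {
        ErrorRateMonitor monitor = new ErrorRateMonitor(0.05);
        for (int i = 0; i < 100; i++) {
            monitor.record(i % 10 == 0);       // simulate a 10% failure rate
        }
    }
}
```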

More Stories By John Esposito

John Esposito is Editor-in-Chief at DZone, having recently finished a doctoral program in Classics at the University of North Carolina. In a previous life he was a VBA and Force.com developer, DBA, and network administrator. John enjoys playing piano and looking at diagrams, and raises two cats with his wife, Sarah.
