
Containers' ascent into the mainstream forces us to finally address the challenges they present

Optimizing Your Container Environment: Pets vs. Cattle
By Matt Kiernan

In the midst of Docker's meteoric rise and the explosion of talk around containers, it can be easy to lose oneself in all of the new terminology and jargon. As we think about the challenges of running containers in production, we keep hearing the metaphor of Pets vs. Cattle and why it's important to maintain an infrastructure that behaves like a herd of cattle.

Pets vs. Cattle
What is this pets vs. cattle nonsense we keep hearing? Simply put, the "cattle not pets" mantra suggests that work shouldn't grind to a halt when a piece of infrastructure breaks, nor should it take a full team of people (or one specialized owner) to nurse it back to health. Unlike a pet that requires love, attention and more money than you ever wanted to spend, your infrastructure should be made up of components you can treat like cattle: self-sufficient, easily replaced and manageable in the hundreds or thousands. Unlike VMs or physical servers that require special attention, containers can be spun up, replicated, destroyed and managed with much greater flexibility.
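
To make the cattle mindset concrete, here's a minimal sketch using the Docker SDK for Python (the docker package on PyPI). The nginx:alpine image, the container names and the herd size are illustrative assumptions, not anything prescribed by this article.

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Spin up a small, disposable "herd" of identical containers.
herd = [
    client.containers.run("nginx:alpine", detach=True, name=f"web-{i}")
    for i in range(3)
]

# If one member misbehaves, don't nurse it back to health:
# destroy it and replace it with an identical copy.
sick = herd.pop(0)
sick.remove(force=True)
herd.append(client.containers.run("nginx:alpine", detach=True, name="web-3"))
```

The point isn't the specific commands; it's that replacing a container is a one-liner, which is what makes the cattle mindset practical in the first place.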

A new(ish) set of challenges
Though containers aren't a new technology, their ascent into the mainstream forces us to finally address the challenges they present. Sticking with the livestock analogy, what if something starts preying on your herd? What if specific cows start consuming more than their fair share of resources, or if a group of cows suddenly disappears? And what if more cows join the herd? Will you have to brand each cow individually to keep track of them?

When it comes to using containers, a monitoring solution specifically built for containers is crucial to understanding what's happening across your environment. While it's possible to install a log-collecting agent on every container, a more efficient and scalable approach is to dedicate a custom container to focus explicitly on collecting container logs and stats. With a dedicated logging container that scales to capture data from all containers on a host as they're added, you can correlate container stats with application logs and host logs for an end-to-end view of your environment.
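
As a rough sketch of that dedicated-collector pattern, the loop below uses the Docker SDK for Python to enumerate every running container on a host and pull both a stats snapshot and recent log lines. The forward() function is a hypothetical stand-in for shipping entries to your log management service; in practice the script would itself run in a container with the Docker socket mounted.

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

def forward(source, entry):
    """Hypothetical stand-in: replace with a call to your log pipeline."""
    print(f"[{source}] {entry}")

for container in client.containers.list():  # every running container on this host
    # One-shot snapshot of the container's resource stats (CPU, memory, I/O).
    stats = container.stats(stream=False)
    forward(container.name, f"mem_usage={stats['memory_stats'].get('usage')}")

    # Tail the most recent log lines, with timestamps, and ship them.
    for line in container.logs(tail=10, timestamps=True).splitlines():
        forward(container.name, line.decode(errors="replace"))
```

A production collector would also watch client.events() so containers added to the host are picked up automatically, which is what lets a single dedicated container scale with the herd instead of requiring an agent inside every container.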

Interested in learning more about optimizing your container environments and how you can use container log monitoring to do it? Check out the infographic below!

[Infographic: Logentries container monitoring, 2015]

