

The Importance of Monitoring Containers
By Kevin Goldberg

With the rise of Docker, Kubernetes, and other container technologies, the growth of microservices has skyrocketed among dev teams looking to innovate on a faster release cycle. This has enabled teams to finally realize their DevOps goals of shipping and iterating quickly in a continuous delivery model. It's no surprise that containers are growing in popularity: they are extremely easy to spin up and tear down. That convenience, however, comes with a less obvious cost.
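
The original post doesn't include code, but as a rough illustration of how lightweight that lifecycle is, here is a minimal sketch using the Docker SDK for Python (an assumed tool, not one the article mentions) that starts and removes a container in a handful of lines:

```python
# Minimal sketch (not from the original post): spinning a container up and
# tearing it down with the Docker SDK for Python (`pip install docker`).
import docker

client = docker.from_env()

# Spin up a throwaway nginx container in detached mode.
container = client.containers.run("nginx:alpine", detach=True, name="demo-nginx")
print(f"Started {container.name} ({container.short_id})")

# ...and tear it down just as quickly.
container.stop()
container.remove()
print("Container stopped and removed")
```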

Without the right foresight, DevOps and IT teams can lose a great deal of visibility into these containers, resulting in operational blind spots and even more haystacks in which to find the proverbial performance-issue needle.
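
Closing those blind spots starts with basic per-container telemetry. As a hedged sketch (again assuming the Docker SDK for Python rather than anything the article prescribes), the snippet below pulls a one-shot CPU and memory snapshot for every running container on a host:

```python
# Hedged sketch of baseline container visibility: per-container CPU and memory
# figures read from the Docker Engine stats API via the Python SDK.
import docker

client = docker.from_env()

for container in client.containers.list():   # running containers only
    stats = container.stats(stream=False)     # one-shot stats snapshot

    mem_usage = stats["memory_stats"].get("usage", 0)
    mem_limit = stats["memory_stats"].get("limit", 1)
    cpu_total_ns = stats["cpu_stats"]["cpu_usage"]["total_usage"]

    print(
        f"{container.name:20s} "
        f"mem {mem_usage / mem_limit:6.1%} of limit, "
        f"cumulative cpu {cpu_total_ns} ns"
    )
```

In practice this is the kind of data an APM agent gathers and correlates automatically; the point is simply that container-level metrics have to be collected somewhere, or the blind spots remain.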

If your team is looking to containers and microservices as an operational change in how you ship your product, you can't afford bugs or software issues that affect performance, end-user experience, or, ultimately, your bottom line.

Ed Moyle, Director of Emerging Business & Technology at ISACA, said it best in his blog: "Consider what happens to these issues when containers enter into the mix. Not only are all the VM issues still there, but they're now potentially compounded. Inventories that were already difficult to keep current because of VM sprawl might now have to accommodate containers, too. For example, any given VM could contain potentially dozens of individual containers. Issues arising from unexpected migration of VM images might be made significantly worse when the containers running on them can be relocated with a few keystrokes."
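
Keeping that inventory current is largely a tooling problem. One possible approach, sketched below with placeholder host names and the Docker SDK for Python (neither of which comes from Moyle's post), is to poll each host's Docker endpoint and record what is running where:

```python
# Hypothetical sketch of keeping a container inventory current across several
# Docker hosts. Host names and URLs are placeholders; TLS setup is omitted.
import docker

HOSTS = {
    "vm-prod-01": "tcp://vm-prod-01:2376",
    "vm-prod-02": "tcp://vm-prod-02:2376",
}

inventory = {}
for host_name, url in HOSTS.items():
    client = docker.DockerClient(base_url=url)
    inventory[host_name] = [
        {"name": c.name, "image": c.image.tags, "status": c.status}
        for c in client.containers.list(all=True)  # include stopped containers
    ]

for host_name, containers in inventory.items():
    print(f"{host_name}: {len(containers)} containers")
```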

Earlier this year, AppDynamics unveiled Microservices iQ to address the visibility issues confronting DevOps teams today.

Infographic – Container Monitoring 101 from AppDynamics

With Microservices iQ, DevOps teams can:

  • Automatically discover the entry and exit points of your microservice as service endpoints for focused microservices monitoring

  • Track the key performance indicators of your microservice without worrying about the entire distributed business transaction that uses it (a generic sketch of this idea follows the list)

  • Drill down and isolate the root cause of any performance issues affecting the microservice
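
AppDynamics doesn't expose its instrumentation in the form shown here, so treat the following as a purely hypothetical sketch of what tracking KPIs per service endpoint means in practice: a decorator that records call counts, errors, and latency for a single endpoint, independent of the wider distributed transaction it participates in.

```python
# Hypothetical, generic sketch of per-endpoint KPI tracking. This is NOT the
# AppDynamics API; it only illustrates the concept described above.
import time
from collections import defaultdict
from functools import wraps

# KPIs keyed by endpoint name: call count, error count, cumulative latency.
kpis = defaultdict(lambda: {"calls": 0, "errors": 0, "total_ms": 0.0})

def service_endpoint(name):
    """Mark a function as a service endpoint and record its KPIs."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            except Exception:
                kpis[name]["errors"] += 1
                raise
            finally:
                kpis[name]["calls"] += 1
                kpis[name]["total_ms"] += (time.perf_counter() - start) * 1000
        return wrapper
    return decorator

@service_endpoint("checkout")
def checkout(order):
    # ... business logic lives here ...
    return {"order": order, "status": "ok"}

checkout({"id": 42})
print(dict(kpis))
```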

Interested in learning more? Check out our free ebook, The Importance of Monitoring Containers.

The post The Importance of Monitoring Containers [Infographic] appeared first on Application Performance Monitoring Blog | AppDynamics.

