Docker and DevOps: Why it Matters
By Dustin Whittle

Unless you have been living under a rock for the last year, you have probably heard about Docker. Docker describes itself as an open platform for distributed applications for developers and sysadmins. That sounds great, but why does it matter?

Wait, virtualization isn’t new!?

Virtualization technology has existed for more than a decade, and in the early days it revolutionized how the world managed server environments. The virtualization layer later became the basis for the modern cloud, with virtual servers being created and scaled on demand. Traditionally, virtualization software was expensive and came with a lot of overhead. Linux control groups (cgroups) have existed for a while to limit and account for resource usage, and Linux containers (LXC) combined them with kernel namespaces to provide isolated environments for applications. Vagrant + LXC + Chef/Puppet/Ansible have been a powerful combination for a while, so what does Docker bring to the table?

Virtualization isn’t new and neither are containers, so let’s discuss what makes Docker special.
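To make that concrete, here is a minimal Dockerfile, a sketch rather than a production recipe (the base image, file names, and port are illustrative). The point is that the entire environment is declared in one portable file instead of being spread across a VM image and a stack of provisioning scripts:

```
# Start from a known base image pulled from DockerHub.
FROM ubuntu:14.04

# Bake the application's dependencies into the image.
RUN apt-get update && apt-get install -y python

# Copy the application code into the image.
ADD app.py /opt/app/app.py

# Document the port the application listens on.
EXPOSE 8000

# The command the container runs at start.
CMD ["python", "/opt/app/app.py"]
```

Building this with `docker build -t myapp .` produces an image that runs the same way on any host with a Docker daemon; the image itself, not a pile of provisioning scripts, becomes the unit of deployment.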

The cloud made it easy to host complex and distributed applications, and therein lies the problem. Ten years ago, applications looked straightforward and had few complex dependencies.

The reality is that application complexity has evolved significantly in the last five years, and even simple services are now extremely complex.

It has become a best practice to build large distributed applications from independent microservices. The model has shifted from monolithic, to distributed, to containerized microservices. Every microservice has its own dependencies and unique deployment scenarios, which makes managing operations even more difficult. The default is no longer a single stack deployed to a single server, but rather loosely coupled components deployed to many servers.
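As a rough sketch of what that looks like in practice, each microservice and its backing store run as separate containers, wired together at run time (the image and container names here are illustrative):

```
# Run a datastore as its own container, using an official image.
docker run -d --name orders-db redis

# Run the service container and link it to its datastore, so the
# service discovers its dependency without hard-coded addresses.
docker run -d --name orders-api --link orders-db:redis -p 8080:8080 myorg/orders-api
```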

Docker makes it easy to deploy any application on any platform.

The need for Docker

It is not just that applications are more complex; more importantly, the development model and culture have evolved. When I started engineering, developers had dedicated servers with their own builds if they were lucky. More often than not, your team shared a development server, as it was too expensive and cumbersome for every developer to have their own environment. Times have changed significantly: the cultural norm nowadays is for every developer to be able to run complex applications from a virtual machine on their laptop (or a dev server in the cloud). With the cheap, on-demand resources provided by cloud environments, it is common to have many application environments: dev, QA, and production. Docker containers are isolated but share the same kernel and core operating system files, which makes them lightweight and extremely fast. Using Docker to manage containers makes it easier to build distributed systems, allowing applications to run on a single machine or across many virtual machines with ease.
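A hypothetical workflow illustrates the point: the same image is built once and run unchanged in every environment, with only the configuration differing (the image name and APP_ENV variable are placeholders):

```
# Build the image once, on a laptop or a CI server.
docker build -t myorg/webapp:1.0 .

# Run the identical image on each environment's hosts; only the
# configuration passed in at run time differs.
docker run -d -p 8080:8080 -e APP_ENV=dev        myorg/webapp:1.0   # dev laptop
docker run -d -p 8080:8080 -e APP_ENV=qa         myorg/webapp:1.0   # QA host
docker run -d -p 8080:8080 -e APP_ENV=production myorg/webapp:1.0   # production host
```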

Docker is both a great software project (the Docker engine) and a vibrant community (DockerHub). Docker combines a portable, lightweight application runtime and packaging tool with a cloud service for sharing applications and automating workflows.
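The sharing workflow is what ties the two together. A minimal sketch, assuming a DockerHub account ("myorg" is a placeholder):

```
# Pull a public image from DockerHub.
docker pull ubuntu:14.04

# Authenticate, then push a locally built image so teammates and
# other environments can pull the exact same artifact.
docker login
docker push myorg/webapp:1.0
```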

Docker makes it easy for developers and operations to collaborate

DevOps professionals appreciate Docker because it makes it extremely easy to manage the deployment of complex distributed applications. Docker also manages to unify the DevOps community, whether you are a Chef fan, Puppet enthusiast, or Ansible aficionado. Docker is supported by the major cloud platforms, including Amazon Web Services and Microsoft Azure, which means it's easy to deploy to any platform. Ultimately, Docker provides flexibility and portability so applications can run on-premises on bare metal or in a public or private cloud.

DockerHub provides official language stacks and repos

The Docker community is built on a mature open source mentality, with the corporate backing required to offer a polished experience. A vibrant and growing ecosystem has come together on DockerHub, including official language stacks for the common application platforms, so the community has officially supported, high-quality Docker repos, which means wider and better support.

Because Docker is so well supported, you see many companies supporting Docker as a platform and offering official repos on DockerHub.
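In practice, this means an application image can start from an official language stack instead of a hand-built runtime. A minimal sketch (the Python version and file names are illustrative):

```
# Base the application on the official Python stack from DockerHub.
FROM python:2.7

# Install the app and its declared dependencies.
ADD . /opt/app
WORKDIR /opt/app
RUN pip install -r requirements.txt

CMD ["python", "app.py"]
```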

Want to take AppDynamics for a test drive? It has never been easier: use Docker to deploy a complex distributed application with application performance management built in via the AppDynamics Docker repos.

Find out more and get started with Docker today.

The post Docker and DevOps: Why it Matters, written by Dustin Whittle, appeared first on Application Performance Monitoring Blog from AppDynamics.
