A History of Docker Containers and the Birth of Microservices

How Docker has transformed continuous delivery and how Linux and Windows can further accelerate our digital future

by Scott Willson

From the conception of Docker containers to the unfolding microservices revolution we see today, here is a brief history of what I like to call 'containerology'.

In 2013, we were solidly in the monolithic application era. I had noticed that a growing amount of effort was going into deploying and configuring applications. As applications had grown in complexity and interdependency over the years, the effort to install and configure them had become significant. And the road did not end with a single deployment: the installation and configuration work was repeated over and over again, not only for each software release but for each and every environment an application was promoted to, until it finally landed in production, where the exercise was repeated one last time.

What struck me in 2013 was that these monolithic apps were overwhelmingly being deployed inside virtual machines (VMs). Whether the targeted environment was for Development, QA or Production, VMs were the deployment endpoint that hosted the applications.

Promoting the VM image
At the time, I thought it would save considerable time and effort to promote the VM image directly instead of a myriad of application artifacts. Think of it, I told people: IT personnel need only perform the update and configuration tasks once; once the application proves stable, the VM image can be promoted up the delivery pipeline as a ready-to-run container. Conceptually, all that was needed was for an IT professional to make a few network changes to the VM at each step along the way and then swap out the older versioned VM image for the new one.

It sounded simple enough. As is often the case, reality turned out to be more difficult. The problem was that VM images were too big to be convenient deployment artifacts, and the VMs needed more than simple network changes: infrastructure, security and storage properties also had to be adjusted.

Though using a VM image as a transportable application container wasn't feasible at the time, I fell in love with the idea of being able to promote an immutable package that was tested, verified and warranted, rather than deploying numerous files that required various configuration changes. What I didn't realize at the time was that Linux kernel partitioning would provide the foundation for fulfilling my vision.

Docker is born
The Linux kernel had been undergoing changes along these lines since 2006, and by 2010 the technology had matured enough for a company called dotCloud to begin the work that became the Docker project. Docker wasn't just a framework for spinning Linux kernel partitions, or containers, up and down; the focus was on using these containers as stateless, app-in-a-box hosts for single applications. Docker containers were set to fundamentally change the way applications are architected, developed and delivered.

Docker Inc. was born in 2013, and in 2016 Docker Datacenter and Docker Cloud came online. Docker provides an abstraction layer over Linux containers that guarantees the runtime environment exposed to an application will be identical no matter where the container is hosted, as long as it is running on a Docker host. The Docker image is now the immutable package that can be promoted up through the Continuous Delivery pipeline and can safely enable Continuous Deployment.
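One way to picture that promotion step, purely as a sketch: the Python Docker SDK (docker-py, which the article itself does not mention) can retag the image that passed one stage and push it on to the next. The registry address, repository and tag names below are hypothetical.

```python
# pip install docker
# A minimal sketch of promoting an immutable image between pipeline stages.
# The registry, repository and tag names are hypothetical.
import docker

client = docker.from_env()  # client configured from the environment (DOCKER_HOST, etc.)

def promote(repo: str, from_tag: str, to_tag: str) -> None:
    """Retag the image that passed one stage and push it for the next stage."""
    image = client.images.pull(repo, tag=from_tag)   # e.g. the build that passed QA
    image.tag(repo, tag=to_tag)                      # same bits, new stage label
    client.images.push(repo, tag=to_tag)             # registry now holds the promoted image

# Example: promote the QA-approved build to production.
promote("registry.example.com/myapp", from_tag="qa-1.4.2", to_tag="prod-1.4.2")
```

Because the image bits never change between stages, whatever was tested in QA is exactly what runs in production.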

Figure 1: Container map (Container-Map.jpg)

Since containers are an isolated process space running inside the Linux OS (Figure 1), their "boot time" is measured in seconds, if not milliseconds. Swapping a new image in for an old one happens virtually instantaneously, and Docker images are small enough to live in versioned repositories, which makes rolling back a failed deployment easy and nearly as fast. If an error is detected post-deployment, simply swap the current image out for the previous version.
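To make the rollback idea concrete, here is a minimal sketch along the same lines, again assuming the Python Docker SDK; the container name and image tags are made up for illustration, and error handling is omitted.

```python
# A minimal rollback sketch: swap the running container for the previous image version.
# Names and tags are hypothetical; error handling is omitted for brevity.
import docker

client = docker.from_env()

def rollback(name: str, previous_image: str) -> None:
    """Stop the failed deployment and start a container from the prior image."""
    current = client.containers.get(name)
    current.stop()
    current.remove()
    client.containers.run(previous_image, name=name, detach=True)

# Example: the 1.5.0 deployment misbehaves, so fall back to 1.4.2.
rollback("myapp", "registry.example.com/myapp:prod-1.4.2")
```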

Microsoft joins the 'containerology' party
Microsoft also recognized the benefit of containerology - the architecting, developing, hosting and deploying of containerized applications. In 2015, the company announced that Windows, too, would offer container technology (Figure 2).

Windows containers come in two runtime flavors, Windows Server Core and Nano Server. Windows containers also provide two different types of isolation. A Windows Server Container is like its Linux counterpart in that it is an isolated process space running inside the Windows OS; as on Linux, these containers all share the host's kernel. However, Microsoft also offers a second, more secure type of container called a Hyper-V container. In a Hyper-V container, the kernel the application interacts with is not the host OS's actual kernel but its own virtualized kernel. In other words, Hyper-V containers are completely isolated from one another, even down to the kernel level.

Figure 2: Windows containers and the Docker Engine (Windows-Service-Docker-Engine.jpg)
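As a hedged illustration of how that isolation choice surfaces in practice, the sketch below starts one container of each type with the Python Docker SDK, which accepts an isolation option on Windows hosts. It assumes a Windows Docker host with Hyper-V enabled, and the images shown are only examples.

```python
# Sketch: choosing Windows container isolation with the Python Docker SDK.
# Assumes a Windows Docker host; the images and commands are examples only.
import docker

client = docker.from_env()

# Process isolation: shares the host kernel, like a Linux container.
client.containers.run(
    "mcr.microsoft.com/windows/servercore:ltsc2019",
    "cmd /c echo hello from a Windows Server container",
    isolation="process",
    detach=True,
)

# Hyper-V isolation: the container gets its own kernel inside a lightweight VM.
client.containers.run(
    "mcr.microsoft.com/windows/nanoserver:ltsc2019",
    "cmd /c echo hello from a Hyper-V container",
    isolation="hyperv",
    detach=True,
)
```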

Not only has Microsoft jumped on the container bandwagon, it has also embraced Docker's application-focused model for containers. Microsoft partnered with Docker, and as a result one can run either Linux or Windows containers with Docker. Being able to run applications in either Linux- or Windows-hosted containers gives companies flexibility and reduces the refactoring costs associated with rewriting, tweaking or re-architecting existing applications.

The bold new world that containerology will take us to is that of microservices. In my opinion, microservices (specifically as enabled by Docker) represent the first feasible step toward mechanized or industrialized applications. In the mechanical engineering world, complex systems were built by buying off-the-shelf components and widgets. In contrast, the software world was accustomed to fabricating every part needed to build complex applications.

Microsoft and the Object Management Group attempted to address this problem by defining COM and CORBA, respectively; however, both had their challenges, and neither standard ever fully realized a universal market of reusable components that any developer could assemble to build any application on any platform. I am not going to go into SOA or SOAP in this article, but suffice it to say, the software industry has tried and failed to deliver anything that approaches the standardization and standardized tooling of the manufacturing sector.

How microservices can revolutionize app development
Microservices can change that. Each microservice is a single, focused application that performs a particular function. A Docker container provides an immutable, portable and stateless package for a microservice. Docker container images can be shipped, shared and versioned, and they can serve as the foundation for building new containers. Docker Hub provides ready-to-use images that can be downloaded and assembled into more complex applications.
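A small sketch of that assembly model, assuming the Python Docker SDK: pull a ready-made component from Docker Hub (Redis here, as an example) and wire it to a hypothetical application container over a shared network. The application image, network name and environment variable are invented for illustration.

```python
# Sketch: assembling an application from ready-made Docker Hub images.
# The application image "myorg/orders-service" and the network name are hypothetical.
import docker

client = docker.from_env()

# A shared network so the two containers can reach each other by name.
client.networks.create("orders-net", driver="bridge")

# An off-the-shelf component pulled straight from Docker Hub.
client.containers.run("redis:7", name="orders-cache", network="orders-net", detach=True)

# A hypothetical microservice that uses the cache by its container name.
client.containers.run(
    "myorg/orders-service:1.0",
    name="orders-api",
    network="orders-net",
    environment={"CACHE_URL": "redis://orders-cache:6379"},
    ports={"8080/tcp": 8080},
    detach=True,
)
```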

Need the software equivalent of an actuator, a cog, a wheel or a gear? Today, and going forward, one can download the desired "prefab" components rather than having to build each and every widget, component or interface from scratch. Docker has addressed the security concerns that come with this level of sharing and reuse through Docker Content Trust, which makes it possible to verify the publisher of a Docker image and guarantee the image's contents.
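Content Trust is enforced by the docker command-line client whenever the DOCKER_CONTENT_TRUST environment variable is set, so a minimal sketch only needs to invoke the CLI with that variable; an unsigned image would then be rejected at pull time. The image chosen below is just an example.

```python
# Sketch: pulling an image with Docker Content Trust enabled.
# With DOCKER_CONTENT_TRUST=1 the docker CLI refuses images that are not signed.
import os
import subprocess

env = dict(os.environ, DOCKER_CONTENT_TRUST="1")

# The image name is only an example; an unsigned tag would cause this pull to fail.
subprocess.run(["docker", "pull", "alpine:latest"], env=env, check=True)
```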

We are heading into yet another technology transformation that is both exciting and challenging - it always is. The word 'disruptive' has come into vogue of late, but when hasn't the software industry been disruptive? Look at what the invention of the spreadsheet did to floors of accounting departments.

Word processors, ERP systems, RDBMSs, smartphones, the Internet: the list goes on, and will continue to go on - change is the norm in the world of technology, and especially in software. I share Docker's vision: a world of downloadable, reusable and adaptable components that can be assembled into sophisticated and complex applications. I hope that the need to continually reinvent the wheel will become more the exception than the rule in the future.

