
@DevOpsSummit: Article


How Docker has transformed continuous delivery and how Linux and Windows can further accelerate our digital future

A History of Docker Containers and the Birth of Microservices
by Scott Willson

From the conception of Docker containers to the unfolding microservices revolution we see today, here is a brief history of what I like to call 'containerology'.

In 2013, we were solidly in the monolithic application era. I had noticed that a growing amount of effort was going into deploying and configuring applications. As applications had grown in complexity and interdependency over the years, the effort to install and configure them had become significant. But the road did not end with a single deployment; the installation and configuration work was repeated over and over again, not only for each software release but for each and every environment an application was promoted to, until it finally landed in production, where the exercise was repeated one last time.

What struck me in 2013 was that these monolithic apps were overwhelmingly being deployed inside virtual machines (VMs). Whether the targeted environment was for Development, QA or Production, VMs were the deployment endpoint that hosted the applications.

Promoting the VM image
At that time, I thought it would save considerable time and effort to promote the VM image directly instead of the myriad application artifacts. Think of it, I told people: IT personnel need only perform the update and configuration tasks once; then, after the application proves stable, the VM image can be promoted up the delivery pipeline as a ready-to-run container. Conceptually, all that was needed was for an IT professional to make a few network changes to the VM at each step along the way, then swap out the older versioned VM image for the new one.

It sounded simple enough. As is often the case, reality turned out to be more difficult. The problem was that VM images were too big to be considered conveniently deployable artifacts, and there were more changes needed to the VMs than simple network settings, such as infrastructure, security and storage properties.

Though using a VM image as a transportable application container wasn't feasible at the time, I fell in love with the idea of being able to promote an immutable package that was tested, verified and warranted, rather than deploying numerous files that required various configuration changes. What I didn't realize at the time was that Linux kernel partitioning would provide the foundation for fulfilling my vision.

Docker is born
The Linux kernel had been undergoing changes along these lines since 2006, and by 2013 the technology had matured enough for dotCloud, a company founded in 2010, to release the Docker project as open source. Docker wasn't just a framework for spinning Linux kernel partitions, or containers, up and down; it focused on using these containers as stateless, app-in-a-box hosts for single applications. Docker containers were set to fundamentally change the way applications are architected, developed and delivered.

In 2013, Docker, Inc. was born, and in 2016 Docker Datacenter and Docker Cloud came online. Docker provides an abstraction layer over Linux containers that guarantees the runtime environment exposed to an application will be identical no matter where the container is hosted, as long as it is running in a Docker host. The Docker image is now the immutable package that can be promoted up the Continuous Delivery pipeline and can safely enable Continuous Deployment.


Since containers are isolated process spaces running inside the Linux OS (Figure 1), their "boot time" is measured in seconds, if not milliseconds. Swapping new images in for old ones happens virtually instantaneously, and Docker images are small enough to reside in versioned repositories, meaning rolling back a failed deployment is easy and nearly instantaneous. If an error is detected post-deployment, simply swap out the current image for the previous version.
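As a minimal sketch of that promote-and-roll-back flow (the image names, tags and registry here are hypothetical), the same immutable image moves through the pipeline by retagging, and rollback is simply re-running the previous tag:

```shell
# Build once; the resulting image is the immutable artifact.
docker build -t myapp:1.4.0 .

# Promote the same image through environments by retagging and pushing.
docker tag myapp:1.4.0 registry.example.com/myapp:1.4.0
docker push registry.example.com/myapp:1.4.0

# Deploy the new version, freeing the container name first.
docker stop myapp && docker rm myapp
docker run -d --name myapp registry.example.com/myapp:1.4.0

# Roll back a failed deployment by swapping in the previous image tag.
docker stop myapp && docker rm myapp
docker run -d --name myapp registry.example.com/myapp:1.3.2
```

Because the image itself never changes between environments, the only variable at each step is where it runs.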

Microsoft joins the 'containerology' party
Microsoft also recognized the benefit of containerology - the architecting, developing, hosting, and deploying of containerized applications. In 2015, they announced that Windows, too, would offer container technology (Figure 2).

Windows containers come in two runtime flavors: Windows Server Core and Nano Server. Windows containers also provide two different types of isolation. A Windows Server Container is like its Linux counterpart, in that it is an isolated process space running inside the Windows OS. Additionally, like Linux, all containers share the same kernel. However, Microsoft offers a second, more secure version of a container called a Hyper-V container. In a Hyper-V container, the kernel the application interacts with is a virtual kernel, not the OS's actual kernel. In other words, Hyper-V containers are completely isolated from one another, even down to the kernel level.
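On a Windows host, the isolation type is chosen at run time with Docker's `--isolation` flag; a brief sketch (the image tag is illustrative and varies by Windows version):

```shell
# Windows Server Container: isolated process space sharing the host kernel.
docker run --isolation=process mcr.microsoft.com/windows/servercore:ltsc2022 cmd /c echo hello

# Hyper-V container: the same image, but run against a virtualized kernel.
docker run --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2022 cmd /c echo hello
```

The same image can be launched either way; the flag only changes the isolation boundary, not the application.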


Not only has Microsoft jumped on the container bandwagon, it also shares Docker's application-focused vision for containers. Microsoft partnered with Docker, and as a result one can run Linux or Windows containers with Docker. Being able to run applications in either Linux- or Windows-hosted containers gives companies flexibility and reduces the refactoring costs associated with rewriting, tweaking or re-architecting existing applications.

The bold new world that containerology will take us to is that of microservices. In my opinion, microservices (specifically as enabled by Docker) represent the first feasible step toward mechanized or industrialized applications. In the mechanical engineering world, complex systems are built by buying off-the-shelf components and widgets. In contrast, the software world has been accustomed to fabricating every part needed to build complex applications.

Microsoft and the Object Management Group attempted to address this problem by defining COM and CORBA, respectively. However, both had their challenges, and neither standard ever fully realized a universal market of reusable components that any developer could assemble to build any application on any platform. I am not going to go into SOA or SOAP in this article, but suffice it to say, the software industry has tried and failed to deliver anything approaching the standardization, and standardized tooling, of the manufacturing sector.

How microservices can revolutionize app development
Microservices can change that. Each microservice is a single, focused application that performs a particular function. A Docker container provides an immutable, portable and stateless package for a microservice. Docker container images can be shipped, shared and versioned, as well as used as a foundation for building new containers. Docker Hub provides ready-to-use images that can be downloaded and assembled into more complex applications.
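Building on a published base image rather than from scratch looks like this in practice; a minimal Dockerfile sketch (the base image is real, but the service file and name are hypothetical):

```dockerfile
# Start from a ready-made image on Docker Hub instead of fabricating the runtime.
FROM python:3.12-slim

# Add only the single-purpose service this container exists to run.
WORKDIR /app
COPY orders_service.py .

# One container, one focused microservice.
CMD ["python", "orders_service.py"]
```

Each `FROM` line is exactly the kind of component reuse the manufacturing analogy describes: the base image is the prefabricated part.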

Need the software equivalent of an actuator, a cog, a wheel or a gear? Today, and going forward, one can download the desired "prefab" components rather than building each and every widget, component or interface from scratch. Docker has addressed the security concerns that come with this level of sharing and reuse with Content Trust, which makes it possible to verify the publisher of Docker images and guarantees the contents of an image.
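Content Trust is enabled per client with an environment variable; once set, image signatures are verified on pull (a brief sketch):

```shell
# With Content Trust enabled, Docker verifies publisher signatures before use.
export DOCKER_CONTENT_TRUST=1

# This pull succeeds only if the tag has been signed by its publisher;
# unsigned tags are rejected instead of silently trusted.
docker pull alpine:latest
```

This turns "download a prefab component" from an act of blind trust into a verifiable transaction.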

We are heading into yet another technology transformation, which is both exciting and challenging - it always is. The word 'disruptive' has come into vogue of late, but when hasn't the software industry been disruptive? Look at what the invention of the spreadsheet did to floors of accounting departments.

Word processors, ERP systems, RDBMSs, smart phones, the Internet. The list goes on, and will continue to go on - change is the norm in the world of technology and especially software. I share Docker's vision, a world of downloadable, reusable and adaptable components that can be used to assemble sophisticated or complex applications. I hope that the need to continually reinvent the wheel will become more of an exception than the rule in the future.

More Stories By Automic Blog

Automic, a leader in business automation, helps enterprises drive competitive advantage by automating their IT factory - from on-premise to the Cloud, Big Data and the Internet of Things.

With offices across North America, Europe and Asia-Pacific, Automic powers over 2,600 customers including Bosch, PSA, BT, Carphone Warehouse, Deutsche Post, Societe Generale, TUI and Swisscom. The company is privately held by EQT. More information can be found at www.automic.com.
