Five Evolving Docker Technologies

With each iteration, existing technologies are bending and adapting according to the new landscape

Since Docker was announced roughly 18 months ago, there has been an explosion of new technology in this space. The list is long and growing, so here I will outline five evolving Docker-related technologies that are shaping the direction of cloud technology.

1. Kubernetes
At DockerCon this summer, Google's VP of Infrastructure, Eric Brewer, announced Kubernetes, which provides a way to orchestrate a collection of Docker containers across a cluster of machines. It is essentially a scheduler: it handles running your containers and keeping them up, even when machines are lost.

We have seen rapid adoption of, and interest in, Kubernetes that goes beyond the buzz around it being a Google cloud technology. There is a need for orchestration at the Operations level that Kubernetes addresses well. A manifest describing a collection of Docker images can be created and pushed into the cluster, which then automatically deploys and horizontally scales those containers. Kubernetes also provides a way to define a "service," which other applications running in the cluster can consume.
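As a minimal sketch of what such a manifest can look like, here is a deployment paired with a service, written against the current Kubernetes API rather than the one that existed at launch; all names, labels, and image references are placeholders invented for illustration:

```yaml
# Deployment: run and horizontally scale three replicas of a container image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: example/web-frontend:1.0  # placeholder image
        ports:
        - containerPort: 8080
---
# Service: a stable address that other applications in the cluster can consume.
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  selector:
    app: web-frontend
  ports:
  - port: 80
    targetPort: 8080
```

Pushing this into the cluster (for example, with kubectl apply -f web-frontend.yaml) asks Kubernetes to keep three replicas running and gives the rest of the cluster one stable service address in front of them.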

2. Docker Pods
Hand-in-hand with Kubernetes, Eric Brewer also talked about containers and introduced the concept of "pods," a key concept within Kubernetes. He said, "At Google we rarely deploy a single container." Instead, they group containers together. For instance, an application process often has several side-car processes handling logging and other tasks outside the concern of the application itself.

One issue he noted with Docker containers is the need for constant mapping of internal and external ports: between what the process inside the Docker container sees and what the external world sees. This is an additional layer of complexity that needs to be managed, stored, and queried, even between the containers of a pod that has been deployed as a single unit. Therefore, at Google, they ensure that every pod of containers has its own IP address. This means the same ports can be used inside and outside of a container, and they can be baked in at design or build time, doing away with the extra layer of complexity of managing port mappings. Now, to find pods running a particular service, you only need the list of IP addresses of those pods.
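Here is a minimal sketch of a two-container pod, again in current Kubernetes syntax with placeholder names: the containers share the pod's single IP address and network namespace, so the side-car reaches the application over localhost, and external callers use the very port the process binds to:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging
spec:
  containers:
  - name: app
    image: example/app:1.0  # placeholder image
    ports:
    - containerPort: 8080   # same port inside and outside the pod
  # Side-car container: shares the pod's network namespace, so it can
  # reach the application at localhost:8080 without any port mapping.
  - name: log-shipper
    image: example/log-shipper:1.0  # placeholder image
```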

Google Compute Engine is currently the only cloud infrastructure service that facilitates assigning an IP subnet to a virtual machine - and hence an IP to each Docker pod within it.

3. Flannel
CoreOS, who are actively involved with Kubernetes, have attempted to solve this problem with something they call Flannel (previously named Rudder). Flannel provides an overlay network on top of the existing network, which allows an IP subnet to be assigned to each machine. There is a performance cost to doing this, but they hope it will be engineered away as Flannel evolves.
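To give a flavor of how this works, Flannel takes its network configuration from etcd, conventionally under the key /coreos.com/network/config; the subnet values in this sketch are placeholders:

```json
{
  "Network": "10.1.0.0/16",
  "SubnetLen": 24,
  "Backend": { "Type": "udp" }
}
```

With this in place, the Flannel daemon on each host leases its own /24 out of the 10.1.0.0/16 overlay, so every machine gets a subnet to hand out to its containers, and traffic between machines is tunneled over the backend (UDP encapsulation here, which is where the performance cost comes from).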

4. Docker for Windows
Recently, Microsoft joined the Docker bandwagon, saying they intend to build a containerization solution for Windows and provide a Docker-compatible API on top of it. Although Docker images are unlikely to ever be portable between Linux and Windows containers, it does mean that the tooling being built above the Docker API layer will be usable across both operating systems.

Large enterprises are heavily invested in Windows, so this announcement is a major win for the IT departments wanting to steer their ships in the direction of Docker adoption.

5. Cloud Foundry Diego
A major focus at ActiveState is the Cloud Foundry open-source PaaS project; we were among the first adopters of the project when VMware announced it. While our solution, Stackato, already uses Docker under the hood, we think the Docker integrations arising from the Diego project are pulling the rest of the ecosystem in the right direction.

Like Kubernetes, Diego is a scheduler, but unlike Kubernetes it is agnostic about the runtime environment and is built to work primarily with Cloud Foundry. Docker is only one integration for Diego, but it is the sole focus of the developers working on Diego at IBM. We are also seeing Windows .NET integration with Diego from folks such as Uhuru, and when Docker for Windows becomes a reality, I am sure we will see these technologies coalesce.
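To give a sense of what the Docker integration looks like from a developer's seat, a Diego-backed Cloud Foundry can push a Docker image directly, along the lines of cf push my-app --docker-image myorg/my-image. The flag is based on later Cloud Foundry CLI releases, and the application and image names are placeholders, so treat this as an illustration rather than a reference.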

The PaaS integrations with Docker are very important for the future of Docker. Developers have been a major driving force in the rise of Docker: the ability to easily express the environment their applications should run in, typically in a Dockerfile like the sketch below, has been very powerful for them. Knowing that anyone can pick up those Dockerized applications and immediately run them has also filled a void that no previous solution addressed. PaaS is an extension of that with an even greater focus on developers, whereas we are seeing other technologies move Docker away from developers and more into the realm of Operations teams.
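A minimal, hypothetical Dockerfile makes the point; the base image, file names, and port are invented for illustration:

```dockerfile
# Start from a public base image that provides the language runtime.
FROM python:3.12-slim

# Copy the application in and install its dependencies.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# The port the application listens on can be fixed at build time,
# as discussed in the pods section above.
EXPOSE 8080
CMD ["python", "app.py"]
```

Anyone with Docker installed can build and run this with no further knowledge of the application's environment.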

Conclusion
This post is only the tip of the iceberg for the technologies in this ecosystem. New solutions are being announced almost every week and, with each iteration, existing technologies are bending and adapting to the new landscape. The Docker ecosystem is evolving, and evolving rapidly.


More Stories By Phil Whelan

Phil Whelan has been a software developer at ActiveState since early 2012 and has been involved in many layers of the Stackato product, from the JavaScript-based web console right through to the Cloud Controller API. He has been the lead developer on kato, the command-line tool for administering Stackato. His current role at ActiveState is Technology Evangelist.
