
What Is Containerization and Will It Spell the End for Virtualization?
By Ron Gidron

Containerization is disrupting the cloud, but what will be the implications for virtual machines?

Containerization is popularly viewed as the ‘virtualization of virtualization' or ‘next-generation virtualization.' However, the underlying concept existed long before modern container technology like Docker and Linux Containers; similar partitioning technology was built into the mainframe systems that pervaded the IT landscape for decades.

However, the implication, as the name suggests, is that modern software containerization will have the same seismic impact on the IT industry as shipping containers have had on maritime freight transport. Indeed, it is now quite common for major online companies to run their entire infrastructure on containers.

The reason behind the analogy, which Docker even alludes to in its logo, is that in the same way shipping containers allowed very different products to be kept together while being transported, software containers allow all the different elements of an application to be bundled together and moved from one machine to another with comparative ease. In essence, applications become extremely lightweight and portable.

Containerization Fundamentals
Containerization enables you to run an application in an isolated environment by storing all of its files, libraries and other dependencies together as one package - a container. The container plugs directly into the operating system kernel, so you do not need to create a new virtual machine every time you want a new instance of the application, or for any other application that uses the same operating system. Keeping the entire application together means different services can efficiently share the operating system kernel.
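
To make the kernel-sharing point concrete, here is a minimal sketch using the Docker SDK for Python (the SDK, the image tags and a locally running Docker daemon are assumptions of the example, not anything prescribed above). It runs the same command in containers created from two different images, and both report the same host kernel:

# Minimal sketch, assuming docker-py is installed and a local Docker daemon is running.
import docker

client = docker.from_env()

# Start containers from two different images and ask each for its kernel version.
# Both print the host's kernel, because containers share the host kernel rather
# than booting a guest operating system of their own.
for image in ["alpine:3.19", "debian:bookworm-slim"]:
    kernel = client.containers.run(image, ["uname", "-r"], remove=True)
    print(f"{image} -> {kernel.decode().strip()}")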

The rise to prominence of containerization is largely attributable to the development of the open source software Docker. While other container technologies were available previously, Docker brought a consistent packaging format and workflow, and it now supports both Linux and Windows hosts. The Docker engine, for example, enables a containerized application to run on any machine where the engine is installed. With the application bundled in isolation, it can easily be moved to a different machine as required.
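
As a rough illustration of that portability, the sketch below (again assuming the Docker SDK for Python; the image name and base image are invented for the example) builds an image with its dependencies baked in and then runs it. The same image could be shipped to and run on any other machine with Docker installed:

# Sketch only: assumes docker-py and a local Docker daemon; names are illustrative.
import io
import docker

client = docker.from_env()

# The Dockerfile bakes the interpreter, a third-party library and the code into
# one image, so the application and its dependencies always travel together.
dockerfile_text = "\n".join([
    "FROM python:3.12-slim",
    "RUN pip install --no-cache-dir requests",
    'CMD ["python", "-c", "import requests; print(\'app and dependencies bundled as one unit\')"]',
])
dockerfile = io.BytesIO(dockerfile_text.encode("utf-8"))

client.images.build(fileobj=dockerfile, tag="portable-demo:latest", rm=True)
output = client.containers.run("portable-demo:latest", remove=True)
print(output.decode().strip())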

How Is It Different from Virtual Machines?
In contrast to containerization, a virtual machine requires you to run both a hypervisor and a guest operating system. So every time you want a new instance of your application, you have to boot a full operating system along with it. This can create a number of challenges in terms of:

  • Portability - it becomes difficult to move the application from one virtual machine or host to another
  • Speed - boot and setup times can be significant
  • Resources - virtual machines consume significantly more disk space and memory than containers

Evidently it is possible to support far more containers than virtual machines on the same infrastructure. By enveloping each application in its own operating system, a virtual machine carries far more overhead.
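
The speed point is easy to see in practice. The rough timing sketch below (same assumed Docker SDK for Python setup, with the image pulled up front so the measurement excludes the download) creates, runs and removes a container, which typically completes in a second or two, whereas booting a fresh virtual machine takes considerably longer:

# Rough, illustrative timing only; assumes docker-py and a local Docker daemon.
import time
import docker

client = docker.from_env()
client.images.pull("alpine", tag="3.19")   # pull up front so the timing excludes the download

start = time.perf_counter()
client.containers.run("alpine:3.19", ["true"], remove=True)   # create, start, run, remove
elapsed = time.perf_counter() - start
print(f"container created, run and removed in {elapsed:.2f} seconds")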

Tech sprawl also becomes an issue for virtual machines: if the operating system is modified or updated in one place, the same change has to be applied manually everywhere else. With containers, where the host operating system is shared and images are rebuilt from a common base, this problem largely disappears, which again saves time and money.

Is This the End of Virtualization?
No.
Virtual machines are heavily integrated into the landscape of many major enterprises, and the idea of simply dumping existing applications into containers is impractical: the application architecture needs to be redesigned, or containerization simply won't work.

However, there are several advantages to virtual machines, and these go beyond the necessary support of legacy applications. Large-scale organizations are extremely heterogeneous, with technology sprawled across a number of different operating systems, each carrying its own modifications. Furthermore, virtual machines still have a role in enabling large-scale data center infrastructure, as they abstract and partition bare-metal servers.

Virtualization, and specifically the hypervisor, provides effective partitioning between the different operating systems on a server. With containerization, by contrast, every container on a given server must share the same operating system kernel; newer companies could standardize on a single operating system from the outset, but larger, established enterprises rarely have that luxury.

Ultimately, containerization is very much here to stay and offers a range of benefits to adopters. The speed, portability and flexibility it brings will reduce the prominence of virtual machines, but they will still have a role in the future of IT, particularly within large or technically diverse organizations.

More Stories By Automic Blog

Automic, a leader in business automation, helps enterprises drive competitive advantage by automating their IT factory - from on-premise to the Cloud, Big Data and the Internet of Things.

With offices across North America, Europe and Asia-Pacific, Automic powers over 2,600 customers including Bosch, PSA, BT, Carphone Warehouse, Deutsche Post, Societe Generale, TUI and Swisscom. The company is privately held by EQT. More information can be found at www.automic.com.
