What Is Containerization and Will It Spell the End for Virtualization?
By Ron Gidron

Containerization is disrupting the cloud, but what will be the implications for virtual machines?

Containerization is popularly viewed as the 'virtualization of virtualization' or 'next-generation virtualization.' Container-like technology, however, existed long before Docker and Linux Containers (LXC) made it fashionable: similar isolation mechanisms were built into the mainframe systems that pervaded the IT landscape for decades.

The implication of the name, though, is that modern software containerization will have the same seismic impact on the IT industry as shipping containers had on maritime freight transport. Indeed, it is now quite common for major online companies to run their entire infrastructure on containers.

The analogy, which Docker's logo even alludes to, is that just as shipping containers allowed different goods to be packed and transported together as a single standard unit, software containers allow all the different elements of an application to be bundled together and moved from one machine to another with comparative ease. In essence, applications become extremely lightweight and portable.

Containerization Fundamentals
Containerization enables you to run an application in an isolated environment by packaging all of its files, libraries and other dependencies together as one unit - a container. The container runs directly on the host operating system kernel and does not require you to create a new virtual machine every time you want a new instance of the application, or to run any other application that uses the same OS. Keeping the entire application together means different services can efficiently share the operating system kernel.
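To make that concrete, here is a minimal sketch using the Docker SDK for Python (the docker package) - an assumption for illustration, not something the article prescribes. It requires a running local Docker engine, and the Alpine image is purely illustrative. Because the container shares the host kernel, the command inside it reports the host's kernel version:

    # Minimal sketch, assuming a local Docker engine and `pip install docker`.
    import docker

    client = docker.from_env()  # connect to the local Docker daemon

    # A container is just an isolated process plus its packaged filesystem.
    # It runs directly on the host kernel - no guest OS is booted - so
    # `uname -r` inside the container reports the host's kernel version.
    output = client.containers.run("alpine:3.19", ["uname", "-r"], remove=True)
    print(output.decode().strip())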

The rise to prominence of containerization is largely attributable to the development of the open source software Docker. While other container technologies were available before it, Docker brought a consistent packaging and distribution workflow to Linux, Unix and Windows environments. The Docker engine, for example, enables a packaged application to run on any machine where the engine is installed. With the application bundled in isolation, it can easily be moved to a different machine or operating system as required.
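As a hedged illustration of that portability (again with the Docker SDK for Python), the sketch below builds an application and its dependencies into a single image. The build path, the "myapp:1.0" tag and the presence of a Dockerfile are all assumptions for illustration:

    # Sketch only: assumes a Dockerfile in the current directory and a
    # running Docker engine; "myapp:1.0" is a placeholder tag.
    import docker

    client = docker.from_env()

    # Bundle the application, its libraries and its runtime into one image.
    image, build_logs = client.images.build(path=".", tag="myapp:1.0")

    # Pushed to a registry, the same image runs unchanged on any host with
    # a Docker engine - the portability described above.
    print(client.containers.run("myapp:1.0", remove=True).decode().strip())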

How Is It Different from Virtual Machines?
In contrast to containerization, a virtual machine requires you to run both a hypervisor and a guest operating system. So every new instance of your application means provisioning and booting a full guest operating system. This creates a number of challenges in terms of:

  • Portability - it becomes difficult to move the application to another virtual machine
  • Speed - boot and setup times can be significant (see the timing sketch after this list)
  • Resources - virtual machines take up significantly more space than containers
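The speed point can be illustrated with a rough, hedged timing sketch in Python; the image name is a placeholder, and a first run would also include image-pull time:

    # Rough timing sketch; assumes the Alpine image is already pulled locally.
    import time
    import docker

    client = docker.from_env()

    start = time.perf_counter()
    client.containers.run("alpine:3.19", ["true"], remove=True)  # start, run, exit
    elapsed = time.perf_counter() - start

    # Typically well under a second, because no guest OS has to boot -
    # compare that with the minutes a fresh VM can take.
    print(f"container started, ran and exited in {elapsed:.2f}s")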

Evidently, the same infrastructure can support far more containers than virtual machines. By enveloping each application in its own full operating system, a virtual machine carries far more overhead.

Tech sprawl also becomes an issue for virtual machines, because if the OS is modified or updated in one place, the same change has to be applied manually everywhere else. With containerization the problem largely disappears - the change is made once, in the base image - which again saves time and money.
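A hedged sketch of that "patch once" workflow with the Docker SDK for Python; the image names and tags are placeholders, and a Dockerfile built on the pulled base image is assumed:

    # Sketch of the container answer to OS sprawl: patch once, in the base image.
    # Assumes a Dockerfile whose FROM line uses alpine; tags are placeholders.
    import docker

    client = docker.from_env()

    client.images.pull("alpine", tag="3.19")  # fetch the patched base layer once
    # pull=True makes the build re-check the base image for updates, so every
    # container started from "myapp:1.1" carries the fix - no per-VM patching.
    image, _ = client.images.build(path=".", tag="myapp:1.1", pull=True)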

Is This the End of Virtualization?
No.
Virtual machines are deeply embedded in the landscape of many major enterprises, and simply dumping existing applications into containers is impractical. The application architecture needs to be redesigned, or containerization simply won't work.

Moreover, virtual machines have several advantages that go beyond the necessary support of legacy applications. Large-scale organizations are extremely heterogeneous, running a sprawl of technology across a number of different operating systems with different modifications. Virtual machines also still have a role in enabling large-scale data center infrastructure, as they partition and abstract bare-metal servers.

Virtualization, and specifically the hypervisor, provides effective partitioning between the different operating systems on a server. With containerization, by contrast, every container on a host must share the same OS kernel. Newer companies could design around that constraint early on; larger established enterprises do not have that privilege.

Ultimately, containerization is very much here to stay and offers a range of benefits to adopters. The increases in speed, portability and flexibility it brings will reduce the prominence of virtual machines. Virtual machines will still have a role in the future of IT, however, particularly within large or technically diverse organizations.

