
Why Platform-as-a-Service Is the Operating System of the Cloud

When I took my operating systems fundamentals course in college, I was taught that an operating system provides very specific capabilities that give users access to compute resources for building and running applications. Over time, as networking capabilities and bandwidth increased, the notion of a set of modules that interface between the user and the hardware has expanded to incorporate concepts of distributed operating systems, network operating systems and autonomous systems. While the notion of the operating system may have changed, certain attributes have remained constant (sketched as a single interface just after this list):

  • scheduling processes
  • coordinating interaction among processes, interprocess communication and synchronization
  • managing system resources
  • enforcing access control and protection
  • maintaining system integrity and performing error recovery
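
Taken together, those attributes amount to a small contract. Here is a minimal sketch of that contract as a single interface; the class and method names are invented for illustration and do not come from any real kernel or platform API:

```python
from typing import Protocol


class OperatingSystem(Protocol):
    """Hypothetical interface mirroring the five constant OS attributes above."""

    def schedule(self, process_id: str) -> None:
        """Decide when and where a process runs."""

    def send(self, sender: str, receiver: str, message: bytes) -> None:
        """Coordinate interaction among processes (IPC and synchronization)."""

    def allocate(self, process_id: str, resource: str, amount: int) -> bool:
        """Manage system resources; return False if the request cannot be met."""

    def authorize(self, principal: str, resource: str, action: str) -> bool:
        """Enforce access control and protection."""

    def recover(self, process_id: str, error: Exception) -> None:
        """Maintain system integrity and perform error recovery."""
```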

When looking at container-based PaaS offerings, such as Cloud Foundry and Heroku, one can see many of these functions in operation across a set of virtual compute resources. If we consider that Infrastructure-as-a-Service (IaaS), bare metal and virtualized hardware, inclusive of traditional operating systems such as Windows and Linux, all represent the modern-day equivalent of a compute node in a cloud universe, then we can take the leap that the PaaS provides the interface between the user and that node. Moreover, we can include in this list of resources the services that support application operations, such as identity management, data management, messaging and monitoring.
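
A rough sketch of that "node plus services" universe as a platform might model it — the class names, fields and endpoints below are assumptions made for illustration, not Cloud Foundry or Heroku data structures:

```python
from dataclasses import dataclass, field


@dataclass
class ComputeNode:
    """A modern-day 'compute node': an IaaS VM, bare metal, or a traditional OS host."""
    name: str
    kind: str          # e.g. "iaas-vm", "bare-metal", "windows", "linux"
    cpus: int
    memory_gb: int


@dataclass
class PlatformInventory:
    """The resource universe a PaaS mediates between the user and the nodes."""
    nodes: list[ComputeNode] = field(default_factory=list)
    services: dict[str, str] = field(default_factory=dict)  # service name -> endpoint


inventory = PlatformInventory(
    nodes=[ComputeNode("node-1", "iaas-vm", cpus=4, memory_gb=16)],
    services={
        "identity": "https://idm.internal.example",        # identity management
        "data": "postgres://db.internal.example",          # data management
        "messaging": "amqp://mq.internal.example",         # messaging
        "monitoring": "https://metrics.internal.example",  # monitoring
    },
)
```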

If we explore the role of the PaaS in cloud application development and delivery, we can see that the platform overlays a set of cloud nodes and services, exposing their resources to the application runtime environment. The PaaS then handles application lifecycle management, inclusive of execution, process allocation and resource scheduling, access control and protection fostering multitenancy, and error recovery. Hence, a container-based PaaS meets the criteria to be considered an operating system.
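
To make that claim concrete, here is a toy lifecycle loop showing scheduling, tenant labeling and error recovery in miniature. It is a simplified sketch under invented names, not how any real PaaS is implemented:

```python
import random
from dataclasses import dataclass


@dataclass
class AppInstance:
    app: str
    tenant: str        # access control / multitenancy boundary
    node: str
    healthy: bool = True


def schedule(app: str, tenant: str, nodes: list[str], count: int) -> list[AppInstance]:
    """Process allocation and resource scheduling: spread instances across nodes."""
    return [AppInstance(app, tenant, nodes[i % len(nodes)]) for i in range(count)]


def reconcile(instances: list[AppInstance], nodes: list[str]) -> None:
    """Error recovery: replace unhealthy instances to maintain the desired state."""
    for inst in instances:
        if not inst.healthy:
            inst.node = random.choice(nodes)  # reschedule on another node
            inst.healthy = True               # restarted instance


nodes = ["node-1", "node-2", "node-3"]
instances = schedule("orders-api", tenant="team-a", nodes=nodes, count=3)
instances[1].healthy = False   # simulate a crashed instance
reconcile(instances, nodes)
assert all(i.healthy for i in instances)
```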

Perhaps even more interesting are the comparisons that can be drawn between application development for a single operating system and for a cloud operating system. One constant that remains as we look across time at operating systems is the increasing level of abstraction. Each level of abstraction has afforded us the ability to focus less on resource limitations, but even a cluster of virtualized compute resources still has capacity limitations.

With the emergence of a cloud operating system, we have the opportunity to finally escape those limitations by spanning and aggregating clusters of virtualized compute resources. Moreover, provisioning of these resources is delegated to services that are designed optimally for the physical resources they manage. For example, the cloud operating system/PaaS can communicate with the cloud management systems to identify where there are resources that can satisfy the need for very high-speed (I/O operations per second) storage. Each cloud management system can then list its resources and corresponding metrics and availability. The PaaS can then request the one that best meets the criteria for the application. That is, the PaaS is the one environment that knows enough about the performance of the application to schedule the resources and bind them to the application.
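
A sketch of that matchmaking step, assuming each cloud management system can report offerings with an IOPS metric and an availability flag (the data shapes and function names are invented for illustration):

```python
from dataclasses import dataclass


@dataclass
class StorageOffering:
    cloud: str
    volume: str
    iops: int
    available: bool


def list_offerings() -> list[StorageOffering]:
    """Stand-in for querying each cloud management system for resources and metrics."""
    return [
        StorageOffering("cloud-a", "ssd-pool-1", iops=50_000, available=True),
        StorageOffering("cloud-b", "nvme-pool-7", iops=200_000, available=True),
        StorageOffering("cloud-b", "hdd-pool-2", iops=2_000, available=True),
    ]


def bind_best_storage(required_iops: int) -> StorageOffering:
    """The PaaS picks the offering that best satisfies the application's need."""
    candidates = [o for o in list_offerings() if o.available and o.iops >= required_iops]
    if not candidates:
        raise RuntimeError("no cloud can satisfy the storage requirement")
    return min(candidates, key=lambda o: o.iops)  # smallest offering that still fits


print(bind_best_storage(required_iops=40_000))  # -> cloud-a / ssd-pool-1
```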

With these types of abilities, we can start our designs for cloud applications from a perspective of plenty instead of managing to constraints. We can start to specify service levels within our applications, which can then be interpreted by the PaaS and turned into bound resources during execution. We gain an appropriate division of work across an application execution supply chain. The bare metal provides maximum movement of bytes to the physical device. The hypervisor divides those resources into dynamically allocated blocks. The virtualization clusters allow those blocks to be moved around to maximize utilization of resources. And the PaaS can communicate across clusters to select the best set of available resources to ensure optimal execution of the application for which it is responsible.
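
For instance, an application might declare its service levels in a deployment descriptor and leave the binding to the platform. The manifest keys and inventory shape below are hypothetical, though they follow the same declarative spirit as Cloud Foundry and Heroku manifests:

```python
# Hypothetical manifest: the application states service levels, not specific resources.
manifest = {
    "app": "orders-api",
    "service_levels": {"storage_iops": 40_000, "min_instances": 3},
}

# Invented inventory a platform might see across its clusters: (cluster, free_slots, iops).
clusters = [("east", 4, 50_000), ("west", 4, 200_000), ("north", 8, 2_000)]


def bind(manifest: dict, clusters: list[tuple[str, int, int]]) -> dict:
    """Interpret declared service levels and bind the best available resources."""
    levels = manifest["service_levels"]
    fit = [c for c in clusters
           if c[1] >= levels["min_instances"] and c[2] >= levels["storage_iops"]]
    if not fit:
        raise RuntimeError("no cluster satisfies the declared service levels")
    name, slots, iops = min(fit, key=lambda c: c[2])  # smallest cluster that still fits
    return {"cluster": name, "instances": levels["min_instances"], "iops": iops}


print(bind(manifest, clusters))  # -> {'cluster': 'east', 'instances': 3, 'iops': 50000}
```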


About JP Morgenthal

JP Morgenthal is a veteran IT solutions executive and Distinguished Engineer with CSC. He has been delivering IT services to business leaders for the past 30 years and is a recognized thought leader in applying emerging technology for business growth and innovation. JP's strengths center on transformation and modernization leveraging next-generation platforms and technologies. He has held technical executive roles in multiple businesses, including CTO, Chief Architect and Founder/CEO. Areas of expertise for JP include strategy, architecture, application development, infrastructure and operations, cloud computing, DevOps, and integration. JP is a published author of four trade books, his most recent being “Cloud Computing: Assessing the Risks”. JP holds both a Master's and a Bachelor's of Science in Computer Science from Hofstra University.
