PaaS is the Operating System By @JPMorgenthal | @DevOpsSummit [#DevOps]

In Cloud Foundry and Heroku, one can see many of these functions in operation across a set of virtual compute resources

Why Platform-as-a-Service Is the Operating System of the Cloud

When I took my operating systems fundamentals course in college, I was taught that an operating system provides very specific capabilities that give users access to compute resources for building and running applications. Over time, as networking capabilities and bandwidth increased, the notion of a set of modules that interface between the user and the hardware has expanded to incorporate concepts such as distributed operating systems, network operating systems and autonomous systems. While the notion of the operating system may have changed, certain attributes have remained constant:

  • scheduling processes
  • coordinating interaction among processes, interprocess communication and synchronization
  • managing system resources
  • enforcing access control and protection
  • maintaining system integrity and performing error recovery

When looking at container-based PaaS offerings, such as Cloud Foundry and Heroku, one can see many of these functions in operation across a set of virtual compute resources. If we consider that Infrastructure-as-a-Service (IaaS), bare metal and virtualized hardware, inclusive of traditional operating systems such as Windows and Linux, all represent the modern-day equivalent of a compute node in a cloud universe, then we can take the leap that the PaaS provides the interface between the user and that node. Moreover, we can include in this list of resources the services that support application operations, such as identity management, data management, messaging and monitoring.
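
To make that interface concrete, here is a minimal Python sketch of the kind of application descriptor a developer hands to a container-based PaaS: intent (runtime, instance count, backing services such as identity, messaging and monitoring) rather than specific machines. The names and structures below are invented for illustration; they are not the actual Cloud Foundry or Heroku APIs.

```python
# Illustrative only: a hypothetical application descriptor showing how a
# container-based PaaS sits between the developer and the underlying
# compute nodes and backing services. All names here are invented for the
# sketch; they are not Cloud Foundry or Heroku APIs.
from dataclasses import dataclass, field
from typing import List


@dataclass
class BackingService:
    name: str   # e.g. "identity", "messaging", "monitoring"
    plan: str   # service tier requested from the platform


@dataclass
class AppDescriptor:
    name: str
    runtime: str                  # language/buildpack the platform resolves
    instances: int                # desired process count, not specific VMs
    services: List[BackingService] = field(default_factory=list)


def deploy(app: AppDescriptor) -> None:
    """Stand-in for a 'push' command: the developer states intent, and the
    platform maps it onto whatever compute nodes it manages."""
    print(f"Deploying {app.name} ({app.runtime}) x{app.instances} instance(s)")
    for svc in app.services:
        print(f"  binding backing service: {svc.name} [{svc.plan}]")


if __name__ == "__main__":
    web = AppDescriptor(
        name="orders-web",
        runtime="python",
        instances=3,
        services=[BackingService("identity", "standard"),
                  BackingService("messaging", "small"),
                  BackingService("monitoring", "basic")],
    )
    deploy(web)
```

The point of the sketch is the division of labor: the developer declares what the application needs, and the platform decides which compute nodes and service instances satisfy it.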

If we explore the role of the PaaS in cloud application development and delivery, we can see that the platform overlays a set of cloud nodes and services, exposing their resources to the application runtime environment. The PaaS then handles application lifecycle management inclusive of execution, process allocation and resource scheduling, access control and protection to support multitenancy, and error recovery. Hence, a container-based PaaS meets the criteria to be considered an operating system.
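
As one illustration of the error-recovery attribute, the toy loop below keeps an application's actual instance count converged on its desired count, which is the spirit of what a PaaS health manager does. It is a sketch under invented names, not how Cloud Foundry or Heroku are implemented internally.

```python
# Illustrative only: a toy "health manager" loop in the spirit of PaaS error
# recovery -- converge the actual number of running app instances on the
# desired number. The running_instances() function simulates failures.
import random


def running_instances(app: str) -> int:
    """Stand-in for querying the platform; here we just simulate state."""
    return random.randint(0, 3)


def reconcile(app: str, desired: int) -> None:
    actual = running_instances(app)
    if actual < desired:
        # The platform, not the operator, schedules replacement processes.
        print(f"{app}: {actual}/{desired} running, starting {desired - actual} instance(s)")
    elif actual > desired:
        print(f"{app}: {actual}/{desired} running, stopping {actual - desired} instance(s)")
    else:
        print(f"{app}: healthy ({actual}/{desired})")


if __name__ == "__main__":
    for _ in range(3):   # a real platform would run this loop continuously
        reconcile("orders-web", desired=3)
```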

Perhaps even more interesting are the comparisons that can be drawn between application development for a single operating system and for a cloud operating system. One constant that remains as we look across time at operating systems is the increasing level of abstraction. Each level of abstraction has afforded us the ability to focus less on resource limitations, but even a cluster of virtualized compute resources still has capacity limitations.

With the emergence of a cloud operating system, we have the opportunity to finally escape those limitations by spanning and aggregating clusters of virtualized compute resources. Moreover, provisioning of these resources is delegated to services that are designed optimally for the physical resources they manage. For example, the cloud operating system/PaaS can communicate with the cloud management systems to identify where there are resources that can satisfy the need for very high-speed (I/O operations per second) storage. Each cloud management system can then list its resources and the corresponding metrics and availability. The PaaS can then request the one that best meets the criteria for the application. That is, the PaaS is the one environment that knows enough about the performance of the application to schedule the resources and bind them to the application.
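
A rough sketch of that brokering step might look like the following: each cloud management system advertises its storage offerings and metrics, and the PaaS selects the best-fitting offering that still meets the application's IOPS requirement. The catalog data and field names are invented for this sketch.

```python
# Illustrative only: how a PaaS might ask several cloud management systems
# for storage offerings and pick one that satisfies an application's need
# for high-IOPS storage. Catalogs and field names are hypothetical.
from typing import Dict, List, Optional, Tuple

# Each cloud management system advertises its storage resources and metrics.
CATALOGS: Dict[str, List[Dict]] = {
    "cloud-a": [{"tier": "standard-ssd", "iops": 10_000, "available": True}],
    "cloud-b": [{"tier": "nvme",         "iops": 80_000, "available": True}],
    "cloud-c": [{"tier": "archive-hdd",  "iops": 500,    "available": True}],
}


def best_storage(min_iops: int) -> Optional[Tuple[str, Dict]]:
    """Return the (cloud, offering) with the lowest IOPS that still meets the
    requirement -- i.e. the best fit rather than simply the fastest."""
    candidates = [
        (cloud, offer)
        for cloud, offers in CATALOGS.items()
        for offer in offers
        if offer["available"] and offer["iops"] >= min_iops
    ]
    return min(candidates, key=lambda c: c[1]["iops"]) if candidates else None


if __name__ == "__main__":
    match = best_storage(min_iops=50_000)
    print("bind to:", match)   # the PaaS would then bind this volume to the app
```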

With these types of abilities, we can start our designs for cloud applications from a perspective of plenty instead of managing to constraints. We can start to specify service levels within our applications, which can then be interpreted by the PaaS and turned into bound resources during execution. We have an appropriate division of work across an application execution supply chain. The bare metal provides maximum movement of bytes to the physical device. The hypervisor divides those resources into dynamically allocated blocks. The virtualization clusters allow those blocks to be moved around to maximize utilization of resources. And the PaaS can communicate across clusters to select the best set of available resources to ensure optimal execution of the application for which it is responsible.
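
To illustrate what specifying service levels within an application could look like, here is a hypothetical mapping from a declared service level to the resources the platform would try to bind at execution time. The profile names and values are assumptions made for the sketch, not part of any real PaaS API.

```python
# Illustrative only: a hypothetical translation of declared service levels
# into concrete resource requests at deploy time. The SLO names and the
# mapping table are invented for this sketch.
SLO_PROFILES = {
    # declared service level -> resources the platform will try to bind
    "latency-sensitive": {"cpu_shares": 4, "memory_mb": 2048, "storage": "nvme"},
    "throughput":        {"cpu_shares": 8, "memory_mb": 4096, "storage": "standard-ssd"},
    "batch":             {"cpu_shares": 2, "memory_mb": 1024, "storage": "archive-hdd"},
}


def plan_resources(app_name: str, declared_slo: str) -> dict:
    """The developer states the 'what' (a service level); the platform owns
    the 'how' (which resources get bound during execution)."""
    profile = SLO_PROFILES.get(declared_slo)
    if profile is None:
        raise ValueError(f"unknown service level: {declared_slo}")
    return {"app": app_name, "slo": declared_slo, **profile}


if __name__ == "__main__":
    print(plan_resources("orders-web", "latency-sensitive"))
```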

Read the original blog entry...

More Stories By JP Morgenthal

JP Morgenthal is a veteran IT solutions executive and Distinguished Engineer with CSC. He has been delivering IT services to business leaders for the past 30 years and is a recognized thought leader in applying emerging technology for business growth and innovation. JP's strengths center around transformation and modernization leveraging next-generation platforms and technologies. He has held technical executive roles in multiple businesses, including CTO, Chief Architect and Founder/CEO. Areas of expertise for JP include strategy, architecture, application development, infrastructure and operations, cloud computing, DevOps, and integration. JP is a published author of four trade publications, the most recent being “Cloud Computing: Assessing the Risks”. JP holds both a Master's and a Bachelor's of Science in Computer Science from Hofstra University.
