PaaS is the Operating System By @JPMorgenthal | @DevOpsSummit [#DevOps]


Why Platform-as-a-Service is the Operating System Of The Cloud

When I took my operating systems fundamentals course in college, I was taught that an operating system provides very specific capabilities: it gives users access to compute resources for building and running applications. Over time, as networking capabilities and bandwidth increased, the notion of a set of modules interfacing between the user and the hardware expanded to incorporate distributed operating systems, network operating systems and autonomous systems. While the notion of the operating system may have changed, certain attributes have remained constant:

  • scheduling processes
  • coordinating interaction among processes, interprocess communication and synchronization
  • managing system resources
  • enforcing access control and protection
  • maintaining system integrity and performing error recovery
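As a toy illustration of the first of these responsibilities, process scheduling, consider a minimal round-robin loop. This is a sketch only; the `Process` class, the one-unit quantum, and the work counter are invented for illustration, and no real kernel scheduler is this simple:

```python
from collections import deque

class Process:
    def __init__(self, pid, steps):
        self.pid = pid
        self.steps = steps  # units of work remaining

def round_robin(processes, quantum=1):
    """Toy scheduler: each runnable process gets a fixed time slice in turn."""
    ready = deque(processes)
    order = []
    while ready:
        p = ready.popleft()
        p.steps -= quantum        # "run" the process for one quantum
        order.append(p.pid)
        if p.steps > 0:
            ready.append(p)       # not finished: back of the ready queue
    return order

print(round_robin([Process("a", 2), Process("b", 1)]))
# → ['a', 'b', 'a']
```

The other responsibilities (interprocess communication, access control, error recovery) layer onto the same loop in a real system; the point here is only that "scheduling" is a concrete, mechanical function an operating system, or a PaaS, must perform.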

When looking at container-based PaaS offerings, such as Cloud Foundry and Heroku, one can see many of these functions in operation across a set of virtual compute resources. If we consider that Infrastructure-as-a-Service (IaaS), bare metal, and virtualized hardware, inclusive of traditional operating systems such as Windows and Linux, all represent the modern-day equivalent of a compute node in a cloud universe, then we can take the leap that the PaaS provides the interface between the user and that node. Moreover, we can include in this list of resources the services that support application operations, such as identity management, data management, messaging and monitoring.

If we explore the role of the PaaS in cloud application development and delivery, we can see that the platform overlays a set of cloud nodes and services, exposing their resources to the application runtime environment. The PaaS then handles application lifecycle management, inclusive of execution, process allocation and resource scheduling, access control and protection (fostering multitenancy), and error recovery. Hence, a container-based PaaS meets the criteria to be considered an operating system.

Perhaps even more interesting are the comparisons that can be drawn between application development for a single operating system and for a cloud operating system. One constant across the history of operating systems is an increasing level of abstraction. Each level of abstraction has allowed us to focus less on resource limitations, but even a cluster of virtualized compute resources still has capacity limitations.

With the emergence of a cloud operating system, we have the opportunity to finally escape those limitations by spanning and aggregating clusters of virtualized compute resources. Moreover, provisioning of these resources is delegated to services designed optimally for the physical resources they manage. For example, the cloud operating system/PaaS can query the cloud management systems to identify where there are resources that can satisfy a need for very high-speed storage (measured in I/O operations per second, or IOPS). Each cloud management system can then list its resources with their corresponding metrics and availability, and the PaaS can request the one that best meets the criteria for the application. That is, the PaaS is the one environment that knows enough about the performance of the application to schedule the resources and bind them to the application.
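The brokering step described above can be sketched in a few lines. Everything here is hypothetical: the dictionary fields, tier names, and prices are invented, and a real cloud management system would expose this inventory through its own API rather than a list of dicts:

```python
def pick_storage(providers, min_iops):
    """Toy broker: choose the cheapest available volume type that meets
    an IOPS floor, as a PaaS might when binding storage to an application."""
    candidates = [p for p in providers
                  if p["available"] and p["iops"] >= min_iops]
    # Among qualifying volume types, prefer the lowest cost; None if no match.
    return min(candidates, key=lambda p: p["cost_per_gb"], default=None)

# Illustrative inventory as a cloud management system might report it.
inventory = [
    {"name": "std-hdd", "iops": 500,   "cost_per_gb": 0.04, "available": True},
    {"name": "ssd",     "iops": 16000, "cost_per_gb": 0.17, "available": True},
    {"name": "nvme",    "iops": 64000, "cost_per_gb": 0.30, "available": False},
]
print(pick_storage(inventory, min_iops=10000)["name"])  # → ssd
```

The interesting design point is the `default=None` case: when no node can satisfy the requirement, the PaaS must either degrade the service level or fail the placement, which is exactly the kind of decision only a layer with application-level knowledge can make.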

With these types of abilities, we can start our designs for cloud applications from a perspective of plenty instead of managing to constraints. We can specify service levels within our applications, which the PaaS can then interpret and turn into bound resources at execution time. We get an appropriate division of work across an application-execution supply chain: the bare metal provides maximum movement of bytes to the physical device; the hypervisor divides those resources into dynamically allocated blocks; the virtualization clusters allow those blocks to be moved around to maximize utilization of resources; and the PaaS communicates across clusters to select the best set of available resources to ensure optimal execution of the application for which it is responsible.
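A service-level declaration of the kind suggested above might look like the following sketch. Every field name, threshold, and tier here is invented for illustration; no actual PaaS manifest format is being described:

```python
# Hypothetical service-level declaration shipped with an application;
# the PaaS would interpret it at deployment time.
slo = {
    "latency_ms_p99": 50,      # target 99th-percentile latency
    "storage_min_iops": 10000, # storage performance floor
    "min_instances": 3,        # availability requirement
}

def bind_resources(slo):
    """Map declared service levels onto concrete (made-up) resource bindings,
    the way a PaaS might when scheduling the application."""
    tier = "ssd" if slo["storage_min_iops"] > 5000 else "hdd"
    return {"storage_tier": tier, "instances": slo["min_instances"]}

print(bind_resources(slo))
# → {'storage_tier': 'ssd', 'instances': 3}
```

The application declares *what* it needs; the translation into *which* resources happens below it, which is the division of labor the paragraph above describes.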


More Stories By JP Morgenthal

JP Morgenthal is a veteran IT solutions executive and Distinguished Engineer with CSC. He has been delivering IT services to business leaders for the past 30 years and is a recognized thought leader in applying emerging technology for business growth and innovation. JP's strengths center on transformation and modernization leveraging next-generation platforms and technologies. He has held technical executive roles in multiple businesses, including CTO, Chief Architect, and Founder/CEO. His areas of expertise include strategy, architecture, application development, infrastructure and operations, cloud computing, DevOps, and integration. JP is a published author of four trade publications, the most recent being "Cloud Computing: Assessing the Risks". JP holds both a Master of Science and a Bachelor of Science in Computer Science from Hofstra University.
