


Cloud Computing Reference Architectures, Models and Frameworks

Making sense of the many reference architectures, models and frameworks for cloud

Reference ‘Things’
A Reference Architecture (RA) “should” provide a blueprint or template architecture that can be reused by others wishing to adopt a similar solution. A Reference Model (RM) should explain the concepts and relationships that underlie the RA. At Everware-CBDI we then use the term Reference Framework (RF) as a container for both. Reference architectures, models and frameworks help to make sense of Cloud Computing.

Unfortunately, such formality is absent from the various reference architectures, models and frameworks that have been published for Cloud Computing; these frequently mix elements of architecture and model, and then apply one of the terms seemingly at random.

In developing the CBDI Service Architecture and Engineering (SAE) Reference Framework in support of SOA (Service Oriented Architecture), Everware-CBDI separated out the various parts as shown in Figure 1. We developed a detailed RA for SOA and an RM for SOA, with particular emphasis on a rich and detailed Meta Model for SOA and a Maturity Model for SOA. We also developed a detailed process and task decomposition for SOA activities.

But the RF is easily generalized, as shown in Figure 1: the various elements can be applied to any domain once the explicit references to, for example, "SOA Meta Model" or "SOA Standards" are removed.


Figure 1 – Generalized Reference Framework

The benefit of this approach is that elements of the framework can then be mapped to each other in different ways to support alternative perspectives, such as different usage or adoption scenarios, or the viewpoint of an individual participant or organization. By contrast, most of the Cloud Computing reference architectures, models and frameworks proposed today apply to only a single perspective.

Current Cloud Computing Reference Architecture, Models and Frameworks
As discussed, there are many frameworks and models to choose from. It is not my intention to detail and critique them all individually. Credit must go to NIST, which has already done much of that in its 2010 Survey of Cloud Architecture Reference Models.

We may classify Cloud reference models as one of two styles.

Analysis of these shows that they typically contain:

  • Roles – that would be better placed in the Organization section of an RF

  • Activities – which would be part of the Process Model

  • Layered Architecture – which would be part of the Reference Architecture

Used this way, the generalized RF in Figure 1 becomes a useful tool for analyzing proposed Cloud Computing reference architectures, models and frameworks – for better understanding what they actually contain – and a basis for developing an enterprise-specific framework.

Everware-CBDI recommends that it is more useful to model the capabilities required for Cloud Computing than to list them all as activities, as listing them as activities may imply processes and tasks, which is not always the case. Across the industry, capability modeling is rapidly becoming the de facto standard approach to business design, and it seems highly appropriate to use the technique in planning Cloud frameworks. Using this technique, capabilities are separated from the processes that use them and from the roles that possess them, and can consequently be mapped in different ways to show different scenarios. The capability model belongs in the RM section of the RF and should be used extensively in disciplines such as roadmap planning, process improvement, technology planning and service management.
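The separation described above can be sketched in code. This is a minimal illustration only – the capability, process and role names are invented for the example, not taken from any published reference model – but it shows how capabilities defined once can be mapped independently to processes and to roles, and then combined to answer scenario questions:

```python
# Illustrative sketch: names are invented examples, not from any
# published Cloud reference model.

# Capabilities are defined once, independent of any process or role...
capabilities = {"Provisioning", "Metering", "SLA Management"}

# ...then mapped separately to the processes that use them...
process_uses = {
    "Deploy Service": ["Provisioning"],
    "Bill Consumer": ["Metering"],
}

# ...and to the roles that possess them.
role_has = {
    "Cloud Provider": ["Provisioning", "Metering", "SLA Management"],
    "Cloud Consumer": ["SLA Management"],
}

def roles_supporting(process):
    """Roles that possess every capability a given process uses."""
    needed = set(process_uses[process])
    return [role for role, caps in role_has.items() if needed <= set(caps)]

print(roles_supporting("Deploy Service"))  # ['Cloud Provider']
```

Because the three mappings are kept separate, the same capability model can be re-mapped for a different adoption scenario without touching the capability definitions themselves.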

A useful source of capabilities is provided by the Cloud Computing Reference Model/Architecture in The Role of Enterprise Architecture in Federal Cloud Computing published by the American Council for Technology.

Figure 2 takes the various elements from these different architectures, models and frameworks and places them into a generic RF. The intention here is not to reinvent the wheel, but to consolidate the elements contained across the different reference architectures, models and frameworks for Cloud Computing into a unified framework.


Figure 2 – Cloud Computing Elements Placed in a Generic Reference Framework

Elements highlighted in green are usually covered by existing Cloud Computing reference architectures, models and frameworks. These focus primarily on the operational state of the life cycle, and on the implementation and deployment architectures.

Once the various elements have been placed into their appropriate part of the RF, you can start mapping them to suit different scenarios. For example, activities in the process decomposition can be mapped against roles – either organizational roles or people roles – perhaps using a RAEW (Responsibility, Authority, Expertise, Work) matrix, as shown in Table 1.


Table 1 – Mapping Process Activities to Roles

At a high level, Table 1 may appear a bit obvious, but at a more detailed level it helps in understanding where and by whom these activities will be performed in your organization, or how your organization might differ in specific scenarios from the reference architectures mentioned so far.
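An activity-to-role matrix of the kind shown in Table 1 is straightforward to represent and query programmatically. The activities, roles and RAEW codes below are hypothetical placeholders, not the contents of Table 1:

```python
# Hypothetical RAEW matrix (Responsibility, Authority, Expertise, Work).
# Activities, roles and codes are invented for illustration.
raew = {
    ("Select Cloud Provider", "Enterprise Architect"): "A",
    ("Select Cloud Provider", "Procurement"):          "R",
    ("Monitor SLA", "Cloud Provider"):                 "W",
    ("Monitor SLA", "Cloud Consumer"):                 "R",
}

def roles_for(activity):
    """Return {role: RAEW code} for every role recorded against an activity."""
    return {role: code for (act, role), code in raew.items() if act == activity}

print(roles_for("Monitor SLA"))
# {'Cloud Provider': 'W', 'Cloud Consumer': 'R'}
```

Filtering the matrix per activity (or per role) is exactly the kind of re-mapping the RF approach is intended to support at the detailed level.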

In some scenarios the cloud consumer, not just the provider, may be required to perform certain cloud management activities. Whilst the cloud provider may be required to supply the necessary management capabilities, both the consumer and the provider perform management activities.

Hence mapping capabilities to roles in Table 2 is another useful exercise, for understanding who provides and who uses the various capabilities. Whilst the NIST, IBM and other reference architectures do show this, as mentioned earlier their view is focused primarily on the operational state, and on the mapping of capabilities required in the operational infrastructure. As Table 2 shows, the span of responsibility and capability is very much wider than the operational perspective!


Table 2 – Mapping Capabilities to Roles

The value of a reference framework is that it provides consistency in aspects such as terminology, deliverables and governance across an organization, helping you to understand the totality of the task and to manage adoption proactively rather than allowing uncontrolled experimentation. This permits sensible reuse across the whole spectrum of capabilities and avoids the need for each enterprise to reinvent the wheel and repeat mistakes that could have been avoided.

I recommend that organizations:

  • Build their own reference framework. This should be applicable to their

    1. Current and planned maturity states for cloud computing. See the Everware-CBDI research note on Cloud Computing Maturity Model

    2. Primary role(s) – as provider, consumer, broker, etc.

  • Expect to customize public domain reference framework materials to suit their specific purpose

  • Consider how they will address those sections not covered by public domain reference framework materials (the pink areas in Figure 2)

  • Consider how the capability requirements change when moving from a purely cloud consumer perspective – which may be the case when there is just tactical use of public cloud – to more enterprise-wide usage involving private cloud, and perhaps integration of public, private and non-cloud applications (see Service Portfolio Planning and Architecture for Cloud Services for an enterprise perspective)


More Stories By Lawrence Wilkes

Lawrence Wilkes is a consultant, author and researcher developing best practices in Service Oriented Architecture (SOA), Enterprise Architecture (EA), Application Modernization (AM), and Cloud Computing. As well as consulting to clients, Lawrence has developed education and certification programmes used by organizations and individuals the world over, as well as a knowledgebase of best practices licenced by major corporations. See the education and products pages on http://www.everware-cbdi.com
