
Cloud Computing Reference Architectures, Models and Frameworks

Making sense of the many reference architectures, models and frameworks for cloud

Reference ‘Things’
A Reference Architecture (RA) “should” provide a blueprint or template architecture that can be reused by others wishing to adopt a similar solution. A Reference Model (RM) should explain the concepts and relationships that underlie the RA. At Everware-CBDI we then use the term Reference Framework (RF) as a container for both. Reference architectures, models and frameworks help to make sense of Cloud Computing.

Unfortunately, such formality is absent from the various reference architectures, models and frameworks that have been published for Cloud Computing; these frequently mix elements of architecture and model, and then apply one of the terms seemingly at random.

In developing the CBDI Service Architecture and Engineering Reference Framework (SAE) in support of SOA (Service Oriented Architecture), Everware-CBDI separated out the various parts shown in Figure 1. We developed a detailed RA for SOA and an RM for SOA, with particular emphasis on a rich and detailed Meta Model for SOA and a Maturity Model for SOA. We also developed a detailed process and task decomposition for SOA activities.

But the RF is easily generalized, as shown in Figure 1: the various elements can be applied to any domain once explicit references to, for example, the "SOA Meta Model" or "SOA Standards" are removed.

Figure 1 – Generalized Reference Framework

The benefit of this approach is that elements of the framework can then be mapped to each other in different ways to support alternative perspectives, such as different usage or adoption scenarios, or the viewpoint of an individual participant or organization. In contrast, most of the Cloud Computing reference architectures, models and frameworks proposed today apply to only a single perspective.

Current Cloud Computing Reference Architecture, Models and Frameworks
As discussed, there are many frameworks and models to choose from. It is not my intention to detail and critique them all individually; credit must go to NIST, who have already done much of that in their 2010 Survey of Cloud Architecture Reference Models.

We may classify Cloud reference models as one of two styles.

Analysis of these shows that they typically contain:

  • Roles – that would be better placed in the Organization section of an RF

  • Activities – which would be part of the Process Model

  • Layered Architecture – which would be part of the Reference Architecture

Used this way, the generalized RF in Figure 1 becomes a useful tool for analyzing proposed Cloud Computing reference architectures, models and frameworks, helping to understand better what they actually contain, and a basis for developing an enterprise-specific framework.
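The classification exercise above can be sketched in a few lines of code. This is a hypothetical illustration only: the mapping follows the three bullets above (Roles, Activities, Layered Architecture), and any element names beyond those are invented.

```python
# Hypothetical sketch: placing elements found in a published cloud reference
# model into the sections of a generalized Reference Framework (RF).
# The three known placements follow the bullet list above; anything else
# is flagged for manual review.

RF_SECTION_FOR_ELEMENT = {
    "Roles": "Organization",
    "Activities": "Process Model",
    "Layered Architecture": "Reference Architecture",
}

def classify(elements):
    """Group a published model's elements by the RF section they belong in."""
    placed = {}
    for element in elements:
        section = RF_SECTION_FOR_ELEMENT.get(element, "Unclassified")
        placed.setdefault(section, []).append(element)
    return placed

result = classify(["Roles", "Activities", "Layered Architecture", "Pricing"])
print(result)
```

Anything that lands in "Unclassified" (here the invented "Pricing" element) is a candidate for extending the enterprise-specific framework.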

Everware-CBDI recommend that it is more useful to model the capabilities required for Cloud Computing than to list them all as activities, as listing them as activities may imply processes and tasks, which is not always the case. Across the industry, capability modeling is rapidly becoming the de facto standard approach to business design, and it seems highly appropriate to use the technique in planning Cloud frameworks. Using this technique, capabilities are separated from the processes that use them and from the roles that possess them, and can consequently be mapped in different ways to show different scenarios. The capability model belongs in the RM section of the RF and should be used extensively in disciplines such as roadmap planning, process improvement, technology planning and service management.
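The separation described above can be made concrete with a small sketch. All capability, process and role names below are invented for illustration; the point is only that the capability list is held apart from the scenario-specific mappings onto processes and roles.

```python
# Hypothetical capability-modeling sketch: capabilities are listed once,
# separately from the processes that use them and the roles that possess
# them. Different scenarios then re-map the same capabilities differently.

capabilities = {"Provisioning", "Metering", "SLA Management"}

# Scenario-specific mappings, kept apart from the capability list itself.
process_uses = {
    "Deploy Service": ["Provisioning"],
    "Bill Consumer": ["Metering"],
}
role_possesses = {
    "Cloud Provider": ["Provisioning", "Metering", "SLA Management"],
    "Cloud Consumer": ["SLA Management"],
}

def roles_supporting(process):
    """Which roles possess every capability a given process uses?"""
    needed = set(process_uses[process])
    return [role for role, caps in role_possesses.items()
            if needed <= set(caps)]

print(roles_supporting("Deploy Service"))  # ['Cloud Provider']
```

Because the mappings are separate, a different adoption scenario (say, a consumer taking on provisioning of a private cloud) changes only `role_possesses`, not the capability model itself.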

A useful source of capabilities is provided by the Cloud Computing Reference Model/Architecture in The Role of Enterprise Architecture in Federal Cloud Computing published by the American Council for Technology.

Figure 2 takes the various elements from these different architectures, models and frameworks and places them into a generic RF. The intention here is not to reinvent the wheel, but to consolidate the elements contained across the different reference architectures, models and frameworks for Cloud Computing into a unified framework.

Figure 2 – Cloud Computing Elements Placed in a Generic Reference Framework

Elements highlighted in green are usually covered by existing Cloud Computing reference architectures, models and frameworks. These focus primarily on the operational state of the life cycle, and the implementation and deployment architectures.

Once the various elements have been placed into their appropriate part of the RF, you can start mapping them to suit different scenarios. For example, activities in the process decomposition can be mapped against roles – either organizational roles or people roles – perhaps using a RAEW (Responsibility, Authority, Expertise, Work) matrix, as shown in Table 1.

Table 1 – Mapping Process Activities to Roles

At a high level, Table 1 may appear somewhat obvious, but at a more detailed level it helps to clarify where and by whom these activities will be performed in your organization, and how that might differ in specific scenarios from the reference architectures mentioned so far.
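A mapping like Table 1 is straightforward to represent and query. The sketch below is hypothetical: the activities, roles and RAEW letter assignments are invented, and only the matrix structure reflects the technique described above.

```python
# Hypothetical RAEW matrix mapping process activities to roles, in the
# spirit of Table 1. R = Responsibility, A = Authority, E = Expertise,
# W = Work. All activity/role names and assignments are illustrative.

raew = {
    ("Select Cloud Service", "Cloud Consumer"): "RA",
    ("Select Cloud Service", "Enterprise Architect"): "E",
    ("Operate Cloud Service", "Cloud Provider"): "RW",
}

def roles_for(activity, letter):
    """List the roles holding a given RAEW letter for an activity."""
    return [role for (act, role), letters in raew.items()
            if act == activity and letter in letters]

print(roles_for("Select Cloud Service", "R"))  # ['Cloud Consumer']
```

Querying the matrix this way makes gaps visible, for example an activity where no role holds Authority, or where Expertise sits with a role that does none of the Work.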

In some scenarios the cloud consumer, not just the provider, may be required to perform certain cloud management activities. While the cloud provider may be required to supply the necessary management capabilities, both the consumer and the provider perform management activities.

Hence mapping capabilities to roles, as in Table 2, is another useful exercise, helping to understand who provides and who uses the various capabilities. While the NIST, IBM and other reference architectures do show this, as mentioned earlier their view is focused primarily on the operational state and on the capabilities required in the operational infrastructure. As Table 2 shows, the span of responsibility and capability is much wider than the operational perspective alone!

Table 2 – Mapping Capabilities to Roles
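The provide/use distinction behind Table 2 can also be sketched. This is again hypothetical: the capability names and role assignments are invented, and the second entry simply illustrates that some capabilities (such as planning ones) sit entirely on the consumer side, beyond the operational perspective.

```python
# Hypothetical sketch of the Table 2 idea: for each capability, record which
# role provides it and which roles use it. All names are illustrative.

capability_map = {
    "Service Provisioning": {"provides": "Cloud Provider",
                             "uses": ["Cloud Consumer"]},
    "Roadmap Planning":     {"provides": "Cloud Consumer",
                             "uses": ["Cloud Consumer"]},
}

def provided_by(role):
    """Capabilities a given role is responsible for providing."""
    return [cap for cap, m in capability_map.items()
            if m["provides"] == role]

print(provided_by("Cloud Provider"))  # ['Service Provisioning']
```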

The value of a reference framework is to provide consistency in aspects such as terminology, deliverables and governance across an organization, and to help understand the totality of the task so that adoption can be managed proactively rather than through uncontrolled experimentation. This permits sensible reuse across the whole spectrum of capabilities and avoids each enterprise having to reinvent the wheel and make mistakes that could have been avoided.

I recommend that organizations:

  • Build their own reference framework. This should be applicable to their

    1. Current and planned maturity states for cloud computing. See the Everware-CBDI research note on Cloud Computing Maturity Model

    2. Primary role(s) – as provider, consumer, broker, etc.

  • Expect to customize public domain reference framework materials to suit their specific purpose

  • Consider how they will address those sections not covered by public domain reference framework materials (the pink areas in Figure 2)

  • Consider how the capability requirements change when moving from a purely cloud consumer perspective (which may be the case when there is just tactical use of public cloud) to more enterprise-wide usage involving private cloud, and perhaps integration of public, private and non-cloud applications (see Service Portfolio Planning and Architecture for Cloud Services for an enterprise perspective)

More Stories By Lawrence Wilkes

Lawrence Wilkes is a consultant, author and researcher developing best practices in Service Oriented Architecture (SOA), Enterprise Architecture (EA), Application Modernization (AM), and Cloud Computing. As well as consulting to clients, Lawrence has developed education and certification programmes used by organizations and individuals the world over, as well as a knowledge base of best practices licensed by major corporations. See the education and products pages at http://www.everware-cbdi.com
