
Redefining the Data Center By @Automic | @CloudExpo [#Cloud]

Why infrastructure-as-a-service (IaaS) technology is having such a seismic impact on the data center model

Infrastructure Service Clouds: Redefining the Data Center
By Michael Schmidt

Infrastructure service cloud technology is proving as popular as flowers in spring. Recent research predicts that the global Infrastructure as a Service (IaaS) market will post a compound annual growth rate (CAGR) of 42.9% from 2015 to 2019. Indeed, many believe the shift induced by IaaS is so seismic it can be compared to the introduction of an independent power grid, which detached power production from its location of use and replaced co-located power generators with central power plants.
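To put that growth rate in perspective, a quick back-of-the-envelope calculation shows how fast a 42.9% CAGR compounds. (Treating 2015-2019 as four annual compounding steps is an assumption about how the forecast period is counted.)

```python
# A 42.9% compound annual growth rate, applied over the four
# year-to-year steps from 2015 to 2019 (an assumed reading of the
# forecast period), multiplies the market size by:
growth = 1.429 ** 4
print(round(growth, 2))  # roughly 4.17x over the forecast period
```

In other words, the forecast implies a market roughly four times its 2015 size by 2019, which is why comparisons to the power grid do not seem far-fetched.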

So what's behind this demand, why is it so disruptive and what will its impact be?

To see that far forward, we need to look back to the start. First, a quick history lesson on IaaS: it is a cloud computing technology that extends hardware virtualization, the set of technologies used to detach compute, storage and network resources from their underlying physical hardware.

IaaS complements hardware virtualization for storage, network and compute resources with intelligent pooling and allocation mechanisms to make these resources available anywhere, anytime, at almost any scale. Amazon was at the IaaS vanguard with its Amazon Web Services offering, quickly followed by the open source OpenStack project, with IBM, HP and others following behind with their private cloud stacks.

Why IaaS is proving so popular
What's behind this predicted 42.9% annual growth in IaaS over the next four years? Businesses understand that IaaS has two major advantages over legacy data center technologies: optimized resource utilization and on-demand capacity.

These two characteristics build upon the significantly increased elasticity of compute power, storage capacity and network bandwidth enabled through the cloud technology. The consumption of resources is not bound to a specific application or customer, which means resources can be spread amongst a bigger pool of customers or applications, balancing demand over time across this pool.

Compare that to a traditional, non-IaaS data center environment. Here, dedicated server hardware is heavily under-utilized, because each server has to be sized for the maximum power requirements of its specific application, which often lie well above the average. Storage suffers the same, and equally expensive, problem.
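The economics of pooling can be illustrated with a toy simulation (all numbers are hypothetical): because independent applications rarely peak at the same time, the capacity needed to cover the aggregate peak is far below the sum of the per-application peaks that dedicated hardware must be sized for.

```python
import random

random.seed(42)

# Hypothetical hourly demand (in CPU cores) for 20 independent applications.
hours, apps = 24, 20
demand = [[random.randint(1, 10) for _ in range(hours)] for _ in range(apps)]

# Dedicated hardware: each application is sized for its own peak.
dedicated_capacity = sum(max(app) for app in demand)

# Pooled (IaaS-style): capacity only needs to cover the aggregate peak,
# i.e. the busiest single hour across the whole pool.
pooled_capacity = max(sum(app[h] for app in demand) for h in range(hours))

print(dedicated_capacity, pooled_capacity)
```

With these made-up numbers the pooled peak comes out well below the sum of individual peaks, because the applications' busy hours rarely coincide; that gap is exactly the under-utilization a traditional data center pays for.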

Moreover, IaaS enables the rapid ramp-up of new capacity whenever planned or unplanned demand calls for it. In a traditional non-IaaS environment, it could take days, sometimes weeks, to satisfy a sudden spike in demand, with hardware needing to be procured, installed and commissioned. With IaaS, any call for greater capacity can be answered in minutes, with minimal manual intervention.
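Answering a demand spike in minutes typically means an automated scaling rule rather than a procurement process. The function below is a minimal, hypothetical sketch of such a rule; real providers expose this capability through their own autoscaling APIs, and the names and thresholds here are illustrative only.

```python
def desired_instances(current: int, cpu_utilization: float,
                      target: float = 0.6, max_instances: int = 50) -> int:
    """Proportional scaling sketch: keep average CPU near the target
    by growing or shrinking the instance count (hypothetical rule)."""
    if cpu_utilization <= 0:
        return max(current, 1)          # no load data: hold steady
    needed = round(current * cpu_utilization / target)
    return max(1, min(needed, max_instances))

# A spike to 90% average CPU on 10 instances calls for 15 instances.
print(desired_instances(current=10, cpu_utilization=0.9))  # 15
```

The point is not the particular formula but the turnaround time: a rule like this reacts within minutes, where procuring and commissioning physical servers takes days or weeks.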

All of this makes IaaS an attractive proposition for businesses with dynamic demands on their workloads.

Why IaaS is a disruptive technology
New technologies come and go frequently-think DAT tape, Blu-ray and the Apple Newton-but IaaS shows many characteristics of being a lasting, disruptive technology. You don't have to look far to understand why.

Like many disruptive technologies, IaaS is cheaper and easier to use. As we've seen, capacity is elastic, and the complexity is masked from everyday users. And, true to the classic disruption pattern, IaaS initially lacks features valued by the most profitable customers in the industry. For example:

  • Platform support is typically limited to certain versions of Windows and Linux. UNIX derivatives and mainframes-important system platforms for many enterprise customers with legacy applications-are not available in infrastructure service clouds.
  • It is generally not possible to point to a physical location for processing and data storage. This is a severe compliance and security issue for many organizations and a barrier to adoption.

For these reasons, IaaS adoption began with fringe customers and fringe use cases-frequently in small and medium-sized businesses. Meanwhile, the enterprise IT market for IaaS is probably still in an early adopter phase, with the 42.9% CAGR highlighted earlier now kicking in.

However, it is still not clear whether this transformation will happen mainly through public infrastructure service cloud providers or via the different path of 'private cloud adoption'. In the latter scenario, businesses embrace IaaS through their own infrastructure investments (via virtualization and pooling), covering peak loads with infrastructure cloud services purchased externally (the so-called 'hybrid cloud').
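The hybrid 'cloud bursting' idea can be sketched in a few lines (workload names and capacities are hypothetical): workloads fill the privately owned pool first, and only the overflow, i.e. the peak load, is placed on externally purchased infrastructure cloud capacity.

```python
def place_workloads(requests, private_capacity):
    """Hypothetical hybrid-cloud placement: fill the private pool first,
    burst the overflow to an external infrastructure service cloud."""
    private, public = [], []
    used = 0
    for name, cores in requests:        # each request: (name, cores needed)
        if used + cores <= private_capacity:
            private.append(name)
            used += cores
        else:
            public.append(name)
    return private, public

onprem, burst = place_workloads(
    [("web", 4), ("db", 8), ("batch", 16), ("analytics", 12)],
    private_capacity=20)
print(onprem, burst)
```

Here the steady-state workloads stay on owned infrastructure, while the two large jobs that would exceed the 20-core private pool burst out to external capacity; a first-fit rule like this is only one of many possible placement policies.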

Whichever course it takes, the potential of IaaS to shake up the market and cause a seismic shift in the data center industry is clear.

In my follow-up blog, I'll explore the impact of IaaS on outsourcers, software vendors and other stakeholders in the industry value chain. I'll also look at the impact of emerging IaaS trends.

More Stories By Automic Blog

Automic, a leader in business automation, helps enterprises drive competitive advantage by automating their IT factory - from on-premise to the Cloud, Big Data and the Internet of Things.

With offices across North America, Europe and Asia-Pacific, Automic powers over 2,600 customers including Bosch, PSA, BT, Carphone Warehouse, Deutsche Post, Societe Generale, TUI and Swisscom. The company is privately held by EQT. More information can be found at www.automic.com.
