
Cloud, Internet of Things (IoT) and Big Operational Data


Software-defined architectures are critical for achieving the right mix of efficiency and scale needed to meet the challenges that will come with the Internet of Things

If you've been living under a rock (or a rack in the data center), you might not have noticed the explosive growth of technologies and architectures designed to address the emerging challenges of scaling data centers. Whether you consider the operational aspects (devops) or the technical components (SDN, SDDC, Cloud), software-defined architectures are the future enabler of business, fueled by the increasing demand for applications.

The Internet of Things is only going to make that even more challenging as businesses turn to new business models and services fueled by a converging digital-physical world. Applications, whether focused on licensing, provisioning, managing or storing data for these "things," will increase the already significant burden on IT as a whole. The inability to scale from an operational perspective is really what software-defined architectures are attempting to solve by operationalizing the network to shift the burden of provisioning and management from people to technology.

But it's more than just API-enabling switches, routers, ADCs and other infrastructure components. While this is a necessary capability to ensure the operational scalability of modern data centers, what's really necessary to achieve the next "level" is collaboration.

That means infrastructure integration.
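
To make that integration concrete, here is a minimal, hypothetical sketch of the kind of feedback loop it implies: an ADC's statistics API informing an orchestrator's scaling API. The endpoints, payload fields and threshold are illustrative assumptions, not any particular vendor's interface.

```python
# Hypothetical sketch: an ADC's stats API feeding an orchestrator's scaling API.
# Endpoints, payloads and thresholds are illustrative, not any vendor's actual API.
import requests

ADC_STATS_URL = "https://adc.example.com/api/pools/web-app/stats"
ORCHESTRATOR_SCALE_URL = "https://orchestrator.example.com/api/apps/web-app/scale"
MAX_CONNECTIONS_PER_INSTANCE = 5000

def scale_if_needed():
    # Pull current operational data from the ADC (connections across pool members).
    stats = requests.get(ADC_STATS_URL, timeout=5).json()
    total_connections = stats["active_connections"]
    instances = stats["pool_member_count"]

    # Decide whether demand exceeds what the current instances can absorb.
    needed = -(-total_connections // MAX_CONNECTIONS_PER_INSTANCE)  # ceiling division
    if needed > instances:
        # Ask the orchestration layer to provision additional workload instances.
        requests.post(ORCHESTRATOR_SCALE_URL, json={"desired_instances": needed}, timeout=5)

if __name__ == "__main__":
    scale_if_needed()
```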

It is one thing to be able to automatically provision the network, compute and storage resources necessary to scale to meet the availability and performance expectations of users and businesses alike. But that's the last step in the process. Actually performing the provisioning is the action taken only after it's determined not only that it's necessary, but where it's necessary.

Workloads (and I hate that term, but it's at least somewhat universally understood, so I'll acquiesce to using it for now) have varying characteristics with respect to the compute, network and storage they require to perform optimally. That means provisioning a "workload" in a VM with characteristics that do not match those requirements is necessarily going to impact its performance or load capacity. If one makes assumptions about the number of users a given application can support, and it's provisioned with a resource profile that undercuts that support, the result can be degraded performance or availability.

What that means is the systems responsible for provisioning "workloads" must be able to match resource requirements with the workload, as well as understand current (and predicted) demand in terms of users, connections and network consumption rates.
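A rough sketch of what that matching could look like follows; the resource profiles, requirement fields and sizing numbers are invented for illustration, not drawn from any real provisioning system.

```python
# Illustrative sketch: match a workload's resource profile (plus expected demand)
# against candidate resource profiles. Names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class ResourceProfile:
    name: str
    vcpus: int
    memory_gb: int
    network_gbps: float

@dataclass
class WorkloadRequirements:
    vcpus: int
    memory_gb: int
    network_gbps: float
    users_per_instance: int

def pick_profile(req: WorkloadRequirements, expected_users: int, profiles: list[ResourceProfile]):
    """Return (profile, instance_count) that satisfies the workload's needs, or None."""
    instances = max(1, -(-expected_users // req.users_per_instance))  # ceiling division
    for p in sorted(profiles, key=lambda p: (p.vcpus, p.memory_gb)):
        if p.vcpus >= req.vcpus and p.memory_gb >= req.memory_gb and p.network_gbps >= req.network_gbps:
            return p, instances
    return None

profiles = [
    ResourceProfile("small", 2, 4, 1.0),
    ResourceProfile("medium", 4, 16, 5.0),
    ResourceProfile("large", 8, 32, 10.0),
]
req = WorkloadRequirements(vcpus=4, memory_gb=8, network_gbps=2.0, users_per_instance=500)
print(pick_profile(req, expected_users=2200, profiles=profiles))  # -> (medium, 5 instances)
```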

Data is the key. Measurements of performance, rates of queries, number of users, and the resulting impact on the workload must be captured. But more than that, they must be shared with the systems responsible for provisioning and scaling the workloads.
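
As a sketch of the kind of shared operational record this implies, the snippet below has a monitoring component publish per-workload measurements that a provisioning system could consume. The field names are assumptions, not a standard schema.

```python
# Minimal sketch of a shared operational record: the monitoring side publishes it,
# the provisioning side consumes it. Field names are assumptions, not a standard schema.
import json
import time

def publish_workload_metrics(workload_id, latency_ms_p95, queries_per_sec, active_users):
    record = {
        "workload_id": workload_id,
        "timestamp": time.time(),
        "latency_ms_p95": latency_ms_p95,
        "queries_per_sec": queries_per_sec,
        "active_users": active_users,
    }
    # In practice this would go to a message bus or time-series store that the
    # provisioning system subscribes to; here we simply emit JSON.
    print(json.dumps(record))

publish_workload_metrics("web-app", latency_ms_p95=180, queries_per_sec=950, active_users=4200)
```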

Location Matters

The idea that we should share data across systems and services to ensure the best fit for provisioning, and the seamless scale demanded of modern architectures, is not a new one. A 2007 SIGMOD paper, "Automated and On-Demand Provisioning of Virtual Machines for Database Applications," as well as a 2010 IEEE paper, "Dynamic Provisioning Modeling for Virtualized Multi-tier Applications in Cloud Data Center," discuss the need for such provisioning models; the architectures that result rely heavily on the collaboration, through integration, of the data center components responsible for measuring, managing and provisioning workloads in cloud computing environments.

The location of a workload, you see, matters. Not location as in "on-premise" or "off-premise," though that certainly has an impact, but location within the data center matters to the overall performance and scale of the applications composed from those workloads. The location of a specific workload relative to other components impacts availability and traffic patterns, and can result in a higher incidence of north-south or east-west congestion in the network. The location of application workloads can also cause hairpinning (or tromboning, if you prefer) of traffic that may degrade performance or introduce variable latency that degrades the quality of video or audio content.
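
One way to picture the effect of placement is a simple scoring function that penalizes candidate hosts by how far they sit, topologically, from the components a workload talks to. The topology and weights below are made up purely for illustration.

```python
# Rough sketch of why placement matters: score candidate hosts for a new workload
# by how far they sit from the components it talks to. Topology and weights are
# invented for illustration only.
def network_distance(host_a, host_b, topology):
    """0 = same host, 1 = same rack/ToR switch, 2 = crosses the aggregation layer."""
    if host_a == host_b:
        return 0
    if topology[host_a]["rack"] == topology[host_b]["rack"]:
        return 1
    return 2

def placement_score(candidate_host, peer_hosts, topology):
    # Lower is better: each extra hop means more east-west traffic and latency.
    return sum(network_distance(candidate_host, peer, topology) for peer in peer_hosts)

topology = {
    "host-a1": {"rack": "rack-a"},
    "host-a2": {"rack": "rack-a"},
    "host-b1": {"rack": "rack-b"},
}
peers = ["host-a1", "host-a2"]  # where the app's other tiers already run
for host in topology:
    print(host, placement_score(host, peers, topology))
```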

Location matters a great deal, and yet the very premise of cloud is to abstract topology (location) from the equation and remove it from consideration as part of the provisioning process.

Early in the life of public cloud there was concern over not knowing who your "neighbor tenant" might be on a given physical server, because there was little transparency into the decision-making process that governs provisioning of instances in public cloud environments. Those decisions appeared, and still appear, to be made based on little more than your preference for the "size" of an instance. Obviously, Amazon or Azure or Google is not going to provision a "large" instance where only a "small" will fit.

But the question of where, topologically, that "large" instance might end up residing is still unanswered. It might be two hops away or one virtual hop away. You can't know whether your entire application, all of its components, has been launched on the same physical server or not. And that can have dire consequences in a model that's "built to fail," because if all your eggs are in one basket and the basket breaks... well, minutes of downtime is still downtime.
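
A minimal sketch of the obvious countermeasure, an anti-affinity check that refuses to place a component on a physical host already running another instance of the same application, might look like this; host and application names are hypothetical.

```python
# Sketch of a simple anti-affinity check: reject a physical host that already runs
# another instance of the same application. Host and app names are hypothetical.
def violates_anti_affinity(app, candidate_host, current_placements):
    """current_placements maps host -> set of apps already running there."""
    return app in current_placements.get(candidate_host, set())

placements = {
    "host-1": {"web-app"},
    "host-2": set(),
}
for host in placements:
    ok = not violates_anti_affinity("web-app", host, placements)
    print(host, "allowed" if ok else "rejected: same app already on this host")
```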

The next evolutionary step in cloud (besides the emergence of much needed value added services) is more intelligent provisioning driven by better feedback loops regarding the relationship between the combination of compute, network and storage resources and the application. Big (Operational) Data is going to be as important to IT as Big (Customer) Data is to the business as more and more applications and services become critical to the business.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.



Latest Stories from DevOps Journal
Enthusiasm for the Internet of Things has reached an all-time high. In 2013 alone, venture capitalists spent more than $1 billion dollars investing in the IoT space. With “smart” appliances and devices, IoT covers wearable smart devices, cloud services to hardware companies. Nest, a Google company, detects temperatures inside homes and automatically adjusts it by tracking its user’s habit. These technologies are quickly developing and with it come challenges such as bridging infrastructure gaps, abiding by privacy concerns and making the concept a reality. These challenges can’t be addressed without the kinds of agile software development and infrastructure approaches pioneered by the DevOps movement.
Yahoo CIO Mike D. Kail will present a session on DevOps at the 3rd International DevOps Summit, November 4-6, 2014, at the Santa Clara Convention Center in Santa Clara, CA. Mike brings more than 23 years of IT operations experience with a focus on highly scalable architectures to Yahoo. Prior to Yahoo, he served as VP of IT Operations at Netflix. The Netflix culture highlighted the transformation we see within forward-thinking IT organizations today and its use of public cloud and ‘No Ops' is well known in the industry. Mike Kail worked to develop this culture within Netflix's own IT organization, where he focused not only on the technology, but also on hiring and training the right talent. In order to achieve the right mix of technology innovation and human talent, he concentrated on identifying the right mind set for a new way of IT (DevOps) and how to transition from IT Ops to DevOps
DevOps Summit at Cloud Expo Silicon Valley announced today a limited time free "Expo Plus" registration option. On site registration price of $1,95 will be set to 'free' for delegates who register during this offer perios. To take advantage of this opportunity, attendees can use the coupon code, and secure their registration to attend all keynotes, DevOps Summit sessions at Cloud Expo, expo floor, and SYS-CON.tv power panels. Registration page is located at the DevOps Summit site.
The industry is heated with debates on whether adopting private or public cloud is the smartest, best, cheapest, you name it choice. But this debate is missing the mark. Businesses shouldn’t be discussing public vs. private, but rather how can they make the two work together to their greatest advantage. The ideal is to merge on-premise and off-premise into a seamless environment that can be managed as a single entity – a forward-looking stance that will eventually see major adoption. But as of late 2013, hybrid cloud was still “rare,” noted Gartner analyst Tom Bittman. In his session at 15th Cloud Expo, Marten Mickos, CEO of Eucalyptus Systems, will discuss how public clouds need on-premise satellites to win and, conversely, how on-premise environments cannot be really powerful unless they are connected to the public cloud. It’s not two competing worlds; it’s two dimensions of the same world.
All too many discussions about DevOps conclude that the solution is an all-purpose player: developer and operations guru, complete with pager for round-the-clock duty. For most organizations that is not the way forward. In his session at DevOps Summit, Bart Copeland, President & CEO of ActiveState Software, will discuss how to achieve the agility and speed of end-to-end automation without requiring an organization stocked with Supermen and Superwomen.
The impact of DevOps in the cloud era is potentially profound. DevOps helps businesses deliver new features continuously, reduce cycle time and achieve sustained innovation by applying agile and lean principles to assist all stakeholders in an organization that develop, operate, or benefit from the business’ lifecycle. In his session at DevOps Summit, Prashanth Chandrasekar, General Manager at Rackspace, will exam whether / how companies can work with external DevOps specialists to achieve "DevOps elasticity" and DevOps expertise at scale while internally focusing on writing code / development.
In his @ThingsExpo presentation, Aaater Suleman will discuss DevOps, Linux containers, Docker in developing a complex Internet of Things application. The goal of any DevOps solution is to optimize multiple processes in an organization. And success does not necessarily require that in executing the strategy everything needs to be automated to produce an effective plan. Yet, it is important that processes are put in place to handle a necessary list of items. Docker provides a user-friendly layer on top of Linux Containers (LXCs). LXCs provide operating-system-level virtualization by limiting a process's resources. In addition to using the chroot command to change accessible directories for a given process, Docker effectively provides isolation of one group of processes from other files and system processes without the expense of running another operating system.
In his session at DevOps Summit, Andrei Yurkevich, CTO at Altoros, will provide an overview of all the benefits and opportunities, as well as drawbacks of deploying Cloud Foundry PaaS with Juju and will compare it to BOSH. Attendees will discover the features that overlap, and will learn to understand what Juju Charm is, what it is not, where you use one or the other or where you use both BOSH and Juju Charms together.
The old monolithic style of building enterprise applications just isn't cutting it any more. It results in applications and teams both that are complex, inefficient, and inflexible, with considerable communication overhead and long change cycles. Microservices architectures, while they've been around for a while, are now gaining serious traction with software organizations, and for good reasons: they enable small targeted teams, rapid continuous deployment, independent updates, true polyglot languages and persistence layers, and a host of other benefits. But truly adopting a microservices architecture requires dramatic changes across the entire organization, and a DevOps culture is absolutely essential.
Achieve continuous delivery of applications by leveraging ElasticBox and Jenkins. In his session at DevOps Summit, Monish Sharma, VP of Customer Success at ElasticBox, will demonstrate how you can achieve the following using ElasticBox and the ElasticBox Jenkins Plugin: Create consistency across dev, staging, and production environments Continuous delivery across multiple clouds to handle high loads Ensure consistent policy management across environments: tagging, admin boxes, traceability Spin up machines and environments quickly Deploy applications to any cloud Enable real-time collaboration between developers and operations