Cloud and Things and Big Operational Data

Software-defined architectures are critical for achieving the right mix of efficiency and scale needed to meet the challenges that will come with the Internet of Things

If you've been living under a rock (or a rack in the data center), you might not have noticed the explosive growth of technologies and architectures designed to address the emerging challenges of scaling data centers. Whether you consider the operational aspects (devops) or the technical components (SDN, SDDC, Cloud), software-defined architectures are the future enablers of business, fueled by the increasing demand for applications.

The Internet of Things is only going to make that even more challenging as businesses turn to new business models and services fueled by a converging digital-physical world. Applications, whether focused on licensing, provisioning, managing or storing data for these "things," will increase the already significant burden on IT as a whole. The inability to scale from an operational perspective is really what software-defined architectures are attempting to solve: operationalizing the network to shift the burden of provisioning and management from people to technology.

But it's more than just API-enabling switches, routers, ADCs and other infrastructure components. While this is a necessary capability to ensure the operational scalability of modern data centers, what's really necessary to achieve the next "level" is collaboration.

That means infrastructure integration.

It is one thing to be able to automatically provision the network, compute and storage resources necessary to scale to meet the availability and performance expectations of users and businesses alike. But that's the last step in the process. Actually performing the provisioning is the action taken only after it has been determined not only that it's necessary, but where it's necessary.
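
To make that concrete, here's a rough sketch of the decide-then-provision flow - every endpoint, field name and threshold in it is hypothetical, not any particular vendor's API - in which the actual provisioning call is the thin final step and the decision logic does the real work:

```python
import requests  # hypothetical call to an API-enabled infrastructure component

ORCHESTRATOR = "https://orchestrator.example.com/api/v1"  # placeholder URL, not a real product


def decide(metrics: dict):
    """Decide *whether* and *where* to provision before any API is called.

    Returns a placement decision, or None if no action is needed.
    """
    capacity = metrics["capacity_per_instance"] * metrics["instances"]
    if metrics["requests_per_sec"] < capacity:
        return None  # demand is already covered; provisioning would be waste
    # Illustrative placement only; a real decision would weigh topology too.
    return {"profile": "web-tier", "zone": "rack-12"}


def provision(decision: dict) -> None:
    """The last step: push an already-made decision to the infrastructure API."""
    requests.post(f"{ORCHESTRATOR}/instances", json=decision, timeout=10)


metrics = {"requests_per_sec": 900, "capacity_per_instance": 250, "instances": 3}
decision = decide(metrics)
if decision:
    provision(decision)
```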

Workloads (and I hate that term, but it's at least somewhat universally understood, so I'll acquiesce to using it for now) have varying characteristics with respect to the compute, network and storage they require to perform optimally. That means provisioning a "workload" in a VM with characteristics that do not match its requirements is necessarily going to impact its performance or load capability. If one is making assumptions regarding the number of users a given application can support, and it's provisioned with a resource profile that undermines that support, the result can be degraded performance or availability.
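
As a minimal sketch of that matching step (the flavor catalog and the numbers below are invented for illustration, not any provider's actual offerings):

```python
from dataclasses import dataclass


@dataclass
class Flavor:
    name: str
    vcpus: int
    ram_gb: int
    net_gbps: float


# Illustrative catalog; real flavor definitions vary by platform.
FLAVORS = [
    Flavor("small", 1, 2, 0.5),
    Flavor("medium", 2, 8, 1.0),
    Flavor("large", 8, 32, 10.0),
]


def pick_flavor(vcpus: int, ram_gb: int, net_gbps: float) -> Flavor:
    """Return the smallest flavor that still meets the workload's resource profile.

    Provisioning below these requirements is exactly the mismatch described above:
    it silently caps the number of users the workload can support.
    """
    for f in sorted(FLAVORS, key=lambda f: (f.vcpus, f.ram_gb)):
        if f.vcpus >= vcpus and f.ram_gb >= ram_gb and f.net_gbps >= net_gbps:
            return f
    raise ValueError("no flavor satisfies the workload's resource profile")


print(pick_flavor(vcpus=2, ram_gb=6, net_gbps=1.0).name)  # -> medium
```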

What that means is the systems responsible for provisioning "workloads" must be able to match resource requirements with the workload, as well as understand current (and predicted) demand in terms of users, connections and network consumption rates.

Data is the key. Measurements of performance, query rates, user counts, and the resulting impact on the workload must be captured. But more than that, they must be shared with the systems responsible for provisioning and scaling the workloads.
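
A toy example of that feedback loop follows - the metrics snapshot is hard-coded here, where in practice it would arrive from monitoring agents via an API or message bus, and all of the thresholds are illustrative:

```python
from dataclasses import dataclass


@dataclass
class WorkloadMetrics:
    active_users: int
    queries_per_sec: float
    p95_latency_ms: float
    network_mbps: float


def desired_instances(m: WorkloadMetrics, users_per_instance: int,
                      latency_slo_ms: float, current: int) -> int:
    """Turn shared operational data into a scaling decision.

    The scaler can only act sensibly because the measurements are shared with it.
    """
    by_demand = -(-m.active_users // users_per_instance)  # ceiling division
    if m.p95_latency_ms > latency_slo_ms:
        by_demand = max(by_demand, current + 1)  # latency breach: add capacity
    return max(by_demand, 1)


snapshot = WorkloadMetrics(active_users=4200, queries_per_sec=310.0,
                           p95_latency_ms=180.0, network_mbps=420.0)
print(desired_instances(snapshot, users_per_instance=1000,
                        latency_slo_ms=150.0, current=4))  # -> 5
```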

Location Matters

This is not a new concept: sharing data across systems and services to ensure the best fit for provisioning and the seamless scale demanded of modern architectures. A 2007 SIGMOD paper, "Automated and On-Demand Provisioning of Virtual Machines for Database Applications," and a 2010 IEEE paper, "Dynamic Provisioning Modeling for Virtualized Multi-tier Applications in Cloud Data Center," both discuss the need for such provisioning models, and the architectures they describe rely heavily on integration to enable collaboration among the data center components responsible for measuring, managing and provisioning workloads in cloud computing environments.

The location of a workload, you see, matters. Not location as in "on-premise" or "off-premise," though that certainly has an impact, but location within the data center matters to the overall performance and scale of the applications composed from those workloads. The location of a specific workload relative to the other components it communicates with affects availability and traffic patterns, and can result in a higher incidence of north-south or east-west congestion in the network. The placement of application workloads can also cause hairpinning (or tromboning, if you prefer) of traffic, which may degrade performance or introduce variable latency that hurts the quality of video or audio content.
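
One simple way to express that topology awareness is a placement score that weights a candidate host's hop distance from the components a workload exchanges the most traffic with; the hop counts and host names below are entirely made up for illustration:

```python
# Hypothetical hop counts between physical hosts (same rack = 0, same pod = 2, across pods = 4).
HOPS = {
    ("host-a", "host-b"): 0,   # same rack
    ("host-a", "host-c"): 2,   # same pod, different rack
    ("host-a", "host-d"): 4,   # different pod: traffic crosses the core
}


def hops(a: str, b: str) -> int:
    if a == b:
        return 0
    return HOPS.get((a, b), HOPS.get((b, a), 4))  # unknown pairs: assume the worst case


def best_host(candidates, peers):
    """Pick the candidate host closest, in traffic-weighted hops, to the
    components this workload talks to the most."""
    def cost(host):
        return sum(traffic * hops(host, peer) for peer, traffic in peers.items())
    return min(candidates, key=cost)


# The new web instance talks heavily to the app tier on host-a, lightly to storage on host-d.
print(best_host(["host-b", "host-c", "host-d"], {"host-a": 0.8, "host-d": 0.2}))  # -> host-b
```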

Location matters a great deal, and yet the very premise of cloud is to abstract topology (location) from the equation and remove it from consideration as part of the provisioning process.

Early in the life of the public cloud there was concern over not knowing who your "neighbor tenant" might be on a given physical server, because there was little transparency into the decision-making process that governs the provisioning of instances in public cloud environments. Those decisions appeared to be - and still appear to be - based largely on your preference for the "size" of an instance. Obviously, Amazon or Azure or Google is not going to provision a "large" instance where only a "small" will fit.

But the question of where, topologically, that "large" instance might end up residing is still unanswered. It might be two hops away or one virtual hop away. You can't know whether your entire application - all of its components - has been launched on the same physical server or not. And that can have dire consequences in a model that's "built to fail," because if all your eggs are in one basket and the basket breaks... well, minutes of downtime is still downtime.
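
The "eggs in one basket" concern translates into an anti-affinity rule: don't let every replica of an application land on the same physical server. A minimal sketch, with placeholder host and component names:

```python
from collections import defaultdict

# Current placements: physical host -> components already running there (illustrative data).
placements = defaultdict(set)
placements["phys-01"].update({"shop/web-1", "shop/web-2"})
placements["phys-02"].add("shop/db-1")


def place_with_anti_affinity(app: str, component: str, hosts: list) -> str:
    """Prefer the host running the fewest components of the same application,
    so one hardware failure can't take out every replica at once."""
    def same_app_count(host: str) -> int:
        return sum(1 for c in placements[host] if c.startswith(app + "/"))

    host = min(hosts, key=same_app_count)
    placements[host].add(f"{app}/{component}")
    return host


print(place_with_anti_affinity("shop", "web-3", ["phys-01", "phys-02", "phys-03"]))  # -> phys-03
```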

The next evolutionary step in cloud (besides the emergence of much-needed value-added services) is more intelligent provisioning, driven by better feedback loops about the relationship between the combination of compute, network and storage resources and the application. Big (Operational) Data is going to be as important to IT as Big (Customer) Data is to the business as more and more applications and services become critical to the business.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
