Cloud and Things and Big Operational Data

Software-defined architectures are critical for achieving the right mix of efficiency and scale needed to meet the challenges that will come with the Internet of Things

If you've been living under a rock (or a rack in the data center), you might not have noticed the explosive growth of technologies and architectures designed to address emerging challenges with scaling data centers. Whether considering the operational aspects (devops) or the technical components (SDN, SDDC, Cloud), software-defined architectures are the future enabler of business, fueled by the increasing demand for applications.

The Internet of Things is only going to make that even more challenging as businesses turn to new business models and services fueled by a converging digital-physical world. Applications, whether focused on licensing, provisioning, managing or storing data for these "things," will increase the already significant burden on IT as a whole. The inability to scale from an operational perspective is really what software-defined architectures are attempting to solve, by operationalizing the network to shift the burden of provisioning and management from people to technology.

But it's more than just API-enabling switches, routers, ADCs and other infrastructure components. While this is a necessary capability to ensure the operational scalability of modern data centers, what's really necessary to achieve the next "level" is collaboration.

That means infrastructure integration.
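To make "API-enabling" concrete, here's a rough sketch of what programmatic provisioning looks like when a deployment pipeline, rather than a person, registers capacity. The controller URL and payload fields are invented for illustration; this isn't any particular vendor's API:

```python
import requests

# Hypothetical SDN/ADC controller endpoint -- illustrative only,
# not any specific vendor's API.
CONTROLLER = "https://controller.example.com/api/v1"

def add_pool_member(pool: str, host: str, port: int) -> dict:
    """Register a new back-end server with a load-balancing pool."""
    resp = requests.post(
        f"{CONTROLLER}/pools/{pool}/members",
        json={"address": host, "port": port},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# A deployment pipeline makes this call after a new instance boots --
# that's the operational shift from people to technology.
# add_pool_member("web-pool", "10.0.4.17", 8080)
```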

It is one thing to be able to automatically provision the network, compute and storage resources necessary to scale to meet the availability and performance expectations of users and businesses alike. But that's the last step in the process. Actually performing the provisioning is the action taken only after it's been determined not only that it's necessary, but where it's necessary.

Workloads (and I hate that term, but it's at least somewhat universally understood, so I'll acquiesce to using it for now) have varying characteristics with respect to the compute, network and storage they require to perform optimally. That means provisioning a "workload" in a VM with characteristics that do not match those requirements is necessarily going to impact its performance or load capability. If one is making assumptions regarding the number of users a given application can support, and it's provisioned with a resource profile that undermines that support, the result can be degraded performance or availability.

What that means is the systems responsible for provisioning "workloads" must be able to match resource requirements with the workload, as well as understand current (and predicted) demand in terms of users, connections and network consumption rates.
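Here's a minimal sketch of that matching step. The profile fields and the tightest-fit heuristic are assumptions for illustration, not a production scheduler:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    cpu_cores: int
    memory_gb: int
    net_mbps: int   # expected network consumption rate

@dataclass
class Host:
    name: str
    free: Profile   # currently unreserved capacity

def fits(required: Profile, host: Host) -> bool:
    """A workload only belongs on a host whose free capacity covers
    its full resource profile -- network included, not just CPU/RAM."""
    return (host.free.cpu_cores >= required.cpu_cores
            and host.free.memory_gb >= required.memory_gb
            and host.free.net_mbps >= required.net_mbps)

def place(required: Profile, hosts: list[Host]) -> Host | None:
    candidates = [h for h in hosts if fits(required, h)]
    # Prefer the tightest fit so larger slots stay available
    # for workloads that actually need them.
    return min(candidates, key=lambda h: h.free.cpu_cores, default=None)
```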

Data is the key. Measurements of performance, rates of queries, number of users, and the resulting impact on the workload must be captured. But more than that, they must be shared with the systems responsible for provisioning and scaling the workloads.
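That sharing is, in effect, a feedback loop between measurement and provisioning. A simplified sketch, with invented metric names and an assumed per-instance capacity:

```python
import math

# Assumed requests/sec a single instance sustains -- illustrative.
CAPACITY_PER_INSTANCE = 500

def desired_instances(current_rps: float, predicted_growth: float) -> int:
    """Size the pool from measured demand plus predicted demand,
    rather than waiting for performance to degrade."""
    expected = current_rps * (1 + predicted_growth)
    return max(1, math.ceil(expected / CAPACITY_PER_INSTANCE))

def reconcile(metrics: dict, running: int, scale) -> None:
    """Hand captured measurements to the provisioning system."""
    target = desired_instances(metrics["rps"], metrics["growth"])
    if target != running:
        scale(target)   # the provisioning system acts on shared data
```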

Location Matters

This is not a new concept: that we should be able to share data across systems and services to ensure the best-fit provisioning and seamless scale demanded of modern architectures. A 2007 SIGMOD paper, "Automated and On-Demand Provisioning of Virtual Machines for Database Applications," as well as a 2010 IEEE paper, "Dynamic Provisioning Modeling for Virtualized Multi-tier Applications in Cloud Data Center," discuss the need for such provisioning models, and the resulting architectures rely heavily on the collaboration, through integration, of the data center components responsible for measuring, managing and provisioning workloads in cloud computing environments.

The location of a workload, you see, matters. Not location as in "on-premise" or "off-premise," though that certainly has an impact, but location within the data center matters to the overall performance and scale of the applications composed from those workloads. The location of a specific workload relative to other components impacts availability and traffic patterns, and can result in a higher incidence of north-south or east-west congestion in the network. Location of application workloads can cause hairpinning (or tromboning, if you prefer) of traffic that may degrade performance or introduce variable latency that degrades the quality of video or audio content.

Location matters a great deal, and yet the very premise of cloud is to abstract topology (location) from the equation and remove it from consideration as part of the provisioning process.

Early in the life of public cloud there was concern over not knowing who your "neighbor tenant" might be on a given physical server, because there was little transparency into the decision-making process that governs provisioning of instances in public cloud environments. The depth of such decisions appeared to - and still appears to - be based on your preference for the "size" of an instance. Obviously, Amazon or Azure or Google is not going to provision a "large" instance where only a "small" will fit.

But the question of where, topologically, that "large" instance might end up residing is still unanswered. It might be two hops away or one virtual hop away. You can't know whether your entire application - all its components - has been launched on the same physical server or not. And that can have dire consequences in a model that's "built to fail," because if all your eggs are in one basket and the basket breaks... well, minutes of downtime is still downtime.
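A placement engine that took topology seriously might score candidates something like this sketch. The hop counts and the anti-affinity rule are illustrative assumptions, not any cloud provider's actual logic:

```python
# Topology-aware placement sketch. Hop counts and the anti-affinity
# rule are invented for illustration.

def hops(host_a: str, host_b: str, topology: dict) -> int:
    """Network distance between two hosts: 0 = same server,
    1 = same rack/switch; unknown pairs assumed to cross the core."""
    return topology.get((host_a, host_b), 2)

def score(candidate: str, peers: dict, topology: dict) -> int:
    """Lower is better: total hops to the components this workload
    exchanges traffic with (the east-west paths that congest)."""
    return sum(hops(candidate, peer_host, topology)
               for peer_host in peers.values())

def choose(candidates: list, peers: dict, replicas_on: set,
           topology: dict):
    # Anti-affinity: never stack every replica of a tier on one
    # physical server -- the "all eggs in one basket" failure.
    eligible = [c for c in candidates if c not in replicas_on]
    return min(eligible, key=lambda c: score(c, peers, topology),
               default=None)
```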

The next evolutionary step in cloud (besides the emergence of much-needed value-added services) is more intelligent provisioning, driven by better feedback loops regarding the relationship between the application and the combination of compute, network and storage resources it runs on. Big (Operational) Data is going to be as important to IT as Big (Customer) Data is to the business, as more and more applications and services become business-critical.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
