Blog Feed Post

Be the Cloud You Want to Use

#devops #linerate #cloud

"Stop thinking...let things happen...and be...the ball."

Some of you will remember Caddyshack - that campy 80s movie in which Chevy Chase often offered crazy advice, like "be the ball." For those of you who don't (because you're too young or were living under a rock), the premise of this particular piece of advice was along the lines of Obi-Wan's "Use the force, Luke" encouragement. To put it more simply (and without a metaphor): become a part of that which is important to what you're trying to achieve. Understand your goals from the perspective of the thing that achieves them.

In the case of many operators today, that means you must "be the cloud you want to use."

The reality is that shadow IT (or whatever you want to call it) exists because current infrastructure is not as flexible, as easy to use, or as fast to provision as that in the cloud. Nor does existing licensing and billing for on-premise solutions meet the current, just-in-time attitude associated with cloud and agile business models.

Ultimately, what DevOps seeks to achieve is accelerated application deployment. That's why cloud in all its forms - public, private and hybrid - continues to gain mindshare, traction and customers.

The public cloud model serves as the archetypal rapid deployment model. With APIs that encourage automation and orchestration and a billing model that favors the transient nature of demand for resources, the cloud model presents the penultimate experience for operators and developers with respect to getting applications to market. Fast.

Seamless Transitions

But notice it's only the penultimate experience. That's because there are still aspects of cloud which frustrate business and operations alike after deployment. Concerns with performance and migration between environments continue to give many enterprises enough pause that while they are embracing public cloud, it's a tentative embrace based more on the understanding that the relationship is born out of necessity, not necessarily love. It's an arranged marriage with political and financial benefits, not the product of a true love that simply cannot be kept apart.

Because of this, hybrid cloud is inevitable. There will be applications deployed in both public and on-premise models, and eventually the expectation of a seamless transition between the two will become more than expectation - it will become a demand. A requirement. A must have.

That's more problematic for operations (and thus DevOps) than it is for app dev. Virtualization, containers, and other technologies have enabled a much simpler path forward (and outward) for applications than for the critical infrastructure that supports them. And while for some the simplest answer is to simply virtualize that infrastructure, too, there remain obstacles peculiar to the network that prevent Occam's Razor from cutting through that technical red tape.

Thus, it remains critical for infrastructure of all kinds to adjust to better fit within this abstracted, API-driven, software-defined world of cloud - both on and off-premise.  APIs must be provided. It must be cloud-ready. Billing models require adjustment. And rapid provisioning will be enabled, or else.

These are the requirements that enable operations to be the cloud they want to use. Modern APIs enable automation through scripting and integration with cloud management platforms (OpenStack, VMware, CA). New acquisition, licensing and billing models enable the "sometimes on, sometimes off" usage patterns common to many existing applications (maintenance windows still exist in the enterprise, after all) and new ones (adoption is always erratic in the beginning).
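The API-driven automation described above amounts to provisioning infrastructure with scripts instead of hand configuration: a script builds a request describing the desired service and submits it to the platform's API. A minimal sketch, assuming a hypothetical REST endpoint and payload schema (real platforms such as OpenStack or VMware each define their own):

```python
import json

# Hypothetical API endpoint; real cloud management platforms expose their own.
API_BASE = "https://cloud.example.com/api/v1"

def build_virtual_server_request(name, address, port=443, pool_members=None):
    """Build the JSON body a provisioning script would POST to the API.

    The field names here are illustrative assumptions, not a real schema.
    """
    return {
        "name": name,
        "destination": f"{address}:{port}",
        "pool": {"members": pool_members or []},
    }

# An orchestration tool (or a plain script) would POST this payload to
# API_BASE + "/virtual-servers" to stand up the service on demand.
payload = build_virtual_server_request(
    "app-vip-1", "10.0.0.10",
    pool_members=["10.0.1.10:8080", "10.0.1.11:8080"],
)
print(json.dumps(payload, indent=2))
```

The point is less the specific schema than the workflow: because the request is data, it can be versioned, reviewed, and replayed, which is what makes "sometimes on, sometimes off" provisioning practical.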

Thus it's important that critical infrastructure - the pieces of "the network" considered imperative to an application deployment - be able to be deployed both on-premise and in the cloud with equal alacrity and, one hopes, with the same capabilities.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
