
Application-Driven vs Feature-Driven Orchestration

One of the challenges in scaling modern data centers arises directly from an increase in network complexity

One of the challenges in scaling modern data centers arises directly from the increase in network complexity over the past few years. We can argue about why complexity has increased, but it's reasonable to say that scaling data centers means more boxes - more servers, more network gear, more middleboxes - and every device (or service) you add increases the complexity of the topology, and thus the operational overhead of managing it. Organizations agree: things are getting somewhat or substantially more complex.

[Figure: changes in network complexity]

Software-defined architectures attempt to answer this challenge (among several others) by operationalizing the network. By using APIs to orchestrate provisioning processes and enable the integration necessary to make use of actionable monitoring data generated by various systems across the data center, software-defined architectures accelerate application deployments and reduce risk by eliminating a source of error - manual configuration.

Now, you might think that's where it all ends. But it doesn't. Because the way in which an API is presented and used to enable automation and orchestration can actually introduce the very same complexity that it attempts to address in the first place.

There are basically two ways to approach provisioning and orchestration: application-driven or feature-driven.

Feature-Driven Orchestration

Feature-driven orchestration is so named because the granularity of the API is, basically, at a feature (or capability) level. What that means is that the API exposes individual configuration options and automation systems must invoke each one (often in the right order) to achieve the desired result.

Something like a simple load balancing service is simple only from the perspective of execution, not configuration. A load balancing service requires a virtual IP address (the endpoint to which clients connect); a pool of resources (each with its own IP address and, potentially, VLAN membership); an algorithm, along with any associated thresholds and metrics it may require; and health monitors to ensure compliance with availability and performance expectations.

You can imagine that, if the number of applications being load balanced by this service grows large enough, the number of repetitive steps required to configure the service becomes as unwieldy as manual configuration.

[Figure: feature-driven integration]

The same is true of other application services typically provided by the network, such as those concerned with performance, security and access. Each has a unique set of "steps" that must be performed in the right order to provision a service.

Feature-driven orchestration requires the provisioning engine (or orchestration system) to drive each and every step. That adds complexity to an already complex process, because you really are just tossing a thin veneer of "automation" over an existing method of configuration. Feature-driven orchestration is pretty much manual configuration (line by line) driven by a script. Instead of worrying about fat-fingering a parameter, now you have to worry about catching fifteen or twenty different exceptions and status results and handling them properly from a script.
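
To make that concrete, here is a minimal sketch of what feature-driven provisioning of the load balancing service described earlier might look like. Everything in it - endpoints, field names, credentials - is hypothetical rather than any particular product's API; the point is that the orchestration script owns both the ordering of the steps and the exception handling for each one.

```python
import requests

BASE = "https://adc.example.com/api"  # hypothetical management endpoint
AUTH = ("admin", "secret")            # illustrative credentials

def call(method, path, payload):
    """One guarded API call; each step can fail in its own way."""
    resp = requests.request(method, f"{BASE}{path}", auth=AUTH, json=payload)
    resp.raise_for_status()  # every step needs its own exception handling
    return resp.json()

# Step 1: the health monitor must exist before the pool can reference it.
call("POST", "/monitors", {"name": "http_check", "type": "http", "interval": 5})

# Step 2: create the pool with its load balancing algorithm.
call("POST", "/pools", {"name": "app_pool", "monitor": "http_check",
                        "lb_method": "least-connections"})

# Step 3: add every member individually - one call per server.
for ip in ("10.0.0.11", "10.0.0.12", "10.0.0.13"):
    call("POST", "/pools/app_pool/members", {"address": ip, "port": 80})

# Step 4: only now can the virtual server (the client-facing VIP) be created.
call("POST", "/virtuals", {"name": "app_vip",
                           "destination": "192.0.2.10:80", "pool": "app_pool"})
```

Multiply that sequence by every application behind the service and the script is doing exactly the line-by-line work an operator would, just faster.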

Application-Driven Orchestration

Application-driven orchestration, on the other hand, takes advantage of constructs like service templates and policies to enable a less complex method of integration with provisioning and orchestration systems.

Rather than focus on encapsulating commands into API calls as is the case with feature-driven orchestration, application-driven orchestration focuses on aggregating only the data necessary to execute a provisioning workflow. This data is encoded in a policy or template and handed over to the service to be acted upon. The service takes the policy or template and manages the provisioning process internally, ensuring that the expected order of operations is followed and eliminating the need for operators to handle exceptions and corner cases and special status codes themselves.
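
A minimal sketch of the same service provisioned application-style (again with a hypothetical endpoint and illustrative field names): all the data is gathered into a single template and handed over in one call, and the service sequences monitor, pool, members and virtual server internally.

```python
import requests

# A service template: all the data needed to provision the service,
# none of the ordering logic. Field names are illustrative.
service_template = {
    "name": "app_http",
    "virtual": {"destination": "192.0.2.10:80"},
    "pool": {
        "lb_method": "least-connections",
        "monitor": {"type": "http", "interval": 5},
        "members": [{"address": ip, "port": 80}
                    for ip in ("10.0.0.11", "10.0.0.12", "10.0.0.13")],
    },
}

# One hand-off: the service itself sequences monitor -> pool -> members -> VIP
# and deals with the corner cases and status codes internally.
resp = requests.post("https://adc.example.com/api/services",
                     auth=("admin", "secret"), json=service_template)
resp.raise_for_status()
```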

Application-driven orchestration offers a safer and more efficient approach to provisioning.

[Figure: application-driven provisioning]
An application-driven orchestration approach not only accelerates application deployment and maintains a lower risk profile, but also enables application migration across disparate environments.

Configuring a service in one environment, driven by a specific provisioning or orchestration engine, is a very specific task. Moving the application and the service to, say, a cloud environment would mean duplicating that same effort with another provisioning or orchestration engine.

An application-driven approach that leverages templates and policies, on the other hand, can make it possible to migrate an application without incurring the cost and time associated with the repetitive integration work required by feature-driven orchestration. The policy or template can migrate with the application and easily be used to provision the same services - with the same characteristics - in the cloud environment, without incurring a whole lot of time or effort.
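
Continuing the hypothetical sketch above, migration then reduces to replaying the same template against a different environment's provisioning endpoint:

```python
# Same template, different environments: only the management endpoint changes.
for endpoint in ("https://adc.dc1.example.com/api/services",     # on-premises
                 "https://adc.cloud.example.com/api/services"):  # cloud
    resp = requests.post(endpoint, auth=("admin", "secret"),
                         json=service_template)
    resp.raise_for_status()
```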

APIs are a good thing. They're a key enabler of software-defined architectures like SDDC, cloud and SDN. But API-enabling infrastructure doesn't have to mean exposing it only on a checkbox-and-radio-button basis. That approach can be valuable, but it can also lead to integration efforts that are just as complex as (or more so than) their manual counterparts. A template- or policy-based (application-driven) approach, coupled with an API through which to deliver and execute such constructs, results in a much cleaner, more consistent and stable means of integrating provisioning processes into the greater software-defined architecture.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
