DevOps Intelligence Changes the Game
By Lisa Wells

One of my favorite parts of the novel The Phoenix Project is when Bill Palmer, DevOps hero and VP of IT Operations for the fictional company “Parts Unlimited,” has a light-bulb moment about the central importance of IT to the business.

The moment comes as the company’s CFO lays out for Bill how he strives to align the goals of his department with the goals of the business. It’s here Bill starts to realize he must take a similar approach with IT. He ultimately turns to data about his delivery process to improve IT’s effectiveness and save his team from outsourcing—and a DevOps team is born.

Okay, so real-world situations might not be as dire as the fictional drama at Parts Unlimited. Still, many IT teams that are transforming to DevOps have yet to take the next step—using “DevOps Intelligence” to make data-driven decisions that help them improve software delivery.

What Is DevOps Intelligence?
DevOps intelligence is all about providing the intelligence and insight companies need to deliver software more efficiently, with less risk, and with better results. Making it part of your process is becoming crucial as both the demand to deliver better software faster and the complexity of application development keep increasing. As incentive for getting started, below are seven benefits of making DevOps intelligence a top priority in 2017 and beyond.

1. Faster Release Cycles
End-to-end intelligence about your delivery pipeline lets you optimize your processes and accelerate release cycles. With the real-time, actionable information that DevOps intelligence provides, you can identify waste, such as bottlenecks in the pipeline. You can quickly find out how systems are performing with new changes, monitor the success rate of deployments, get insight into the cycle times for each team, and see which processes are working well and which are negatively impacting time to delivery.
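To make these metrics concrete, here is a minimal sketch in Python that computes average cycle time and deployment success rate from a handful of hypothetical pipeline events. The event fields, timestamps, and outcomes are invented for illustration and are not tied to any particular tool.

```python
# Minimal sketch: derive cycle time and deployment success rate from
# hypothetical pipeline events. Field names and sample data are illustrative only.
from datetime import datetime
from statistics import mean

deployments = [
    # (commit time, deploy time, deployment succeeded?)
    ("2017-03-01T09:00", "2017-03-01T15:30", True),
    ("2017-03-02T10:15", "2017-03-03T11:00", False),
    ("2017-03-03T08:45", "2017-03-03T12:10", True),
]

def hours_between(start, end):
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

cycle_times = [hours_between(commit, deploy) for commit, deploy, _ in deployments]
success_rate = sum(1 for _, _, ok in deployments if ok) / len(deployments)

print(f"Average cycle time: {mean(cycle_times):.1f} hours")
print(f"Deployment success rate: {success_rate:.0%}")
```

In practice these figures would be captured automatically from your pipeline or release orchestration tooling rather than hard-coded, but the calculation is the same.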

2. Higher Quality Software
DevOps intelligence enables feedback loops, which are the foundation of iterative development. Feedback loops allow for creativity and are extremely valuable for experiments such as trying out new features or interface changes to make sure you’re building more of what customers want. Because failures are fast, cheap, and small, feedback loops can become an integral part of the software development and delivery process.
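One hypothetical way such a feedback loop might be closed in code is a canary-style check: enable a change for a small cohort, compare its error rate against the baseline, and back it out quickly if quality drops. The counts, threshold, and flag behavior below are assumptions made up for this sketch.

```python
# Illustrative sketch of an automated feedback loop: roll a change out to a
# small cohort, compare its error rate to the baseline, and fail fast if
# quality degrades. The counts and threshold are hypothetical.
def error_rate(errors, requests):
    return errors / requests if requests else 0.0

baseline = error_rate(errors=12, requests=10_000)  # current release, full traffic
canary = error_rate(errors=45, requests=1_000)     # new feature, 10% of traffic

# Fail fast and small: if the canary is markedly worse, switch the flag off.
if canary > baseline * 2:
    print("Feedback loop tripped: disabling feature flag and rolling back")
else:
    print("Canary healthy: expanding rollout")
```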

3. Increased Business Value of Software
DevOps intelligence allows you to quickly get actionable information about things like which features customers are using, which processes they’re abandoning, or whether they’re changing their behavior. DevOps intelligence can also be mined after a release to support impact analysis so you can find out whether what you’re delivering is actually of value to your customers and make smarter decisions about future offerings.
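As a rough sketch of this kind of post-release impact analysis, the snippet below tallies hypothetical usage events per feature before and after a release and reports the change. The event names and counts are invented for illustration.

```python
# Sketch: compare feature usage before and after a release to gauge impact.
# Event names and counts are hypothetical.
from collections import Counter

events_before = ["search", "export", "search", "legacy_report", "search"]
events_after = ["search", "export", "export", "search", "new_dashboard", "export"]

before, after = Counter(events_before), Counter(events_after)

for feature in sorted(set(before) | set(after)):
    delta = after[feature] - before[feature]
    print(f"{feature:15s} before={before[feature]:2d} after={after[feature]:2d} change={delta:+d}")
```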

4. Greater Transparency
Insight into the entire pipeline provides end-to-end transparency. Clear, real-time visibility into the process makes it easier for you to understand why you are (or are not) hitting your goals, justify requests for additional time and resources, and make the case that readiness rather than calendar dates should drive releases. Transparency also means that non-IT stakeholders can easily track progress at any given point in the process and feel empowered to make business decisions based on real-time data without having to go through IT.

5. Addition of Proactive and Predictive Management to the Delivery Process
DevOps intelligence gives you access to both real-time and historical information about your applications, people, environments, and more. Real-time, actionable insight delivers advantages such as early warning of what might fail so you can prevent it rather than wasting time firefighting. Historical data lets you analyze trends and predict behavior based on past results.
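A simple illustration of using historical data predictively is fitting a linear trend to past results, as in the sketch below. The weekly incident counts are invented, and a real implementation would likely use richer models and real telemetry.

```python
# Sketch: fit a simple linear trend to historical weekly incident counts to
# anticipate next week's load. The numbers are invented for illustration.
weekly_incidents = [14, 12, 13, 10, 9, 8]  # oldest first, most recent last

n = len(weekly_incidents)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(weekly_incidents) / n
slope = (
    sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, weekly_incidents))
    / sum((x - x_mean) ** 2 for x in xs)
)
intercept = y_mean - slope * x_mean

forecast = intercept + slope * n
print(f"Trend: {slope:+.2f} incidents per week; next-week forecast: {forecast:.1f}")
```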

6. Better Support of Compliance Requirements
Data collected about your processes shows not just how those processes can be optimized, but what happened when, in an auditable fashion. Were processes followed? Who did what and when? What failed? What steps were taken, by whom, when, and were they correct? DevOps intelligence helps you stay on top of your compliance requirements and fix problems that might threaten your ability to meet them.
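As one possible shape for such an audit trail, the sketch below records who did what, when, and with what outcome as append-only JSON records. The schema and field names are assumptions for illustration, not a prescribed format.

```python
# Sketch: append-only audit records capturing who did what, when, and whether
# the step succeeded. The schema is an assumption for illustration.
import json
from datetime import datetime, timezone

def audit_event(actor, action, target, outcome):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "target": target,
        "outcome": outcome,
    }

audit_log = [
    audit_event("jdoe", "approve_change", "release-42", "approved"),
    audit_event("deploy-bot", "deploy", "release-42", "success"),
]

# Persisting records as JSON lines keeps the trail easy to query during an audit.
print("\n".join(json.dumps(event) for event in audit_log))
```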

7. Stronger DevOps Culture
Intelligence about your delivery process helps strengthen your DevOps culture by empowering people, both inside and outside of IT, to effect change and be part of efforts to improve processes and products. DevOps intelligence provides insight that shines a light on their accomplishments so they can be celebrated. The ability to share data with people across the business reinforces the fact that they have an important role in making impactful decisions that help the company.

As companies improve their DevOps maturity and implement release orchestration, they’re building the infrastructure they need to automatically capture and analyze DevOps data and turn it into actionable information. Armed with this intelligence, IT will be well-positioned to fully support the goals of the business.

The post DevOps Intelligence Changes the Game appeared first on XebiaLabs.
