
DevOps Is Not a Black and White Endeavor

Continuous Integration or Improvement?

July 31, 2014


One funny thing about DevOps is the oft-repeated claim that constant, on-the-fly change is the way of the future in operations, and that DevOps is what enables that change. While this sounds really good, and some organizations are actually doing this type of DevOps, I think it is time that, for the enterprise, we strongly question that premise.

While it is really very cool to think about moving an entire web server from a farm to the cloud with just a script, upgrading a system while it’s hot, or spinning up more instances of a server without having to configure anything, I propose that, for the average enterprise, it is simply not necessary.

I’m working on a test automation project that is being implemented for a generally available library. In this case, test automation gives standardized testing, with standardized reports that prospective users of the library can review before implementing or upgrading. That makes perfect sense. The need for this level of effort and maintenance (remember that nearly all test systems are code too) becomes much less clear, though, for a library you’ve developed internally for use amongst your own applications. While testing should be mandatory for such a library, the question becomes what the coverage needs to be. If 80% of your applications utilize 10% of the library, I think I have an automated test coverage plan for you, one that balances the cost of implementation against the benefit of having the tests run as part of the build effort.
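
To make that concrete, here is a minimal sketch (my own illustration, not from the original post) of one way to weight automated coverage toward the slice of an internal library that most of your applications actually touch. The module names and usage counts are hypothetical.

    # Hypothetical sketch: decide which parts of an internal library get
    # automated tests wired into the build, based on how many applications
    # use them. Module names and counts below are invented for illustration.

    module_usage = {
        "auth": 40,          # used by 40 of 50 internal applications
        "logging": 38,
        "reporting": 12,
        "legacy_export": 2,
        "pdf_render": 1,
    }

    total_apps = 50
    threshold = 0.50  # automate tests only for modules used by >= 50% of apps

    automate = [name for name, count in module_usage.items()
                if count / total_apps >= threshold]
    defer = [name for name in module_usage if name not in automate]

    print("Wire into the build:", automate)   # the 10% that 80% of apps rely on
    print("Defer / manual only:", defer)      # cover later if usage grows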


The same is true with moving web servers around. Let’s face it, in day-to-day operations most enterprises just don’t do this. Really don’t. So writing automation scripts to move a web server because you did it once may not be the best use of your time. If you are one of the organizations that perpetually move things around, then yes, this is a solid solution for you. But if hardware replacement cycles are the most likely determinant of when you will next need to spin up a whole new copy of this app, or move your server from one host to another, then automating that process is highly unlikely to be worth your while.

The thing is, DevOps is not a black and white endeavor. Little in IT is. Think about the organizations you’ve known (and we’ve all known them) that tried to standardize on a single language or a single database. It rarely works out, not because the decision to make such a move wasn’t serious, but because the needs of the business trump the desire of IT management to focus skill sets. DevOps is trying to simplify a complex environment with a high rate of change. That’s hard enough; don’t shoot for automating everything that moves.

Think of each automation you write as a liability. I know that sounds weird and counter to current DevOps thinking, but each script, like it or not, depends upon the (changing) environment it runs in. Unless you are in one of the one or two organizations I’ve worked with that have abstracted their entire infrastructure (with a huge man-hour investment, I might add), each change in your architecture shows up as maintenance cost for existing automation. Most of the time this cost is worth it, but ignoring it will drag your IT operations down even while you are improving things. The best you can hope for by automating little-used processes is a reduced return on your efforts. The worst could be a nightmare of perpetually out-of-date scripts that have to be modified just about every time they’re used.
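
As a rough illustration (mine, not the author’s), consider two versions of the same deploy script. Every host name, path, and command below is hypothetical, and the script only prints what it would run; the point is that each assumption baked into the first version becomes a maintenance item whenever the environment changes.

    # Illustrative only: why an automation script is a liability when it
    # encodes assumptions about a changing environment.

    def deploy_hardcoded():
        # Every assumption here (host names, document root, service manager)
        # must be edited by hand when the architecture changes.
        hosts = ["web01.corp.example.com", "web02.corp.example.com"]
        for host in hosts:
            print(f"scp app.tar.gz {host}:/var/www/html")
            print(f"ssh {host} 'sudo service apache2 restart'")

    def deploy_configured(hosts, doc_root, restart_cmd):
        # The same assumptions pulled out into parameters. The script survives
        # more environment changes, but the configuration still has to be
        # maintained; the liability shrinks, it does not disappear.
        for host in hosts:
            print(f"scp app.tar.gz {host}:{doc_root}")
            print(f"ssh {host} '{restart_cmd}'")

    if __name__ == "__main__":
        deploy_configured(
            hosts=["web01.corp.example.com"],
            doc_root="/var/www/html",
            restart_cmd="sudo systemctl restart httpd",
        )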

Tools are coming that will ease this pain – a lot – and that are more focused than what’s available today. The thing is, until they’re ready and your staff has had time to learn them, they won’t help. And like any new market, this one could take a while to shake out. So for the near term, just weigh the cost/benefit equation for each process you want to automate. If it saves operator man-hours, is used frequently, and is not too terribly complex, that’s probably where you want to start.
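
One way to make that weighing concrete is a back-of-the-envelope calculation like the sketch below. The numbers are invented for illustration; substitute your own estimates of build time, maintenance effort, and how often the process actually runs.

    # Back-of-the-envelope cost/benefit for a candidate automation.
    # All figures are hypothetical placeholders.

    def automation_payoff(build_hours, maintenance_hours_per_year,
                          minutes_saved_per_run, runs_per_year, years=2):
        """Net operator hours saved (positive) or lost (negative) over the horizon."""
        hours_saved = (minutes_saved_per_run / 60.0) * runs_per_year * years
        hours_spent = build_hours + maintenance_hours_per_year * years
        return hours_saved - hours_spent

    # Frequent, simple, saves operator time on every run: a clear win.
    print(automation_payoff(build_hours=16, maintenance_hours_per_year=4,
                            minutes_saved_per_run=20, runs_per_year=250))

    # Rarely used and complex (say, relocating a web server): it never pays
    # for itself over the same horizon.
    print(automation_payoff(build_hours=80, maintenance_hours_per_year=20,
                            minutes_saved_per_run=120, runs_per_year=2))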

Yes, we’re still talking “low-hanging fruit”. That is always the best place to start, and it gives you the biggest return on your man-hours.

After all, isn’t the point of this whole exercise to get more time at the beach?


More Stories By Don MacVittie

Don MacVittie is currently a Senior Solutions Architect at StackIQ, Inc. He is also working with Mesamundi on D20PRO, and is a member of the Stacki Open Source project. He has experience in application development, architecture, infrastructure, technical writing, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University, and an M.S. in Computer Science from Nova Southeastern University.
