
Data Demands of DevOps By @Delphix | @DevOpsSummit [#DevOps]

Technologies such as Chef, Puppet, and Docker have automated environment standup and configuration

Today, the demand for new applications is growing at an unprecedented rate throughout lines of business and across industries. Customer expectations for mobile and e-commerce capabilities are transforming software development speed and quality into a competitive differentiator for even the most unlikely businesses. For existing software development shops, the proliferation of platforms, increasing need for total global uptime, and accelerating pace of industry disruption by fast-paced startups have all put increased pressure on development. In every vertical, more code must be shipped than ever before, faster and at higher quality.

These pressures have forced a search for new methods, practices, and solutions that allow organizations to accelerate application development and maintain quality standards without additional resources. The DevOps movement has shown particular promise in meeting these challenges: in an IDC survey, 43% of Fortune 1000 respondents had adopted DevOps practices and another 40% were investigating them.

What Is DevOps?
According to Gartner, DevOps is "not a market, but a tool-centric philosophy that supports a continuous delivery value chain." DevOps supports continuous delivery and a fast flow of features from concept to customer with tools that decrease feature friction and accelerate feedback at every phase of the process. These objectives are achieved with solutions that accelerate environment standup and enhance environment reproducibility.

Environment standup times are the key constraint in software development timelines. While code can be easily versioned, shared, and pushed with tools like Git, environment provisioning is a complex, manual process, requiring multiple touchpoints from administrators and extensive, delicate configuration work. Business users have defined the needs and development teams have transformed their workflows, but everyone still waits on environments.

Environment reproducibility is the key constraint on friction and feedback, just as standup times are for initial code work. When dev and test environments are faithful copies of production, feature functionality is tested effectively and often. Errors are detected early and remediation is performed on the fly, preventing huge delays in the final testing stages or catastrophes in production. Features move seamlessly from environment to environment. Having parallel identical environments multiplies developer flexibility, allowing low-cost experimentation in both Dev and Test phases. But until recently, dev and test environments could not be such faithful copies.

The DevOps ecosystem of tools is transforming that landscape. Technologies such as Chef, Puppet, and Docker have automated environment standup and configuration. This automation both accelerates environment standup and enhances environment consistency. Replacing manual configuration tasks with automated processes reduces the load on ops staff and accelerates standup timelines, while automating or containerizing app states ensures that each developer or tester is working on an identical environment, maximizing consistency.
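As an illustration, the sketch below uses the Docker SDK for Python to stand up a throwaway application environment from code rather than by hand. The image name and settings are hypothetical, and Chef or Puppet would express the same idea as declarative recipes or manifests rather than API calls.

    # Minimal sketch: environment standup as code (assumes the Docker SDK
    # for Python is installed and a Docker daemon is running).
    import docker

    client = docker.from_env()

    # Image name, environment variables, and port mapping are hypothetical.
    app = client.containers.run(
        "example-app:latest",            # pre-built application image
        environment={"APP_ENV": "test"},
        ports={"8080/tcp": 8080},
        detach=True,
    )

    print(f"Test environment up: {app.short_id}")

Because the environment is expressed as code, every developer and tester who runs it gets the same configuration, which is exactly the consistency the paragraph above describes.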

The Data Gap
However, even organizations with cutting-edge DevOps practices are finding that standup and reproducibility constraints still apply to data.

A tool like Docker may be able to stand up a lightweight application instance with consistent configuration, using minimal hardware and requiring no ops time. However, applications require data, and not only when they are deployed. Dev and test environments require full and faithful copies of production data. And they need that data to be delivered at the same pace and with the same automation as VMs are configured and cloud infrastructure is made available.

Current data management technology is not up to the challenge. With existing solutions, you can have your data slowly, at poor quality, or both.

If high-quality data is the highest priority, organizations can opt to create full clones of production data. But cloning takes as much time as, or more than, all other stages of environment setup combined. To produce a full clone of production, a backup admin has to extract the data from production, system and storage administrators need to authorize and provision infrastructure, and (if the data is relational) a DBA must set up the database. Because each copy is full-sized, infrastructure constraints limit how many copies are available for experimentation or on-demand testing. The slow timeline hurts data quality as well. In a continuous deployment world, the features in production today are not the same as those in production last month or even last week, and the data changes even faster. So even a perfect copy of production as it stood weeks or months ago (and traditional data management techniques take that long) is a poor approximation of the data today, and features succeed or fail depending on how they interact with current data.
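To make the cost concrete, here is a minimal sketch of the traditional full-clone workflow, assuming a PostgreSQL source. The host and database names are hypothetical, and in practice each step is usually owned by a different team, which is where the days or weeks go.

    # Minimal sketch of a traditional full clone, assuming PostgreSQL.
    # Host and database names are hypothetical; each step typically
    # belongs to a different team (backup admin, storage admin, DBA).
    import subprocess

    # 1. Backup admin exports a full dump from production
    #    (hours for a large database).
    subprocess.run(
        ["pg_dump", "-Fc", "-h", "prod-db.internal", "-d", "appdb",
         "-f", "appdb.dump"],
        check=True,
    )

    # 2. System/storage admins provision a host with capacity for a
    #    full copy (not shown: tickets, approvals, infrastructure work).

    # 3. DBA creates the target database and restores the dump
    #    (hours again).
    subprocess.run(["createdb", "-h", "dev-db.internal", "appdb_dev"],
                   check=True)
    subprocess.run(
        ["pg_restore", "-h", "dev-db.internal", "-d", "appdb_dev",
         "appdb.dump"],
        check=True,
    )

Even fully scripted, the dump and restore run at the speed of the data volume, which is why the resulting copy is already stale by the time developers see it.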

If, on the other hand, rapid access to data is the priority, organizations can employ shared data environments. In theory, sharing provides efficiency benefits by giving multiple teams immediate, concurrent access to a common data environment. In practice, conflicts occur whenever more than one stakeholder contends for the same resources at the same time. The result is often a low-quality, chaotic environment in which data changes from different projects collide, yielding unreliable code and untrustworthy tests.

Solutions like subsetting or synthetic data are often also mentioned in discussions about providing data to developers and testers. However, they do not address the need for full and faithful production copies at all. By definition, a subset or a synthetic data set is not an accurate copy of production. That means that testing on a full production copy must be relegated to a special pre-production phase of the SDLC, which undermines the DevOps emphasis on consistent environments, regular tests, and continuous adjustments to hit project targets.

DevOps Data Tools
Challenges like these call for a new set of data management tools that bring data delivery up to speed with DevOps needs. In the second part of this series, we'll look at the new category of solutions emerging to meet this need.

More Stories By Louis Evans

Louis Evans is a Product Marketing Manager at Delphix. He is a subject-matter expert developing content, surveys, and best practices pertinent to the DevOps community. Evans is also a speaker at DevOps-focused industry events. He is a graduate of Harvard College, with a degree in Social Studies and Mathematics.
