
You have to plan your Continuous Delivery pipeline with quality in mind from the outset

Why Continuous Delivery is Nothing Without Real-Time Test Analysis

By Victor Clerc

TL;DR: There is no point shipping fast if everything is broken!

Pushing frequent releases of high-quality software to customers is beneficial for everyone. There's a great deal of business value in delivering software to the market faster than your competitors. Ideas can be tried out, then quickly rejected or pursued further. Customers can see their feedback shaping the product. Flaws, defects, and omissions can be addressed without delay.

But, setting up a Continuous Delivery pipeline is about more than speed. How do you ensure that things don’t start breaking all over the place?

It’s an established fact that the later we encounter a defect or a problem in software development, the more expensive and difficult it is to fix. Accurately measuring quality and building it into the pipeline may seem like a lot of upfront effort, but it will pay dividends quickly.

Step 1: Build solid foundations

You have to plan your Continuous Delivery pipeline with quality in mind from the outset. The only way to effectively do that is to design tests before development really begins, to continually collect metrics, and to build a test automation architecture integrated into your Continuous Delivery pipeline. The test automation architecture defines the setup of test tools and tests throughout the pipeline and should support flexibility and adaptability to help you meet your business objectives.

This test automation architecture should make it easy to select just the right tests to run as your software flows through the pipeline toward release: from unit tests and code-level security checks, through functional and component testing, to integration and performance testing. The further the software progresses along the pipeline, the greater the number of dependencies on other systems, data, and infrastructure components, and the more difficult it is to harmonize these variables. Making sensible selections of which tests to run at each point in the pipeline therefore becomes more and more important, so that each run stays focused on the area of interest.
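
To make that concrete, here is a minimal sketch of what stage-aware test selection could look like in practice. It assumes pytest-style markers, and the stage names and marker names are invented purely for illustration; nothing here is prescribed by the approach above.

```python
# Minimal sketch of stage-aware test selection using pytest markers.
# The stage names and marker names below are illustrative assumptions.
import subprocess

# Map each (hypothetical) pipeline stage to the pytest marker expression
# describing which tests are worth running at that point.
STAGE_MARKERS = {
    "commit":      "unit or security_static",
    "acceptance":  "functional or component",
    "integration": "integration",
    "release":     "performance or smoke",
}

def run_stage_tests(stage: str) -> int:
    """Run only the tests selected for the given pipeline stage."""
    marker_expr = STAGE_MARKERS[stage]
    # Equivalent to: pytest -m "<marker_expr>" --junitxml reports/<stage>.xml
    result = subprocess.run(
        ["pytest", "-m", marker_expr, "--junitxml", f"reports/{stage}.xml"]
    )
    return result.returncode

if __name__ == "__main__":
    raise SystemExit(run_stage_tests("commit"))
```

In a real pipeline the same mechanism would typically be driven by the CD tool itself rather than a standalone script, but the principle is the same: each stage asks only for the tests that matter at that point.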

Step 2: Fail fast

If we wait until functionality has been developed to write automated functional tests, we are creating a bottleneck. Designing tests during or at the end of a sprint is delaying feedback that we need right now. Why not apply the same test-driven development mindset we employ for unit tests to functionality? So, design all tests as soon as possible: refining the backlog for subsequent sprints should include discussing and designing tests.

The development team is greatly helped when, for each piece of functionality, a set of examples of expected system behavior is available. These examples, following some syntactic guidelines and using readily available tools, can serve as automated acceptance tests. Starting with failing tests and working to eliminate them keeps a laser focus on what developers should be doing. For this to work well, all stakeholders must get together at the beginning to design the acceptance tests.
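
As a rough illustration, the examples agreed during backlog refinement might be captured directly as a failing acceptance test before the feature exists. The feature, module, and example values below are hypothetical.

```python
# Hypothetical acceptance test written before the feature is implemented:
# the examples come from a backlog-refinement discussion with stakeholders,
# and the test fails until calculate_discount() is actually built.
import pytest

from pricing import calculate_discount  # does not exist yet; fails until implemented

# Each tuple is one concrete example of expected system behavior:
# (order total, loyalty years) -> expected discount percentage
EXAMPLES = [
    (100.00, 0, 0),    # new customer, no discount
    (100.00, 3, 5),    # 3+ years of loyalty earns 5%
    (1000.00, 3, 10),  # large orders from loyal customers earn 10%
]

@pytest.mark.parametrize("total, years, expected_pct", EXAMPLES)
def test_discount_examples(total, years, expected_pct):
    assert calculate_discount(total, years) == expected_pct
```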

The alternative is a drip-feed of failure as new stakeholders climb on board with fresh concerns as the software gets closer to release. The later a failure occurs, the more disruptive it is, and the more difficult it is to do anything to alleviate the relevant stakeholder’s concerns. Pull everyone in at the start, design the tests together, run the most important tests first, and start collecting the feedback you need about software quality from day one.

Step 3: Automate and optimize the process

A concrete set of acceptance criteria in the form of automated tests gives you a clear signal about the health of your software. The easier it is to run these tests, the better, which is why you should automate where possible. There will be some intangibles that will prove tough to automate, but you may need them for a true measure of quality. Not all acceptance tests can or should be automated, but automating functional tests makes a great deal of sense.
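
For instance, an automated functional test might exercise the system through its public API, along these lines. The endpoint, payload, and response shape shown here are assumptions made for the sake of the example.

```python
# Sketch of an automated functional test against a running service.
# The base URL, endpoint, and response shape are illustrative assumptions.
import json
import urllib.request

BASE_URL = "http://localhost:8080"  # hypothetical test environment

def test_create_order_returns_id():
    payload = json.dumps({"sku": "ABC-123", "quantity": 2}).encode("utf-8")
    request = urllib.request.Request(
        f"{BASE_URL}/orders",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        assert response.status == 201
        body = json.load(response)
        assert "orderId" in body
```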

From time to time it will be necessary to add new example test cases as functionality is extended. It will also be necessary to optimize the regression set, pruning it so that it does not keep growing and accumulating superfluous tests.

You can’t test everything all the time. Be pragmatic when you define which tests to run in any given situation. You need flexibility to select the right tests for inclusion and exclusion according to a variety of quality attributes.
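
One pragmatic way to do that is to attach quality attributes to each test and filter on them for each run. The attributes, values, and time budget in this sketch are illustrative assumptions.

```python
# Sketch of attribute-based test selection: each regression test carries
# metadata, and a run includes only the tests that match the current needs.
# Attribute names and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    component: str         # area of the system the test covers
    risk: str              # "high", "medium", or "low"
    avg_duration_sec: float

CATALOG = [
    TestCase("test_checkout_happy_path", "checkout", "high", 12.0),
    TestCase("test_checkout_edge_cases", "checkout", "medium", 95.0),
    TestCase("test_profile_rendering",   "profile",  "low",   4.0),
]

def select_tests(catalog, components=None, min_risk="low", time_budget_sec=None):
    """Pick tests by component, risk level, and an overall time budget."""
    risk_order = {"low": 0, "medium": 1, "high": 2}
    picked, spent = [], 0.0
    # Highest-risk tests first, so they survive a tight time budget.
    for test in sorted(catalog, key=lambda t: -risk_order[t.risk]):
        if components and test.component not in components:
            continue
        if risk_order[test.risk] < risk_order[min_risk]:
            continue
        if time_budget_sec and spent + test.avg_duration_sec > time_budget_sec:
            continue
        picked.append(test)
        spent += test.avg_duration_sec
    return picked

# Example: a quick checkout-focused run capped at two minutes.
print([t.name for t in select_tests(CATALOG, components={"checkout"},
                                    min_risk="medium", time_budget_sec=120)])
```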

Step 4: Maintain a real-time quality overview

If a test is going to fail, it’s better to know about it now. All the stakeholders should know about failures as soon as possible. If the software is passing your tests and meeting the acceptance criteria then you can feel confident about the overall quality and proceed with the release.

Keeping things specific with examples, and creating these acceptance tests as early as possible, will increase the efficiency of your software development process. Furthermore, when adequate monitoring can point to the likely causes of failing tests, defects can be corrected that much sooner. Trend analysis also has predictive value: it can help you take measures before quality thresholds, such as a maximum duration for test execution, are breached.

Automated continuous qualification of all relevant test results (across all relevant test tools) is what enables you to provide real-time quality feedback. With this real-time quality feedback built into your Continuous Delivery pipeline, you can ensure that your standards never slip.
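
As a minimal sketch of such continuous qualification, the snippet below aggregates JUnit-style XML reports produced by different test tools, computes the overall pass rate and execution time, and breaks the pipeline step when a threshold is breached. The report locations and threshold values are assumptions.

```python
# Sketch of real-time quality feedback: aggregate JUnit-style XML results
# produced by different test tools and enforce simple quality thresholds.
# Report locations and threshold values are illustrative assumptions.
import glob
import sys
import xml.etree.ElementTree as ET

PASS_RATE_THRESHOLD = 0.98     # minimum acceptable pass rate
MAX_TOTAL_DURATION_SEC = 900   # maximum acceptable test execution time

def summarize(report_glob="reports/**/*.xml"):
    total = failures = errors = 0
    duration = 0.0
    for path in glob.glob(report_glob, recursive=True):
        for suite in ET.parse(path).iter("testsuite"):
            total += int(suite.get("tests", 0))
            failures += int(suite.get("failures", 0))
            errors += int(suite.get("errors", 0))
            duration += float(suite.get("time", 0.0))
    pass_rate = (total - failures - errors) / total if total else 0.0
    return pass_rate, duration

if __name__ == "__main__":
    pass_rate, duration = summarize()
    print(f"pass rate: {pass_rate:.1%}, duration: {duration:.0f}s")
    if pass_rate < PASS_RATE_THRESHOLD or duration > MAX_TOTAL_DURATION_SEC:
        sys.exit(1)  # break the pipeline: quality threshold breached
```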

Join the XL Test beta programme!

We have built XL Test to allow you to achieve these steps. If you’re interested and would like to join our beta programme, please drop us an email. Start making sense of all your test data and get real-time quality feedback built into your Continuous Delivery pipeline today!


