
Creating a Deployment Model for Scripted DB Updates

By Dave Roberts

It is common for enterprises to work with hundreds of applications, each of which calls for a number of database scripts to be executed in a particular order. Needless to say, executing and organizing these scripts can become complex and tedious. Instead of creating new steps for every application release, modeling your scripted database updates organizes script execution into an automated, repeatable process that allows for quick changes and stability within the release.

Your application may call for a number of scripts to be executed in a particular order, where the flow bounces from one schema to another, then back to the first, then to another one altogether. Take a close look at the example below:

Let’s look at how to model the above scenario in XL Deploy.

To begin, we’ll model the infrastructure.

Each schema is modeled by a database client that connects to that schema on the proper database. Each schema could be in the same database or in different databases. Although the database is one of the properties of this client object, the schema is not, so we’ll rely on tag-matching to get the correct scripts deployed to it.


Next, we’ll create the second part of our model, an Environment with the following containers as members:

Finally, the third part of our model is the application package. We start with a repo containing a number of SQL script files, along with a mapping.txt file that gives us a mapping and ordering between files and schemas.

Here the mapping tells us to run the 1*.sql files against schema A, then the 2*.sql files against schema B, and so on, to produce the deployment step list we started with.
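For concreteness, such a mapping.txt might look like the following. Only the first two lines are fixed by the description above; the schema names and the later lines are hypothetical, chosen to show the flow bouncing between schemas:

```
1 SCHEMA_A
2 SCHEMA_B
3 SCHEMA_A
4 SCHEMA_C
5 SCHEMA_B
```

Each line pairs a leading file-name digit with the schema its scripts should run against.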

But remember that XL Deploy doesn’t deploy individual scripts; rather, it handles packages of scripts, and within each package the scripts are executed in lexicographical order. See https://docs.xebialabs.com/xl-deploy/concept/database-plugin.html.
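Lexicographical ordering is worth double-checking once numeric prefixes reach two digits, because "10" sorts before "2". A quick sketch (file names are illustrative):

```shell
# Lexicographic order is what the bundling relies on, so keep numeric
# prefixes zero-padded to the same width to avoid ordering surprises.
printf '%s\n' 11-select.sql 2-insert.sql 10-select.sql | sort
# 10-select.sql sorts first and 2-insert.sql sorts last
```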


To package up this repository into objects that XL Deploy can handle, we’re going to run a little packager script that interprets the mapping file and bundles the scripts appropriately. And, we’re going to run it all under Jenkins to take advantage of the XL Deploy post-build action to create a Deployment Package and publish it to XL Deploy’s repository. In the course of that, we’ll place tags on the deployable objects we create in order to facilitate the correct mappings. We can also invoke a deployment from Jenkins, but we’ll stop short of that so we can analyze the resulting mappings in detail.

So, first, the packager script:

#!/bin/sh
# Packager: $1 = mapping file (mapping.txt), $2 = properties file for
# the EnvInject plugin (the SQL_OBJ_n_TAG variable names are illustrative).
SEQ=1
while read NUM SCHEMA; do
  echo "$NUM $SCHEMA"
  mkdir sql-obj-$SEQ
  cp $NUM*.sql sql-obj-$SEQ
  (cd sql-obj-$SEQ && zip ../sql-obj-$SEQ.zip *.sql)
  echo "SQL_OBJ_${SEQ}_TAG=$SCHEMA" >> "$2"
  SEQ=`expr $SEQ + 1`
done < "$1"

Jenkins will invoke it with two parameters. $1 is the mapping.txt file; as we saw above, it is committed to the same source-control repo as the SQL files. $2 is an output file, buildVariables.properties, which we’ll pass to the Jenkins EnvInject (Inject environment variables) plugin to help tag our objects later.

Of course, we could get much more sophisticated here as to how we group the source *.sql files into bundles, perhaps using number ranges or regex matching, but this simple script conveys enough of the idea.
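As one hypothetical refinement (not part of the original packager), the first mapping field could be a shell glob rather than a single digit, letting one bundle span a numeric range:

```shell
# Sample inputs: the first mapping field is now a glob pattern, so the
# line "1[0-2]*.sql SCHEMA_A" selects scripts prefixed 10 through 12.
touch 10-a.sql 11-a.sql 12-a.sql 25-b.sql
printf '1[0-2]*.sql SCHEMA_A\n2*.sql SCHEMA_B\n' > mapping.txt

SEQ=1
while read PATTERN SCHEMA; do
  mkdir -p sql-obj-$SEQ
  cp $PATTERN sql-obj-$SEQ    # left unquoted so the glob expands here
  SEQ=`expr $SEQ + 1`
done < mapping.txt
```

The rest of the packager (zipping, tag output) would stay the same.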

The result of this script is a zip file for each line of the mapping file, numbered sequentially:  sql-obj-1.zip, sql-obj-2.zip, sql-obj-3.zip, sql-obj-4.zip, and sql-obj-5.zip. Each zip contains the SQL scripts we instructed the packager to include. In this case, the first argument on the mapping line is matched against the first character of the SQL file’s name, so we bundle scripts 10-13 into the first object, 20-23 into the second object, and so on.
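To sanity-check that grouping, here is a self-contained sketch with a throwaway set of files (names are illustrative; the real packager additionally zips each directory into sql-obj-N.zip):

```shell
# Fabricate a tiny repo: two mapping lines, two scripts per group.
printf '1 SCHEMA_A\n2 SCHEMA_B\n' > mapping.txt
touch 10-select.sql 11-select.sql 20-insert.sql 21-insert.sql

# Same grouping logic as the packager, inlined for the demo.
SEQ=1
while read NUM SCHEMA; do
  mkdir -p sql-obj-$SEQ
  cp $NUM*.sql sql-obj-$SEQ
  SEQ=`expr $SEQ + 1`
done < mapping.txt
```

Afterwards, 10-select.sql and 11-select.sql sit in sql-obj-1 and the 2x scripts in sql-obj-2, mirroring the zip contents described above.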

Note that the matching-number here (NUM) and the sequence number (SEQ) are independent — for this example they happen to coincide, but I’ll show later an example where they don’t.

Also resulting from the script is the buildVariables.properties file, where the right side of each assignment is a tag for XL Deploy:
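It might look like this, with one assignment per mapping line (the variable names and schema tags here are hypothetical):

```
SQL_OBJ_1_TAG=SCHEMA_A
SQL_OBJ_2_TAG=SCHEMA_B
SQL_OBJ_3_TAG=SCHEMA_A
SQL_OBJ_4_TAG=SCHEMA_C
SQL_OBJ_5_TAG=SCHEMA_B
```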


We use the Inject Environment Variables (EnvInject) plugin (https://wiki.jenkins-ci.org/display/JENKINS/EnvInject+Plugin) to set variables holding the tag values.

Finally, we come to the post-build action that creates an application package. We set the app name and version:

And then we have a sql.SqlScripts deployable for each line in the mapping.txt file that makes use of an injected variable for its tag. Here is the first of the five needed for this example. Change the ‘1’ in the Name, Tags variable, and Location to ‘2’ for the next one, and then to ‘3’, ‘4’, and ‘5’ for the rest.

Key to our ordering scenario is that these artifacts will be executed in order by the Name field, sql-obj-1, sql-obj-2, etc., while the individual scripts within each one will be ordered by their file names, 10-select.sql, 11-select.sql, etc.


Note a limitation here: we need one deployable for each line in mapping.txt, and we don’t know how many we will need ahead of time. An easy workaround is to take a liberal guess at how many you’ll need and configure a fixed number of objects. The unused ones will be empty and tag-less, so they won’t play a role in the deployment. A dynamic approach that adds a variable number of deployables is beyond the scope of this post.

When we run the Jenkins build, we will get the next package into XL Deploy’s repo, and then the drag-and-drop deployment action will give the mapping and ordering we expect:

Finally, let’s change the mapping like this to show the independence of the matching and sequencing numbers. This mapping yields the result shown, demonstrating just how flexible this approach is!


The post Creating a Deployment Model for Scripted DB Updates appeared first on XebiaLabs.
