DevOps and SQL Review By @Datical | @DevOpsSummit [#DevOps]

DevOps patterns are in the constant crusade to bring high-quality products to market faster

Automating SQL Review to Save Time and Money

I’ve spent the majority of my tech career in startups. I love the fast pace, the opportunity to learn new things, and the sense of accomplishment that comes from bringing a successful new product to market. I began my career in Quality Assurance. In startups, you rarely enjoy the low ratio of Developers to QA Engineers that you might in a large enterprise. As a QA engineer in a startup, your inbox is always far fuller than your outbox. You are the last gate before the next release, so you’re always under the microscope. In an early stage startup you are most likely also the “Customer Support” team, so when an issue hits production you become VERY popular.

As someone in that position, I always kept an eye out for the right tools to lighten my load without sacrificing my own quality standards. That is how I came across FindBugs about 10 years ago. The first time I ran it and shared the output with the development engineers on my team, they felt the tool emitted more false positives or “nitpicky” patterns than true bugs. But over time, as we tweaked and extended the checks to cover our specific needs and correlated the FindBugs data with actual counts of bugs found in test and production, FindBugs became an integral part of our nightly and on-demand builds. The reports were an excellent early indicator of potential issues and allowed developers to rectify misdeeds before we used up testing cycles or troubleshooting time in operations. The developers on my team also committed fewer and fewer infractions, as the daily reminders from our build system helped them turn bad habits into safer, better performing, more stable code. Release cycles shortened, product quality improved, and customer satisfaction rose, proving that an ounce of prevention really is worth a pound of cure.

As Enterprise IT embraces agile development practices and adopts DevOps patterns in the constant crusade to bring high-quality products to market faster, DBAs are really starting to feel the pinch. The description above of a QA Engineer in a software startup is apt. With more frequent releases, the DBA’s inbox of SQL scripts to write, review, modify or optimize is always fuller than her outbox. The DBA is the last line of defense for data quality, data security, and data platform performance and is therefore under constant scrutiny. When there is a production outage, the DBA is among the first called to respond.

One of the most time-consuming tasks for the Fortune 50 DBAs we work with is SQL review. Some DBAs spend 70% of their time manually reviewing SQL scripts. They are checking for the same things in SQL that tools like FindBugs look for in Java code: code patterns that indicate logical problems, security flaws, performance issues, and non-compliance with internally defined best practices or externally mandated regulations.
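To make the analogy concrete, here is a minimal sketch of what rule-based SQL review looks like mechanically. The rule names, patterns, and messages are illustrative stand-ins, not Datical DB's actual rule set, and real engines parse SQL rather than pattern-match it:

```python
import re

# Hypothetical rules: (name, pattern, message). A real engine would use a
# SQL parser instead of regular expressions, but the shape is the same.
RULES = [
    ("no-select-star", re.compile(r"\bSELECT\s+\*", re.IGNORECASE),
     "SELECT * hides schema changes; list columns explicitly"),
    ("no-grant-to-public", re.compile(r"\bGRANT\b.*\bTO\s+PUBLIC\b",
                                      re.IGNORECASE | re.DOTALL),
     "privilege grants to PUBLIC violate least-privilege policy"),
    ("no-delete-without-where", re.compile(r"\bDELETE\s+FROM\s+\w+\s*;",
                                           re.IGNORECASE),
     "DELETE without a WHERE clause removes every row"),
]

def review(sql_script: str):
    """Return a list of (rule name, message) violations found in the script."""
    findings = []
    for name, pattern, message in RULES:
        if pattern.search(sql_script):
            findings.append((name, message))
    return findings
```

Run nightly against every incoming script, even a crude checker like this surfaces the routine infractions before a human reviewer ever opens the file.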

It’s clear that DBAs need a tool that does for them what FindBugs did for my team a decade ago. Static analysis for SQL is nothing new, but current offerings only go so far. Typically, they evaluate SQL statements with no contextual sensitivity. This omission severely limits the productivity and quality gains that can be achieved, because so much of Database Lifecycle Management is being aware of Who is doing What, Where and When. For example, an organization may allow privilege grants and INSERT statements in a TEST environment, but never allow such activity in an automated session in PROD. Any static analysis tool for SQL must take environmental parameters into consideration.
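The TEST-versus-PROD example above can be sketched as an environment-aware policy check. The policy table and function names here are hypothetical illustrations of the idea, not Datical DB's model:

```python
# Hypothetical policy: statement types forbidden per environment when the
# session is automated (e.g., run by a deployment pipeline, not a human DBA).
FORBIDDEN_IN = {
    "PROD": {"GRANT", "INSERT"},   # never allowed in automated PROD sessions
    "STAGE": {"GRANT"},
    "TEST": set(),                 # early-stage environments allow everything
}

def first_keyword(statement: str) -> str:
    """Extract the leading SQL keyword, e.g. 'GRANT' or 'INSERT'."""
    return statement.strip().split(None, 1)[0].upper()

def allowed(statement: str, environment: str, automated: bool = True) -> bool:
    """Check one statement against the policy for the target environment."""
    if not automated:
        return True  # interactive sessions are reviewed by a human
    return first_keyword(statement) not in FORBIDDEN_IN.get(environment, set())
```

The same script passes in TEST and fails in PROD; that single bit of context is what statement-only analyzers cannot express.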

Also complicating matters is the nature of database ‘versioning.’ While your application is packaged, versioned and replaced wholesale from release to release, the database schema that supports your application is persistent and evolves over time. What’s more, external compliance standards (SOX or PCI DSS, for example) and internal audit requirements often dictate that incremental changes to the database be rigidly controlled and tracked in a well-defined process. This means the DBA must also confirm, through a manual process of reviewing SQL for the appropriate comments, that each change can be traced to its cause and that its application can be traced through each environment.
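That manual comment review is itself mechanical enough to automate. A minimal sketch, assuming a convention (hypothetical here) that every change script must carry a leading comment referencing a change ticket such as `-- CHG-1234`:

```python
import re

# Hypothetical traceability convention: scripts reference a JIRA- or
# CHG-numbered ticket in a SQL line comment. Adjust the pattern to match
# your organization's change-tracking system.
TICKET = re.compile(r"--\s*(?:JIRA|CHG)-\d+")

def is_traceable(sql_script: str) -> bool:
    """True if the script references a change ticket in a comment."""
    return bool(TICKET.search(sql_script))
```

A check like this turns "did the DBA remember to look for the ticket comment?" into a guaranteed, repeatable gate.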

The Datical DB Rules Engine was designed and implemented to meet the unique challenges of SQL review and static analysis. Here are just a few of the ways Datical DB enables acceleration through static analysis, safely and sanely.

  • Models Make for Powerful Evaluation – Datical DB abstracts the application schema into a strictly defined and validated object model. Authoring powerful rules is fast and straightforward. Once written, rules are enforced every time a Forecast or Deploy is performed on any database in the lifecycle.
  • Environmentally Aware Change Validation – The model includes information about the client environment and the various database instances in your application’s lifecycle. Your rules can allow maximum flexibility in early-stage environments and maximum security in sensitive environments simultaneously.
  • Easily Confirm Internal & External Audit Requirements – In Datical DB, everything you need to remain in compliance with external and internal audit requirements is tied tightly to individual changes in the Data Model.  Manual review to confirm auditability of change is replaced with automated checks that are executed every time you (or your automation frameworks) Forecast or Deploy.
  • Automatically Validate What’s Important to YOU – Datical DB lets you customize analysis to cover internal best practices like naming conventions, SQL DOs and DON’Ts, and object dependency management.
  • Automate The Boring Stuff. Get Back To The Fun Stuff – Like many static analysis tools for code, Datical DB integrates into your build and deployment systems in a few mouse clicks. Now every time you build or promote an application, Rules validations are performed and a report is generated for dissemination throughout the organization. Your DBAs, having cut the time they spend reading SQL on screen, can concentrate on more strategic projects and problems.
  • Better Coding Means Fewer Bugs – DBAs author rules and share them with development, which then has a codified repository of what is and is not acceptable in their organization. Fewer bugs escaping DEV saves time and money.
  • Increasing Operations Involvement In Database Development – The Rules Engine is tightly integrated with Datical DB Forecast.  This feature allows you to simulate database change without actually altering the target database.  When DBAs share their Rules with Operations, Operations can run nightly Forecasts against STAGE or PROD to ensure that what’s currently in DEV or TEST will comply with the stricter validations performed downstream, once again finding problems earlier in the lifecycle when they are cheaper and easier to fix.
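The build-integration pattern in the list above amounts to a gate: run the rule validations on every build or promotion and fail the pipeline on findings. A minimal sketch, where `check_script` is a hypothetical stand-in for whichever rule engine the team wires in:

```python
# Sketch of a build-system gate: fail the pipeline when any finding exists,
# so problems surface before a deploy rather than after.
def check_script(sql: str) -> list:
    # Hypothetical single rule for illustration: flag TRUNCATE statements.
    return ["TRUNCATE found"] if "TRUNCATE" in sql.upper() else []

def gate(scripts: dict) -> int:
    """Check a {path: sql} mapping; return 0 when clean, 1 on any finding."""
    failed = False
    for path, sql in scripts.items():
        for finding in check_script(sql):
            print(f"{path}: {finding}")
            failed = True
    return 1 if failed else 0
```

Returning a nonzero exit code is all most CI systems need to block the promotion and route the report back to the author.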

More Stories By Pete Pickerill

Pete Pickerill is Vice President of Products and Co-founder of Datical. Pete is a software industry veteran who has built his career in Austin’s technology sector. Prior to co-founding Datical, he was employee number one at Phurnace Software and helped lead the company to a high profile acquisition by BMC Software, Inc. Pete has spent the majority of his career in successful startups and the companies that acquired them including Loop One (acquired by NeoPost Solutions), WholeSecurity (acquired by Symantec, Inc.) and Phurnace Software.
