Automating the Automation Tools at Capital One | @DevOpsSummit #DevOps #Jenkins #Automation

Where we see these technologies and methodologies implemented, IT Operations teams are acting more like developers

Listening to his talk, you get the sense that George Parris and his team at Capital One aren't keeping "banker's hours." George is a Master Software Engineer, Retail Bank DevOps at Capital One. At the All Day DevOps conference, George gave a talk titled Meta Infrastructure as Code: How Capital One Automates our Automation Tools with an Immutable Jenkins, describing how his team automated the DevOps pipeline for the online account opening project at Capital One, a major bank in the United States. Of course, there is a lot to learn from their experience.

George started by pointing out that software development has evolved - coming a long way even in just the last few years. Developers now design, build, test, and deploy, and they no longer build out physical infrastructure - they live in the cloud. Waterfall development is rapidly being replaced by Agile, infrastructure as code, and DevOps practices.

Where we see these technologies and methodologies implemented, IT Operations teams are acting more like developers, designing how applications are launched. At the same time, development teams are taking more responsibility for uptime, performance, and usability. And operations and development work within the same tribe.

George used the Capital One Online Account Opening project to discuss how they automate their automation tools - now standard practice within their implementation methodology.

For starters, George discussed how Capital One deploys code (hint: they aren't building new data centers). They are primarily on AWS, they use configuration management systems to install and run their applications, and they "TEST, TEST, TEST, at all levels." Pervasive throughout the system is immutability - that is, once created, the state of an object cannot change. As an example, if you need a new server configuration, you create a new server and test it outside of production first.

They use the continuous integration/continuous delivery model, so anyone working on the code can contribute to the repositories that, in turn, initiate testing. Deployments move away from the scheduled-release pattern. George noted that, because they are a bank, regulations prevent their developers from initiating a production change. Instead, they use APIs to automatically create tickets for the product owners, who accept the tickets and thereby make the change in the production code. While this won't apply to most environments, he brought it up to demonstrate how you can implement continuous delivery within these rules.
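The talk doesn't spell out the exact integration, but the pattern is easy to sketch. Below is a minimal, hypothetical Python example of a pipeline step that opens a change ticket through a Jira-style REST API; the URL, project key, and field values are assumptions for illustration, not Capital One's actual setup.

```python
import os
import requests

# Hypothetical sketch: the pipeline, not a developer, requests the
# production change by opening a ticket for the product owner.
JIRA_URL = os.environ["JIRA_URL"]  # e.g., your Jira base URL
AUTH = (os.environ["JIRA_USER"], os.environ["JIRA_TOKEN"])

def request_production_change(build_id: str, artifact: str) -> str:
    """Open a change ticket that a product owner can accept."""
    payload = {
        "fields": {
            "project": {"key": "CHANGE"},  # assumed project key
            "issuetype": {"name": "Task"},
            "summary": f"Promote build {build_id} to production",
            "description": f"Artifact {artifact} passed all pipeline tests.",
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]  # the ticket key the product owner will act on
```

The point isn't the ticketing system; it's that the pipeline produces an auditable approval step instead of a developer touching production directly.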

Within all of this is the importance of automation. George outlined their four basic principles of automation and the key aspects of each:

Principle #1 - Infrastructure as Code. They use AWS for hosting, and everything is in a CloudFormation template (CFT), which is a way to describe your infrastructure using code. AWS now allows you to use CFTs to pass variables between stacks. Using code, every change can be tested first, and they can easily spin up environments.
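As a rough illustration of the cross-stack pattern (not Capital One's templates), here is a boto3 sketch that creates a network stack exporting a value other stacks can import; the stack name, resource, and CIDR are placeholders.

```python
import boto3

# Illustrative only: a stack that exports its VPC ID so that
# downstream stacks can consume it with Fn::ImportValue.
NETWORK_TEMPLATE = """\
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
Outputs:
  VpcId:
    Value: !Ref AppVpc
    Export:
      Name: app-network-VpcId   # other stacks import this by name
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="app-network", TemplateBody=NETWORK_TEMPLATE)

# A downstream stack's template would then reference the value with:
#   VpcId: !ImportValue app-network-VpcId
```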

Principle #2 - Configuration as Code. This means managing configuration through configuration management systems (they use Chef and Ansible). There are no central servers, changes are version controlled, and they use "innersourcing" for changes. For instance, if someone needs a change to a plugin, they can branch, update, and create a pull request.
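The plugin example lends itself to a quick sketch. Assuming the configuration lives in a Git repository on a GitHub-style host, the last step of that innersourcing flow could look like this; the repository, branch, and token handling are made up for illustration.

```python
import os
import requests

# Hedged sketch of the innersourcing flow: after branching and pushing a
# plugin-configuration change, open a pull request so it goes through review.
GITHUB_API = "https://api.github.com"
HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}

def open_config_pr(repo: str, branch: str) -> str:
    """Open a pull request from a feature branch against the shared config."""
    payload = {
        "title": "Update Jenkins plugin configuration",
        "head": branch,    # your feature branch with the plugin change
        "base": "master",  # the shared, version-controlled configuration
        "body": "Bumps a plugin version; the pipeline tests it before merge.",
    }
    resp = requests.post(f"{GITHUB_API}/repos/{repo}/pulls",
                         json=payload, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["html_url"]

# e.g., open_config_pr("myorg/jenkins-config", "bump-git-plugin")
```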

Principle #3 - Immutability. Not allowing changes to servers once they are deployed prevents "special snowflakes" and regressions. Any change is made in code and traverses a testing pipeline and code review before being deployed. This avoids what we have all experienced - the server that someone, long since gone, set up, tweaked differently from everything else, and never documented.
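In AWS terms, the immutable pattern might look like the following boto3 sketch: rather than patching a running server, you bake an image from an instance that already passed the pipeline and roll it out as a new launch template version. The instance and template identifiers are placeholders, not Capital One's.

```python
import boto3

ec2 = boto3.client("ec2")

def roll_forward(tested_instance_id: str, launch_template_id: str) -> str:
    """Replace servers with a newly baked image instead of patching in place."""
    # Bake an AMI from an instance that already passed the testing pipeline
    image = ec2.create_image(InstanceId=tested_instance_id, Name="app-v2-tested")

    # Register a new launch template version; the old version stays
    # untouched, which makes rollback trivial
    ec2.create_launch_template_version(
        LaunchTemplateId=launch_template_id,
        LaunchTemplateData={"ImageId": image["ImageId"]},
    )
    return image["ImageId"]
```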

Principle #4 - Backup and Restore Strategy. A backup is only as good as your restore strategy. You know the rest.
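Since the restore is the part that matters, a team following this principle might rehearse it on a schedule rather than just taking snapshots. A minimal sketch, assuming RDS snapshots; the identifiers are placeholders.

```python
import boto3

rds = boto3.client("rds")

def rehearse_restore(snapshot_id: str) -> str:
    """Stand up a throwaway database from a snapshot to prove it restores."""
    restored = rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="restore-drill",   # disposable instance name
        DBSnapshotIdentifier=snapshot_id,
    )
    instance_id = restored["DBInstance"]["DBInstanceIdentifier"]
    # Run smoke tests against the restored instance here, then delete it
    return instance_id
```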

George also dove into how they do continuous integration/continuous delivery in his talk, which you can watch online here.

If you missed any of the other 30-minute presentations from All Day DevOps, they are easy to find and available free of charge here. Finally, be sure to register yourself and the rest of your team for the 2017 All Day DevOps conference here. This year's event will offer 96 practitioner-led sessions (no vendor pitches allowed). It's all free and online on October 24th.

More Stories By Derek Weeks

In 2015, Derek Weeks led the largest and most comprehensive analysis of software supply chain practices to date across 160,000 development organizations. He is a huge advocate of applying proven supply chain management principles to DevOps practices to improve efficiencies, reduce costs, and sustain long-lasting competitive advantages.

As a 20+ year veteran of the software industry, he has advised leading businesses on IT performance improvement practices covering continuous delivery, business process management, systems and network operations, service management, capacity planning, and storage management. As the VP and DevOps Advocate for Sonatype, he is passionate about changing the way people think about software supply chains and improving public safety through improved software integrity. Follow him at @weekstweets, find him at www.linkedin.com/in/derekeweeks, and read him at http://blog.sonatype.com/author/weeks/.
