
Complete Install Automation By @DMacVittie | @DevOpsSummit #DevOps #Docker #Microservices

The days of waiting weeks or months to spin up new applications are long past.

The Power of Complete Install Automation

It used to take months to travel across the U.S., or any sizable landmass for that matter. One of the few really well documented wagon trains took four months to travel from Iowa to Montana, a trip that takes an airplane about four hours today. And that airplane trip is a ton safer too. That’s the power of automation, and recent advances in IT have enabled a similar curve of improvement in deployment times.

The days of waiting weeks or months to spin up new applications are long past. We’re all living it, so we know that Lines of Business expect servers to become available on a timeline that even a few years ago was not considered feasible in most IT shops. And generally speaking, that’s a good thing. The fact is that first virtualization, and then cloud, sped up the provisioning process, giving IT the ability to actually spin up entire systems faster.

The thing is, not every part of the provisioning process has seen exponential improvement in delivery timelines. When hardware is needed, there can still be significant delays, and though there are vendors and Open Source projects working on it, networking is still a largely manual process in most IT shops. More importantly, the OS still has to be installed and configured to meet the operational and security requirements of the organization.

And that last step is still largely manual. It applies to both physical and virtual installations, and only some specialized cloud images avoid the need to perform an OS installation. That leaves most IT shops with collections of “golden images” and configuration rules, or configuration scripts that roughly assemble what is needed.

But that piece shouldn’t be manual any more. We’ve come a long way from the days of waiting for ISOs to be delivered on CD-ROMs. We’ve been installing modern operating systems for decades now; full automation of installation is well past due. Yes, there definitely are install tools out there that will help automate an OS install, but they are either not designed for the purpose, or they rely on pre-configured golden images and normally require tweaking of settings post-install.

Part of the reason I am involved in the Stacki Open Source Installer project is simply that it makes golden images a thing of the past. Stacki uses ISOs and RPMs, with dynamically generated kickstart files to customize the install per machine. With Stacki you can configure your RAID array without sitting at BIOS prompts. You can customize partitions without sitting at the console and selecting “Manual configuration” for partitioning, and each machine will be set up with the networking, security, and other options that you specify, without having to sit and wait for install screens or post-install scripts.
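To make the “dynamically generated kickstart” idea concrete, here is a minimal sketch in Python of rendering a per-machine kickstart file from a template. It is purely illustrative and not Stacki’s actual code; the field names and partition layout are hypothetical examples, though the kickstart directives themselves (network, clearpart, part) are standard Anaconda syntax.

    # Illustrative only: render a kickstart file per machine from a template.
    # Field names and the partition layout are hypothetical examples.
    KICKSTART_TEMPLATE = """\
    # generated for {hostname}
    network --bootproto=static --ip={ip} --netmask={netmask} --gateway={gateway} --hostname={hostname}
    clearpart --all --initlabel
    part /boot --fstype=ext4 --size=1024
    part swap --size={swap_mb}
    part / --fstype=ext4 --grow
    """

    def render_kickstart(host):
        # Fill the template with one machine's settings.
        return KICKSTART_TEMPLATE.format(**host)

    if __name__ == "__main__":
        print(render_kickstart({
            "hostname": "backend-0-0",
            "ip": "10.1.1.10",
            "netmask": "255.255.255.0",
            "gateway": "10.1.1.1",
            "swap_mb": 4096,
        }))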

My take on this is simple. If you could set up servers in a few minutes from start to finish, be they physical or virtual, what could you do with the time saved? That’s key to the power of Stacki. It certainly standardizes installs and reduces human error, but longer term, it makes installation (and reinstallation) of a machine so fast and efficient that you can focus on other things, knowing the same repeatable process will do it again the next time you need to add a machine.

You set the details of each machine via CSV files (we call it spreadsheet install, in honor of the cleanest way to edit CSV files), and then tell Stacki to install. The only time you touch the machine is to hit the power button. Stacki does the rest. If you’ve given it spreadsheet configuration information for that machine, Stacki will use it. If you haven’t, Stacki will make intelligent choices and leave you with a usable machine that is fully installed, but not customized to your needs.
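As a rough illustration of how spreadsheet-driven configuration with sensible fallbacks might look, here is a short Python sketch that reads per-host rows from a CSV and fills in defaults for any column left blank. The column names and default values are hypothetical, not Stacki’s actual spreadsheet schema.

    # Illustrative only: load per-host settings from a CSV, falling back to
    # defaults for anything the spreadsheet leaves blank.
    import csv

    DEFAULTS = {"netmask": "255.255.255.0", "gateway": "10.1.1.1", "swap_mb": "4096"}

    def load_hosts(path):
        hosts = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                host = dict(DEFAULTS)
                # Keep only the columns the spreadsheet actually filled in.
                host.update({k: v for k, v in row.items() if v})
                hosts.append(host)
        return hosts

    # Example hosts.csv (hypothetical columns):
    # hostname,mac,ip,netmask,swap_mb
    # backend-0-0,52:54:00:aa:bb:01,10.1.1.10,,8192
    # backend-0-1,52:54:00:aa:bb:02,10.1.1.11,,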

Being Open Source means you don’t have to take my word for it: grab a copy and try it for yourself. You’ll be pleasantly surprised at how complete it is. The core Stacki team has decades of automation experience under their belt, and indeed, the initial release of Stacki was gleaned from the installation code in StackIQ Boss, a fully automated Big Data and OpenStack installer used in some of the world’s largest corporations. So it’s already been debugged in real life, even stress tested. You’ll find it measures up.

Visit the Stacki website to get started.


More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University, and an M.S. in Computer Science from Nova Southeastern University.
