Automation: More than saving keystrokes

It wasn’t that long ago that networking talk was all about the cost of equipment. With CapEx as the primary pain point, everyone was talking about merchant vs. custom silicon, with the primary argument being that a move to common components would provide margin relief in what has to be the most margin-sensitive industry in tech.

Now, with the whole world seemingly converged on a narrow set of silicon (congratulations, Broadcom), the conversation has been shifting. It started with a subtle expansion of the cost argument to include more than just CapEx. OpEx has always been around, but it is getting more play of late in marketing circles. And the OpEx argument itself is becoming more fully fleshed out. Where companies used to tout the easily measured stuff like rack space, power, and cooling, increasingly the discussion drifts into the more operational aspects of managing a network.

We are at the point now that automation is the new god to which we all must pay homage. But are we tossing around the word automation a little loosely?

First off, we should all be clear about something: automation is not about saving keystrokes. Sure, as a result of a highly automated, well-orchestrated infrastructure, you might in fact put fingers to keyboard a little less frequently. But automation ought not to be done with the sole objective of typing less.

The problem here is that the things people best understand how to automate are relatively simple tasks that are annoying to execute. Typing in the same command 27,000 times appears to be the ideal candidate for automation. And in response, some hacker in the organization figures out how to replace the command with a small shell script or equivalent. What used to take 13 minutes to execute now takes somewhere on the order of 7 seconds. Multiplied by the 27,000 instances in a typical year, the time savings are both quantifiable and quite attractive. “We should do more of that!” proclaims the CIO.
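To make that concrete, here is a minimal sketch of the sort of script that hacker might write, assuming a hypothetical inventory file and a made-up show command reachable over SSH; nothing in it decides anything, it just replays the same keystrokes faster.

```python
#!/usr/bin/env python3
"""Replace a command typed thousands of times with a single loop.

Everything here is a stand-in: the inventory file, the command, and
SSH reachability are assumptions for illustration only.
"""
import subprocess

COMMAND = "show interfaces brief"  # the command we are tired of typing


def run_everywhere(inventory_path: str) -> None:
    with open(inventory_path) as f:
        hosts = [line.strip() for line in f if line.strip()]
    for host in hosts:
        # Every iteration is identical: no context, no decisions,
        # just the same command replayed against another box.
        result = subprocess.run(
            ["ssh", host, COMMAND],
            capture_output=True, text=True, timeout=30,
        )
        print(f"=== {host} ===\n{result.stdout}")


if __name__ == "__main__":
    run_everywhere("switch_inventory.txt")
```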

And off the team goes to identify more of these commands.

But there aren’t that many commands that are repeated ubiquitously, uniformly, and in enough volume to really make a difference to the bottom line. Once you retire a couple of heavy hitters, eking out continual OpEx savings by “automating all the things” becomes harder and harder. Why?

This form of automation thrives on the repeatable, identical task. When something is done the exact same way every single time, regardless of context (either situation or environment), it is well-suited for being replaced with an easier-to-execute task. But as soon as the task requires some cognitive input from the operator (knowing when, where, or how to do something), this type of automation is far less powerful.

It is tempting to attribute this only to things that are shell scriptable, but the world of automation includes way more. We all know companies that are still managing infrastructure with expect scripts. When we bring up this type of automation, whoever is speaking almost always oozes a little bit of derision, because we all know that this sort of thing is primitive.

But is combing output for fields really that different from applying templates to configuration?

When you provision devices based on some template, you are really just pattern matching (isn’t that what expect does?) and then applying some formulaic logic. But somehow if you can sprinkle in the phrase DevOps or toss around one of the sexy provisioning tools (Chef, Puppet, Salt, or if you are particularly in the know, maybe Ansible), it seems a whole lot more substantial, doesn’t it?
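Strip away the tooling and the template step reduces to something like the following sketch (the interface template and values are invented): a pattern plus some formulaic substitution, which is not conceptually far from what an expect script does with the output it matches.

```python
from string import Template

# A made-up interface configuration template: the "pattern" half.
PORT_TEMPLATE = Template(
    "interface $port\n"
    "  description $desc\n"
    "  switchport access vlan $vlan\n"
)


# The "formulaic logic" half: plug values into the pattern.
def render_port(port: str, desc: str, vlan: int) -> str:
    return PORT_TEMPLATE.substitute(port=port, desc=desc, vlan=vlan)


print(render_port("Ethernet1/1", "app-server-07", 120))
```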

My point here is not to put down the DevOps tools. Instead, I want to point out that how these tools are used is important. If you view tools like Chef or Ansible as a means of cutting out keystrokes (read: pushing config), then you are likely missing the point of automation.

What these types of tools are really trying to do is much more profound. The power goes well beyond putting an agent on a device and then pumping that device full of config. What these tools are doing is allowing you to create logic (some of it more sophisticated than the rest) to make intelligent decisions about how to provision a device.

For example, a switch might behave differently depending on what is attached to it. We all know about the role of edge policy (VLANs, ACLs, QoS, and so on) as it relates to managing traffic on the network. So if a top-of-rack switch is attached to one type of device (or VM or application), you might want one behavior, and if it is something else, maybe you want a different type of behavior. It is not just the configuration; it is the right configuration based on the particular context.
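As a sketch of what that looks like in code, with invented device classes and invented policy names, the interesting part is not the configuration being pushed but the lookup that chooses it based on what the context says is attached.

```python
# Hypothetical edge-policy table: what is attached decides what gets applied.
EDGE_POLICY = {
    "hypervisor": {"vlans": [10, 20, 30], "qos": "trust-dscp",  "acl": "vm-east"},
    "ip-phone":   {"vlans": [40],         "qos": "voice",       "acl": "voice-only"},
    "unknown":    {"vlans": [999],        "qos": "best-effort", "acl": "quarantine"},
}


def choose_policy(neighbor_type: str) -> dict:
    """Pick an edge policy from context (say, a device class derived from LLDP data)."""
    return EDGE_POLICY.get(neighbor_type, EDGE_POLICY["unknown"])


# Same port, different context, different (made-up) configuration.
print(choose_policy("hypervisor"))
print(choose_policy("ip-phone"))
```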

This combination of context and intelligence is what makes automation powerful. And the more context that is available and actionable, the fancier you can get with the automation.

This means that whatever automation framework you are using (anything from a shell script to a full DevOps environment) must be capable of both performing an action and pulling in the information needed to establish context for that action. We are quite focused on the first part, but the context is what will make automation more or less powerful. The act of executing a sequence of activities is interesting, but having the logic to determine what to do is paradigm-changing.
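In skeleton form, assuming placeholder functions for the pieces a real framework would provide, that split looks something like this: one half pulls in facts, one half acts, and the logic in between is where automation earns its keep.

```python
# Skeleton of a context-aware automation step; every function is a placeholder.

def gather_context(port: str) -> dict:
    """Pull in facts: neighbor info, link state, inventory data, and so on."""
    return {"neighbor_type": "hypervisor", "link_up": True}  # stubbed for illustration


def decide(context: dict) -> str:
    """The logic layer: map the observed context to an intended action."""
    if not context["link_up"]:
        return "no-op"
    return f"apply-profile:{context['neighbor_type']}"


def act(port: str, action: str) -> None:
    """Perform the action: push config, open a ticket, or do nothing."""
    print(f"{port}: {action}")


act("Ethernet1/1", decide(gather_context("Ethernet1/1")))
```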

Put differently, if your automation infrastructure is only capable of making left turns regardless of what is happening on the roads, no matter how elegant or fast the turns, you will still end up going in circles.

[Today’s fun fact: Babies are born without knee caps. They don’t appear until the child reaches 2 to 6 years of age. Creepy.]

The post Automation: More than saving keystrokes appeared first on Plexxi.


About Michael Bushong

The best marketing efforts pair deep technology understanding with a highly approachable way of communicating. Plexxi's Vice President of Marketing Michael Bushong acquired these skills having spent 12 years at Juniper Networks, where he led product management, product strategy and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading its SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase, and at ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
