Wasted IT Resources

Resource consolidation through newer architectures and solutions like PaaS is a way to tackle this

David Rubinstein's post "Industry Watch: Be resilient as you PaaS" makes a very good point about underutilized hardware in IT data centers.

Millions of enterprise workloads remain in data centers, where servers are 30% to 40% underutilized, and that's if they're virtualized. If not, they're only using 5% to 7% of capacity.

The reason for this?

Take, for example, servers that were spun up for a project two years ago and never decommissioned: they just sit there, waiting for a new workload that will never come. And because the cost of blades and racks has fallen, cheap hardware has led to a kind of data center sprawl.

So is IT buying new hardware for each new project and not recycling? Is this because projects do not clearly die? Do we need "time of death" announcements mandated on IT projects?

Late last year, the US Government Accountability Office reported that the federal government spends 70% of its IT budget on caring for legacy systems. That's $56 billion! How many of those legacy systems are rusty old projects with low utilization that are over-consuming resources?

Who wants those old servers? They might only be two years "old," but each new project has its own requirements, and trying to retrofit old hardware to shiny new projects is usually undesirable. A project has a budget and specifications: it needs X machines with Y gigabytes of RAM, plus numerous other requirements. Can second-hand hardware fit the bill? If it can, can we find enough of it? Nobody wants a Frankenstein's monster of a cluster, or the risk that it will be difficult to grow in the future. Better to go with new servers.

IaaS
IaaS provides a path to a solution. It makes it easier to provision and decommission resources, and virtual machines can be sized to fit the task. Even if specific new hardware is required to meet the SLA (Service Level Agreement) demands of a new project, it can later be recycled back into the pool of available resources.

But does IaaS go far enough?

IaaS provides infrastructure to be used by projects. But as David Rubinstein points out, even virtualized machines are still 30% to 40% underutilized. This is because virtual machines are provisioned as a best guess of what the software intended to run on them might require. The business does the math on the traffic it hopes to receive, adds some padding to be safe, and passes the numbers to Operations, who provision the machines. Once a machine is provisioned, those resources are locked in.
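
To make that concrete, here is a rough sketch with hypothetical numbers of how "best guess plus padding" sizing plays out once the machine is provisioned and the resources are locked in.

```python
# A rough sketch (hypothetical numbers) of how "best guess plus padding"
# VM sizing locks in idle capacity for the life of the machine.

expected_peak_rps = 400          # traffic the business hopes to receive
rps_per_gb_of_ram = 50           # assumed capacity of the app per GB of RAM
safety_padding = 0.5             # 50% headroom "to be safe"

required_gb = expected_peak_rps / rps_per_gb_of_ram          # 8 GB
provisioned_gb = required_gb * (1 + safety_padding)          # 12 GB

average_rps = 150                # what the service actually sees day to day
used_gb = average_rps / rps_per_gb_of_ram                    # 3 GB

utilization = used_gb / provisioned_gb
print(f"Provisioned {provisioned_gb:.0f} GB, typically using {used_gb:.0f} GB "
      f"({utilization:.0%} utilization)")
```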

Virtualized or not, adding servers to and removing them from a cluster can be logistically difficult if the application was not designed with scale in mind. When it comes to legacy IT, it is probably better to leave things as they are than to invest more resources, and take on more risk, in a project that is receiving no engineering attention.

Configuration Management
While IaaS is an essential building block of the cloud, it is not the solution. Pioneers in the age of DevOps have shifted the mindset from pets (knowing every machine by its cute name) to cattle (anonymous machines in named clusters). But even on the other side of this transition we are still focused on machines and operating systems.

Configuration management helps us to wrangle and make sense of our cattle. It whips our machines into some kind of uniform shape with the intention of moving them all in the same direction, but not without considerable sweat and tears. Snowflake servers that did not receive that last vital update, for reasons unknown, bring disease to the herd.

In his InfoWorld post "The platform-as-a-service winner is ... Puppet", Andrew C. Oliver makes a good point that IT organizations are also "not a beautiful or unique snowflake".

Whatever you're doing with your IT infrastructure, someone else is probably doing the exact same thing. If what you're doing is so damn unique, then you're probably adding needless layers of complexity and you should stop.

Is configuration management the answer? It enables every organization to build beautiful unique snowflakes, but all the business actually wants is an igloo.

Thin Servers
Servers are getting thinner. There is a shift from caring about the overhead of the operating system to focusing on the processes. CoreOS is a great example of this, and there is also boot2docker on the development side. We will no doubt see many more flavors of this in the next few years.

If we encapsulate the complexity of software applications and services in containers, then the host operating system becomes much simpler, less of a concern and more standardized. If we can find an elegant way to spread the containers across our cluster, then we only need to provision a cluster of thin, standardized servers to house them.
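
As a small illustration, here is a sketch using the Docker SDK for Python. The image name and resource limits are hypothetical; the point is that everything application-specific ships in the image, so the host only needs a container runtime.

```python
# Minimal sketch using the Docker SDK for Python (pip install docker).
# The image name and limits are hypothetical; the host OS stays thin and
# standardized because the application's complexity lives in the image.
import docker

client = docker.from_env()

container = client.containers.run(
    "example.com/orders-service:1.4.2",   # hypothetical application image
    detach=True,
    mem_limit="256m",                     # the container, not the host, is sized to the task
    restart_policy={"Name": "on-failure"},
)
print(container.id)
```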

Containers
Why Containers? Why not virtual machines or hardware?

Filling a cluster with virtual machines, where each virtual machine is dedicated to a single task, is wasteful. It's like trying to utilize the space in a jar by filling it with marbles when we could be filling it with sand. Depending on the relationship between your hardware and your virtual machines, it might even be like filling it with oranges.
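
A toy first-fit packer makes the jar analogy concrete. The workload and node sizes below are hypothetical, but they show how right-sized containers fill the gaps that one-VM-per-task sizing leaves empty.

```python
# A toy first-fit packer (hypothetical sizes): right-sized workloads, the "sand",
# pack densely onto a host, where one padded VM per task would not.

hosts = [{"name": f"node-{i}", "free_gb": 16.0} for i in range(3)]

# Memory each workload actually needs, in GB.
workloads = [0.5, 1.0, 0.25, 2.0, 0.5, 4.0, 1.5, 0.75, 3.0, 0.5]

placements = []
for need in workloads:
    host = next(h for h in hosts if h["free_gb"] >= need)  # first host with room
    host["free_gb"] -= need
    placements.append((need, host["name"]))

for need, name in placements:
    print(f"{need:>4} GB -> {name}")
print({h["name"]: round(h["free_gb"], 2) for h in hosts})
```

All ten workloads, needing about 14 GB in total, land on a single 16 GB node; ten dedicated, padded VMs would have spilled across several hosts.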

Containers are also portable - and not just in a deployment sense. In a recent talk on "Google Compute Engine and Docker", Marc Cohen from Google spoke about how they migrate running GAE instances from one datacenter to another while seeing only a flicker of transitional downtime. vSphere has similar tools for doing this with virtual machines. Surely it is only a matter of time before we see this commonly available with Linux containers.

Clusters
We need clusters; not machines, not operating systems. We need clusters that support containers. Maybe we need "Cluster-as-a-Service." Actually, maybe we don't need any more "-as-a-Service" acronyms.

Just as we have stopped caring about the unique names of individual servers, we should stop caring about the details of a cluster. When I have a machine with 16GB of RAM, do I need to care whether it's 2x8GB DIMMs or 4x4GB? No, I only need to know that this machine gives me "16GB of RAM". Eventually we will view clusters in the same way.
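
A minimal sketch of that point of view, with hypothetical node names and sizes: the consumer of a cluster should only see the aggregate pool, not the machines behind it.

```python
# Sketch: just as 16GB of RAM is 16GB regardless of the DIMM layout, a cluster
# can be treated as one pool of resources. Node names and sizes are hypothetical.

nodes = {
    "node-a": {"cpus": 8,  "ram_gb": 16},
    "node-b": {"cpus": 16, "ram_gb": 32},
    "node-c": {"cpus": 8,  "ram_gb": 16},
}

cluster = {
    "cpus": sum(n["cpus"] for n in nodes.values()),
    "ram_gb": sum(n["ram_gb"] for n in nodes.values()),
}

# Workloads are scheduled against this pool, not against node-a or node-b.
print(f"Cluster offers {cluster['cpus']} CPUs and {cluster['ram_gb']} GB RAM")
```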

Now that we can fill jars with sand instead of oranges, we can be less particular about the size and shape of the jars we choose to build our cluster. Throwing away old jars is easier when we can pour the sand into a new jar.

The Why
I believe we should always start with "why." Why am I writing a script to run continuous integration and ensure I always have the latest version of HAProxy installed on all the frontend servers? Why are we buying more RAM and installing Memcached servers?
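
For the HAProxy question, the "what" and the "how" might look something like the sketch below - a hypothetical script that assumes Debian-style frontend hosts reachable over SSH with sudo rights. Writing it is easy; knowing why it needs to exist is the harder question.

```python
# Hypothetical sketch of the kind of "what/how" script described above: keep
# HAProxy at the latest packaged version on every frontend server.
# Hostnames are made up; assumes Debian-style hosts reachable over SSH.
import subprocess

FRONTEND_HOSTS = ["fe1.example.com", "fe2.example.com", "fe3.example.com"]

def ssh(host: str, command: str) -> str:
    """Run a command on a remote host and return its output."""
    return subprocess.run(
        ["ssh", host, command], capture_output=True, text=True, check=True
    ).stdout.strip()

for host in FRONTEND_HOSTS:
    installed = ssh(host, "dpkg-query -W -f='${Version}' haproxy || echo none")
    ssh(host, "sudo apt-get update -qq && sudo apt-get install -y haproxy")
    latest = ssh(host, "dpkg-query -W -f='${Version}' haproxy")
    print(f"{host}: {installed} -> {latest}")
```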

The "why" is ultimately a business reason - which is a long way from the command-line prompt.

Todd Underwood's excellent talk "PostOps: A Non-Surgical Tale of Software, Fragility, and Reliability" highlights the SLA as the contract made with the business, ensuring that the Operations team delivers the throughput, response time and uptime that the business expects. Beyond that, they have free rein in how they meet it.

Too many SysAdmins and Operations Engineers are stuck in the mindset of machines and operating systems. They have spent their careers thinking this way and honing their skills. They focus too much on the "what" and the "how", not on the "why".

At some level these skills are vital. The harder you push, the more likely it is that you will find a problem further down the stack and have to roll up your sleeves and get dirty.

Operations teams are responsible for every level of the stack, from the metal upwards. Even in the age of "the cloud" and outsourcing, an Operations team that does not feel ultimately responsible for its own stack will have issues at some point. Netflix is a good example: they use Amazon's cloud infrastructure service and may actually understand it better than Amazon does. They take responsibility for their stack very seriously, and because of this they are able to provide an amazingly resilient service at incredible scale.

Focusing solely on machines and operating systems does not scale, and too many IT departments find it difficult to transcend this. Netflix is a good example here too. When you have tens of thousands of machines, you are not going to worry about keeping them all in sync or fixing machines that are having issues unless the problem is pandemic. Use virtualization, create golden images, and if a machine is limping, shoot it. This is how Netflix operates. And to ensure that their sharpshooters are always at the top of their game, Netflix uses tools like Chaos Monkey, purposely putting wolves amongst their sheep.

DevOps is a movement that aims to liberate IT from its legacy mindset. It aims to step back, look at the bigger picture, see the business needs, and address the needs that span the currently siloed Dev and Ops teams. This may happen through fancy new orchestration and CI tools, or through organizational changes.

Tools and infrastructure are evolving so quickly that the best tool today will not be the best tool tomorrow. It is difficult to keep up.

Todd Underwood said that Operations Engineers, or rather Site Reliability Engineers, at Google are always working just outside of their own understanding. This is the best way to keep moving forward. From this I take it that if you fully understand the tools you are using, you are probably moving too slowly and falling behind. Just do not move so quickly that you jeopardize operational safety.

As long as we focus on "why", we can keep building our stack around that purpose instead of around our favorite naming convention, our favorite operating system and favorite toolset.

PaaS
PaaS is currently the best tool for orchestrating the container layer above the cluster layer. It manages the sand in your jars. ActiveState's PaaS solution, Stackato, also provides a way to manage the human side of your infrastructure: by integrating with your LDAP server and providing organizational and social features, it lets you see your applications as applications, rather than as infrastructure.

As Troy Topnik points out in a recent post, an IaaS is not required to run Stackato, although one will make managing your cluster easier as you grow.

But will an IaaS help with managing your resources at the application level? No. Will it help identify wasted resources? No. Will it put you directly in touch with the owners of the applications? No. But PaaS will.

Visibility
The reasons for waste in IT organizations are many. One is the way resources are provisioned for projects: the boundaries of a project are defined too far down the stack, even though a project may simply be a series of processes, message queues and datastores. Another is poor utilization of resources from the outset - not building infrastructure where resources can be shared. A third is a lack of visibility into where waste is occurring.

Lack of visibility comes from not being able to see or reason about machines and projects. Machine responsibilities may be cataloged by IT, but will they be visible to everyone involved? Without PaaS, building the relationship between machines, processes and projects is still a documentation task.

If redundant applications are not visible to the entire organization, then those accountable for announcing "time of death" are less likely to make that call. Old projects that should be end-of-lifed will continue to over-consume their allocated resources - resources that were allocated in the hope of what the project might one day become.
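
A PaaS provides this kind of catalog as part of the platform; the sketch below is hypothetical, but it shows why the visibility matters. Once applications are linked to owners and to the resources they actually consume, the candidates for a "time of death" call stand out.

```python
# Hypothetical sketch of an application catalog: linking apps to owners and to
# the resources they actually use makes stale, over-allocated projects visible.
from dataclasses import dataclass

@dataclass
class App:
    name: str
    owner: str             # who can answer "is this still needed?"
    allocated_gb: float    # what the project asked for
    used_gb: float         # what it actually consumes
    last_deploy_days: int  # how long since anyone touched it

catalog = [
    App("orders-api",   "payments-team", 8.0, 5.5, 12),
    App("promo-2013",   "marketing",     4.0, 0.2, 730),   # hypothetical stale project
    App("report-batch", "finance",       2.0, 1.4, 45),
]

for app in catalog:
    if app.used_gb / app.allocated_gb < 0.25 and app.last_deploy_days > 365:
        print(f"Candidate for end-of-life: {app.name} (owner: {app.owner})")
```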

Conclusion
Nobody likes waste. Resource consolidation through newer architectures and solutions like PaaS is a way to tackle this. PaaS brings more visibility to your IT infrastructure and exposes resources at the application level. It spreads workloads across your cluster in a way that lets you scale up and down with ease and uses all of your hardware efficiently.

Source: ActiveState; originally published on the ActiveState blog.

More Stories By Phil Whelan

Phil Whelan has been a software developer at ActiveState since early 2012 and has been involved in many layers of the Stackato product, from the JavaScript-based web console right through to the Cloud Controller API. He has been the lead developer on kato, the command-line tool for administering Stackato. His current role at ActiveState is Technology Evangelist.
