By Phil Whelan
May 4, 2014 08:00 AM EDT
David Rubinstein's post "Industry Watch: Be resilient as you PaaS" makes a very good point about underutilized hardware in IT data centers.
Millions of enterprise workloads remain in data centers, where servers are 30% to 40% underutilized, and that's if they're virtualized. If not, they're only using 5% to 7% of capacity.
The reason for this?
Take, for example, servers that were spun up for a project two years ago and never decommissioned, just sitting there, waiting for a new workload that will never come. And, because the costs of blades and racks went down, cheap hardware has led to a kind of data center sprawl.
So is IT buying new hardware for each new project and not recycling? Is this because projects do not clearly die? Do we need "time of death" announcements mandated on IT projects?
Late last year, the US Government Accountability Office reported that the federal government spends 70% of its IT budget on the care of legacy systems. That's $56 billion! How many of those legacy systems are rusty old projects with low utilization that are over-consuming resources?
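As a quick sanity check on those figures, the arithmetic is straightforward. In this back-of-the-envelope sketch only the 70% share and the $56 billion legacy spend come from the report; the rest is simply derived from them:

```python
# Back-of-the-envelope arithmetic on the GAO figures quoted above. The 70%
# share and the $56 billion legacy spend come from the report; the rest is
# derived from them.
legacy_spend_billion = 56.0
legacy_share = 0.70

total_budget_billion = legacy_spend_billion / legacy_share        # ~ $80B
new_work_billion = total_budget_billion - legacy_spend_billion    # ~ $24B

print(f"Implied total federal IT budget: ${total_budget_billion:.0f}B")
print(f"Left over for new work:          ${new_work_billion:.0f}B")
```

In other words, only a minority of the budget is left for anything new.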
Who wants those old servers? They might only be 2 years "old," but each new project has its own requirements and trying to retrofit old hardware to shiny new projects is probably undesirable. A project has a budget and specifications. A project needs X number of machines, with Y gigabytes of RAM and numerous other requirements. Can second-hand hardware fit the bill? If it can, can we find enough of it? Nobody wants a Frankenstein's monster of a cluster or the risk that it would be difficult to grow the cluster in the future. Better to go with new servers.
IaaS provides a path to a solution. It makes it easier to provision and decommission resources. Virtual machines can be sized to fit the task. Even if specific new hardware is required to meet the SLA (Service Level Agreement) demands of a new project, it can later be recycled back into the pool of available resources.
But does IaaS go far enough?
IaaS provides infrastructure to be used by projects. But as David Rubinstein points out, even virtualized machines are still 30% to 40% underutilized. This is because virtual machines are provisioned as a best guess of what might be required by the software intended to run on them. The business does the math on the traffic it hopes to receive and then adds some padding to be safe. It passes the numbers to Operations, who provision the machines. Once a machine is provisioned, those resources are locked in.
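Here is a minimal sketch of that "best guess plus padding" pattern. The traffic numbers and the 30% safety padding are invented for illustration; only the pattern itself (guess, pad, lock in) comes from the article:

```python
# A minimal sketch of "best guess plus padding" provisioning. The traffic
# numbers and the 30% padding are invented; only the pattern (guess, pad,
# lock in) comes from the article.

def provisioned_vcpus(expected_peak_rps, rps_per_vcpu, padding=0.30):
    """Number of vCPUs a project asks Operations to lock in."""
    needed = expected_peak_rps / rps_per_vcpu
    return int(round(needed * (1.0 + padding)))

# Hypothetical project: the business hopes for 2,000 requests/sec at peak,
# and each vCPU is assumed to handle 50 requests/sec.
vcpus = provisioned_vcpus(expected_peak_rps=2000, rps_per_vcpu=50)

# Reality: average traffic turns out to be well below the hoped-for peak.
actual_avg_rps = 900
utilization = actual_avg_rps / (vcpus * 50)

print(f"vCPUs locked in: {vcpus}")
print(f"Average utilization: {utilization:.0%}")  # lands in the 30-40% range quoted earlier
```

The padding never gets handed back, which is exactly how those locked-in resources end up idle.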
Virtualized or not, adding servers to and removing them from a cluster can be logistically difficult if the application is not designed with scale in mind. When it comes to legacy IT, it is probably better to leave things as they are than to invest more resources and risk in a project that is receiving no engineering attention.
While IaaS is an essential building block of the cloud, it is not the solution. Pioneers in the age of DevOps have shifted the mindset from pets (knowing every machine by its cute name) to cattle (anonymous machines in named clusters). But even on the other side of this transition we are still focused on machines and operating systems.
Configuration management helps us to wrangle and make sense of our cattle. It whips our machines into some kind of uniform shape with the intention of moving them all in the same direction, but not without considerable sweat and tears. Snowflake servers that did not receive that last vital update, for reasons unknown, bring disease to the herd.
In his InfoWorld post "The platform-as-a-service winner is ... Puppet", Andrew C. Oliver makes a good point that IT organizations are also "not a beautiful or unique snowflake".
Whatever you're doing with your IT infrastructure, someone else is probably doing the exact same thing. If what you're doing is so damn unique, then you're probably adding needless layers of complexity and you should stop.
Is configuration management the answer? It enables every organization to build beautiful unique snowflakes, but all the business actually wants is an igloo.
Servers are getting thinner. There is a shift from caring about the overhead of the operating system to focusing on the processes. CoreOS is a great example of this, and there is also boot2docker on the development side. We will no doubt see many more flavors of this in the next few years.
If we encapsulate the complexity of software applications and services in containers then the host operating system becomes much simpler, less of a concern and more standardized. If we can find an elegant way to spread the containers across our cluster then we only need to provision a cluster of thin standardized servers to house them.
Why Containers? Why not virtual machines or hardware?
Filling a cluster with virtual machines, where each virtual machine is dedicated to a single task, is wasteful. It's like trying to utilize the space in a jar by filling it with marbles, when we could be filling it with sand. Depending on your hardware and virtual machine relationship, it might even be like filling it with oranges.
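To make the jar analogy concrete, here is a sketch of packing many small containers onto a few uniform hosts with a naive first-fit placement. The container sizes and host specs are invented, and real schedulers are far smarter, but the utilization gain over one VM per task is the point:

```python
# A sketch of the "sand in the jar" idea: pack many small containers onto a
# few uniform hosts with naive first-fit placement. Sizes and host specs are
# made up; real schedulers are far smarter.

HOST_RAM_GB = 16

def first_fit(container_sizes_gb, host_ram_gb=HOST_RAM_GB):
    """Place each container on the first host with room; open new hosts as needed."""
    hosts = []  # remaining free RAM per host
    for size in container_sizes_gb:
        for i, free in enumerate(hosts):
            if size <= free:
                hosts[i] = free - size
                break
        else:
            hosts.append(host_ram_gb - size)
    return hosts

# Hypothetical workload: 20 small services instead of 20 dedicated VMs.
containers = [0.5, 1, 2, 0.5, 1.5, 3, 0.5, 1, 1, 2,
              0.5, 0.5, 1, 2, 1, 0.5, 1, 3, 0.5, 1]
hosts = first_fit(containers)

used = sum(containers)
capacity = len(hosts) * HOST_RAM_GB
print(f"{len(containers)} containers on {len(hosts)} hosts, "
      f"{used / capacity:.0%} of RAM in use")
# One VM per service at even a modest 4 GB each would need 80 GB instead.
```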
Containers are also portable - and not just in a deployment sense. In a recent talk on "Google Compute Engine and Docker", Marc Cohen from Google spoke about how they migrate running GAE instances from one datacenter to another, while only seeing a flicker of transitional downtime. vSphere also has similar tools for doing this with virtual machines. Surely, it is only a matter of time before we see this commonly available with Linux containers.
We need clusters; not machines, not operating systems. We need clusters that support containers. Maybe we need "Cluster-as-a-Service." Actually, maybe we don't need any more "-as-a-Service" acronyms.
Just as we have stopped caring about the unique name of individual servers, we should stop caring about the details of a cluster. When I have a machine with 16GB of RAM, do I need to care if it's 2x8GB DIMMs or 4x4GB? No, I only need to know that this machine gives me "16GB of RAM". Eventually we will view clusters in the same way.
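A sketch of what that cluster view might look like: individual machines collapse into a single pool of capacity. The machine names and specs here are invented for illustration:

```python
# A sketch of viewing the cluster as one pool of capacity rather than a list
# of individually interesting machines. The machine specs are invented.

from collections import namedtuple

Machine = namedtuple("Machine", "name vcpus ram_gb")

cluster = [
    Machine("node-a", 8, 16),    # 2x8GB or 4x4GB DIMMs? We no longer care.
    Machine("node-b", 8, 16),
    Machine("node-c", 16, 32),
]

def as_pool(machines):
    """Collapse individual machines into a single capacity figure."""
    return {
        "vcpus": sum(m.vcpus for m in machines),
        "ram_gb": sum(m.ram_gb for m in machines),
    }

print(as_pool(cluster))   # {'vcpus': 32, 'ram_gb': 64}
```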
Now that we can fill jars with sand instead of oranges, we can be less particular about the size and shape of the jars we choose to build our cluster. Throwing away old jars is easier when we can pour the sand into a new jar.
I believe we should always start with "why." Why am I writing a script to run continuous integration and ensure I always have the latest version of HAProxy installed on all the frontend servers? Why are we buying more RAM and installing Memcached servers?
The "why" is ultimately a business reason - which is a long way from the command-line prompt.
Todd Underwood's excellent talk on "PostOps: A Non-Surgical Tale of Software, Fragility, and Reliability" highlights the SLA as the contract made with the business, ensuring that the Operations team delivers the throughput, response time and uptime that the business expects. Beyond that, they have free rein in how the SLA is met.
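A minimal sketch of the SLA as a contract: a few measurable targets checked against observed metrics. The thresholds and metric names below are invented; the idea that Operations has free rein as long as they hold is from the talk:

```python
# A sketch of the SLA as a contract: measurable targets checked against
# observed metrics. Thresholds and metric names are invented.

SLA = {
    "max_p99_response_ms": 300,   # 99th-percentile latency must stay under this
    "min_throughput_rps": 500,    # sustained requests per second
    "min_uptime": 0.999,          # "three nines" availability
}

def sla_met(observed):
    return (observed["p99_response_ms"] <= SLA["max_p99_response_ms"]
            and observed["throughput_rps"] >= SLA["min_throughput_rps"]
            and observed["uptime"] >= SLA["min_uptime"])

# Hypothetical measurements pulled from monitoring for the last month.
observed = {"p99_response_ms": 220, "throughput_rps": 640, "uptime": 0.9994}
print("SLA met" if sla_met(observed) else "SLA breached")
```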
Too many SysAdmins and Operations Engineers are stuck in the mindset of machines and operating systems. They have spent their careers thinking this way and honing their skills. They focus too much on the "what" and the "how", not on the "why".
At some level these skills are vital. The harder you push, the more likely it is that you will find a problem further down the stack and have to roll up your sleeves and get dirty.
Operations teams are responsible for every level of the stack, from the metal upwards. Even in the age of "the cloud" and outsourcing, an Operations team that does not feel ultimately responsible for its own stack will have issues at some point. An example of this is Netflix, who use Amazon's cloud infrastructure service and may actually understand it better than Amazon does. They take responsibility for their stack very seriously, and because of this they are able to provide an amazingly resilient service at incredible scale.
Focusing solely on machines and operating systems does not scale, and too many IT departments find it difficult to transcend this. Netflix are a good example here too. When you have tens of thousands of machines, you are not going to worry about keeping them all in sync or fixing machines that are having issues unless the problem is pandemic. Use virtualization, create golden images and, if a machine is limping, shoot it. This is how Netflix operates. And to ensure that their sharp-shooters are always at the top of their game, Netflix uses tools like Chaos Monkey. They are purposely putting wolves amongst their sheep.
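A sketch of that "shoot it" loop: no per-machine surgery, just replace anything unhealthy from a golden image. The health check and provisioning calls are hypothetical stand-ins, not a real cloud API:

```python
# A sketch of the "if a machine is limping, shoot it" approach: no per-machine
# surgery, just replace unhealthy instances from a golden image. The health
# check and provisioning calls are hypothetical stand-ins, not a real cloud API.

import random

def is_healthy(instance):
    # Stand-in for a real health check (load balancer status, heartbeats, etc.).
    return random.random() > 0.1   # roughly 10% of instances are "limping"

def terminate(instance):
    # Stand-in for an API call that destroys the instance.
    pass

def launch_from_golden_image(count):
    # Stand-in for booting fresh instances from a pre-baked golden image.
    return [f"replacement-{i}" for i in range(count)]

fleet = [f"web-{i:04d}" for i in range(1000)]
limping = [name for name in fleet if not is_healthy(name)]

for instance in limping:
    terminate(instance)
fleet = [name for name in fleet if name not in limping] + launch_from_golden_image(len(limping))

print(f"replaced {len(limping)} instances; fleet size is back to {len(fleet)}")
```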
DevOps is a movement that aims to liberate IT from its legacy mindset. It aims to step back, take a look at the bigger picture, view business needs, and address the needs that span the currently siloed Dev and Ops. This may come through fancy new orchestration and CI tools or through organizational changes.
Tools and infrastructure are evolving so quickly that the best tool today will not be the best tool tomorrow. It is difficult to keep up.
Todd Underwood said that Operations Engineers, or rather Site Reliability Engineers, at Google are always working just outside of their own understanding. This is the best way to keep moving forward. From this, I take it that if you fully understand the tools you are using, you are probably moving too slowly and falling behind. Just do not move so quickly that you jeopardize operational safety.
As long as we focus on "why", we can keep building our stack around that purpose instead of around our favorite naming convention, our favorite operating system and favorite toolset.
PaaS is currently the best tool for orchestrating the container layer above the cluster layer. It manages the sand in your jars. ActiveState's PaaS solution, Stackato, also provides a way to manage the human side of your infrastructure. By integrating with your LDAP server and providing organizational and social aspects, it lets you see your applications as applications, rather than as infrastructure.
As Troy Topnik points out in a recent post, an IaaS is not required to run Stackato, although an IaaS will make managing your cluster easier as you grow.
But will an IaaS help with managing your resources at the application level? No. Will it help identify wasted resources? No. Will it put you directly in touch with the owners of the applications? No. But PaaS will.
The reasons for waste in IT organizations are many. One reason is the way resources are provisioned for projects. The boundaries of a project are defined too far down the stack, even though a project may simply be defined by a series of processes, message queues and datastores. Another reason for waste is bad utilization of resources from the outset - not building the infrastructure where resources can be shared. A third reason for waste is lack of visibility into where waste is occurring.
Lack of visibility comes from not being able to see or reason about machines and projects. Machine responsibilities may be cataloged by IT, but will they be visible to everyone involved? Building the relationship between machines, processes and projects is still a documentation task without PaaS.
If redundant applications are not visible to the entire organization, then those accountable for announcing "time of death" are less likely to make that call. Old projects which should be end-of-lifed will continue to over-consume their allocated resources - resources that were allocated in the hope of what the project might one day become.
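A sketch of the kind of visibility that makes the "time of death" call easier: every application tied to an owner and to the resources it holds, so idle ones can be flagged. The application records and threshold below are invented:

```python
# A sketch of PaaS-style visibility: every application tied to an owner and to
# the resources it holds, so idle ones can be flagged for a "time of death"
# decision. The application records and threshold are invented.

apps = [
    {"name": "checkout-api",   "owner": "payments-team", "ram_gb": 8,  "requests_30d": 4_200_000},
    {"name": "legacy-reports", "owner": "bi-team",       "ram_gb": 16, "requests_30d": 37},
    {"name": "promo-2012",     "owner": "marketing",     "ram_gb": 4,  "requests_30d": 0},
]

IDLE_THRESHOLD = 100   # fewer requests than this in 30 days looks abandoned

for app in apps:
    if app["requests_30d"] < IDLE_THRESHOLD:
        print(f"{app['name']}: holding {app['ram_gb']}GB with "
              f"{app['requests_30d']} requests in 30 days "
              f"-> ask {app['owner']} about end-of-life")
```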
Nobody likes waste. Resource consolidation through newer architectures and solutions like PaaS is a way to tackle this. PaaS brings more visibility to your IT infrastructure. It exposes resources at the application level. It spreads resources evenly across your cluster in a way that enables you to scale up and down with ease and utilizes all your hardware efficiently.
Source: ActiveState, where this article was originally published.