DevOps Is the Future of SIAM | @CloudExpo #DevOps #IoT #Microservices

Enterprises that engage in multisourced IT delivery need to re-evaluate their contracts

Enterprises with internally sourced IT operations typically struggle with the tensions associated with siloed application and infrastructure organizations. Such organizations are characterized by finger pointing and an inability to restore operational capabilities under complex conditions that span both application and infrastructure configurations. These tensions are often used to characterize the need for a DevOps movement, which focuses on the organizational, process and cultural changes needed to bring about more fluid IT delivery that embodies higher quality and greater overall agility.

Large and mid-sized enterprises with multi-sourced IT environments struggle with these same tensions, but they are magnified by the fact that IT outsourcing contracts written in the past 15 years barely scratch the surface of how vendors should work together in the face of failures or broad-sweeping changes. For in-sourced IT groups, at least, it is understood that any impact to business operations could very well be met with severe penalties, including job loss. In the case of outsourced IT, as long as the vendor is operating within contractual boundaries and meeting service-level agreements, penalization becomes much more difficult.

Hence, practicing DevOps in multi-sourced IT environments should be a fundamental driver of Service Integration and Management (SIAM) activities. Yet it seems that the majority of SIAM implementations focus on the continuance of traditional tower-based (read: siloed) approaches to IT delivery, characterized by managed communications handled through controlled gateways, a loosely federated group of ticketing systems and limited collaboration across the towers. In other words, the exact opposite of what DevOps prescribes as the cure to limited agility and low-quality delivery within IT.

In a fully realized DevOps multi-sourced IT environment, service levels should encompass responsibilities to participate in “cross-tower” activities oriented toward restoration of services as well as planning and deployment of new services. For example, if Vendor A is responsible for application development and Vendor B is responsible for infrastructure, then Vendor B should actively participate in the design and planning of the application with Vendor A, exactly as would be expected of in-sourced IT departments delivering these same responsibilities.
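
As a purely illustrative sketch of what such cross-tower obligations could look like when written down (the vendor names, towers and activities below are hypothetical, not drawn from any specific contract), the responsibilities could be captured as shared data that both vendors' service levels reference:

```python
# Hypothetical cross-tower responsibility matrix; vendor names and activities
# are illustrative assumptions only, not taken from any real outsourcing contract.

RESPONSIBILITIES = {
    "application_design": {
        "accountable": "Vendor A",                 # application development tower
        "participants": ["Vendor B"],              # infrastructure tower joins design reviews
    },
    "capacity_planning": {
        "accountable": "Vendor B",
        "participants": ["Vendor A"],              # app team supplies demand forecasts
    },
    "incident_restoration": {
        "accountable": "SIAM",
        "participants": ["Vendor A", "Vendor B"],  # cross-tower swarm, not ticket ping-pong
    },
}

def participants_for(activity: str) -> list[str]:
    """Return every party contractually obliged to take part in an activity."""
    entry = RESPONSIBILITIES[activity]
    return [entry["accountable"], *entry["participants"]]

if __name__ == "__main__":
    print(participants_for("application_design"))  # ['Vendor A', 'Vendor B']
```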

There is some thinking that as Vendor B starts to deliver infrastructure-as-a-service, continued non-integration of towers will become less problematic. After all, if infrastructure is sourced from Amazon or Microsoft, then it is assumed that these vendors would not be a responsible party in the activities associated with failure or insufficient capacity.

This thinking is flawed where a vendor is providing privately-hosted IaaS as part of an outsourcing agreement, unless the expectation is that all responsibility for ensuring availability, security and the other -ilities of the application is being pushed left onto Vendor A. Yet even in this case, Vendor B could not deliver an economical privately-hosted IaaS solution without limiting oversubscription of capacity, which would require that it be an active participant in establishing capacity requirements in advance of demand.
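
To make the capacity argument concrete, here is a minimal back-of-the-envelope sketch; the figures, the oversubscription ratio and the headroom factor are assumptions for illustration only. The point is simply that Vendor B can only size physical capacity economically if it participates in demand forecasting ahead of time:

```python
# Back-of-the-envelope capacity sizing; all figures are illustrative assumptions.

def physical_capacity_needed(forecast_vcpus: int,
                             oversubscription_ratio: float,
                             headroom: float = 0.2) -> float:
    """Physical vCPUs Vendor B must provision, given forecast demand,
    an assumed vCPU oversubscription ratio, and safety headroom."""
    return forecast_vcpus * (1 + headroom) / oversubscription_ratio

# Without a demand forecast from Vendor A, Vendor B must provision conservatively...
print(physical_capacity_needed(forecast_vcpus=4000, oversubscription_ratio=1.0))  # 4800.0
# ...whereas advance participation in capacity planning lets it oversubscribe safely.
print(physical_capacity_needed(forecast_vcpus=4000, oversubscription_ratio=4.0))  # 1200.0
```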

Moreover, in the case of private IaaS, the enterprise would want some assurances in the face of an infrastructure failure, which has happened to every major IaaS vendor. That is too high a risk for an outsourcing vendor to sign up to without pricing the service at a point that would make it an untenable choice.

Physical infrastructure is only one example to consider, as many organizations also multisource database and middleware operations to one vendor, Vendor C, and application delivery to other vendors. Here again, we have seen how Vendor C limits the agility of application delivery by forcing activities to go through the SIAM layer to be addressed. For example, the application delivery team may want to create a test environment that would require Vendor C to configure and set up application infrastructure. Since most SIAMs focus more heavily on operations than on development and test, these requests are given lower priority, and thus the process of making the environment available could take weeks instead of hours. This has a dramatic impact on the enterprise’s ability to leverage continuous delivery as a competitive differentiator.
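
As a hypothetical sketch of the alternative (the EnvironmentBroker interface below is illustrative only and does not correspond to any real SIAM tool or vendor API), the same test environment request could be a declarative, self-service call rather than a ticket queued behind operations work:

```python
# Hypothetical self-service environment request; the EnvironmentBroker API is an
# assumption for illustration, not a real SIAM or Vendor C interface.

from dataclasses import dataclass, field

@dataclass
class EnvironmentRequest:
    requester: str
    purpose: str                              # e.g. "integration-test"
    middleware: list[str] = field(default_factory=list)
    databases: list[str] = field(default_factory=list)
    ttl_hours: int = 72                       # environments are disposable by default

class EnvironmentBroker:
    """Stand-in for an automated provisioning service that Vendor C could expose
    instead of accepting tickets through the SIAM layer."""

    def provision(self, request: EnvironmentRequest) -> str:
        # A real implementation would drive infrastructure-as-code pipelines;
        # here we only acknowledge the request to show the shape of the interaction.
        components = ", ".join(request.middleware + request.databases)
        return (f"environment for {request.purpose} with {components} "
                f"provisioned for {request.ttl_hours}h")

if __name__ == "__main__":
    broker = EnvironmentBroker()
    req = EnvironmentRequest(requester="app-delivery-team",
                             purpose="integration-test",
                             middleware=["tomcat"],
                             databases=["postgres"])
    print(broker.provision(req))  # turnaround in minutes or hours, not weeks
```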

In conclusion, SIAM that is not aligned with DevOps philosophies and continuous delivery is passé before it is even implemented. Enterprises that engage in multisourced IT delivery need to re-evaluate their contracts to ensure that their service levels identify responsibilities for working with other vendors to meet the continuous delivery metrics established at the SIAM level. This is going to require a whole new level of collaboration and communication that goes far beyond ticket-based systems, which ultimately limit the potential to develop strong multisourced collaborative efforts.
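
As a minimal sketch of what a continuous delivery metric measured at the SIAM level might look like (the event format and the 48-hour target are assumptions, not figures from this post), the integrator could compute cross-vendor lead time from a shared stream of delivery events rather than from each vendor's own ticketing system:

```python
# Minimal cross-vendor lead-time metric; the event shape and the 48-hour target
# are illustrative assumptions, not part of the original article.

from datetime import datetime

def lead_time_hours(events: dict[str, str]) -> float:
    """Hours from code commit (Vendor A) to production deployment (Vendor B),
    measured from a shared event stream rather than per-vendor tickets."""
    committed = datetime.fromisoformat(events["commit"])
    deployed = datetime.fromisoformat(events["deploy"])
    return (deployed - committed).total_seconds() / 3600

events = {"commit": "2016-05-01T09:00:00", "deploy": "2016-05-02T15:00:00"}
hours = lead_time_hours(events)
print(f"lead time: {hours:.1f}h, within 48h target: {hours <= 48}")  # 30.0h, True
```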

More Stories By JP Morgenthal

JP Morgenthal is a veteran IT solutions executive and Distinguished Engineer with CSC. He has been delivering IT services to business leaders for the past 30 years and is a recognized thought leader in applying emerging technology for business growth and innovation. JP's strengths center on transformation and modernization leveraging next-generation platforms and technologies. He has held technical executive roles in multiple businesses, including CTO, Chief Architect and Founder/CEO. His areas of expertise include strategy, architecture, application development, infrastructure and operations, cloud computing, DevOps, and integration. JP is a published author of four trade publications, the most recent being “Cloud Computing: Assessing the Risks”. JP holds both a Master's and a Bachelor's of Science in Computer Science from Hofstra University.
