Legacy to Cloud Transformation: From Monolith to Microservices
By Neil McEvoy

The most powerful business benefit for large enterprise organizations migrating to the Cloud is the modernization of their legacy applications. Those same applications also present the biggest hurdle to overcome in achieving Digital Transformation goals.

Digital Banking
For example, in Canada legacy woes contribute to a perception that IT is just there as a maintenance function, not to add strategic value, meaning that IT teams find themselves too busy to innovate.

As Gartner has described, Canadian CIOs face a scenario where there is lower buy-in to the value of technology, and thus a perception that it is more of an operational cost than a strategic enabler. The biggest consequence is a lack of investment in upgrades and modernization versus ‘keeping the lights on’.

The Royal Bank of Canada’s CEO noted that the issue presents the most significant of digital transformation challenges, far more so than the threat of FinTech startups:

“Does [the lack of regulation for fintechs] hurt us? No, regulation is not the problem. The biggest barrier to adapting is the incredible legacy systems,” McKay said, noting that many banks have systems that are essentially 50 years old.

Digital Government
Government is another sector struggling with this same challenge.

For example, in the Government sector elderly COBOL-based systems are still prevalent; in the USA legacy systems account for some 70% of IT spend and cost the government nearly $40 billion a year to maintain. This is why agencies aren’t investing as much in new innovation-enabling technologies like Cloud as they might.

In October 2015 the UKAuthority website reported that the National Audit Office (NAO) had said the public sector is still struggling to master and realize the potential of digital transformation, despite the citizen and cost benefits it is known to deliver.

The NAO also identified legacy applications as the root cause of this lack of progress, reporting that over £480 billion of government revenues were reliant on them. It highlighted the many risks this presents, most notably resistance to the new digital innovations governments are required to adopt to deliver new online services:

“The government’s ICT strategy, published in March 2011, recognized legacy ICT as a barrier to the rapid introduction of new policies and particularly the move to ‘digital by default’. Legacy ICT reduces the flexibility to improve public services, makes it harder to protect against evolving cyber threats and increases government’s reliance on long-term contracts with large ICT companies. It is also likely to increase the cost of operating public services by preventing higher levels of automation and hinder data sharing intended to prevent fraud and error.”

In their audit they review a sample of government department situations and their legacy application challenges – the DWP Pension Service, HMRC VAT Collection, the NHS Prescription Payment Service and the OFT’s Consumer Credit Licensing Service.

These scenarios feature a variety of aged technologies, some originating as far back as 1973 and running on a mainframe computer. HMRC identified in 2009 that its 600 systems were “complex, ageing and costly”, and the report highlights how expensive a burden this is: the VAT collection service costs £430 million per annum to operate, and the DWP’s Pension Payment service £385 million per annum. That’s almost a billion pounds a year for just two applications.

Different options for addressing these situations are explored – ‘No Change’, ‘Enhance and Maintain’ and ‘Replace’ – each detailed in in-depth case studies.

Similarly, the Canadian Government’s audit office identified that its estate of legacy applications presented considerable risks of revenue-collecting downtime, and also inhibited the development of modern, online systems.

Legacy IT – Risks
The report identified eight key risks of legacy IT. The first four concern day-to-day operations:

  • Disruption to service continuity – Legacy ICT infrastructure or applications are prone to instability due to failing components, disrupting the overall service. Failure of the legacy ICT may be more difficult to rectify due to the complexity or shortage of components.
  • Higher security vulnerabilities – Older systems may be unsupported by their suppliers, meaning the software no longer receives bug fixes or patches that address security weaknesses. The system may not therefore be able to adapt to cyber threats.
  • Vendor lock-in – Legacy ICT systems are often bespoke and have developed more complexity over time, to the extent that only the original supplier has the knowledge to support them. For example, the OFT felt that only the original developer could maintain its application, due to its bespoke complexity and lack of documentation, and consequently extended its outsourcing contract.
  • Skills shortages – The HMRC VAT system is facing a skills gap due to the age profile of the support staff and declining skills internally and with the supplier.

Inhibiting Business Transformation

The remaining four risks identified directly inhibit an agency’s ability to achieve its Digital Transformation goals.

  • Manual workarounds – More manual processing can be required due to the lack of functionality within the system or its inability to interface with other systems. Examples of workarounds include performing detailed calculations outside the system on spreadsheets; re-entering data on to other systems or having to manually check for processing and input errors.
  • Limited adaptability – New business requirements may not be supported by the legacy ICT. These may include requirements such as the provision of digital channels, the provision of real-time information and the ability to process transactions in a new way.
  • Hidden costs – The true cost of operating the system may not be known. Workarounds to the system and the cost of the additional manual processes may not be recorded. By not having all the information available at the right time, legacy ICT may not be able to provide real-time performance information which could lead to poor decision-making.
  • Business change – Due to the complexity or the limited availability of the skills required, change may be difficult, lengthy to implement and costly. This makes it difficult for the business to be responsive and changes may have to be prioritised.

In short, the difficulty of updating legacy applications prevents the implementation of new digital government features. As the report states: “Legacy ICT is harder to adapt to meet changing business needs. We found that where an organisation has replaced its legacy ICT system, adaptability has increased.”

For example:

“OFT commissioned an efficiency and effectiveness review in April 2010, which recommended the redesign of business processes to streamline consumer credit processing. While most changes were implemented, some could not be supported by the legacy ICT and therefore were not adopted.”

One of the approaches, ‘Enhance and Maintain’, is based on keeping the legacy application and creating new interfaces to it, such as mobile or web access, described as “wrappers”. However, this does not address the core limitations of the legacy technology.
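
To make the wrapper idea concrete, here is a minimal Java sketch of a web-facing facade over an unchanged legacy back end. All class and method names are hypothetical: the point is that the wrapper adds a new channel without changing what the legacy system can actually do.

```java
// Hypothetical sketch of the 'Enhance and Maintain' wrapper pattern:
// a thin, modern facade over an unchanged legacy back end.
public class VatAccountFacade {

    // Stand-in for the unchanged legacy system, reachable only
    // through its original batch-oriented interface.
    interface LegacyVatSystem {
        String fetchAccountSnapshot(String taxpayerId); // data as of last batch run
    }

    private final LegacyVatSystem legacy;

    public VatAccountFacade(LegacyVatSystem legacy) {
        this.legacy = legacy;
    }

    // The "modern" API simply re-presents legacy data. The wrapper adds
    // a web channel but cannot add real-time behaviour: the snapshot is
    // still only as fresh as the last overnight batch.
    public String getAccountViewJson(String taxpayerId) {
        return legacy.fetchAccountSnapshot(taxpayerId);
    }
}
```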

For example, although the VAT system has been considerably updated via this approach, it is still not a fully digital service, as customers are unable to view their accounts in real time, and HMRC has found it challenging to achieve a ‘whole customer’ view because its customer data is stored across a number of legacy ICT systems.

Other key limitations include the ‘batch processing’ approach of older platforms.

“Business transformation, including the drive for digital transformation is proving challenging for departments when it involves legacy ICT. Many legacy systems require data to be processed as a sequence of batches that is incompatible with a fully real-time digital service. In the pension system, for example, online applications have to be manually re-entered into the main system by a DWP operator, as the website and the main legacy ICT system are not integrated. The approach of adding functionality through the addition of interfaces to the core legacy ICT is likely to be insufficient to achieve full digital transformation.”

Additional processes are required due to the limited adaptability of systems using batch processing. The VAT return error correction process is a typical example of such manual intervention: VAT returns submitted online are only partially validated and corrected as they are entered. Full validation, risk identification and correction can only be done after the overnight batch is run, at which point errors are picked up by the error correction team and addressed manually. This is typical functionality for the technology design of that era; validating, and identifying more errors, at the point of submission would lead to greater efficiencies.
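
As a simple illustration of that last point, the sketch below shows what point-of-submission validation looks like: errors go straight back to the submitter rather than surfacing after an overnight batch. The field names and rules are invented for the example, not taken from the actual VAT system.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: validation at the point of submission, instead of
// discovery by an error-correction team after the overnight batch run.
public class VatReturnValidator {

    public record VatReturn(String vatNumber, double outputsVat, double inputsVat) {}

    // Returns all problems immediately; an empty list means the
    // return can be accepted straight away.
    public static List<String> validateOnSubmit(VatReturn r) {
        List<String> errors = new ArrayList<>();
        if (r.vatNumber() == null || !r.vatNumber().matches("\\d{9}")) {
            errors.add("VAT registration number must be 9 digits");
        }
        if (r.outputsVat() < 0 || r.inputsVat() < 0) {
            errors.add("VAT amounts cannot be negative");
        }
        return errors;
    }

    public static void main(String[] args) {
        // Both rules fail and are reported instantly, with no batch delay.
        validateOnSubmit(new VatReturn("12345", 100.0, -5.0))
                .forEach(System.out::println);
    }
}
```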

At HMRC, exception processes like this represented 20% of costs.

Furthermore, the increased complexity caused by additional interfaces and connections with other systems makes routine changes to legacy ICT costly and protracted. The existing complexity of the DWP’s pension legacy system means changes take up to 18 months from planning to deployment.

Cloud Transformation: Migration and Modernization
Migrating to the Cloud presents the potential to address these challenges, but not when the scope is limited to a ‘lift and shift’, migration-only exercise. It must also be combined with application modernization best practices, as CIO.com begins to touch on, to achieve a full transformation.

A cloud migration project can be a relatively simple exercise, where applications are migrated ‘as is’, to gain benefits such as elastic capacity and utility pricing, but without making any changes to the application architecture, software development methods or business processes it is used for.

There may be a clear business case for doing so, such as the hardware platform becoming obsolete; however, the organisation overall won’t realise any additional benefits – there is no business transformation as part of this move.

Legacy modernization best practices can address these issues, delivering business benefits including:

  • Untangle and map legacy application complexities – Build a basis of understanding of existing application and data architectures to establish more intelligent IT planning concepts in line with business and technical demands. Developers with no experience of the legacy software can be enabled to implement changes in line with business needs.
  • Extend the life of legacy applications without the risks of greenfield COTS projects – Numerous reports highlight how a COTS (Commercial Off The Shelf) approach to modernization is very high risk with expensive failure rates.
  • Align user interfaces and back-end application and data models with modern business processes – Modernization can be used to achieve IT objectives such as SOA, Cloud migration and Web-enablement of applications.
  • Leverage new technologies and tools – The overarching benefit is the transformation of software that has become resistant to change, and thus to innovation, because the people with the required skills have long since retired and/or the suppliers are no longer in business. By moving it to a modern software platform, new tools and techniques like ‘DevOps’ can be implemented to speed the rate of innovation.

Architecture Driven Modernization
With senior executives potentially expecting broader strategic capabilities as a result of a move to the cloud, clarifying this scope should be the very first step in planning a cloud migration, and the OMG’s Architecture Driven Modernization (ADM) methodology is ideal for this purpose (intro white paper: Transforming the Enterprise).

As the ADM ‘horseshoe’ model articulates, and this Carnegie Mellon article shows, a migration project can be considered at three distinct tiers of scope, with the size and length of the project increasing alongside the level of associated business benefit.

The tiers are: 1) (T)echnical Architecture, where the underlying IT pieces are moved around for technical reasons but neither the software itself nor the business model changes; 2) (A)pplication Architecture, re-engineering the software architecture; and 3) (B)usiness Architecture, a full reinvention of the whole organization and business model.

Moving to Cloud can actually represent activity on all three fronts:

  1. (T) Virtualizing the platform to simply improve the underlying hardware usage. This is a purely technical migration, meaning the application is migrated ‘as is’ to a new hardware infrastructure service without modification.
  2. (A) Application Modernization, from simple re-writes to make use of native Cloud services such as AWS auto-scaling, through to wholesale transformation, such as converting COBOL code to Java. It can even enable a shift from a procedural software development method to an object-oriented one (see the sketch after this list).
  3. (B) Business model transformation – Changing business processes to a new operating model that best exploits these new capabilities.
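
As a minimal sketch of what that (A)-tier shift from procedural to object-oriented code can look like, the example below re-expresses a fragment of COBOL-style procedural logic as a Java class. Both the ‘before’ and ‘after’ are invented for illustration; real conversion tools work at far larger scale.

```java
// Before: COBOL-style procedural logic operating on global working
// storage (invented for illustration):
//   MOVE 0 TO WS-PENSION.
//   IF WS-AGE >= 66
//       COMPUTE WS-PENSION = WS-YEARS * WS-RATE.
//
// After: the same business rule encapsulated in an object that a
// developer with no COBOL experience can read, test and change.
public class PensionCalculator {
    private final int pensionAge;
    private final double weeklyRatePerQualifyingYear;

    public PensionCalculator(int pensionAge, double weeklyRatePerQualifyingYear) {
        this.pensionAge = pensionAge;
        this.weeklyRatePerQualifyingYear = weeklyRatePerQualifyingYear;
    }

    public double weeklyPension(int age, int qualifyingYears) {
        return age >= pensionAge ? qualifyingYears * weeklyRatePerQualifyingYear : 0.0;
    }
}
```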

As the horseshoe describes, each increase in scope means a larger project that takes longer, because each tier delivers a larger scope of business benefits, impacts a larger group of stakeholders and requires a larger business transformation exercise.

Modernization would enable government agencies to eliminate unnecessary, non-standard and obsolete technologies, a huge cost they currently endure, and the financial benefits expand even further when business process improvements are also achieved.

A Standish Group study found that less than 30% of the code in a given application contains business logic, meaning that the bulk of the costs are tied up purely in maintaining the proprietary hardware, and the IBM Systems Journal reported that as much as 60-80% of the functionality in application silos may be redundant or duplicated in other silos. All of these inefficiencies can be flushed out and eliminated through consolidation in a fully scoped Cloud Transformation project.

From Monolith to Microservices
More importantly, beyond eliminating those costs, modernization would enable government agencies to break “innovation gridlock”.

Breaking innovation gridlock
Exploring the nature of these benefits can help specify exactly what business executives are hoping to gain by moving to the cloud, headlined by this theme of “breaking innovation gridlock”, described in this whitepaper from HP.

Although moving to IaaS can deliver benefits such as elastic capacity and utility pricing for infrastructure-level components, this isn’t really of strategic value to most large organisations, as they aren’t constrained in these areas.

Instead, the major business value will come from modernising this legacy environment: transforming the core enterprise applications to new cloud-centric approaches so that innovation gridlock is broken and a faster cycle of development throughput is achieved.

A variety of tools are available that can automate the process of transforming legacy code like COBOL into its modern equivalents on Java and .NET, meaning it can be re-deployed to private or public Cloud services and, most importantly, then much more easily modified by software developers, setting the scene for an agile Enterprise DevOps culture and a faster change cycle achieved through Continuous Deployment practices.

Furthermore, leading-edge Cloud architecture principles such as ‘Microservices’ can also be utilized. This means breaking up large monolithic software, like mainframe systems, into an array of small self-contained services, making it even easier to implement change at a faster pace.
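
To show what ‘small and self-contained’ means in practice, here is a minimal, dependency-free Java sketch of a single service carved out of a monolith, using only the JDK’s built-in HTTP server. The endpoint and payload are invented for the example.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// One self-contained microservice: a single narrow responsibility,
// running on its own port, so it can be changed, tested and redeployed
// independently of everything else that used to live in the monolith.
public class AccountBalanceService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/balance", exchange -> {
            byte[] body = "{\"balance\": 125.50}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start(); // the whole deployable unit ends here
    }
}
```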

Microservices Transformation
A microservices software architecture is the pinnacle of Cloud Native computing, and is relatively simple to understand for greenfield projects. For most enterprise organizations, however, it quickly leads back to the topic of legacy modernization, and the much more complex challenge of how to adapt their existing systems to this new approach.

InfoQ offers a great series of articles on the topic. That poor old monolith, you can migrate it, transform it, decompose it, break it, smash it, or just skip it.

This presentation from LinkedIn – From a Monolith to Microservices + REST – offers a detailed case study of exactly this scenario. It describes:

  • A legacy estate of Java, Servlets, JSP and Oracle databases.
  • A need to support fast release iterations as far back as 2010, which ran into the core challenges associated with monolithic software: test failures, rollback difficulties, and complex orchestration and dependencies between services.
  • How they broke the codebase apart, adopted Continuous Delivery practices and devolved controls, implementing a decentralized code base.
  • How the use of Java RPC had led to a proliferation of APIs that made backwards compatibility a big problem, a situation they addressed by moving to Rest.li, a REST + JSON framework, alongside Apache ZooKeeper for dynamic service discovery and DECO for URN resolution to explore data graphs.

This combination formed their particular ‘Microservices Recipe’, and when you consider the role social graphs play across the LinkedIn environment – how our business contacts are inter-connected and how we dynamically explore our way through them – you can see why it is an ideal design for this type of web site.
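
As a greatly simplified, plain-Java stand-in for the dynamic service discovery role that ZooKeeper plays in such a recipe, the sketch below lets instances register under a service name and lets callers resolve a live instance at request time. It is illustrative only and does not use the Rest.li or ZooKeeper APIs.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ThreadLocalRandom;

// Toy service registry: callers never hard-code hosts, they resolve a
// live instance by service name at request time.
public class ServiceRegistry {
    private final Map<String, List<String>> instances = new ConcurrentHashMap<>();

    public void register(String service, String hostPort) {
        instances.computeIfAbsent(service, k -> new CopyOnWriteArrayList<>()).add(hostPort);
    }

    public void deregister(String service, String hostPort) {
        List<String> live = instances.get(service);
        if (live != null) live.remove(hostPort);
    }

    // Pick any currently registered instance of the named service.
    public String resolve(String service) {
        List<String> live = instances.getOrDefault(service, List.of());
        if (live.isEmpty()) throw new IllegalStateException("no instances for " + service);
        return live.get(ThreadLocalRandom.current().nextInt(live.size()));
    }
}
```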

Others offer very practical permutations. For example, in this article Flickr describes how you can utilize GitHub to operate a ‘Microservices Store’.

“Some of the products that we work with at Yahoo have a very granular architecture with hundreds of micro-services working together. For scenarios like this, it’s convenient to store configurations for all services in a single repository. It greatly reduces the overhead of maintaining multiple repositories. We support this use case by having multiple top-level directories, each holding configurations for one service only.”
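
Such a single-repository ‘Microservices Store’ might be laid out as below, with one top-level directory per service; the directory and file names are hypothetical.

```
microservices-store/           # one repository for all service configurations
├── payment-service/
│   └── config.yaml
├── user-profile-service/
│   └── config.yaml
└── search-service/
    └── config.yaml
```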

This is a great idea when you consider that GitHub can provide the foundation for a complete DevOps toolchain, augmented in many ways such as adding apps to support Agile practices.

Similarly, Sensedia proposes a recipe for Legacy Modernization that defines how microservices can be utilized as an API enablement strategy.

Chandra Rajasekharaiah, Enterprise Solutions Architect at Macy’s, published an excellent deep-dive analysis of the Monolith to Microservices transformation and the software engineering challenges it presents, and Anil Madan, VP of Engineering at Intuit, describes the same journey from a broader perspective of platforms and organizations.

Finally, to close the loop back to Architecture Driven Modernization, this OMG presentation from Dr. Giovanni Traverso of Huawei is highly recommended.

It describes the process within an overall context of Omnichannel Digital Transformation and the role Business Architecture can play in planning and managing the exercise.

Specifically, on slide 15 Giovanni highlights how to ‘Preserve legacy investments with an incremental capability approach through microservices on PaaS’, defining the Business Architecture framework for the approach that Sensedia describes.
