Getting Granular with @Docker By @PhilWhln | @DevOpsSummit [#DevOps]

The world of IT has changed. Applications are getting more granular and, with this, the structure of IT organizations is changing

Getting Granular with Microservices, PaaS, Twelve Factor Apps and Docker

There's fog coming to the world of IT. Some people are calling this "The Cloud."

This fog turns everything inside out. It will take your monolithic applications, the guts of which are contained within large bodies of code, and break them apart so that discrete units of functionality are contained within their own process, exposed through an interface for others to see. Complexity is moving from the inside of a few large processes to the outside of many smaller processes.

People are rightly scared when they see that this fog, this "Cloud," will soon engulf their IT organization. Some of those who have been exposed to the fog have embraced it, applied good design practices and seen its real benefits. These people are singing its praises and telling others, "Don't fear the fog, but be prepared for it."

Why are things moving in this direction? Why are we breaking down our applications into smaller and simpler units of discrete functionality, only to have to wire them back together again on the outside?

There are many reasons. The most cloudy answer is that it helps you scale functionality independently and redundantly across highly available cloud infrastructure. Also, smaller components are easier to reason about. Small, efficient teams can focus on smaller components, rather than large teams whose members have to learn a monolithic code-base before they become useful.

Each smaller component can be written in the language best suited to it, with a back-end data-store equally suitable. I've seen a pure Java shop build a five-line proxy in Node.js when they moved in this direction, because it was the best tool for the job. Moving from Oracle, or even to Oracle, no longer becomes a multi-year project. It becomes a small project that only affects the teams where the move makes the most sense.
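
That proxy isn't shown here, but a sketch in Go of a comparably tiny reverse proxy gives a feel for how little code such a component needs. The backend address is, of course, hypothetical:

// A minimal reverse proxy, comparable in spirit to the five-line
// Node.js proxy mentioned above. The backend address is hypothetical.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	backend, err := url.Parse("http://localhost:9000") // hypothetical backend
	if err != nil {
		log.Fatal(err)
	}
	// httputil.ReverseProxy forwards each request to the backend
	// and streams the response back to the caller.
	proxy := httputil.NewSingleHostReverseProxy(backend)
	log.Fatal(http.ListenAndServe(":8080", proxy))
}

Five lines of Node.js or a couple of dozen lines of Go: either way, it is a component small enough for one team to own outright.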

When we have more and smaller moving parts, tooling needs to change - and it is. Automation is needed for provisioning machines, operating systems and the software stack to run the software that runs our business. It needs to be fast, consistent and it needs to scale. Filing tickets to provision a machine doesn't scale. Having a human decide where the instances of my application should run doesn't scale.

We now have the tooling to do this and these technologies are improving every single day. Public infrastructure and private solutions have greatly improved over what they were a year or two ago. In a year things will be different again. You have probably heard of Docker. Eighteen months ago, Docker didn't exist. Now it's everywhere and technology providers are racing to support its portability and interoperability. The ecosystem around Docker grew quickly and continues to flourish.

Cloud adoption is racing across enterprises like a fast moving fog.

The price of public cloud infrastructure is already dropping. Amazon Web Services, Google Compute Engine, Microsoft Azure and newer companies like Digital Ocean are in a race to the bottom, while trust in these public services is on the rise. Yes, we fear the NSA and all those cyber-no-gooders, but we are starting to realize that no matter how skilled our Operations teams are, they just cannot compete. How long can your Operations team say that they are building more secure infrastructure than these public infrastructure services, when the providers' pool of the smartest engineers on the planet is constantly growing?

This situation will only get worse as more companies start to use public infrastructure services. But there is hope on the horizon for private infrastructure. OpenStack is the open source alternative to these cloud infrastructure giants and it provides a glimmer of hope for those large enterprises, of which there are many, who do not want to sell their private IT souls just yet.

Even those who want to keep their infrastructure in-house see the benefit of utilizing public infrastructure for certain workloads. Public infrastructure allows developers to rapidly experiment. For instance, a team could spin up a 1,000-node Hadoop cluster in Brazil at the drop of a hat, without having to build out a data-center. Your average IT department will struggle to compete with that.

All large enterprises will likely use public infrastructure in some way. When they do, they will likely be spread across several, if not all, of these providers: one, to get the best price; two, because different features will be more suitable for different projects; and three, because organizations are large and getting consensus on which cloud provider to bet the farm on is not always feasible.

For a long time we are going to see enterprise IT straddled across all these public and private infrastructure technologies. The private infrastructure will be straddled across legacy systems and private cloud solutions such as OpenStack. Things are going to get more complex before they get simpler.

So how do we build software to run across the tectonic plates of this public and private infrastructure? There are three parts to it. At a high level, it's the way you architect your applications. Then there's the platform on which you build your applications. And at the lowest level is how you build those applications. Let's talk about architecture first...

Microservices Architecture
If you're a developer, you may have used the Twitter API, Google Maps API or any of the many APIs available out there. How long did you spend in a room with the developers of the Twitter API or Google API before you used them? Unless you are well connected, it is unlikely you will get any facetime with developers of those APIs, even if you hit a problem.

At the other end of the scale from Twitter and Google, there are many startups building their business around their own API. The API hub Mashape lists over 10,000 API services you can choose from, with everything from health, commerce and number crunching to "Yoda Speak". The 10,000+ companies behind these APIs are all trying to monetize access to them. You would probably be able to talk to some of these folks, tell them what features you need, and they might work with you to meet your needs. But will they allow your application to connect directly to their back-end database? Hell, no.

How about the programming language the API provider uses? Would you ask them? Would you care? Would you refuse to consume their API service because it was not written in Go? You wouldn't care. You would just look at the API provided, the SLA (service level agreement) and possibly the background of the team supporting it and start integrating with it.

Hopefully the API is versioned, so any changes upstream do not affect you and you can gracefully migrate your application to newer versions of the API.
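
To make that concrete, here is a small, hedged sketch in Go of a client pinned to one version of an API. The host and path are invented for illustration:

// Pinning a client to an explicit API version (v2 here) so that
// upstream changes land in v3 without breaking this integration.
// The host and path are hypothetical.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// The version lives in the URL path; migrating to v3 is a
	// deliberate, testable change rather than a surprise.
	resp, err := http.Get("https://api.example.com/v2/widgets")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body))
}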

OK, so we've got a mental picture of us, the consumers of APIs, and the many APIs that we consume - without caring how they are implemented. Possibly we are providing the data we consume from these as another API. Maybe it's to our web front-end or maybe it's to other consumers. Maybe it's turtles all the way down.

Now imagine this is your organization. Small teams providing distinct components of your business. Each team can focus on choosing the best tools, the most appropriate data-store - whether relational or NoSQL - and the most appropriate programming language. Each of these microservices provides an SLA and a versioned API. They continue to support previous versions and deprecate features gracefully. There is a roadmap of upcoming features with timelines so consumers can plan accordingly. High-level design is just as important as ever, if not more so.
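
On the serving side, supporting previous versions while offering new ones can be as simple as keeping both routes alive. A minimal sketch in Go, with hypothetical paths and payloads:

// A sketch of a microservice that keeps serving its deprecated v1
// interface while offering v2, so consumers can migrate gracefully.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	// v1 is deprecated but still supported per the published SLA.
	mux.HandleFunc("/v1/orders", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Deprecation", "true") // advisory header; consumers plan their migration
		json.NewEncoder(w).Encode(map[string]string{"status": "ok", "version": "v1"})
	})
	// v2 is the current interface.
	mux.HandleFunc("/v2/orders", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(map[string]string{"status": "ok", "version": "v2"})
	})
	log.Fatal(http.ListenAndServe(":8080", mux))
}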

For the teams working on these microservices, things are much simpler. A smaller component with a distinct interface is much easier to build and maintain. They can concentrate on getting their component right rather than having to consider all the other moving parts of the system. They can have their own independent release cycle.

The complexity comes with what happens outside these running services. These are operational concerns. For instance, where does it run? How do we scale it up? How do we deal with the many system failures we might encounter across machines and availability zones? How do consumers discover where this service is running, especially if it moves location due to failure? How do we handle latency between components and prevent a cascading catastrophe as each service's processes, threads and queues back up? This can happen quickly when latency suddenly goes from 2 milliseconds to 2 seconds. How do we handle errors in the system gracefully? What happens when a new version of a service fails? Can we fall back to a previous version?
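
One small defence against that kind of cascade is to give every downstream call a hard deadline, so a dependency that slips from 2 milliseconds to 2 seconds fails fast instead of tying up processes, threads and queues. A sketch in Go, with a hypothetical internal service URL:

// Fail fast on a slow dependency instead of letting requests pile up.
// The downstream URL and the 500 ms budget are hypothetical.
package main

import (
	"context"
	"log"
	"net/http"
	"time"
)

func main() {
	// Cancel the request if the dependency takes longer than 500 ms.
	ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
	defer cancel()
	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		"http://inventory.internal/v1/stock", nil)
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		// In a fuller design, a fallback or circuit breaker would
		// kick in here rather than surfacing the error directly.
		log.Printf("dependency call failed fast: %v", err)
		return
	}
	defer resp.Body.Close()
	log.Printf("dependency responded: %s", resp.Status)
}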

When we think long and hard about all the problems we might encounter, we want to distill them down and keep these concerns from overly polluting our now smaller, simpler applications. We want each application to concentrate on the explicit job that we have assigned to it. This is where an application platform comes in.

Application Platform
Platform-as-a-Service, or PaaS, is a concept that is starting to gain widespread acceptance. The past few years have seen growth of the ecosystems around open-source projects like Cloud Foundry. You are probably familiar with the public Platform-as-a-Service, Heroku, which has pioneered a great deal of what we know of and expect from PaaS today. Containerization, such as Docker, goes hand-in-glove with PaaS. In fact, Docker was born from the lesser-known public PaaS, dotCloud.

The application platform provides a way for developers to quickly deploy their software on a foundation which is agnostic to the underlying infrastructure technology or service provider. For instance, a single cluster of a Cloud Foundry based PaaS, such as ActiveState's Stackato, can span internal hardware, OpenStack and multiple service providers such as Google, Amazon or Microsoft.

What should we expect from an application platform?

  • It should be language agnostic. As a developer I should be able to push any code of my choosing and have it running as an application in minutes, if not seconds.
  • It should be data-store agnostic. As a developer I should have a range of diverse data-stores to choose from.
  • It should be consistent. Every time I push my application code, it should build and deploy it in the same way, so that I can be sure it will work in development as it works for QA, on staging or in production.
  • It should be self-serve. As a developer I should be able to provision all the services I need and deploy my code without having to file a support ticket or talk with Operations engineers.
  • It should be resilient to partial underlying infrastructure failure. If a machine within the cluster is lost, or even an availability zone, we should recover quickly and gracefully to the levels set in the SLA we give our application consumers. There should be no down-time, but temporary throttling and graceful service degradation may be acceptable. This often involves running an application with some redundancy to handle bumps and applications should be designed to be ephemeral.
  • It should be extensible. It should provide a programmable interface, such as a REST API. I should be able to interact with it via my IDE. I should be able to automate it. I should be able to integrate it with my Continuous Integration and Continuous Delivery pipelines (see the sketch after this list).
  • Ideally the application platform would support the portability of and be inter-operable with Docker containers.
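
What that programmable interface enables might look like the following sketch in Go of a CI step scaling an application through a platform's REST API. The endpoint, payload and token variable are hypothetical; real platforms such as Cloud Foundry, Stackato or Heroku each define their own API:

// Scaling an application from a CI pipeline via a platform REST API.
// Endpoint, payload shape and PAAS_TOKEN are all hypothetical.
package main

import (
	"bytes"
	"log"
	"net/http"
	"os"
)

func main() {
	payload := []byte(`{"instances": 5}`)
	req, err := http.NewRequest(http.MethodPut,
		"https://paas.example.com/api/apps/my-app", bytes.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/json")
	// The token would come from the pipeline's secret store, not the code.
	req.Header.Set("Authorization", "Bearer "+os.Getenv("PAAS_TOKEN"))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Printf("scale request returned %s", resp.Status)
}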

Building Applications
Let's assume we have the perfect architecture. Maybe it's a microservices architecture like the one Netflix runs; maybe it's something similar or different. No one size fits all. What is happening down in the weeds? Where do the applications run? How are things changing here when we move to the cloud?

Through the application platform I just described, we are giving developers more power. If our application platform can provide self-service, if it can free developers to choose the right programming languages, data-stores and the right technologies for the job, then we start to see rapid results and innovation.

If the domain of each application is smaller, then developers will have greater focus and will be able to understand and test it more easily. Small teams will be able to fully own these smaller components. Companies that have adopted this model often have the developers be the ones that carry the pagers. If an issue occurs in a small component owned by a small team, then that team has the best people to address the issue. Through the self-service tools they also have the power to rapidly deploy fixes. Through CI pipelines they would also be able to quickly see potential issues with any fixes.

An amazing thing happens when developers carry the pagers - the code quality goes up. Adrian Cockcroft, previously of Netflix, says this is analogous to car drivers having a six-inch spike instead of an airbag. If thinking about your code a little more during the day means not being woken up during the night, then you are more likely to do it. This is also similar to how code quality generally goes down when a development team first gets a QA team that will find all the bugs for them. It becomes someone else's concern. When there is nobody else to pass the buck to, and you own the code until it is decommissioned, you make sure it works well and will continue to work well.

How are the applications themselves changing?

Twelve Factor Apps
In order for applications to become scalable, they must be ephemeral. This means that each instance of my application is equal to the other instances, no matter whether it just came online or has been running for the past two weeks. Equally, it must have no effect on the overall system if one instance suddenly dies. This means no state. Application instances should be stateless and any persistent data should be offloaded to data services which are shared between all the instances of that application.

The application platform should handle the routing and load balancing across all healthy instances. The application platform should be able to wire-in the location and credentials of the data-services that the application instances will use.
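
A hedged sketch in Go of what this looks like from the application's side: the instance keeps no state of its own and picks up its data-store location and listen port from the environment. A Heroku-style DATABASE_URL variable is assumed here for simplicity; Cloud Foundry-based platforms expose bindings through a richer VCAP_SERVICES document instead:

// A stateless instance wired to its data service by the platform.
// DATABASE_URL and PORT are assumed to be injected by the platform.
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	dbURL := os.Getenv("DATABASE_URL") // injected by the platform, never hard-coded
	if dbURL == "" {
		log.Fatal("DATABASE_URL not set; this instance expects a bound data service")
	}
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// No state lives in this process; every instance answering
		// this route talks to the same shared data service.
		fmt.Fprintf(w, "would query the shared store at %s\n", dbURL)
	})
	log.Fatal(http.ListenAndServe(":"+port(), nil))
}

func port() string {
	// Twelve-factor apps take their listen port from the environment too.
	if p := os.Getenv("PORT"); p != "" {
		return p
	}
	return "8080"
}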

This ephemeral nature of applications and how they are wired into the overall system are part of a manifesto written by Heroku, called "The Twelve-Factor App." I recommend everyone read this. It's 12 short pages and outlines what the perfect cloud application should look like.

And Docker?
Docker now runs through the cloud from development to production. It is the portable entity that allows developers to package their code and deploy it anywhere that runs Linux. It was born from PaaS and this is where it makes most sense.

Docker is simply a building block and does not provide the orchestration. But the portability and interoperability that it brings means that it is becoming the common-denominator of cloud technology solutions. This benefits the users of the cloud more than anyone.

Docker's original concept was based on the idea of running a single process in each container. This idea of small, discrete units of operation has been skewed by people wanting to use containers as virtual machines, running an entire operating system. The "can we do this?" continues to drive development in this area, but just because you can do something doesn't mean you should. A Docker container can be tightly coupled to a single process and Docker can manage this well. As soon as we decouple this with our own layers of process management within the container, or daemonized processes, things get messy. It is better to keep things simple and utilize the application platform to do your orchestration and wiring for you.
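
One practical consequence of the one-process model: that process is effectively PID 1 in the container, with no init system behind it, so it should handle termination signals itself. A sketch in Go of graceful shutdown for a containerized HTTP service:

// When a container is one process, that process should handle
// termination signals itself; there is no init system to do it.
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080"}
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	// Docker sends SIGTERM on `docker stop`; drain connections and exit.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("shutdown: %v", err)
	}
}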

Conclusion
The world of IT has changed. Applications are getting more granular and with this the culture and structure of IT organizations is changing. Design your architecture to take advantage of the cloud, because building monolithic applications no longer scales to modern business demands. Monolithic applications cannot adapt quickly enough for the pace of change we are seeing.

Don't fear the fog, but be prepared for it.


More Stories By Phil Whelan

Phil Whelan has been a software developer at ActiveState since early 2012 and has been involved in many layers of the Stackato product, from the JavaScript-based web console right through to the Cloud Controller API. He has been the lead developer on kato, the command-line tool for administering Stackato. His current role at ActiveState is Technology Evangelist.


