
In the future, some applications will be entirely serverless, especially if the price point is better than that of cloud-based instances.

What to Look for When Choosing a Serverless Architecture
By Sunil Mavadia

Information Technology has advanced in different areas at different speeds. This has always been true, as people found workable solutions to the problems in front of them, most often before there was a market for those solutions. While things like virtualization and eventually cloud computing grew slowly, programming languages sat at roughly the same point for around a decade before the current round of newly popular languages and approaches (Python, Ruby, Node, Swift) came about. These things seem unrelated, but there is a union of the two sets of changes that we are about to see blossom.

That union is serverless computing: the joining of easily deployable instances (ultimately in the form of containers) with the ubiquity of REST-based APIs and the new programming languages that made REST-based APIs an easy and viable option. These technologies combined have created a new and different form of computing. While much of it is familiar (we know how to do REST, and we know how to do microservices), some of it is not.

Learning from Node.js
My very first Node.js program was probably not a lot different from most people's. I set up a listener on a port and path, wrote some code to execute and exit, and then used a browser to hit the full URI. Once it did what I wanted it to do, I naturally started playing with ways to take advantage of Node's parallelism and the other features that the platform introduced or made easier to use.

That first app was not a REST API; it was a simple HTML response to a request, a dynamic web page that changed as I required. But combined with REST, Node becomes an endpoint. And still, it executes when a call comes in and, except for the part of the infrastructure listening on the port, it exits when the work is done.
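
For anyone who skipped that rite of passage, here is a minimal sketch of that first program in TypeScript on Node; the port, path, and HTML are arbitrary placeholders, not anything from the original.

```typescript
// A sketch of that first program: listen on a port and path, answer with
// dynamic HTML, and sit idle until the next request arrives.
// Port, path, and markup are arbitrary placeholders.
import { createServer } from "http";

const server = createServer((req, res) => {
  if (req.url === "/hello") {
    // The "dynamic web page": the response body is computed per request.
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(`<html><body><p>Generated at ${new Date().toISOString()}</p></body></html>`);
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(3000, () => console.log("Listening on http://localhost:3000/hello"));
```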

The Infrastructure Wall
But to get even a simple Node application (or Python/Django, or Java/Spring, or RoR, or…) running, there is network configuration, firewall configuration, OS configuration and hardening, storage configuration, and, if you're using a Web Application Firewall (you should be for web apps), WAF configuration too… DevOps makes all of this easier, certainly. But what if you didn't need to do any of it at all?

A Serverless Architecture
That is precisely what a growing list of companies is proposing. What if, they posit, a developer could write code, deploy it, and have it there when needed, doing nothing when not needed, and as scalable as required? What if the configuration of things like the WAF were automated, and access to the code could be limited to just the right subset of people and machines? What if deployment really were as simple as uploading a zip file, or writing code in an editor and hitting "deploy"? And finally, what if things like database and storage access came pre-configured so you could just include them?
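
To make the "write code, deploy it" idea concrete, here is a minimal sketch of such a function as a TypeScript handler in the AWS Lambda Node style; the event shape and the TABLE_NAME variable are illustrative assumptions, not part of any particular vendor's contract.

```typescript
// A hypothetical serverless function: no ports, no OS, no firewall rules in
// the code itself. The platform invokes handler() when a request arrives, and
// the function simply returns a result.
// The event shape and TABLE_NAME variable are illustrative assumptions.
interface GreetEvent {
  name?: string;
}

export const handler = async (event: GreetEvent) => {
  // Pre-configured resources (a database table, a storage bucket) would
  // typically arrive as configuration, e.g. via environment variables.
  const table = process.env.TABLE_NAME ?? "unconfigured";

  return {
    statusCode: 200,
    body: JSON.stringify({
      message: `Hello, ${event.name ?? "world"}`,
      backingTable: table,
    }),
  };
};
```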

The appeal for developers is obvious. Being able to write code and deploy it without the operations procedures that used to be necessary (whether physical, virtual, or cloud) is many a developer's dream. But some real-world use cases spring to mind as well. The first is a single function that becomes a bottleneck during peak periods. Let's assume you have an online order system and that it suits your organization perfectly, but during peak sales periods (Black Friday, for example) address validation takes too long and constantly bogs the system down.

What if you could spin address validation out to a serverless function that takes an address and returns whether it is valid? And what if that function could scale out practically without limit? (You do still have to worry about network bandwidth, sorry.) Suddenly, your bottleneck is less of a problem, and you're only paying for the time your validation function actually runs. The more deliveries you're making, the more you pay; when you're not making many deliveries, you're not paying much. You're only paying for what you use.
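
As a rough illustration, here is what that spun-out function might look like as a TypeScript handler in the AWS Lambda style; the field names and validation rules are hypothetical placeholders, and a real implementation would consult postal reference data or a validation service.

```typescript
// Hypothetical address-validation function, suitable for a serverless platform.
// The field names and rules below are illustrative placeholders only.
interface AddressEvent {
  street?: string;
  city?: string;
  postalCode?: string;
  country?: string;
}

interface ValidationResult {
  valid: boolean;
  reasons: string[];
}

export const handler = async (event: AddressEvent): Promise<ValidationResult> => {
  const reasons: string[] = [];

  if (!event.street?.trim()) reasons.push("street is required");
  if (!event.city?.trim()) reasons.push("city is required");
  // Placeholder check; a real validator would consult postal reference data.
  if (!event.postalCode || !/^[A-Za-z0-9 -]{3,10}$/.test(event.postalCode)) {
    reasons.push("postal code looks malformed");
  }

  return { valid: reasons.length === 0, reasons };
};
```

The order system would then call this function over HTTPS during checkout, paying only for the milliseconds each invocation actually runs.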

What if all of this could be done on your own site or in the cloud, and at an affordable price? The products out there today (including Azure Functions, AWS Lambda, and nanoscale.io) offer all of the above.

It’s Not All Rainbows
But there is always a catch. Things you want to watch for when choosing a serverless architecture vendor are:

  1. Tool support. Support for your development tools, particularly your CI/CD/ARA tools, is critical. Serverless does you far less good if you can't integrate it into your existing development environment and, for that matter, your pipeline (a sketch of one such pipeline step follows this list).
  2. Charges and fees. Never forget that these vendors are businesses, the same as you. Their goal is to make money. Yours should be to understand how much of that money is coming out of your budget. Anything on-demand has a bit of a fudge factor, because it depends on the vagaries of usage by an unpredictable customer base. But understand what your organization will be charged for and what rights the vendor reserves to change those charges.
  3. Security functionality. Divorcing a chunk of code from the core of the application and putting it out in the cloud can mean a hefty investment in securing just that one bit of code. Make certain your vendor of choice either offers help in this area or allows you to keep the code behind your firewall and WAF.
  4. Your overall architecture. While the example above is a good illustration of "low-hanging fruit," more complex cases can run into problems with data access, data security, cost, and even uptime. Make certain your team has taken a good look at what to make serverless and that it will be well served by this model of computing.
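
On the tooling point in item 1, here is a hypothetical sketch of a pipeline step that publishes a packaged function from CI/CD, assuming AWS Lambda and the AWS SDK for JavaScript v3; the function name and artifact path are placeholders the pipeline would supply.

```typescript
// Hypothetical pipeline step: publish a packaged function as part of CI/CD.
// Assumes AWS Lambda via the AWS SDK for JavaScript v3; the function name and
// artifact path are placeholders supplied by the build stage.
import { readFileSync } from "fs";
import { LambdaClient, UpdateFunctionCodeCommand } from "@aws-sdk/client-lambda";

async function deployFunction(functionName: string, zipPath: string): Promise<void> {
  const client = new LambdaClient({}); // Region/credentials come from the CI environment.
  const result = await client.send(
    new UpdateFunctionCodeCommand({
      FunctionName: functionName,
      ZipFile: readFileSync(zipPath), // The zip produced by the build stage.
    })
  );
  console.log(`Deployed ${functionName}, version ${result.Version}`);
}

deployFunction("address-validation", "dist/function.zip").catch((err) => {
  console.error(err);
  process.exit(1); // Fail the pipeline stage if the deploy fails.
});
```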

Building the Right Application
In the future, some applications will be entirely serverless, particularly if the price point works out to be better than that of cloud-based instances. But even if it doesn't, there will be cases where the lack of infrastructure configuration is worth the cost. Meanwhile, look to serverless as a solution to bottlenecks caused by individual functions, and see whether it can increase your applications' capacity without increasing your infrastructure budget by more than necessary. In the end, it's about selecting the right services to achieve your goals, and serverless offers another tool in the IT toolkit for solving problems.
