In-Memory Data Grids and Cloud Computing

The promise of the cloud is a reduction in total cost of ownership

The use of in-memory data grids (IMDGs) for scaling application performance has grown rapidly in recent years as firms have seen their application workloads explode. This trend runs across nearly every vertical market, touching online applications for financial services, e-commerce, travel, manufacturing, social media, mobile, and more. At the same time, many firms are looking to leverage cloud computing to meet the challenge of ever-increasing workloads. One of the fundamental promises of the cloud is elastic, transparent, on-demand scalability -- a key capability that in-memory data grid technology makes practical. As such, IMDGs are becoming as vital a factor in the cloud as they have been for on-premise applications.

What makes IMDGs such a good fit with cloud computing? The promise of the cloud is a reduction in total cost of ownership. Part of that reduction comes from the ability to quickly provision and use new server capacity (without having to own the hardware). The essential synergy between IMDGs and the cloud derives from their common elasticity. IMDGs can scale out their memory-based storage and performance linearly as servers are added to the grid, and they can gracefully scale back when fewer servers are needed. IMDGs take full advantage of the cloud's ability to easily spin up or remove servers. They enable cloud-hosted applications to be quickly and easily deployed on an elastic pool of cloud servers to deliver scalable performance, maintaining fast data access even as workloads increase. This is an ideal solution for fast-growing companies and for applications whose workloads create widely varying demands (like online flowers for Mother's Day, concert tickets, etc.). These companies no longer need to provide space, power, and cooling for new hardware to meet fluctuating workloads. Instead, with a few button clicks, they can start up an IMDG-enabled cloud architecture that transparently meets their performance demands at a cost based solely on usage.
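To make the scaling model concrete, the toy sketch below (not any vendor's implementation, just an illustration of the idea) shows why adding servers grows an IMDG's capacity roughly linearly: each key is hashed to an owning server, so every new server contributes its own slice of memory and request-handling throughput. A real grid also rebalances existing data onto new servers and replicates it for fault tolerance, which this sketch omits.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration of how an IMDG spreads keys across servers by hashing.
public class ToyGrid {
    private final List<Map<String, String>> servers = new ArrayList<>();

    public ToyGrid(int serverCount) {
        for (int i = 0; i < serverCount; i++) {
            servers.add(new HashMap<>());   // stands in for one cloud server's memory
        }
    }

    private Map<String, String> serverFor(String key) {
        // Hash the key to pick the owning server/partition.
        int index = Math.floorMod(key.hashCode(), servers.size());
        return servers.get(index);
    }

    public void put(String key, String value) { serverFor(key).put(key, value); }
    public String get(String key)             { return serverFor(key).get(key); }

    public static void main(String[] args) {
        // "Spinning up" more cloud servers just means a larger server list here;
        // a real IMDG would also rebalance existing data onto the new servers.
        ToyGrid grid = new ToyGrid(4);
        grid.put("cart:alice", "3 items");
        System.out.println(grid.get("cart:alice"));
    }
}
```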

Expanding on the promise of the cloud, some in-memory data grids can span both on-premise and cloud environments to provide seamless "cloud bursting" for handling peak workloads. Let's say your e-commerce application stores shopping carts in an IMDG to give customers fast response times. To spur sales, your marketing group plans to run a special online sales event. Because traffic is projected to double during this event, additional web servers will be needed to handle the workload. Of course, maintaining fast response times as the workload increases is essential to success. By deploying your web application in the cloud and connecting it to your on-premise server farm with an IMDG, you can seamlessly double your traffic-handling capacity without interrupting current shopping activity on your site. You don't even need to make changes to your application. The combined deployments transparently work together to serve web traffic, and data flows freely between the IMDGs at both sites.
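The article doesn't tie this scenario to a particular product or API, but the application-side pattern is simple enough to sketch. The example below uses the open-source Hazelcast IMDG (4.x package names) purely as a stand-in: shopping carts live in a distributed map rather than in any one web server's memory, so on-premise and cloud web servers can serve the same sessions. Linking the two sites for cloud bursting would be grid configuration, not application code, and is not shown here.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

public class CartStore {

    // Cart objects must be serializable so the grid can move them between servers.
    public static class Cart implements Serializable {
        public final List<String> items = new ArrayList<>();
    }

    public static void main(String[] args) {
        // Joins (or starts) a grid member; web servers often use a lighter client
        // instead, but the map API looks the same.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // A distributed map partitioned across all servers in the grid.
        IMap<String, Cart> carts = hz.getMap("shopping-carts");

        Cart cart = new Cart();
        cart.items.add("flowers");
        carts.put("session-1234", cart);

        // Any web server, on-premise or in the cloud, can read the same cart.
        System.out.println(carts.get("session-1234").items);

        hz.shutdown();
    }
}
```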

These synergies form a solid basis for making 2014 a watershed year for IMDGs in the cloud. But, there's another big trend that will further drive adoption. As the discussion around "Big Data" analysis heats up, the emerging combination of Big Data and cloud computing - cloud-based analytics - promises to fundamentally change the technology of data mining, machine learning and many other analytics use cases. In 2014, we expect to see the trend toward in-memory, predictive analytics sharply increase, and cloud computing will be a fundamental enabler of that trend.

IMDGs integrate memory-based data storage and computing to make real-time data analysis easily accessible to users and help extend a company's competitive edge. IMDGs automatically take full advantage of the cloud's elasticity to run analytics in parallel across cloud servers with lightning-fast performance. Now it's possible to host a real-time analytics engine in the cloud and provide on-demand analytics to a wide range of users, from SaaS services for mobile devices to business simulations for corporate users. Or, maybe you want to spin up servers with, say, a terabyte of memory, load the grid, run analytics across that data, and then release the resources. In an extreme example, chemistry researchers recently used Amazon Web Services to achieve a "petaflop" of computing power running an analysis of 205,000 molecules for just one week. The elasticity of the cloud again makes the difference by providing the equivalent of a parallel processing supercomputer at your fingertips without the huge capital investment (the total cost was about $33,000).
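As a rough sketch of what "running analytics in parallel across cloud servers" looks like from the application's point of view, the example below assumes Hazelcast's built-in aggregation API (Aggregators, available since version 3.8): the aggregation is shipped to every grid member, runs against each member's local partitions in parallel, and only the partial results travel back to the caller.

```java
import com.hazelcast.aggregation.Aggregators;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class GridAnalytics {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Order amounts partitioned across every server in the grid.
        IMap<String, Double> orderTotals = hz.getMap("order-totals");
        orderTotals.put("order-1", 19.99);
        orderTotals.put("order-2", 42.50);

        // Each grid server computes over its local data in parallel;
        // only the partial results are combined at the caller.
        double average = orderTotals.aggregate(Aggregators.doubleAvg());
        long count = orderTotals.aggregate(Aggregators.count());

        System.out.printf("%d orders, average %.2f%n", count, average);

        hz.shutdown();
    }
}
```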

To sum up, in 2014 we expect firms to adopt cloud computing and cloud-hosted IMDGs at a rapid rate, and the trends of in-memory computing and data analytics will converge to enable fast adoption of in-memory data grid technology in public, private, and hybrid cloud environments. Enterprises that take advantage of this convergence are expected to enjoy a quantum leap in the value of their data without the need to break their IT budgets.

More Stories By William Bain

Dr. William L. Bain is founder and CEO of ScaleOut Software, Inc. Bill has a Ph.D. in electrical engineering/parallel computing from Rice University, and he has worked at Bell Labs Research, Intel, and Microsoft. Bill founded and ran three start-up companies prior to joining Microsoft. In the most recent company (Valence Research), he developed a distributed Web load-balancing software solution that was acquired by Microsoft and is now called Network Load Balancing within the Windows Server operating system. Dr. Bain holds several patents in computer architecture and distributed computing. As a member of the Seattle-based Alliance of Angels, Dr. Bain is actively involved in entrepreneurship and the angel community.
