Apache Ignite v1.0 Release Candidate By @GridGain

Apache Ignite will become for Fast Data what Hadoop is for Big Data

Today, we are proud to announce the first code drop of Apache Ignite, Apache Ignite v1.0 RC (Release Candidate), available for download on the Apache Ignite homepage. This is an exciting time for the project and the committers have been working hard since November to reach this milestone. We commend them all. Apache Ignite v1.0 RC not only carries forward the capabilities formerly available as the open source edition of the GridGain In-Memory Data Fabric, but now also boasts new ease-of-use and automation features, simplifying the deployment of an in-memory data fabric and allowing organizations to focus more on their core business and analysis.

We know that traditional disk-based storage infrastructure is too slow to address today's growing demand for speed at data volume, and more and more types of analytics are adding to that demand. A wider variety of applications require real-time responses to meet users' needs, and they are cropping up faster and nearly everywhere. To address this trend, last November we announced that the GridGain In-Memory Data Fabric had been accepted into the Apache Incubator program under the name "Apache Ignite," with the goal of ensuring that this robust and battle-tested software becomes the standard upon which the promise of in-memory computing will be realized. To refresh your memory, the Apache Ignite In-Memory Data Fabric provides a high-performance, integrated and distributed in-memory platform for computing and transacting on large-scale data sets in real time, orders of magnitude faster than is possible with traditional disk-based or flash technologies.

Apache Ignite v1.0 RC introduces the following new features:

JCache (JSR-107) support: JCache, a new standard for Java in-memory object caching, provides a seamless and easy-to-use API for interacting with Java in-memory caches. Apache Ignite's caching implementation is based on JCache, and it adds functionality such as ACID transactions, SQL queries, off-heap memory to avoid garbage collection (GC) pauses, custom eviction policies, and more. Since it is JCache-compliant, organizations can easily migrate to Apache Ignite from other JCache-compliant products such as Oracle Coherence, Software AG Terracotta or Hazelcast.
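To make the API shape concrete, here is a minimal, JDK-only sketch of the get/put/remove contract that JCache-style caches expose to applications. The real API lives in the `javax.cache` package (JSR-107) and Ignite's implementation adds transactions and SQL on top; the class names below are illustrative stand-ins, not actual Ignite classes.

```java
import java.util.concurrent.ConcurrentHashMap;

// Stdlib-only stand-in for the JCache-style key/value contract.
// Real code would obtain a javax.cache.Cache from a CacheManager;
// SimpleCache here just mimics the call pattern an application sees.
public class JCacheSketch {
    static class SimpleCache<K, V> {
        private final ConcurrentHashMap<K, V> store = new ConcurrentHashMap<>();
        public void put(K key, V value) { store.put(key, value); }
        public V get(K key) { return store.get(key); }
        public boolean remove(K key) { return store.remove(key) != null; }
    }

    public static String demo() {
        SimpleCache<Integer, String> cache = new SimpleCache<>();
        cache.put(1, "Hello");
        cache.put(2, "World");
        cache.remove(2);                       // entry 2 is evicted
        return cache.get(1) + " " + cache.get(2); // absent keys return null
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "Hello null"
    }
}
```

Because the standard fixes this surface, code written against `javax.cache.Cache` can be pointed at any compliant provider, which is what makes migration between Coherence, Hazelcast and Ignite practical.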

Auto-Loading of SQL data: This feature offers greatly simplified integration with relational database systems (Oracle, MySQL, PostgreSQL, DB2, Microsoft SQL Server, etc.) by automatically generating the application domain model from the database schema definition and loading the data from the RDBMS. Moving from disk-based architectures to in-memory architectures can be tedious in terms of defining the domain model, query indexes and query fields. Apache Ignite streamlines the process and saves time by providing a simple utility that automatically reads the database schema, creates the required in-memory indexes, and optionally generates the domain model in Java.
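Conceptually, the schema-import step maps SQL column definitions to a generated Java domain-model class. The sketch below illustrates that mapping with a hard-coded schema; the actual Ignite utility discovers tables and columns through JDBC metadata against a live database, and its generated classes also carry annotations for indexing, which are omitted here.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of schema-to-POJO generation: given a table name and
// ordered column definitions, emit the source of a matching Java class.
// The real Ignite utility reads this schema via JDBC metadata.
public class SchemaToPojo {
    // Map a handful of common SQL types to Java field types.
    static String javaType(String sqlType) {
        switch (sqlType) {
            case "INTEGER": return "int";
            case "BIGINT":  return "long";
            case "VARCHAR": return "String";
            case "DOUBLE":  return "double";
            default:        return "Object";
        }
    }

    public static String generate(String table, Map<String, String> columns) {
        StringBuilder src = new StringBuilder("public class " + table + " {\n");
        for (Map.Entry<String, String> col : columns.entrySet())
            src.append("    private ").append(javaType(col.getValue()))
               .append(" ").append(col.getKey()).append(";\n");
        return src.append("}\n").toString();
    }

    public static void main(String[] args) {
        Map<String, String> cols = new LinkedHashMap<>();
        cols.put("id", "INTEGER");
        cols.put("name", "VARCHAR");
        System.out.println(generate("Person", cols));
    }
}
```

The value of automating this step is exactly what the paragraph above describes: the domain model, indexes and query fields fall out of the existing schema instead of being hand-written.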

Dynamic cache creation: Apache Ignite supports dynamic cache creation on the fly: users define the configuration and cluster topology for a cache, and it is automatically started across the cluster. This simplifies configuration, removing the need to know all cache configurations in advance and to restart the entire cluster or update configuration files on every cluster member. This time-saving feature lets you define and deploy caches from a single client at any point during an application's runtime.
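The idea can be reduced to "first use creates the cache": a caller asks for a cache by name at runtime and, if it does not exist yet, it springs into existence with no restart and no pre-declared configuration. The JDK-only sketch below captures that create-on-demand pattern; the names are illustrative, and the real Ignite call additionally propagates a cache configuration to every node in the cluster.

```java
import java.util.concurrent.ConcurrentHashMap;

// Conceptual sketch of dynamic cache creation: named caches are created
// lazily at runtime rather than declared up front. In Ignite the analogous
// call also starts the cache on all cluster members.
public class DynamicCaches {
    static final ConcurrentHashMap<String, ConcurrentHashMap<Object, Object>> caches =
            new ConcurrentHashMap<>();

    // Returns the existing cache, or atomically creates it on first use.
    public static ConcurrentHashMap<Object, Object> getOrCreateCache(String name) {
        return caches.computeIfAbsent(name, n -> new ConcurrentHashMap<>());
    }

    public static int demo() {
        getOrCreateCache("orders").put(1, "order-1"); // cache created here
        getOrCreateCache("orders").put(2, "order-2"); // same cache, reused
        return getOrCreateCache("orders").size();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 2
    }
}
```

The `computeIfAbsent` call is what makes creation safe from a single client while other threads (or, in the distributed case, other nodes) may be racing to use the same cache name.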

While Apache Ignite offers a rich, high-performance data fabric for a broad range of applications, from transactional to analytical, it also offers a powerful and highly praised in-memory Hadoop accelerator that speeds up existing Hadoop applications without requiring any code changes.

According to Mike Matchett of The Taneja Group, "In memory is clearly a big thing. Lots of projects like Impala and Spark are tapping memory as a key resource. We expect to see Apache Ignite compared and contrasted with Spark a lot. But while Spark is a great new paradigm based on in-memory computing, it is essentially a new platform. Ignite will drop in rather seamlessly into existing Hadoop clusters and accelerate MR based applications to the point where it might not be necessary to jump over to Spark just for performance."

Mike continues: "Many of today's operational applications demand top performance, and at the same time as memory is getting cheaper and denser in today's servers, in-memory computing also continues becoming easier to adopt. This first version of Apache Ignite aims to make the migration from disk-based storage to an in-memory approach simple and straightforward. And with new online Dynamic Cache Creation feature, Apache Ignite starts to offer 'software-defined' capabilities which should prove attractive to fast moving and highly agile applications and operations."

As we stated before, we believe Apache Ignite will become for Fast Data what Hadoop is for Big Data. Let the innovation continue!

More Stories By Nikita Ivanov

Nikita Ivanov is founder and CEO of GridGain Systems, founded in 2007 and funded by RTP Ventures and Almaz Capital. Nikita has led GridGain in developing advanced distributed in-memory data processing technologies, including the top Java in-memory computing platform, an instance of which is started somewhere in the world every 10 seconds.

Nikita has over 20 years of experience in software application development, building HPC and middleware platforms and contributing to the efforts of startups and notable companies including Adaptec, Visa and BEA Systems. Nikita was one of the pioneers in using Java technology for server-side middleware development while working for one of Europe's largest system integrators in 1996.

He is an active member of the Java middleware community, a contributor to the Java specification, and holds a Master's degree in Electro Mechanics from Baltic State Technical University, Saint Petersburg, Russia.
