

Making Apache Hadoop Less Retro: Bringing Standards to Big Data

The platform's penetration into enterprises has not lived up to Hadoop's game-changing business potential.

Ten short years ago, Apache Hadoop was just a small project deployed on a few machines at Yahoo; within a few years it had become the backbone of Yahoo's data infrastructure. Today, the Apache Hadoop market is forecast to surpass $16 billion by 2020.

This might lead you to believe that Apache Hadoop is currently the backbone of data infrastructures for all enterprises; however, widespread enterprise adoption has been shockingly low.

While the platform is a key technology for gaining business insights from organizational Big Data, its penetration into enterprises has not lived up to Hadoop's game-changing business potential. In fact, as Gartner research director Nick Heudecker put it, "Despite considerable hype and reported successes for early adopters, 54 percent of survey respondents report no plans to invest [in Hadoop] at this time, while only 18 percent have plans to invest in Hadoop over the next two years."

These findings suggest that although the open source platform is proven and popular among seasoned developers who need a technology that can power large, complex applications, its fragmented ecosystem has made it difficult for enterprises to extract value from their Apache Hadoop investments.

Another glaring barrier to adoption is the rapid, fragmented growth of Apache Hadoop components and platform distributions, which slows Big Data ecosystem development and stunts enterprise implementation.

For legacy companies, platforms like Apache Hadoop seem daunting and risky. If these enterprises cannot identify, up front, the baseline business value they stand to gain from a technology, they are unlikely to invest - and this is where industry standards come into play.

Increasing adoption of Apache Hadoop, in my opinion, will require platform distributions to stop asking legacy corporations to technologically resemble Amazon, Twitter or Netflix. Widespread interoperability standards - covering compatibility across platform distributions as well as the management and integration applications built on them - would let Big Data application and solution providers guarantee enterprises an official, bare-minimum level of functionality and interoperability for their Apache Hadoop investments.

This baseline of technological expectation will also benefit companies looking to differentiate their offerings. Standards for this open source Big Data technology will likewise make it easier for application developers and enterprises to build data-driven applications: standardizing the commodity components of an Apache Hadoop platform distribution spurs the creation of more applications, which boosts the entire ecosystem. A brief sketch of what "coding to the baseline" can look like follows below.
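
To make that baseline concrete, here is a minimal, illustrative sketch (not from the original article) of an application written only against Hadoop's core FileSystem API, with cluster settings read from the standard configuration files rather than hard-coded. The class name and the /data path are hypothetical; the point is that nothing in the code is tied to a particular vendor's distribution.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/**
 * A distribution-agnostic Hadoop client: it relies only on the core
 * FileSystem API, so the same jar should run against any compliant
 * Hadoop platform without vendor-specific changes.
 */
public class ListDataSets {
    public static void main(String[] args) throws Exception {
        // Configuration picks up core-site.xml / hdfs-site.xml from the
        // cluster's classpath; no endpoints are hard-coded here.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // The directory is illustrative; any HDFS path would do.
        Path dataDir = new Path(args.length > 0 ? args[0] : "/data");

        // List the datasets and their sizes.
        for (FileStatus status : fs.listStatus(dataDir)) {
            System.out.printf("%s\t%d bytes%n", status.getPath(), status.getLen());
        }
    }
}
```

Because the code touches only the standard API surface, swapping the underlying distribution should require no code changes - which is exactly the kind of guarantee interoperability standards aim to give enterprises.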

A real-world illustration of standardization in practice comes from the container shipping industry, which grew significantly once universal guidelines were implemented. After the International Organization for Standardization (ISO) adopted a formal shipping container standard to ensure the safe and efficient transport of containers, trade increased by more than 790 percent over 20 years - an incredible case for unifying and optimizing an entire ecosystem to ensure its longevity.

To help today's enterprise buyers harness the estimated 4ZB of data the world is generating, the open data community will need to work together to foster standardization across Apache Hadoop, giving new adopters confidence in their investment - regardless of the industry they serve.

From platform distributions to application and solution providers and system integrators, known standards to operate within will not only help sustain this piece of the Big Data ecosystem, but will also define how these pieces interoperate and integrate more simply - for the benefit of the ever-important enterprise.

More Stories By John Mertic

John Mertic is Director of Program Management for ODPi and Open Mainframe Project at The Linux Foundation. Previously, he was director of business development software alliances at Bitnami. He comes from a PHP and Open Source background, being a developer, evangelist, and partnership leader at SugarCRM, board member at OW2, president of OpenSocial, and frequent conference speaker around the world. As an avid writer, he has published articles on IBM Developerworks, Apple Developer Connection, and PHP Architect, and authored the book The Definitive Guide to SugarCRM: Better Business Applications and the book Building on SugarCRM.


