Machine Learning May Be the Solution to Enterprise Security Woes
By Karl Zimmerman

For large enterprise organizations, it can be next-to-impossible to identify attacks and act to mitigate them in good time. That's one of the reasons executives often discover security breaches when an external researcher - or worse, a journalist - gets in touch to ask why hundreds of millions of logins for their company's services are freely available on hacker forums.

The huge volume of incoming connections, the heterogeneity of services, and the desire to avoid false positives leave enterprise security teams in a difficult spot. Finding potential security breaches is like finding a tiny needle in a very large haystack - monitoring millions of connections over thousands of servers is not something that can be managed by a team of humans.

Enterprise security is often purely preventative: we build a system that - we hope - reduces security risks as much as possible, deploy simple pattern-matching intrusion detection systems, and cross our fingers that nothing gets through.

It's not that we lack data about attacks; in fact, we have too much of it. What we lack is an intelligent system that can analyze huge volumes of data and extract actionable intelligence about security threats without an overwhelming proportion of false positives. If the signal-to-noise ratio is too low, all we've done is replace a huge haystack with a slightly smaller one.
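The base-rate arithmetic makes the haystack problem concrete. The numbers below are purely hypothetical, but they show how even a seemingly accurate detector buries analysts in false alarms when genuine attacks are rare:

```python
# Back-of-envelope illustration (all numbers hypothetical): a detector with a
# 99% true-positive rate and only a 1% false-positive rate still produces a
# shortlist that is almost entirely noise when attacks are rare.
events_per_day = 10_000_000   # monitored connections
real_attacks = 50             # actual malicious events among them
tpr = 0.99                    # true-positive rate of the detector
fpr = 0.01                    # false-positive rate

true_alerts = real_attacks * tpr
false_alerts = (events_per_day - real_attacks) * fpr
precision = true_alerts / (true_alerts + false_alerts)

print(f"alerts per day: {true_alerts + false_alerts:,.0f}")
print(f"precision:      {precision:.4%}")
```

Here roughly 100,000 alerts fire per day and well under 0.1% of them are real - the smaller haystack the paragraph above warns about.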

One possible solution, as you might have guessed, is machine learning. Machine learning algorithms, trained on the characteristics of particular networks, are likely to be far more successful at identifying real threats and notifying the right people.
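As a minimal sketch of that idea (this is not Apache Spot, and the per-connection features are illustrative assumptions), an unsupervised anomaly detector such as scikit-learn's IsolationForest can be trained on a network's normal traffic and then flag connections that don't fit the learned profile:

```python
# Sketch only: train an anomaly detector on synthetic "normal" traffic and
# score an unusual connection. Feature choices are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic normal traffic: [bytes_sent, duration_s, distinct_ports]
normal = np.column_stack([
    rng.normal(5_000, 1_500, 10_000),   # typical payload sizes
    rng.normal(2.0, 0.5, 10_000),       # typical session lengths
    rng.integers(1, 4, 10_000),         # a few ports per connection
])

model = IsolationForest(contamination=0.001, random_state=0)
model.fit(normal)

# A port-scan-like connection: tiny payload, short-lived, many ports.
suspicious = np.array([[40, 0.01, 500]])
print(model.predict(suspicious))  # IsolationForest returns -1 for anomalies
```

Because the model learns this network's own baseline, the same code flags different behavior as anomalous on different networks - which is exactly why per-network training beats generic pattern matching.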

That's the basic idea behind tools like Apache Spot, an advanced threat detection system that uses machine learning to "analyze billions of events in order to detect unknown threats, insider threats, and gain a new level of visibility into the network."

Spot - which runs on top of Hadoop - uses a variety of techniques, including machine learning, whitelisting, and noise filtering to monitor data from network traffic, filter bad traffic from good, and generate a shortlist of potential security threats.

Spot uses an open data model for threats, making it relatively easy to integrate the data it produces with existing tools and to collaborate with other organizations.
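To suggest what a shared schema buys you - the field names below are invented for illustration and are not Spot's actual Open Data Model - a normalized flow record might look like this, so any compliant tool can consume it without a custom parser:

```json
{
  "event_time": "2017-03-14T09:26:53Z",
  "src_ip": "198.51.100.2",
  "dst_ip": "10.0.0.17",
  "dst_port": 445,
  "bytes": 9000000,
  "threat_score": 0.97,
  "detector": "netflow-anomaly"
}
```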

Apache Spot began life as an Intel project called Open Network Insight (ONI); it was recently open sourced by Intel and Cloudera and accepted as an Apache project, and a number of other large organizations have been contributing since. The hope is that an open source project built on a common data model will gain traction among enterprise organizations, which can then collaborate to reduce the devastating, and expensive, impact of security breaches.

