
Plexxi Pulse – Vote for OpenStack

The OpenStack Summit session voting is officially open, and Plexxi has two sessions in the running. Check out Nils Swart's session, "The Future of OpenStack Networking," and Derick Winkworth's session, "Group Policies for Neutron and evolving the abstraction model to merge with OpenDaylight," and get your votes in. In our video of the week, Dan Backman explains how the Plexxi Pod Switch Interconnect has expanded our product portfolio and looks at the differences between the switching platforms. Check out the video and a few of my reads in this week's Plexxi Pulse – enjoy!

Eric Krapf, contributor to No Jitter, discusses how the communications industry is increasingly embracing SDN, as evidenced by Microsoft and HP's use of a Lync API that can connect communications servers with the controllers in an SDN architecture. This article has a great discussion, even though it's not surprising that SDN is relevant in communications. If you think of communications as just another application of the network, then the idea that SDN will enable app-network exchanges is a natural extension of the technology. The issue is that people don't frequently think of SDN as enabling app-network collaboration. It has gotten a fairly narrow definition around controllers and OpenFlow, which misses the point of abstractions and workload delegation. This article provides a very practical example of what can be done and highlights how SDN doesn't need another three years to make an impact.
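
To make the app-network idea concrete, here is a minimal sketch (in Python, using a hypothetical controller URL and payload format rather than the actual Lync or HP API) of a communications application asking an SDN controller's northbound REST interface to prioritize a real-time session:

```python
import requests

# Hypothetical northbound REST endpoint; a real controller (OpenDaylight,
# a vendor controller, etc.) exposes its own URLs and payload schemas.
CONTROLLER_URL = "https://sdn-controller.example.com/api/policies"

def request_realtime_priority(src_ip, dst_ip, bandwidth_kbps):
    """Ask the controller to prioritize a voice/video session between two hosts."""
    policy = {
        "match": {"src": src_ip, "dst": dst_ip},
        "action": {"class": "realtime", "min_bandwidth_kbps": bandwidth_kbps},
    }
    resp = requests.post(CONTROLLER_URL, json=policy, timeout=5)
    resp.raise_for_status()
    return resp.json()  # e.g., a policy ID the app can later withdraw

if __name__ == "__main__":
    # A communications server might call this when a call is set up and
    # remove the policy when the call tears down.
    request_realtime_priority("10.0.0.5", "10.0.0.9", bandwidth_kbps=512)
```

In this framing, SDN is simply the network exposing an API that applications can program against, which is exactly the kind of app-network collaboration the article describes.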

Jude Chao, editor at Enterprise Networking Planet, says more mergers and acquisitions will occur in the networking industry in 2014, according to a report released by PricewaterhouseCoopers (PwC) this week. Jude says SDN is a major factor driving this industry shift. I agree with the underlying premise – that there will be consolidation. A small number of networking hardware and software players will get acquired, and several startups will fail to pick up traction and lose out on subsequent rounds of funding. I also believe a few smaller vendors will break out as independent players. Interestingly, the action will be in the periphery as well. Analytics, monitoring tools, data collection, DevOps, and even point solutions for niche applications will do well, and a round of change for the VARs will occur. Deploying SDN will mean a shift in business models, and not all VARs will make the leap. Much of this industry shift is not necessarily tied to commodity hardware, because cheap underlying hardware is not that big a driver of industry consolidation. Differentiation already exists in the software today, even if pricing favors hardware.

The Register’s Jack Clark discusses the “seismic shifts” in networking and focuses on a Wall Street Journal interview with AT&T’s head of technology and network operations, John Donovan, who says the telco’s Supplier Domain Program 2.0 saves money. The networking industry has had good margins for a long time. This isn’t because of the hardware being intelligent or not. It’s because there hasn’t been much competition. The thousands of features that get deployed mean that the number of functionally equivalent devices for a particular spot in the network is small. With little competition, pricing stays high. Things like SDN are important for two reasons: first, they reset the architecture to some extent, which reduces the power of all those legacy features; second, they help automate workflow. The first point increases competition and drives prices down. The second addresses the bigger cost issue: managing all the devices. Ultimately, Cisco will drop their prices as competition heats up. They will be a player in the future of AT&T, just maybe not to the same extent they are today. But the real battle is going to be over long-term OpEx reduction. Merely making a cheaper switch doesn’t address that. If AT&T just wanted the same network they have today at lower prices, they would put pricing pressure on Cisco. This is about something much larger.

Blogger Ethan Banks contributed an article in Network Computing about how Ethernet switches and the purchasing process have changed in the last few years. He says buyers today “must learn a variety of technical nuances that set switches apart from one another, match those capabilities to their organization’s needs, and then move ahead to a purchase.” After reading this post, I wonder what role off-box capabilities will play in Ethernet switch selection in the future. SDN is about workflow automation. People interested in that will also key in on things like orchestration and DevOps. It could be that on-box support for what ultimately ends up being off-box functionality will matter more. I only mention this because I suspect that people will need to broaden their selection criteria beyond the box to include things like Puppet or Chef integration or even OpenDaylight support. This will take an already confusing process and potentially make it even tougher in the short term. Customers with a more solid grasp of their current and future strategy will be in a better position to make these types of decisions.

ReadWrite contributor Jonathan Crane explains how the IT department will make important strides toward driving innovation and growth in 2014. Jonathan analyzes a recent Gartner report that predicted numerous developments that will greatly impact the IT function across mobile device management, hybrid cloud integration and SDN. One of the things that becomes necessary in an infrastructure environment that is orchestrated as a whole, in support of the applications, is the expression of application requirements in application terms. Basically, to operationalize things, someone has to be able to capture what is important across the infrastructure. This cannot be specified in networking language or compute language. It has to be expressed relative to the application. The various infrastructure systems then need to translate the requirements into underlying behavior. I mention this because someone has to own the application abstraction. That would seem to fit with the definition of the OM, and the OM would then translate (or facilitate the translation of) the application requirements into underlying configuration primitives. This obviously has to be done through data models and APIs; a manual translation would leave us where we are today. The questions people need to be asking, then, are: who is defining the abstractions, and what tools do I need to use them? This is where open source projects like OpenStack and OpenDaylight come into play. Anyone who is in an OM role (or wants to be) needs to be looking at these projects very closely to understand how to intersect their IT operations with the availability of management frameworks and controller architectures.
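
As a rough illustration of that translation step (not Plexxi's, OpenStack's, or OpenDaylight's actual data model; the field names and mapping below are invented for the sketch), an application requirement captured in application terms might be mechanically mapped to network-level primitives like this:

```python
# Hypothetical application abstraction: requirements stated in application terms.
app_intent = {
    "app": "order-processing",
    "tiers": ["web", "db"],
    "requirements": {"max_latency_ms": 5, "min_bandwidth_mbps": 500, "isolated": True},
}

def translate_to_network_primitives(intent):
    """Map application-level requirements to illustrative network primitives.

    A real controller or management framework would do this through its own
    data models and APIs; the mapping here is purely illustrative.
    """
    reqs = intent["requirements"]
    return {
        "qos_class": "low-latency" if reqs["max_latency_ms"] <= 10 else "best-effort",
        "reserved_bandwidth_mbps": reqs["min_bandwidth_mbps"],
        "segment": "dedicated-segment-" + intent["app"] if reqs["isolated"] else "shared",
    }

print(translate_to_network_primitives(app_intent))
```

The point is not the specific fields but the direction of translation: requirements flow downward from application terms, and the mapping is owned by software rather than by hand-maintained device configurations.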

Mitch Wagner at Light Reading says that, according to a Forrester analyst, Cisco customers can stop purchasing the company’s switches and Cisco will still prosper. There will always be people who predict the demise of the incumbent. That might be hyperbole, but there will certainly be headwinds. I don’t think Cisco is incapable of executing against an SDN strategy. They have proven they can develop products, and when in doubt, they have mastered the strategic acquisition. SDN, however, is a new architecture. The new architecture reduces the need for the tomes of legacy features that have made it exceedingly difficult to get off the Cisco drug. With a new architecture, you get a more level playing field with lower barriers to entry. It’s the increased competition that will whittle away share. Will it be 20 or 30 points? Probably not, but you could see significant share movement over the next three to five years.

Tom Hollingsworth, the Networking Nerd, says SDN vendors are creating an event horizon: in black-hole terms, the boundary beyond which events no longer affect observers, the point of no return. If SDN enables bidirectional communication between the apps and the network, it stands to reason that you would begin to architect each of them differently. Obviously you must start with making it possible (no one will change anything if there is no support for it), but then you can create applications that take advantage of network information. Imagine massive data replication jobs. If they are not time critical, you could schedule them and create pipes across the network for those windows. You could serve content from caches that are less congested. You could do things like variable bit rate for mobile connections that are shifting from 3G to EDGE and back to LTE on a train ride. Ultimately, I agree with the premise of this post. I don’t think the future is overlays that are completely agnostic to the underlying network. I think there will be a desire to pin the overlays to the physical infrastructure and allow for dynamic optimization of the physical transport to suit whatever is happening on the overlay.
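
As a toy example of the replication scenario, here is a minimal sketch (the controller URL and request format are hypothetical, invented for illustration) of scheduling a non-time-critical bulk transfer into an off-peak window and asking the network to reserve a path only for that window:

```python
import datetime
import requests

# Hypothetical scheduling endpoint; real controllers expose their own APIs.
CONTROLLER_URL = "https://sdn-controller.example.com/api/scheduled-paths"

def schedule_bulk_replication(src_site, dst_site, gigabytes, window_start, window_hours):
    """Reserve capacity between two sites for a bulk replication job."""
    window_end = window_start + datetime.timedelta(hours=window_hours)
    reservation = {
        "src": src_site,
        "dst": dst_site,
        "bytes": gigabytes * 10**9,
        "start": window_start.isoformat(),
        "end": window_end.isoformat(),
    }
    resp = requests.post(CONTROLLER_URL, json=reservation, timeout=5)
    resp.raise_for_status()
    return resp.json()  # e.g., a reservation ID the backup job can reference

if __name__ == "__main__":
    # Schedule tonight's replication into a 1 a.m. to 5 a.m. window.
    tonight = datetime.datetime.now().replace(hour=1, minute=0, second=0, microsecond=0)
    tonight += datetime.timedelta(days=1)
    schedule_bulk_replication("dc-east", "dc-west", gigabytes=2000,
                              window_start=tonight, window_hours=4)
```

The design choice worth noting is that the application states what it needs (volume and deadline) and the network decides how to carve out the path, which is the overlay-pinned-to-underlay optimization described above.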


The post Plexxi Pulse – Vote for OpenStack appeared first on Plexxi.


More Stories By Michael Bushong

The best marketing efforts combine deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong acquired these skills during 12 years at Juniper Networks, where he led product management, product strategy and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading its SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase and at ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
