By Marten Terpstra
March 28, 2014 09:30 AM EDT
Perhaps not as popular as its siblings I, P and S, Network-as-a-Service, or NaaS, has slowly started to appear in industry press, articles and presentations. While it is sometimes associated with a hypervisor-based overlay solution, its definition is not very clear, which is not at all surprising: our industry does not do well at defining new terms. I ran across this presentation from USENIX 2012 that details a NaaS solution adding a software forwarding engine to switches and routers to provide specific services for some well-known cloud computing workloads.
I have some serious reservations about the specific implementation of the network services in this presentation, but the overall idea of specific network services delivered to applications and workloads resonates with me. Unless this is your first visit to our blog, your reaction is probably “duh, this is what Affinity Networking is all about”. Of course it is.
The network has traditionally been very static and simplistic in its offerings. The vast majority of networks run with an extremely small set of network services. Find me a network that uses more than some basic QoS based on queueing strategies, IP multicast (which many understandably avoid as much as they can), and perhaps some VRFs, and we will probably agree that it is the exception rather than the rule. And I deliberately exclude the underlying technologies used to accomplish this; those don’t change the service, they just enable it.
And it is not that networks are incapable of providing other services. Most hardware in use is capable of doing so much more, and in many cases even the configuration of that hardware is available. Extremely elaborate protocols exist to manage additional services, with new ones being developed constantly. And you can find paper after paper showing that specific network services can greatly improve overall solution performance. Many of these examples are based on big-data-type solutions, but I am pretty sure that translates to just about every solution with a significant dependence on the network.
So why then do we not have a much richer set of network services available to the consumers of the network?
There are probably multiple answers, but one that keeps bubbling to the top each time we look at this is abstraction. In simple terms, we have not made network services easy to create, easy to maintain, easy to debug, and most importantly, easy to consume. We talk about devops, yet the creation, debugging and maintenance of complex network services inside the core of a network is not at all trivial. Per the examples above, getting end-to-end QoS (consistent queueing, really) in place seems like a simple task but is not. And that is technology that has been around for well over a decade. Configuring each and every switch to ensure it has the same queueing configuration and behaviors, adjusting drop rates and queue lengths based on where a switch fits into the network, and defining which applications should fit into which queue is complex not because of the topic itself, but because of the number of touch points, the number of configuration steps, and the switch-by-switch, hop-by-hop mechanisms by which we deploy it. This is where devops will start.
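To make the touch-point problem concrete, here is a minimal sketch (all device names, roles and the CLI-style syntax are invented for illustration) of how one logical queueing policy fans out into per-switch configuration artifacts, each of which must be deployed, verified and kept in sync by hand in the hop-by-hop model:

```python
# Illustrative sketch with hypothetical names: one logical queueing
# policy expands into a device-specific stanza per switch, tuned by
# where that switch sits in the topology.

QUEUE_MAP = {"voice": 5, "bulk": 1, "default": 0}  # app class -> queue number

# Queue depth depends on the switch's role in the network
DEPTH_BY_ROLE = {"access": 2048, "aggregation": 4096, "core": 8192}

def render_config(switch, role):
    """Emit the CLI-style stanza that would be pushed to one switch."""
    lines = ["! %s (%s)" % (switch, role)]
    for app, queue in QUEUE_MAP.items():
        lines.append("class %s queue %d depth %d" % (app, queue, DEPTH_BY_ROLE[role]))
    return "\n".join(lines)

inventory = [("sw-access-01", "access"),
             ("sw-agg-01", "aggregation"),
             ("sw-core-01", "core")]
configs = [render_config(sw, role) for sw, role in inventory]

# Even three switches yield three distinct artifacts to deploy and audit;
# real networks have hundreds, and any drift breaks end-to-end consistency.
```

The point of the sketch is not the syntax but the multiplication: the policy is defined once, yet the deployment surface grows with every device and role in the path.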
But you also have to look at it from the other side. In the first few slides of the above-mentioned presentation, the presenter shows that the network engineer and the application engineer have wildly different views of the network. As they should. The application engineer should not need to know any of the ins and outs of the network and its behavior. He or she should be presented with an entity that provides connectivity, along with a set of network services it offers. And it should be trivial for an application to attach to any of these services without having to understand network terms. An application engineer should not need to know that DSCP bits must be set to get a certain priority behavior, or have to request from the network folks that a set of IP or Ethernet endpoints require lossless connectivity and must therefore be placed onto network paths that support PFC and QCN to enable RDMA over Ethernet or even FCoE.
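As a concrete illustration of the knowledge we currently push onto the application side: today, getting priority treatment can mean the developer literally setting DSCP bits on a socket. The snippet below (Linux-oriented; `IP_TOS` behavior varies by platform) shows exactly the kind of detail the application engineer should never have to carry:

```python
import socket

# DSCP EF (Expedited Forwarding) is decimal 46. DSCP occupies the upper
# six bits of the IP TOS byte, hence the shift by two -- precisely the
# sort of trivia an application engineer should not need to know.
DSCP_EF = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)  # 184 on Linux
sock.close()
```

In the consumable-services model argued for here, the developer would instead request a named service ("low latency", "priority") and the marking would be an implementation detail of the network.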
These types of services need to become extremely easy to consume. The architect of a very large private cloud described his ideal model by which applications (and he supports thousands of them) would consume network services. He envisioned an application registration model (through a portal, for instance) where application developers could express, in extremely simple non-network terms, what their application needed. Connectivity between components X and Y. The use of specific memory systems that have been predefined to use RDMA over Ethernet (and thus require lossless connectivity). This application consists of N components that need PCI compliance and therefore need to be separated from the rest of the applications. You name it: application behavior expressed in terms as far removed as possible from the actual implementation of the tools used to enable that service in the network.
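A minimal sketch of that registration model might look like the following. Everything here is hypothetical (the manifest fields, service names and translation table are invented for illustration): the developer states needs in application terms, and a translation layer the network team maintains once maps them onto named network services.

```python
# Hypothetical application manifest: the developer expresses needs in
# non-network terms, as in the registration model described above.
APP_MANIFEST = {
    "name": "trading-analytics",
    "connections": [{"from": "component-X", "to": "component-Y"}],
    "memory_fabric": "rdma",   # implies lossless connectivity
    "compliance": ["pci"],     # implies isolation from other applications
}

# Translation table the network team maintains once, not per request.
# How "lossless-ethernet" is realized (e.g. PFC/QCN) stays hidden.
REQUIREMENT_TO_SERVICE = {
    "rdma": ["lossless-ethernet"],
    "pci":  ["isolated-segment"],
}

def network_services(manifest):
    """Resolve an application manifest into named network services."""
    services = set()
    services.update(REQUIREMENT_TO_SERVICE.get(manifest.get("memory_fabric"), []))
    for tag in manifest.get("compliance", []):
        services.update(REQUIREMENT_TO_SERVICE.get(tag, []))
    return sorted(services)

print(network_services(APP_MANIFEST))  # ['isolated-segment', 'lossless-ethernet']
```

The design point is the indirection: the application never names PFC, QCN or DSCP; it names a need, and the mapping to mechanisms lives on the network side where it can evolve independently.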
There is lots of work to do on both ends of this consumable network service model. For the network engineer, it needs to become much easier to enable these network services in a controllable and maintainable manner. Easy to design, easy to deploy, easy to debug and maintain. For the application engineer, it needs to become easy to consume these network services. Simple and scalable registration and request mechanisms without a lot of network terminology. My post office comparison from a few weeks ago was perhaps very simplistic, but you have to admit, using the USPS is pretty simple. You walk up to the counter, there is a menu of shipment options, each with a price and an expected result, you pick what you want, they charge you for it, and off your package goes. And you don’t really worry or care too much how, just that it’s being delivered in accordance with the service you paid for…
[Today's fun fact: Stewardesses is the longest common word that is typed with only the left hand. As a result it has been banished in favor of flight attendant.]