DevOps principles and the corresponding software development lifecycles that enable higher-quality output

Cloud Native: The Next Major Developer Paradigm Shift

Every decade for the past forty years we’ve seen a major paradigm shift in software development. The next major paradigm shift for software developers—cloud native application development—is occurring now and it will require developers to once again re-think and modify their approach to developing software.

In the seventies we saw the transition from assembler to third- and fourth-generation programming languages. This change gave rise to implicit processing based on language constructs and the use of functional notation.

In the eighties we saw a paradigm shift from procedural programming to object-oriented programming, and with it the introduction of constructs like inheritance and polymorphism. This gave rise to the ability to develop abstract models that were fluid and could be implemented concretely across an abundance of compute architectures. However, for developers trained in the concrete design of functional notation, abstraction often represented a major hurdle to overcome. In both of these cases the developer had to learn to think differently about how to structure their code in order to best leverage the power of these advancements.
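
To make that shift concrete, here is a minimal Java sketch of the abstraction and polymorphism described above; the Shape, Circle, and Square names are illustrative. Callers program against an abstract model while concrete implementations vary freely:

```java
// Abstraction: callers depend on the abstract model, not on any concrete type.
abstract class Shape {
    abstract double area();
}

class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    @Override double area() { return Math.PI * radius * radius; }
}

class Square extends Shape {
    private final double side;
    Square(double side) { this.side = side; }
    @Override double area() { return side * side; }
}

public class Shapes {
    public static void main(String[] args) {
        // Polymorphism: one call site, many concrete behaviors.
        Shape[] shapes = { new Circle(2.0), new Square(3.0) };
        for (Shape s : shapes) {
            System.out.printf("%s area = %.2f%n", s.getClass().getSimpleName(), s.area());
        }
    }
}
```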

In the nineties we saw a paradigm shift from local programming models to distributed programming models. Frameworks like remote procedure calls and remote method invocation attempted to limit the impact of this shift on developers by providing a façade that mimicked a local programming model, hiding the complexities of interacting with a remote service. Ultimately, though, developers had to understand the nuances of developing distributed applications in order to build in resiliency and troubleshoot failure.
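
A rough Java illustration of that gap between façade and reality: the call below reads almost like a local function, but the timeout and retry policy must be supplied explicitly by the developer. The fetchQuote helper, the two-second timeouts, and the three-attempt retry are illustrative choices, not prescribed values:

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class RemoteCall {
    private static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))   // a remote call can hang; a local one cannot
            .build();

    // Looks like a local call, but resiliency must be handled explicitly.
    static String fetchQuote(String url) throws IOException, InterruptedException {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .timeout(Duration.ofSeconds(2))
                .GET()
                .build();
        IOException last = null;
        for (int attempt = 1; attempt <= 3; attempt++) {   // bounded retry on network failure
            try {
                return CLIENT.send(request, HttpResponse.BodyHandlers.ofString()).body();
            } catch (IOException e) {
                last = e;                                  // remember and retry
            }
        }
        throw last;                                        // surface the failure to the caller
    }
}
```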

In the early 2000s we saw a paradigm shift driven by advances in chip and operating system architecture that supported multiple applications running simultaneously. Now applications could run multiple threads of execution in parallel, supporting multiple users. This forced developers to rethink how they structured their code and how they used global and local variables. Many soon found their existing programming methods incompatible with these new requirements, as threads were stepping all over each other and corrupting data.
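
A minimal Java sketch of the data corruption described above: two threads incrementing a shared counter lose updates with a plain read-modify-write, while an atomic counter stays correct. The counts and thread layout are illustrative:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {
    static int unsafeCount = 0;                             // shared mutable state
    static final AtomicInteger safeCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCount++;                              // read-modify-write: updates can be lost
                safeCount.incrementAndGet();                // atomic: safe under concurrency
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // unsafeCount is usually less than 200000 because threads stepped on
        // each other's updates; safeCount is always exactly 200000.
        System.out.println("unsafe = " + unsafeCount + ", safe = " + safeCount.get());
    }
}
```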

The next major paradigm shift in software development is now upon us: developing cloud native applications. Building on the paradigm shift of the early 2000s, applications must now be able to scale across machines as well as within a single machine. Once again, developers will need to rethink how they develop applications to leverage the power of the cloud and escape the limitations and confines of current distributed application development models.
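
One common way to rethink an application so it can scale across machines is to keep request handlers stateless and push state to a shared external store, so any replica on any machine can serve any request. The sketch below assumes a hypothetical SessionStore interface; the in-memory implementation merely stands in for a real shared cache or database:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface SessionStore {                                    // hypothetical external-store contract
    String get(String sessionId);
    void put(String sessionId, String value);
}

class InMemorySessionStore implements SessionStore {        // stand-in for a shared cache or database
    private final Map<String, String> data = new ConcurrentHashMap<>();
    public String get(String sessionId) { return data.get(sessionId); }
    public void put(String sessionId, String value) { data.put(sessionId, value); }
}

class StatelessHandler {
    private final SessionStore store;                       // state lives outside the process
    StatelessHandler(SessionStore store) { this.store = store; }

    String handle(String sessionId, String input) {
        String previous = store.get(sessionId);             // fetch state per request
        String next = (previous == null ? "" : previous + "|") + input;
        store.put(sessionId, next);                         // write state back externally
        return next;
    }
}

public class ScaleOutDemo {
    public static void main(String[] args) {
        SessionStore shared = new InMemorySessionStore();
        // Two handler instances model two replicas on different machines
        // sharing the same external store.
        StatelessHandler a = new StatelessHandler(shared);
        StatelessHandler b = new StatelessHandler(shared);
        a.handle("user-1", "first");
        System.out.println(b.handle("user-1", "second"));   // prints first|second
    }
}
```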

There’s a lot of discussion around managing outages in production via the likes of DevOps principles and the corresponding software development lifecycles, which do enable higher-quality output from development. However, one cannot lay all the blame for “bugs” and failures at the feet of those responsible for coding and development. As developers incorporate the features and benefits of these paradigm shifts, there is a learning curve and a point of not-knowing-what-is-not-known. Sometimes, the only way to learn is to actually put code into production and monitor its performance and actions.

I believe one of the greatest flaws we have seen in software engineering is that none of these great paradigm shifts has resulted in instrumentation becoming integral to the application design itself. There has been a considerable disregard for letting the scientist (the developer) watch their experiment in the wild. Perhaps it’s the deadlines and backlog that have limited this requirement, as well as the failure of higher education to incorporate this thinking at the university level, but I digress.
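
As a sketch of what instrumentation built into the design could look like, the wrapper below records a call count and cumulative latency for every operation it wraps, so the developer can watch the experiment in the wild. The timed helper and metric names are illustrative; a real system would export these measurements to a metrics backend:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;
import java.util.function.Supplier;

public class Instrumented {
    private static final Map<String, LongAdder> calls = new ConcurrentHashMap<>();
    private static final Map<String, LongAdder> nanos = new ConcurrentHashMap<>();

    // Wrap any operation in measurement as part of the design, not an afterthought.
    static <T> T timed(String operation, Supplier<T> body) {
        long start = System.nanoTime();
        try {
            return body.get();
        } finally {
            calls.computeIfAbsent(operation, k -> new LongAdder()).increment();
            nanos.computeIfAbsent(operation, k -> new LongAdder()).add(System.nanoTime() - start);
        }
    }

    public static void main(String[] args) {
        timed("checkout", () -> "ok");                      // business logic wrapped in measurement
        long count = calls.get("checkout").sum();
        double avgMs = nanos.get("checkout").sum() / 1e6 / count;
        System.out.printf("checkout: %d calls, avg %.3f ms%n", count, avgMs);
    }
}
```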

The reality is that each major shift has brought about tremendous power for businesses to harness compute for growth and financial gain. Each change has allowed more powerful applications to be developed faster. But with each paradigm shift, we have to realize there is a need for skills to catch up to the technology. During that period, we have to accept that to take advantage of advances in compute and information technology, the software will be more brittle and more failures will be incurred.

Along the way, vacuums will form due to limited resources, and there will be those who look for ways to fill the talent void by providing models that bridge the current and future paradigms. In the nineties we had DCE and CORBA, in the early 2000s we had application servers, and now we have Platform-as-a-Service (PaaS). These platforms are developed by the leading edge: those who not only helped foster the paradigm shift but are also tooling the workforce to take advantage of these new capabilities.

So, while PaaS may not yet be seen as essential by businesses and adoption may be low, this will change as more and more businesses attempt to enter the digital revolution and realize they do not have the staff and skills to deliver resilient and scalable cloud native applications. They will be forced to rely on PaaS as a delivery system that allows developers to build components using their existing understanding of distributed applications, and then takes those components and adds availability, scalability, security, and other cloud-centric value.
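
As one small illustration of the availability value a platform can layer on, many platforms probe a component's health endpoint and restart or reschedule instances that stop answering. The sketch below exposes such an endpoint using only the JDK; the /healthz path and port 8080 are illustrative, not mandated by any particular PaaS:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class HealthEndpoint {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/healthz", exchange -> {
            byte[] body = "OK".getBytes();
            exchange.sendResponseHeaders(200, body.length); // the platform treats 200 as healthy
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();                                     // the platform probes this periodically
    }
}
```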

More Stories By JP Morgenthal

JP Morgenthal is a veteran IT solutions executive and Distinguished Engineer with CSC. He has been delivering IT services to business leaders for the past 30 years and is a recognized thought leader in applying emerging technology for business growth and innovation. JP's strengths center on transformation and modernization leveraging next-generation platforms and technologies. He has held technical executive roles in multiple businesses, including CTO, Chief Architect, and Founder/CEO. JP's areas of expertise include strategy, architecture, application development, infrastructure and operations, cloud computing, DevOps, and integration. He is a published author of four trade publications, the most recent being “Cloud Computing: Assessing the Risks”. JP holds both a Master's and a Bachelor's of Science in Computer Science from Hofstra University.
