Cloud Native: The Next Major Developer Paradigm Shift

Every decade for the past forty years we’ve seen a major paradigm shift in software development. The next major paradigm shift for software developers—cloud native application development—is occurring now and it will require developers to once again re-think and modify their approach to developing software.

In the seventies we saw the transition from assembler to third- and fourth-generation programming languages. This change gave rise to implicit processing based on language constructs and the use of functional notation.

In the eighties we saw a paradigm shift from procedural to object-oriented programming, and with it the introduction of constructs like inheritance and polymorphism. This gave rise to the ability to develop abstract models that were fluid and could be implemented concretely across an abundance of compute architectures. However, for developers trained in the concrete design of functional notation, abstraction often represented a major hurdle to overcome. In both of these cases the developer had to learn to think differently about how to structure their code in order to best leverage the power of these advancements.
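To make that shift concrete, here is a minimal Java sketch (my own illustration, not drawn from any particular codebase) of those constructs: an abstract model whose concrete implementations are selected polymorphically at runtime.

```java
// Illustrative only: an abstract model (Shape) with concrete implementations
// chosen at runtime via polymorphism.
abstract class Shape {
    abstract double area(); // each subclass supplies the concrete behavior
}

class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    @Override double area() { return Math.PI * radius * radius; }
}

class Rectangle extends Shape {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    @Override double area() { return width * height; }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        // Code written against the abstraction works with any concrete subtype,
        // so new shapes can be added without changing this loop.
        Shape[] shapes = { new Circle(2.0), new Rectangle(3.0, 4.0) };
        for (Shape s : shapes) {
            System.out.println(s.getClass().getSimpleName() + " area: " + s.area());
        }
    }
}
```

Code written against the abstract model never needs to change when a new concrete subtype appears; that fluidity is exactly what the abstraction buys, and exactly what procedurally trained developers had to learn to design for.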

In the nineties we saw a paradigm shift from local programming models to distributed programming models. Frameworks like Remote Procedure Calls (RPC) and Remote Method Invocation (RMI) attempted to limit the impact of this shift on developers by providing a façade that mimicked a local programming model, hiding the complexities of interacting with a remote service. Ultimately, though, developers had to understand the nuances of developing distributed applications in order to build in resiliency and troubleshoot failures.
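To illustrate what "building in resiliency" looks like once the local-call façade falls away, here is a small Java sketch using the JDK's standard HttpClient. The endpoint URL, retry count, and backoff interval are placeholder assumptions for illustration, not prescriptions.

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class ResilientRemoteCall {
    private static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))
            .build();

    // A remote call is not a local call: it can time out or fail partway through,
    // so resiliency (timeouts, bounded retries) must be made explicit.
    static String fetchWithRetry(String url, int maxAttempts)
            throws IOException, InterruptedException {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .timeout(Duration.ofSeconds(2))
                .GET()
                .build();
        IOException lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return CLIENT.send(request, HttpResponse.BodyHandlers.ofString()).body();
            } catch (IOException e) {
                lastFailure = e;              // transient network failure:
                Thread.sleep(100L * attempt); // back off, then retry
            }
        }
        throw lastFailure; // surface the failure rather than hiding it behind a façade
    }

    public static void main(String[] args) throws Exception {
        // "https://example.com/" is a stand-in endpoint for demonstration.
        System.out.println(fetchWithRetry("https://example.com/", 3));
    }
}
```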

In the early 2000s we saw a paradigm shift driven by advances in chip and operating system architecture that supported multiple applications running simultaneously. Applications could now run multiple threads of execution in parallel, supporting multiple users. This forced developers to re-think how they structured their code and how they used global and local variables. Many soon found their existing programming methods incompatible with these new requirements, as threads stepped all over each other and corrupted data.
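Here is a minimal Java sketch of that failure mode: two threads incrementing a shared counter without synchronization will typically lose updates, while an atomic counter will not. The iteration counts and thread structure are illustrative assumptions.

```java
import java.util.concurrent.atomic.AtomicLong;

public class SharedCounterDemo {
    // Unsynchronized mutable state: concurrent increments can be lost
    // because ++ is a non-atomic read-modify-write sequence.
    static long unsafeCount = 0;
    // Thread-safe alternative: an atomic read-modify-write.
    static final AtomicLong safeCount = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCount++;               // threads "step on" each other here
                safeCount.incrementAndGet(); // this one never loses an update
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();
        // unsafeCount typically comes up short of 200000; safeCount never does.
        System.out.println("unsafe: " + unsafeCount + ", safe: " + safeCount.get());
    }
}
```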

The next major paradigm shift in software development is now upon us, and it is developing cloud native applications. Building on the paradigm shift of the early 2000s, applications must now be able to scale across machines as well as within a single machine. Once again, developers will need to rethink how they develop applications to leverage the power of the cloud and escape the limitations and confines of current distributed application development models.

There’s a lot of discussion around managing outages in production via the likes of DevOps principles and the corresponding software development lifecycles that do enable higher-quality output from development. However, one cannot lay all the blame for “bugs” and failures at the feet of those responsible for coding and development. As developers incorporate the features and benefits of these paradigm shifts, there is a learning curve and a point of not knowing what is not known. Sometimes the only way to learn is to actually put code into production and monitor its performance and behavior.

I believe one of the greatest flaws we have seen in software engineering is that none of these great paradigm shifts has resulted in instrumentation becoming integral to application design itself. There has been a considerable disregard for letting the scientist (the developer) watch their experiment in the wild. Perhaps it’s the deadlines and the backlog that have limited this requirement, as well as the failure of higher education to incorporate this thinking at the university level, but I digress.
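As a sketch of what instrumentation as an integral part of the design could look like, consider the following hypothetical Java example, where every operation is wrapped so that call counts and latency are captured on the code path itself. The class and method names are my own invention and do not refer to any particular metrics library.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;
import java.util.function.Supplier;

// Hypothetical sketch: instrumentation designed into the code path,
// not bolted on afterwards.
public class InstrumentedService {
    private static final Map<String, LongAdder> CALL_COUNTS = new ConcurrentHashMap<>();

    // Wrap any operation so its invocation count and latency are always recorded.
    static <T> T timed(String operation, Supplier<T> body) {
        CALL_COUNTS.computeIfAbsent(operation, k -> new LongAdder()).increment();
        long start = System.nanoTime();
        try {
            return body.get();
        } finally {
            long micros = (System.nanoTime() - start) / 1_000;
            // In production this would feed a metrics pipeline; here we just print.
            System.out.println(operation + " took " + micros + "µs");
        }
    }

    public static void main(String[] args) {
        String result = timed("greet", () -> "Hello, cloud native world");
        System.out.println(result + " (greet called "
                + CALL_COUNTS.get("greet").sum() + " time(s))");
    }
}
```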

The reality is that each major shift has brought about tremendous power for businesses to harness compute for growth and financial gain. Each change has allowed more powerful applications to be developed faster. But with each paradigm shift we have to recognize that skills need time to catch up to the technology. During that period, we have to accept that taking advantage of advances in compute and information technology means the software will be more brittle and more failures will be incurred.

Along the way, vacuums will form due to limited resources, and there will be those who look to fill the talent void by providing models that bridge the current and future paradigms. In the ’90s we had DCE and CORBA, in the early 2000s we had application servers, and now we have Platform-as-a-Service (PaaS). These platforms are developed by the leading edge: those who not only helped foster the paradigm shift but are also tooling the workforce to take advantage of these new capabilities.

So, while PaaS may not yet be seen as essential by businesses and adoption may be low, this will change as more and more businesses attempt to enter the digital revolution and realize they do not have the staff and skills to deliver resilient, scalable cloud native applications. They will be forced to rely on PaaS as a delivery system that allows developers to build components using their existing understanding of distributed applications, and that then takes those components and adds availability, scalability, security and other cloud-centric value.

More Stories By JP Morgenthal

JP Morgenthal is a veteran IT solutions executive and Distinguished Engineer with CSC. He has been delivering IT services to business leaders for the past 30 years and is a recognized thought leader in applying emerging technology for business growth and innovation. JP's strengths center on transformation and modernization leveraging next-generation platforms and technologies. He has held technical executive roles in multiple businesses, including CTO, Chief Architect and Founder/CEO. Areas of expertise for JP include strategy, architecture, application development, infrastructure and operations, cloud computing, DevOps, and integration. JP is a published author of four trade books, his most recent being “Cloud Computing: Assessing the Risks”. JP holds both a Master's and a Bachelor's of Science in Computer Science from Hofstra University.
