Major Paradigm Shift By @JPMorgenthal | @DevOpsSummit [#DevOps #PaaS #Microservices]

DevOps principles and the corresponding software development lifecycles that enable higher-quality output

Cloud Native: The Next Major Developer Paradigm Shift

Every decade for the past forty years we’ve seen a major paradigm shift in software development. The next major paradigm shift for software developers—cloud native application development—is occurring now, and it will require developers to once again rethink and modify their approach to developing software.

In the seventies we saw the transition from assembler to third- and fourth-generation programming languages. This change gave rise to implicit processing based on language constructs and the use of functional notation.

In the eighties we saw a paradigm shift from procedural to object-oriented programming, and with it the introduction of constructs like inheritance and polymorphism. This gave rise to the ability to develop abstract models that were fluid and could be implemented concretely across an abundance of compute architectures. However, for developers trained in the concrete design of functional notation, abstraction often represented a major hurdle to overcome. In both cases the developer had to learn to think differently about how to structure code in order to best leverage the power of these advancements.
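
To ground those constructs, here is a minimal Java sketch (the type names are illustrative, not from the original article): callers program against an abstract model, and inheritance and polymorphism let concrete implementations vary freely behind a single call site.

```java
// Illustrative sketch: an abstract model with interchangeable concrete forms.
abstract class PaymentMethod {
    abstract void charge(double amount);   // the abstract operation
}

class CreditCard extends PaymentMethod {   // inheritance: one concrete form
    @Override void charge(double amount) {
        System.out.println("Charging card: " + amount);
    }
}

class BankTransfer extends PaymentMethod { // another concrete form
    @Override void charge(double amount) {
        System.out.println("Initiating transfer: " + amount);
    }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        // Polymorphism: the same call site works for any implementation.
        PaymentMethod[] methods = { new CreditCard(), new BankTransfer() };
        for (PaymentMethod m : methods) m.charge(9.99);
    }
}
```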

In the nineties we saw a paradigm shift from local programming models to distributed programming models. Frameworks like Remote Procedure Calls (RPC) and Remote Method Invocation (RMI) attempted to limit the impact of this shift on developers by providing a façade that mimicked a local programming model and hid the complexities of interacting with a remote service. Ultimately, though, developers had to understand the nuances of developing distributed applications in order to build in resiliency and troubleshoot failures.
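
As a rough illustration of what those façades hide, the following Java sketch makes a remote HTTP call with the explicit timeouts, retries, and failure handling that a local programming model never needed. The service URL and retry policy are assumptions made for the example.

```java
// Sketch: a "local-looking" call that can fail for network reasons,
// so the caller must supply its own timeout, retry, and backoff logic.
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class RemoteCallDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))   // remote calls need timeouts
                .build();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://inventory.example.com/stock/42")) // hypothetical service
                .timeout(Duration.ofSeconds(2))
                .build();

        for (int attempt = 1; attempt <= 3; attempt++) { // resiliency the façade omits
            try {
                HttpResponse<String> resp =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println("status=" + resp.statusCode());
                return;
            } catch (IOException e) {                    // partial failure is normal
                System.err.println("attempt " + attempt + " failed: " + e.getMessage());
                Thread.sleep(250L * attempt);            // simple backoff before retrying
            }
        }
        System.err.println("giving up: service unreachable");
    }
}
```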

In the early 2000s we saw a paradigm shift driven by advances in chip and operating system architecture that supported multiple applications running simultaneously. Applications could now run multiple threads of execution in parallel, supporting multiple users. This forced developers to rethink how they structured their code and how they used global and local variables. Many soon found their existing programming methods incompatible with these new requirements as threads stepped all over each other and corrupted data.
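
A minimal Java sketch of that hazard: two threads incrementing a plain shared counter silently lose updates, while an atomic counter does not. The class name is illustrative.

```java
// Sketch: unsynchronized shared state versus a thread-safe alternative.
import java.util.concurrent.atomic.AtomicInteger;

public class RaceDemo {
    static int unsafeCount = 0;                        // shared global state
    static final AtomicInteger safeCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCount++;                         // read-modify-write race
                safeCount.incrementAndGet();           // atomic, thread-safe
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // unsafeCount is typically less than 200000; safeCount is always 200000.
        System.out.println("unsafe=" + unsafeCount + " safe=" + safeCount.get());
    }
}
```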

The next major paradigm shift in software development is now upon us: developing cloud native applications. Building on the paradigm shift of the early 2000s, applications must now be able to scale across machines as well as within a single machine. Once again, developers will need to rethink how they develop applications to leverage the power of the cloud and escape the limitations and confines of current distributed application development models.

There’s a lot of discussion around managing outages in production via the likes of DevOps principles and the corresponding software development lifecycles that enable higher-quality output from development. However, one cannot lay all blame for “bugs” and failures at the feet of those responsible for coding and development. As developers incorporate the features and benefits of these paradigm shifts, there is a learning curve and a point of not-knowing-what-is-not-known. Sometimes, the only way to learn is to actually put code into production and monitor its performance and actions.

I believe one of the greatest flaws we have seen in software engineering is that none of these great paradigm shifts has resulted in instrumentation becoming integral to application design itself. There has been a considerable disregard for letting the scientist (the developer) watch their experiment in the wild. Perhaps it’s the deadlines and backlog that have limited this requirement, as well as the failure of higher education to incorporate this thinking at the university level, but I digress.
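
As one sketch of what instrumentation integral to the design could look like, the following Java fragment bakes request counters and latency reporting into the request path itself. The class and metric names are illustrative assumptions, not any particular monitoring product’s API.

```java
// Sketch: the application records its own counters and latency,
// so its behavior in production is observable by design.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class InstrumentedService {
    private final Map<String, LongAdder> counters = new ConcurrentHashMap<>();

    private void count(String name) {
        counters.computeIfAbsent(name, k -> new LongAdder()).increment();
    }

    public String handleRequest(String input) {
        long start = System.nanoTime();
        count("requests.total");                  // instrument the entry point
        try {
            return "processed:" + input;          // the actual business logic
        } catch (RuntimeException e) {
            count("requests.failed");             // failures are first-class data
            throw e;
        } finally {
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.println("requests.latency_us=" + micros); // emit, don't hide
        }
    }

    public static void main(String[] args) {
        InstrumentedService svc = new InstrumentedService();
        System.out.println(svc.handleRequest("order-17"));
        svc.counters.forEach((k, v) -> System.out.println(k + "=" + v.sum()));
    }
}
```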

The reality is each major shift has brought about tremendous power for businesses to harness compute for growth and financial gain. Each change has allowed more powerful applications to be developed faster. But, with each paradigm shift we have to realize there is a need for skills to catch up to the technology. During that period, we have to accept that to take advantage of advances in compute and information technology, the software will be more brittle and more failures will be incurred.

Along the way, vacuums will form due to limited resources, and there will be those who look to fill the talent void by providing models that bridge the current and future paradigms. In the nineties we had DCE and CORBA; in the early 2000s we had application servers; and now we have Platform-as-a-Service (PaaS). These platforms are developed by the leading edge: those who not only helped foster the paradigm shift but are also tooling the workforce to take advantage of these new capabilities.

So, while PaaS may not be seen as essential by businesses yet and adoption may be low, this will change as more and more businesses attempt to enter the digital revolution and realize they do not have the staff and skills to deliver resilient and scalable cloud native applications. They will be forced to rely on PaaS as a delivery system that allows developers to build components using their understanding of building distributed applications and then takes those components and adds availability, scalability, security and other cloud-centric value.

More Stories By JP Morgenthal

JP Morgenthal is a veteran IT solutions executive and Distinguished Engineer with CSC. He has been delivering IT services to business leaders for the past 30 years and is a recognized thought leader in applying emerging technology for business growth and innovation. JP’s strengths center on transformation and modernization leveraging next-generation platforms and technologies. He has held technical executive roles in multiple businesses, including CTO, Chief Architect, and Founder/CEO. Areas of expertise for JP include strategy, architecture, application development, infrastructure and operations, cloud computing, DevOps, and integration. JP is a published author of four trade publications, the most recent being “Cloud Computing: Assessing the Risks”. JP holds both a Master’s and a Bachelor’s of Science in Computer Science from Hofstra University.
