So How Come the New Economy Bombed

Well, it seems it wasn't just the hand of God smiting, Sodom and Gomorrah-style, the insufferably stupid, vapid, and hollow ideas that flourished during the New Economy's brief sojourn, and the insufferably greedy and undeserving people who went along with them, that ultimately did the New Economy in.

Nope.

As comforting as that thought might be, it turns out it might have been the architecture.

Tim Negris - the guy who coined the phrase "thin client" when he was a VP at Oracle, which was before he was a VP at IBM Software, and who sat out the Internet bubble thinking and putting his money into real estate (right away an indication that he's smart) - figures the Internet, and with it the New Economy, tripped over its own shoelaces because it was based on an incarnation of client/server technology when it should have been peer-to-peer.

If it had been peer-to-peer, or if it had used the newfangled Network Business Model that Tim calls "Peer Services Computing," the New Economists wouldn't have needed to spend all that money buying Sun servers, whose ROI was out to heeere and which pressured them into trying to drum up lots of traffic, in an impossibly short period of time, at fees high enough to pay for all the fancy technology.

There was a basic economic disconnect between the client/server model and the Internet.

Client/server widgetry is Old Economy overkill, Tim says, wasting a thousand trillion CPU cycles, one hundred trillion bytes of memory and ten thousand trillion bytes of disk space every second of every day.

Tim compares the New Economy to the Afghan economy and calls it a "chimerical abstraction projected onto a lawless and primitive territory." Client/server didn't so much kill the New Economy as keep it from being born, he says, forcing it into old-style restrictions like consolidating data on centralized database servers - an article of client/server faith that didn't work for the New Economy. Like just who exactly owned the data in those big database servers?

P2P or a "peer grid," on the other hand, seems to be just the ticket. It offers equal or proportionate direct technology cost-sharing among the participants (costs are directly tied to need and the widgetry is relatively cheap), distributed workload sharing across participating systems, and owner-controlled information visibility, security, privacy and use.
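
To make the cost- and workload-sharing idea concrete, here's a minimal sketch of one way a peer grid might split work in proportion to what each participant contributes. It's purely illustrative - not Equinom's design - and the Peer class, the capacity numbers and the proportional split are all assumptions made up for the example.

    # Illustrative sketch only: a toy "peer grid" in which each participant
    # contributes capacity and the workload (and, by extension, the cost) is
    # shared in proportion to that contribution. All names are hypothetical.

    class Peer:
        def __init__(self, name, capacity):
            self.name = name          # who owns this node
            self.capacity = capacity  # relative resources the owner contributes
            self.assigned = 0.0       # this peer's share of the grid's workload

    def share_workload(peers, total_work):
        """Split total_work across peers in proportion to contributed capacity."""
        total_capacity = sum(p.capacity for p in peers)
        for p in peers:
            p.assigned = total_work * (p.capacity / total_capacity)
        return peers

    grid = [Peer("alice", 4), Peer("bob", 2), Peer("carol", 2)]
    for p in share_workload(grid, total_work=1000):
        print(f"{p.name} handles {p.assigned:.0f} units")

Because each owner pays for, and controls, only the box they bring to the grid, the cost curve follows the need curve instead of demanding a big up-front server build-out.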

Of course, all the pieces needed to build Peer Services Computing aren't quite here yet. Web services, for instance, the latest identity crisis-prone techno craze, are currently conceived of as a one-way street rather than as something that will go out and actively round up consumers. Peer Services, Tim says, are active services: consuming and providing are flip sides of the same coin.
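
As a rough sketch of that flip-side idea - again, an illustration under made-up names, not Equinom's actual interfaces - the same object can both advertise capabilities to other peers and actively call the capabilities they advertise, unlike a one-way client/server handler:

    # Illustrative sketch only: a "peer service" is both provider and consumer.

    class PeerService:
        def __init__(self, name):
            self.name = name
            self.offers = {}  # capabilities this peer provides to others

        def provide(self, service, handler):
            """Advertise a capability that other peers may call."""
            self.offers[service] = handler

        def consume(self, other, service, *args):
            """Actively call a capability advertised by another peer."""
            return other.offers[service](*args)

    a, b = PeerService("a"), PeerService("b")
    a.provide("quote", lambda sku: {"sku": sku, "price": 9.99})
    b.provide("order", lambda sku, qty: {"sku": sku, "qty": qty, "status": "accepted"})

    print(b.consume(a, "quote", "widget-1"))      # here b is the consumer
    print(a.consume(b, "order", "widget-1", 10))  # and here a is the consumer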

Peer Services - for reasons of "invertible security," control, efficiency and flexibility - need some data stored and managed privately on participating peers and other data aggregated on the operator's system (which is, technically, also a peer), a situation that will require a new range of information management capabilities that don't exist yet.
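
One hedged guess at what that split might look like in practice - the field names and the idea of publishing only owner-approved summaries to the operator's peer are assumptions for illustration, not anything Equinom has described:

    # Illustrative sketch only: full detail stays private on the participating
    # peer; only an owner-approved aggregate is handed to the operator's peer.

    class ParticipantPeer:
        def __init__(self, owner):
            self.owner = owner
            self.private = {}  # stays under the owner's control, never shared

        def record_sale(self, sku, qty, customer):
            # Full detail, customer included, is kept locally and privately.
            self.private.setdefault(sku, []).append({"qty": qty, "customer": customer})

        def summary(self, sku):
            # Only an aggregate, owner-approved view is released.
            units = sum(r["qty"] for r in self.private.get(sku, []))
            return {"owner": self.owner, "sku": sku, "units": units}

    class OperatorPeer:
        """Technically just another peer; it aggregates the shared view."""
        def __init__(self):
            self.catalog = []

        def collect(self, peer, sku):
            self.catalog.append(peer.summary(sku))

    p = ParticipantPeer("acme")
    p.record_sale("widget-1", 5, customer="confidential")
    op = OperatorPeer()
    op.collect(p, "widget-1")
    print(op.catalog)  # aggregate only; the customer detail never left the peer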

They will need, for instance, the kind of data management that can separate the metadata from the data itself across systems, plus a single central set of data definitions and XML structures, so Peer Services apps can function properly. And then, since the data needs to be live and capable of advertising itself, describing itself and authorizing itself, it will need new facilities for distributed transactional metadata that allow searching and selecting, say, to be done in a safe, shared fashion. That introduces the idea of meta-metadata, or data about data about data, but then you just have to read Tim's Manifesto at www.Equinom.com.
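
Before you go read the Manifesto, here's one way to picture the three layers - the data, the shared definition of the data (metadata), and the data about that definition (meta-metadata) saying whether it's live and who may search it. The structures and names below are invented for illustration; they are not Equinom's formats.

    # Illustrative sketch only: data, metadata and meta-metadata as three
    # separate, separately managed pieces. All structures are hypothetical.

    # The data itself, held on a participating peer.
    data = {"sku": "widget-1", "price": 9.99}

    # Metadata: a centrally agreed definition the Peer Services apps share,
    # shown here as a tiny XML-style fragment.
    metadata = """
    <definition name="product">
      <field name="sku"   type="string"/>
      <field name="price" type="decimal"/>
    </definition>
    """

    # Meta-metadata: data about the definition - who published it, whether it
    # is live, and which peers are authorized to search and select against it.
    meta_metadata = {
        "definition": "product",
        "published_by": "peer-acme",
        "status": "live",
        "search_authorized": ["peer-bob", "peer-carol"],
    }

    def may_search(peer_id):
        """A peer may search only if the meta-metadata says it can."""
        return meta_metadata["status"] == "live" and peer_id in meta_metadata["search_authorized"]

    print(may_search("peer-bob"), may_search("peer-mallory"))  # True False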

Equinom Inc is the new company Tim has put together in Carmel, California, with folks like CTO Lynne Thieme and her brother, VP of engineering David Thieme, whose combined fingerprints are all over such technologies as IBM's Fast Path transaction processing, massively parallel databases and thin clients.

Egged on by technical advisor Robin Bloor, head of Bloor Research, business advisor John Burns, a principal in Frontier Risk Capital Management, and marketing advisor Scott Anderson, a principal in The Shadow Marketing Network, they are developing the enabling software for transactional P2P, enhanced Web services and the peer services building blocks that other software developers can use to build Peer Services middleware and applications. And to make sure the notions take - they figure it will only happen through broad participation by industry, government and academia - they're going to make their software available free of charge to schools, non-profit trade groups and government agencies.

A proof-of-concept should be out in the next few weeks.

More Stories By Maureen O'Gara

Maureen O'Gara, the most-read technology reporter of the past 20 years, is the Cloud Computing and Virtualization News Desk editor of SYS-CON Media. She is the publisher of the famous "Billygrams" and was the editor-in-chief of "Client/Server News" for more than a decade. One of the most respected technology reporters in the business, Maureen can be reached by email at maureen(at)sys-con.com or paperboy(at)g2news.com, and by phone at 516 759-7025. Twitter: @MaureenOGara
