
Re: A brief history of how we develop information systems

Roger--

The description of each of these stages seems awfully simplistic (I expect you know that), but stage 1 really needs some work. You start out with "information systems" that "were decomposed" into applications. In fact, of course, what you generally had to start with were individual applications that had been separately developed, each with its own "file or files" (not "databases"), and often with lots of redundancy across the various application files. The whole "database" idea was an attempt to first at least identify, and then eliminate, this redundancy (and the inconsistency that often came with it); the redundant processing involved in keeping all those files updated (e.g., having to run multiple applications to keep "customer address" updated in multiple files when the customer moved); and the inflexibility when a new combination of data was needed for some new application. The first stage was really "automate (part of) your own problem". You can call each of those applications (or clusters of applications) an "information system" if you want, but the real "information system" thing started when people began to look at all those apps and their associated data as something to be organized (and it couldn't really have started before then). At least that's my take.

--Frank
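
To make the redundancy Frank describes concrete, here is a toy Python sketch (the file names, record layouts, and values are invented for illustration) of the pre-database situation, where each application keeps its own copy of a customer's address and a single move has to be written into every file:

    # Pre-database era in miniature: each application owns its own "customer
    # file", so the same address is stored redundantly in all of them.
    billing_file   = {"C042": {"name": "Acme Corp", "address": "12 Old Road"}}
    shipping_file  = {"C042": {"name": "Acme Corp", "address": "12 Old Road"}}
    marketing_file = {"C042": {"name": "Acme Corp", "address": "12 Old Road"}}

    def customer_moves(customer_id, new_address):
        # Every application's file must be updated separately; miss one and
        # the copies drift apart, which is exactly the inconsistency the
        # "database" idea set out to eliminate.
        for app_file in (billing_file, shipping_file, marketing_file):
            app_file[customer_id]["address"] = new_address

    customer_moves("C042", "99 New Street")

    # With a shared database there is a single copy to update, and every
    # application reads the same, consistent record.
    shared_db = {"C042": {"name": "Acme Corp", "address": "12 Old Road"}}
    shared_db["C042"]["address"] = "99 New Street"

The point of the sketch is only the shape of the problem: one logical fact stored in several places, and the update logic duplicated across applications.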

On Apr 13, 2009, at 7:46 AM, Costello, Roger L. wrote:

>
> Hi Folks,
>
> I've compiled, from the references listed at the bottom, a brief
> history of the way that information systems are developed. Of
> interest to me is that it shows the gradual liberation of data, user
> interface, and workflow, and most recently, the enabling of data to
> move about freely.
>
> I welcome your thoughts.  /Roger
>
>
> 1. 1965-1975: Divide-and-Conquer
>
> Information systems were decomposed into applications, each with  
> their own databases.  There were few interactive programs, and those  
> that did exist had interfaces tightly coupled to the application  
> program. Workflow was managed individually and in non-standard ways.
>
>
> 2. 1975-1985: Standardize the Management of Data
>
> Data became a first-class citizen. Data management was extracted from
> application programs and handed to a database management system.
> Applications could focus on data processing, not data management.
>
>
> 3. 1985-1995: Standardize the Management of User Interface
>
> As more and more interactive software was developed, user interfaces
> were extracted from the applications and developed in a standard way.
>
>
> 4. 1995-2005: Standardize the Management of Workflow
>
> Business processes and their handling were extracted from
> applications and specified in a standard way. A workflow management
> system managed the workflows, organizing the processing of tasks and
> the management of resources.
>
>
> 5. 2005-2009: Data-on-the-Move (Portable Data)
>
> Rather than data sitting around in a database waiting to be queried  
> by applications, data became portable, enabling applications to  
> exchange, merge, and transform data in mobile documents.   
> Standardized data formats (i.e., standardized XML vocabularies)
> became important. Artifact- and document-centric architectures became
> common.
>
>
> References:
>
> 1. Workflow Management by Wil van der Aalst and Kees van Hee
> http://www.amazon.com/Workflow-Management-Methods-Cooperative-Information/dp/0262720469/ref=sr_1_1?ie=UTF8&s=books&qid=1239573871&sr=8-1
>
> 2. Building Workflow Applications by Michael Kay
> http://www.stylusstudio.com/whitepapers/xml_workflow.pdf
>
> 3. Business artifacts: An approach to operational specification by  
> A. Nigam and N.S. Caswell
> http://findarticles.com/p/articles/mi_m0ISJ/is_3_42/ai_108049865/
>
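
As a rough illustration of the "data-on-the-move" stage described above, the Python sketch below builds a small, self-contained business document in an invented XML vocabulary and enriches it before passing it along, roughly the way a document-centric application might (the vocabulary, element names, and figures are hypothetical, not taken from the cited references):

    import xml.etree.ElementTree as ET

    # A small, self-contained "business document" in an invented XML
    # vocabulary. In a document-centric architecture the artifact itself
    # travels between applications instead of each application querying a
    # central database.
    order_xml = """
    <order id="PO-1001" xmlns="urn:example:orders">
      <customer>Acme Corp</customer>
      <line sku="WIDGET-7" quantity="3"  unitPrice="9.50"/>
      <line sku="BOLT-2"   quantity="10" unitPrice="0.25"/>
    </order>
    """

    NS = {"o": "urn:example:orders"}
    order = ET.fromstring(order_xml)

    # Transform/enrich the document before handing it on: compute a total
    # and attach it to the artifact itself.
    total = sum(
        float(line.get("quantity")) * float(line.get("unitPrice"))
        for line in order.findall("o:line", NS)
    )
    ET.SubElement(order, "{urn:example:orders}total").text = f"{total:.2f}"

    # The enriched document is serialized and moves on to the next
    # application in the workflow.
    print(ET.tostring(order, encoding="unicode"))

Nothing here depends on a central store: everything an application needs in order to process the order travels with the document, which is what makes exchanging, merging, and transforming data between applications possible.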
