
Re: A brief history of how we develop information systems

Roger--

The description of each of these stages seems awfully simplistic (I
expect you know that), but stage 1 really needs some work. You start
out with "information systems" that "were decomposed" into
applications. In fact, of course, what you generally had to start
with were individual applications that had been separately developed,
each with its own file or files (not "databases"), and often with
lots of redundancy across the various application files. The whole
"database" idea was an attempt to first identify, and then eliminate,
that redundancy (and the inconsistency that often came with it), the
redundant processing involved in keeping all those files updated
(e.g., having to run multiple applications to keep a customer's
address current in multiple files when the customer moved), and the
inflexibility when a new application needed some new combination of
data. The first stage was really "automate (part of) your own
problem". You can call each of those applications (or clusters of
applications) an "information system" if you want, but the real
"information system" thing started when people began to look at all
those apps and their associated data as something to be organized (and
it couldn't really have started before then). At least that's my take.
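The update anomaly described above (a customer's move requiring changes in several files) can be sketched as follows; the file and field names here are invented for the illustration:

```python
# Hypothetical pre-database setup: each application keeps its own copy
# of the customer's address, so a move requires multiple updates.
billing_file = {"cust-001": {"address": "12 Elm St"}}
shipping_file = {"cust-001": {"address": "12 Elm St"}}

# The customer moves, but only one application is run to apply the change.
billing_file["cust-001"]["address"] = "9 Oak Ave"

# The copies are now inconsistent -- the redundancy (and associated
# inconsistency) the database idea set out to eliminate.
inconsistent = (billing_file["cust-001"]["address"]
                != shipping_file["cust-001"]["address"])
print(inconsistent)  # True
```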

--Frank

On Apr 13, 2009, at 7:46 AM, Costello, Roger L. wrote:

>
> Hi Folks,
>
> I've compiled, from the references listed at the bottom, a brief
> history of the way information systems are developed. What interests
> me is that it shows the gradual liberation of data, user interface,
> and workflow from applications, and most recently, the freeing of
> data to move about on its own.
>
> I welcome your thoughts.  /Roger
>
>
> 1. 1965-1975: Divide-and-Conquer
>
> Information systems were decomposed into applications, each with its
> own database. There were few interactive programs, and those that did
> exist had interfaces tightly coupled to the application program.
> Workflow was managed individually and in non-standard ways.
>
>
> 2. 1975-1985: Standardize the Management of Data
>
> Data became a first-class citizen. Data management was extracted
> from application programs and handled by a database management
> system. Applications were able to focus on data processing, not data
> management.
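A minimal illustration of this split, using Python's built-in sqlite3 module as a stand-in DBMS (the schema is invented for the example):

```python
import sqlite3

# The DBMS, not the application, now owns storage, indexing, and
# concurrent access; the application only issues declarative requests.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id TEXT PRIMARY KEY, address TEXT)")
conn.execute("INSERT INTO customer VALUES ('cust-001', '12 Elm St')")

# One update, visible to every application that shares the database --
# the redundancy of per-application files is gone.
conn.execute("UPDATE customer SET address = '9 Oak Ave' "
             "WHERE id = 'cust-001'")

(address,) = conn.execute(
    "SELECT address FROM customer WHERE id = 'cust-001'").fetchone()
print(address)  # 9 Oak Ave
```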
>
>
> 3. 1985-1995: Standardize the Management of User Interface
>
> As more and more interactive software was developed, user interfaces
> were extracted from the applications and developed in a standard way.
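The separation can be sketched as follows; this is a hypothetical model/presentation split, with a plain-text renderer standing in for a standard UI toolkit:

```python
def account_balance(deposits, withdrawals):
    """Pure application logic: no I/O, no formatting, no toolkit calls."""
    return sum(deposits) - sum(withdrawals)

def render_balance(balance):
    """Presentation layer: swappable for any standard UI toolkit
    without touching the application logic above."""
    return f"Balance: ${balance:.2f}"

# The application computes; the interface layer presents.
print(render_balance(account_balance([100.0, 50.0], [30.0])))
# Balance: $120.00
```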
>
>
> 4. 1995-2005: Standardize the Management of Workflow
>
> The business processes and their handling were isolated and  
> extracted from applications, and specified in a standard way. A  
> workflow management system managed the workflows and organized the  
> processing of tasks and the management of resources.
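The core idea, a process definition that lives outside the task code and an engine that decides the order of processing, can be sketched with the standard library's graphlib; the task names are invented for the example:

```python
from graphlib import TopologicalSorter

# Hypothetical process definition: which task depends on which,
# specified declaratively rather than hard-coded in the applications.
workflow = {
    "approve": {"receive"},   # approve runs after receive
    "fulfill": {"approve"},
    "invoice": {"approve"},
}

executed = []
def run_task(name):
    executed.append(name)     # a real engine would dispatch to a worker

# The engine, not the application, organizes the processing of tasks.
for task in TopologicalSorter(workflow).static_order():
    run_task(task)

# 'receive' runs first; 'approve' precedes 'fulfill' and 'invoice'.
print(executed)
```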
>
>
> 5. 2005-2009: Data-on-the-Move (Portable Data)
>
> Rather than data sitting in a database waiting to be queried
> by applications, data became portable, enabling applications to
> exchange, merge, and transform data in mobile documents.
> Standardized data formats (i.e., standardized XML vocabularies)
> became important. Artifact- and document-centric architectures became
> common.
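A sketch of such a mobile document, using Python's xml.etree.ElementTree; the element names are an invented vocabulary, not any particular standard:

```python
import xml.etree.ElementTree as ET

# Hypothetical "mobile document": the data travels between applications
# as XML rather than sitting in one application's database.
order_xml = """<order id="o-1">
  <customer>cust-001</customer>
  <total currency="USD">120.00</total>
</order>"""

# A receiving application parses the document, transforms it, and
# re-serializes it for the next application -- no shared database needed.
order = ET.fromstring(order_xml)
order.find("total").set("currency", "EUR")   # transform in place
merged = ET.tostring(order, encoding="unicode")
print('currency="EUR"' in merged)  # True
```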
>
>
> References:
>
> 1. Workflow Management by Wil van der Aalst and Kees van Hee
> http://www.amazon.com/Workflow-Management-Methods-Cooperative-Information/dp/0262720469/ref=sr_1_1?ie=UTF8&s=books&qid=1239573871&sr=8-1
>
> 2. Building Workflow Applications by Michael Kay
> http://www.stylusstudio.com/whitepapers/xml_workflow.pdf
>
> 3. Business artifacts: An approach to operational specification by  
> A. Nigam and N.S. Caswell
> http://findarticles.com/p/articles/mi_m0ISJ/is_3_42/ai_108049865/
>
> _______________________________________________________________________
>
> XML-DEV is a publicly archived, unmoderated list hosted by OASIS
> to support XML implementation and development. To minimize
> spam in the archives, you must subscribe before posting.
>
> [Un]Subscribe/change address: http://www.oasis-open.org/mlmanage/
> Or unsubscribe: [email protected]
> subscribe: [email protected]
> List archive: http://lists.xml.org/archives/xml-dev/
> List Guidelines: http://www.oasis-open.org/maillists/guidelines.php
>
