@CloudExpo: Article
Oracle RDBMS and Very Large Data Set Processing

An overview of very large data set processing technologies with an emphasis on Oracle RDBMS

Oracle Database is a relational database management system that mostly complies with the ACID transaction requirements (atomicity, consistency, isolation, durability). This means that each database transaction is executed in a reliable, safe, and integral manner. To comply with ACID, the Oracle database software implements a fairly complex and expensive (in terms of computing resources, i.e., CPU, disk, memory) set of mechanisms, such as redo and undo logging, memory latching, and metadata maintenance, that make concurrent work possible while maintaining data integrity. Any database transaction, or even a plain SELECT statement, makes a relational database system perform a tremendous amount of work behind the scenes, which makes it inherently slow and resource intensive.
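The atomicity and consistency guarantees described above can be illustrated in miniature. The following is a toy sketch using Python's built-in sqlite3 module rather than Oracle; the two-account transfer scenario and all names are hypothetical, but the pattern is the same: either every statement in the transaction commits, or the whole transaction rolls back.

```python
import sqlite3

# Hypothetical two-account ledger used to illustrate atomicity.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 100)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move funds atomically: both UPDATEs commit, or neither does."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            cur = conn.execute("SELECT balance FROM accounts WHERE name = ?", (src,))
            if cur.fetchone()[0] < 0:
                raise ValueError("insufficient funds")  # triggers rollback
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
        return True
    except ValueError:
        return False

transfer(conn, "alice", "bob", 30)    # succeeds: balances become 70 / 130
transfer(conn, "alice", "bob", 1000)  # fails and rolls back: still 70 / 130
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
```

The rollback in the failing case is exactly the kind of bookkeeping (undo logging, locking) that an RDBMS must pay for on every transaction.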

Oracle is trying to address the scalability and performance problem in a variety of ways:

- constant performance enhancements to the query optimizer

- Oracle RAC (Real Application Clusters) based scale-out, which adds much complexity for little practical gain in performance or functionality

- appliances (the Exa* line of products): a complex, unbalanced architecture, suboptimally utilized hardware, and patched-up, repackaged software

All of these attempts still retain the performance and scalability bottleneck of the Oracle RDBMS itself, with its shared-everything (RAC) or asymmetric MPP (in the case of Exadata) architecture.

Companies like Google, dealing with millions of users, huge volumes of data, and stringent performance requirements, could not use proprietary RDBMSs. Instead they built their own solutions on commodity hardware and open source software: systems that make thousands of Intel boxes behave like a single machine, able to answer a query over petabytes of data in sub-second time. This could never be accomplished with a standard RDBMS like Oracle; RDBMSs were not designed for the problems Google is facing.
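The core idea behind making thousands of machines act as one is the MapReduce model that came out of Google's research. The sketch below is a toy, single-process word count: in a real deployment each input shard lives on a different machine, the map tasks run in parallel, and the shuffle phase routes each key over the network to its reducer. All data and function names here are illustrative.

```python
from collections import defaultdict

def map_phase(shard):
    """Map: emit a (word, 1) pair for every word in one input shard."""
    return [(word, 1) for word in shard.split()]

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

# Three "shards", standing in for files spread across many machines.
shards = ["big data big", "data systems", "big systems systems"]
pairs = [pair for shard in shards for pair in map_phase(shard)]
counts = reduce_phase(shuffle(pairs))
# counts == {"big": 3, "data": 2, "systems": 3}
```

Because each map task touches only its own shard and each reduce task only its own keys, the work parallelizes almost linearly as machines are added, which is precisely what an ACID-compliant, single-system-image RDBMS struggles to do.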

Data processing technologies that originated at Google and elsewhere (including Hadoop, a direct descendant of Google's research, and the NoSQL family of products) parallelize work and distribute data over thousands of servers. They relax the ACID requirements into BASE (Basically Available, Soft state, Eventual consistency), that is, they provide weaker consistency guarantees (see the CAP theorem), and they abandon the relational data structure, essentially going back to flat files or to distributed, scalable hash tables. By relaxing, modifying, or dropping some properties of RDBMSs and optimizing to run on commodity hardware, they achieve results that are good enough in terms of data quality and consistency, while delivering great performance and sufficient accuracy for very basic transactions. Ironically, many of these technologies are a step backwards: as they mature, they will end up reinventing many RDBMS features (Hadapt, a Hadoop-based relational database; Hive; HBase).
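The "distributed, scalable hash table" mentioned above rests on a simple idea: the key alone determines which node owns it, so no central index is needed and capacity grows by adding nodes. A minimal sketch, with hypothetical node names and an in-process dict standing in for each remote server:

```python
import hashlib

# Hypothetical cluster; in practice these would be network addresses.
NODES = ["node-0", "node-1", "node-2", "node-3"]

def owner(key: str) -> str:
    """Pick the node responsible for a key via a stable hash of the key."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# Each node holds only its own slice of the key space.
store = {node: {} for node in NODES}

def put(key, value):
    store[owner(key)][key] = value

def get(key):
    return store[owner(key)].get(key)

put("user:42", {"name": "alice"})
```

Note the trade-off this sketch glosses over: simple modulo placement reshuffles most keys when a node is added or removed, which is why production systems use consistent hashing instead, and why replication across nodes is what forces the eventual-consistency compromise.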

Oracle RDBMS has a completely different purpose: it is targeted at corporate and business use, where complex transactions must be executed accurately. The eventually consistent paradigm does not suffice there. While an Oracle database can hold huge volumes of multimedia data, its main purpose is to store and concurrently process structured data sets of relatively limited size, for a limited number of concurrent corporate and business users.

More Stories By Ranko Mosic

Ranko Mosic, BScEng, specializes in Big Data and data architecture consulting services (database/data architecture, machine learning). His clients are in the finance, retail, and telecommunications industries. Ranko welcomes inquiries about his availability for consulting engagements and can be reached at 408-757-0053 or [email protected]
