By Daniel Thompson
February 15, 2013 10:00 AM EST
My working title was Big Data, Storage Dilemma.
They say dilemma. I say dilemna. I'm serious. I spell it dilemna.
Big Data presents something of a storage dilemma. There is no one data store to rule them all.
Should different data structures be persisted to different storage mediums?
Identifying the appropriate medium is a function of performance, cost, and capacity.
Random Access Memory
It's fast. Very. It's expensive. Very.
If we are configuring an HP DL980 G7 server, it will cost $3,672 for 128GB of memory with 8x 16GB modules or $9,996 for 128GB of memory with 4x 32GB modules. That is $28-78 / GB.
We can configure an HP DL980 G7 server with up to 4TB of memory.
Solid State Drive
It's not as fast or as expensive as random access memory. It's faster than a hard disk drive. A lot faster. It's more expensive than a hard disk drive. A lot more expensive.
If we are configuring an HP DL980 G7 server, it will cost $4,199 for a 400GB SAS MLC drive or $7,419 for a 400GB SAS SLC drive. That is $10-18 / GB. It's a performance / size trade-off. While SLC drives often perform better, MLC drives are often available in larger sizes. Further, depending on the drive, read / write performance may be symmetric or asymmetric.
That, and SLC drives often have greater endurance.
|Drive|Seq Read (MB/s)|Seq Write (MB/s)|Random Read (IOPS)|Random Write (IOPS)|
|Intel 520 (480GB)|550|520|50,000|50,000|
|Intel 520 (240GB)|550|520|50,000|80,000|
|Intel DC S3700|500|460|75,000|36,000|
The performance of solid state drives can vary:
- consumer / enterprise
- MLC / eMLC / SLC
- SATA / SAS / PCIe
The capacity of enterprise solid state drives is often less than that of enterprise hard disk drives (4TB). However, PCIe drives such as the Fusion-io ioDrive Octal (5.12TB / 10.24TB) exceed that capacity, and the performance of PCIe drives is greater than that of SAS or SATA drives.
|Drive|Seq Read (MB/s)|Seq Write (MB/s)|Random Read (IOPS)|Random Write (IOPS)|
|Intel 910 (800GB)|2,000|1,000|180,000|75,000|
Hard Disk Drive
It may not be fast, but it is inexpensive.
If we are configuring an HP DL980 G7 server, it will cost $309 for a 300GB 10K drive or $649 for a 300GB 15K drive. That is $1-2 / GB. It's a performance / size trade-off. While a 15K drive will perform better than a 10K drive, it will often have less capacity. The sequential read / write performance of hard disk drives is often between 150-200MB/s. As such, a RAID configuration may be a cost effective alternative to a single solid state drive for sequential read / write access.
The capacity of enterprise hard disk drives (4TB) is often greater than that of enterprise solid state drives.
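The $/GB figures above follow from simple division. A quick sketch, using the DL980 G7 list prices quoted above:

```python
# Rough $/GB for the HP DL980 G7 options quoted above (list prices).
options = {
    "RAM 8x 16GB": (3672, 128),       # (price in USD, capacity in GB)
    "RAM 4x 32GB": (9996, 128),
    "SSD 400GB SAS MLC": (4199, 400),
    "SSD 400GB SAS SLC": (7419, 400),
    "HDD 300GB 10K": (309, 300),
    "HDD 300GB 15K": (649, 300),
}

for name, (price, gb) in options.items():
    print(f"{name}: ${price / gb:.2f} / GB")
```

Running it reproduces the ranges: roughly $29-78 / GB for memory, $10-19 / GB for solid state, $1-2 / GB for disk.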
Hash Table
JBoss Data Grid is an in-memory data grid with the data stored in a hash table. However, JBoss Data Grid supports persistence via write-behind or write-through. For all intents and purposes (e.g. map / reduce), all of the data should fit in memory.
- Random Reads, Random Writes
Riak is a key / value store. If persistence has been configured with Bitcask, the data is stored in a log structured hash table. An in-memory hash table maps each key to a pointer to its value in the log. As such, all of the keys and pointers must fit in memory. The data is persisted via append only log files.
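A minimal sketch of the Bitcask-style design described above. The `LogHashStore` class is hypothetical and much simplified; it is not Riak's actual implementation:

```python
import os

# Sketch of a Bitcask-style log structured hash table: values are
# appended to a log file; an in-memory dict maps each key to the
# offset of its latest value. Illustrative only.
class LogHashStore:
    def __init__(self, path):
        self.index = {}                    # key -> (offset, length)
        self.log = open(path, "ab+")

    def put(self, key, value):
        data = value.encode()
        self.log.seek(0, os.SEEK_END)
        offset = self.log.tell()
        self.log.write(data)               # sequential (append only) write
        self.log.flush()
        self.index[key] = (offset, len(data))

    def get(self, key):
        offset, length = self.index[key]   # index must fit in memory
        self.log.seek(offset)
        return self.log.read(length).decode()
```

An overwrite simply appends the new value and repoints the in-memory entry; the stale bytes are left behind for a later compaction pass (omitted here).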
- Complex Index in Memory
- Random Reads, Sequential Writes
Point queries perform well in hash tables. Range queries do not. Though there is always map / reduce.
B-Tree
MongoDB is a document database with the data stored in a B-Tree. The data is persisted via memory mapped files.
- Partial Index (Internal Nodes) in Memory
- Random Reads, Sequential Reads, Random Writes
CouchDB is a document database with the data stored in a B+Tree. The data is persisted via append only log files.
- Partial Index in Memory
- Random Reads, Sequential Reads, Sequential Writes
In a B+ Tree, the data is only stored in the leaf nodes. In a B-Tree, the data is stored in both the internal nodes and the leaf nodes. The advantage of a B+ Tree is that the leaf nodes are linked. As a result, range queries perform better with a B+ Tree. However, point queries perform better with a B-Tree.
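The linked-leaf advantage can be sketched as follows. The `Leaf` structure is hypothetical, and the descent from the root to the first matching leaf is omitted:

```python
# Why linked leaves help range queries in a B+ Tree: once the first
# leaf is found, the scan follows sibling pointers instead of
# re-descending from the root for every key.
class Leaf:
    def __init__(self, keys, values):
        self.keys = keys               # sorted keys in this leaf
        self.values = values
        self.next = None               # pointer to the right sibling

def range_query(first_leaf, lo, hi):
    results, leaf = [], first_leaf
    while leaf is not None:
        for k, v in zip(leaf.keys, leaf.values):
            if lo <= k <= hi:
                results.append((k, v))
            elif k > hi:
                return results         # keys are sorted; we can stop
        leaf = leaf.next               # follow the sibling link
    return results
```

In a B-Tree without sibling links, the same range scan would repeatedly walk back up through internal nodes.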
CouchDB has implemented an append only B+ Tree. An alternative is the copy-on-write (CoW) B+ Tree. A write optimization for a B-Tree is buffering. First, the data is written to a buffer in an internal node. Second, the buffer is flushed to a leaf node. As a result, random writes are turned into sequential writes, and their cost is thus amortized. However, I am not aware of any open source data stores that have implemented a CoW B+ Tree or buffering with a B-Tree.
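The buffering idea can be sketched like this. `BufferedNode` is a hypothetical, much-simplified stand-in for a buffered internal node:

```python
# Sketch of write buffering in a B-Tree-like structure: random
# writes accumulate in an internal buffer and are pushed down to
# the sorted leaf level in one batch, amortizing their cost.
class BufferedNode:
    def __init__(self, flush_threshold=4):
        self.buffer = []                   # pending random writes
        self.leaves = []                   # sorted (key, value) pairs
        self.flush_threshold = flush_threshold

    def write(self, key, value):
        self.buffer.append((key, value))   # cheap: just an append
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # one batched merge instead of many random leaf updates
        self.leaves = sorted(self.leaves + self.buffer)
        self.buffer = []
```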
Range queries perform well with a B/B+ Tree. Point queries, not as well as hash tables.
Log Structured Merge Tree
Apache HBase and Apache Cassandra are both column oriented data stores, and both store data in an LSM-Tree. Apache HBase has implemented a cache oblivious lookahead array (COLA). Apache Cassandra has implemented a sorted array merge tree (SAMT).
In both implementations, a write is first written to a write-ahead log. Next, it is written to a memtable. A memtable is an in-memory sorted string table (SSTable). Later, the memtable is flushed and persisted as an SSTable. As a result, random writes are turned into sequential writes. However, a point query with an LSM-Tree may require multiple random reads.
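The write path above can be sketched as follows. This is a toy model, not HBase's or Cassandra's actual code:

```python
# Sketch of the LSM-Tree write path: a write goes to a write-ahead
# log, then to an in-memory memtable; when the memtable is full it
# is flushed as a sorted, immutable SSTable.
class LSMStore:
    def __init__(self, memtable_limit=4):
        self.wal = []                # stands in for the on-disk WAL
        self.memtable = {}           # in-memory; sorted at flush time
        self.sstables = []           # newest first
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        self.wal.append((key, value))        # 1. write-ahead log
        self.memtable[key] = value           # 2. memtable
        if len(self.memtable) >= self.memtable_limit:
            self.flush()

    def flush(self):
        sstable = sorted(self.memtable.items())  # 3. sequential flush
        self.sstables.insert(0, sstable)
        self.memtable, self.wal = {}, []

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        for table in self.sstables:          # may need multiple reads
            for k, v in table:
                if k == key:
                    return v
        return None
```

Note how `get` may have to consult the memtable and then every SSTable in turn; that is the "multiple random reads" cost of a point query.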
Apache HBase implemented a single-level index with HFile version 1. However, because every index was cached, it resulted in high memory usage.
Apache HBase implemented a multi-level index, similar to a B+ Tree, with HFile (SSTable) version 2. The SSTable contains a root index, a root index with leaf indexes, or a root index with an intermediate index and leaf indexes. The root index is stored in memory. The intermediate and leaf indexes may be stored in the block cache. In addition, the SSTable contains a compound (block level) bloom filter. The bloom filter is used to determine whether the data is definitely not in a data block.
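The bloom filter check can be sketched as follows. This is a simplified stand-in, not HBase's implementation:

```python
import hashlib

# Sketch of a bloom filter: it can say a key is definitely NOT in a
# data block (so the block is skipped), but a positive answer may be
# a false positive, so the block must still be read to confirm.
class BloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = [False] * size

    def _positions(self, key):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos] = True

    def might_contain(self, key):
        # False -> definitely absent; True -> possibly present
        return all(self.bits[pos] for pos in self._positions(key))
```

A `False` result lets the read path skip the data block entirely; there are no false negatives.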
- Partial Index / Bloom Filter in Memory
- Random Reads, Sequential Reads, Sequential Writes
A read optimization for an LSM-Tree is fractional cascading. The idea is that each level contains both data and pointers to data in the next level. However, I am not aware of any open source data stores that have implemented fractional cascading.
I like the idea of a hybrid / tiered storage solution for an LSM-Tree implementation with the first level in memory, second level on solid state drives, and third level on hard disk drives. I've seen this solution described in academia, but I am not aware of any open source implementations.
I find it interesting that key / value stores often store data in a hash table, that document stores often store data in a B+/B-Tree, and that column oriented stores often store data in an LSM-Tree. Perhaps it is because key / value stores focus on the performance of point queries, document stores support secondary indexes (MongoDB) / views (CouchDB), and column oriented stores support range queries. Perhaps it is because document stores were not created with distribution (shards / partitions) in mind and thus assume that the complete index cannot be stored in memory.
I found this paper to be very helpful in examining data structures as it covers most of the ones highlighted in this post:
Efficient, Scalable, and Versatile Application and System Transaction Management for Direct Storage Layers (link)