Apache Spark vs. Hadoop | @CloudExpo #BigData #DevOps #Microservices

If you’re running Big Data applications, you’re going to want some kind of distributed processing system. Hadoop is one of the best-known platforms for distributed storage and processing, but how are you going to process all your data in a reasonable time frame? Apache Spark offers services that go beyond a standard MapReduce cluster.

A choice of job styles
MapReduce has become a standard, perhaps
the standard, for distributed data processing. While it’s a great system, it’s really geared toward batch use, with jobs queued up and results delivered some time later. That can severely hamper your flexibility. What if you want to explore some of your data? If the query is going to take all night, forget about it.

With Apache Spark, you can act on your data in whatever way you want. Want to look for interesting tidbits in your data? You can perform some quick queries. Want to run something you know will take a long time? You can use a batch job. Want to process your data streams in real time? You can do that too.

One of the biggest advantages of modern programming languages is their interactive shells. Sure, Lisp did that back in the ‘60s, but it was a long time before that kind of interactive power became available to the average programmer. With Python and Scala you can try out your ideas in real time and develop algorithms iteratively, without the time-consuming write/compile/test/debug cycle.
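That iterative style looks something like this. The sketch below uses plain Python rather than Spark itself, but the refine-and-rerun rhythm is exactly what you'd do against an RDD in the pyspark or spark-shell REPL:

```python
# Exploring a dataset interactively: refine a word-count idea step by step.
# Plain Python for illustration; the same style carries over to Spark's
# RDD API in an interactive shell.
from collections import Counter

lines = [
    "spark makes big data simple",
    "hadoop stores big data",
    "spark and hadoop work together",
]

# First try: count every word.
words = [w for line in lines for w in line.split()]
counts = Counter(words)

# Seconds later, a second pass: drop short stop-ish words and re-run.
counts = Counter(w for w in words if len(w) > 3)
print(counts.most_common(2))
```

Each refinement takes seconds in a shell; in a batch-only world, each one would be another job in the queue.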

RDDs
The key to Spark’s flexibility is the Resilient Distributed Dataset, or RDD. An RDD maintains a lineage of everything that’s done to your data: it records the coarse-grained transformations, such as
map or join, that derived it from other datasets. That lineage makes it possible to recover from failures by recomputing lost data from the transformations that produced it (which is why they’re called Resilient Distributed Datasets).
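A toy model makes the idea concrete. This is not Spark's actual implementation, just a minimal sketch of the principle: each dataset remembers the transformation and parent that produced it, so its contents can always be rebuilt by replaying the lineage from the source.

```python
# Minimal lineage sketch: each node records how it was derived, so lost
# results can be recomputed by replaying transformations from the source.
class ToyRDD:
    def __init__(self, parent=None, transform=None, source=None):
        self.parent, self.transform, self.source = parent, transform, source

    def map(self, fn):
        return ToyRDD(parent=self, transform=lambda data: [fn(x) for x in data])

    def filter(self, pred):
        return ToyRDD(parent=self, transform=lambda data: [x for x in data if pred(x)])

    def compute(self):
        # Recovery = walk the lineage back to the source and replay it.
        if self.parent is None:
            return list(self.source)
        return self.transform(self.parent.compute())

base = ToyRDD(source=range(5))
doubled_evens = base.filter(lambda x: x % 2 == 0).map(lambda x: x * 2)
print(doubled_evens.compute())  # replaying the lineage yields [0, 4, 8]
```

Because `doubled_evens` knows its whole derivation, nothing here ever needs to be checkpointed to disk: losing the result just means calling `compute()` again.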

RDDs also represent data in memory, which is a lot faster than always pulling data off of disks, even with SSDs making their way into data centers. Keeping entire datasets in memory might sound impractical, but Spark uses lazy evaluation, only performing transformations on your data when you specifically ask for a result. That’s why you can get answers so quickly even on very large datasets.

You might recognize the term “lazy evaluation” from functional programming languages like Haskell. RDDs are only materialized when a specific action produces some kind of output, for example, writing results to a text file. You can build up a complex query over your data, but it won’t actually be evaluated until you ask for the result, and it may touch only the subset of your data it actually needs instead of plowing through the whole thing. Lazy evaluation lets you compose complex queries over large datasets without paying for work whose results you never use.
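Python generators give a miniature demonstration of the same principle: defining the pipeline does no work, and the terminal "action" pulls only as much data as it needs.

```python
# Lazy evaluation in miniature: like Spark transformations, Python
# generators do no work until a terminal action pulls results through.
from itertools import islice

def load_records():
    print("loading...")            # side effect shows *when* work happens
    for i in range(1_000_000):
        yield i

# Build the pipeline: square every multiple of 7. Nothing runs yet;
# "loading..." has not been printed.
pipeline = (x * x for x in load_records() if x % 7 == 0)

# Only this "action" triggers evaluation, and it stops after 3 results:
# the million-element "dataset" is never fully scanned.
first_three = list(islice(pipeline, 3))
```

Spark's transformations (`map`, `filter`, `join`) behave like the generator expression, and its actions (`collect`, `count`, `saveAsTextFile`) behave like the `list(islice(...))` call.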

RDDs are also immutable, which gives you greater protection against data loss even though they live in memory. In case of an error, Spark can replay the affected part of an RDD’s lineage and recompute the lost data rather than relying on a checkpoint-based system on disk.

Spark and Hadoop, Not as Different as You Think
Speaking of disks, you might be wondering whether Spark replaces a Hadoop cluster. That’s really a false dichotomy: Hadoop and Spark work
together. Spark provides the processing, while Hadoop handles the actual storage (HDFS) and resource management (YARN). After all, you can’t keep all of your data in memory forever.
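In practice the pairing is mostly a matter of configuration. Here's a hedged sketch (the application name, HDFS paths, and resource sizes are placeholders, not recommendations) of submitting a Spark job through Hadoop's YARN resource manager, with input and output living on HDFS:

```shell
# Submit a Spark application to the cluster's YARN resource manager;
# storage stays on HDFS rather than in Spark itself.
# (Illustrative paths and sizes -- adjust to your own cluster.)
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 4 \
  --executor-memory 4G \
  my_job.py hdfs:///data/input hdfs:///data/output
```

YARN schedules the executors alongside any MapReduce jobs on the same cluster, which is exactly the shared-infrastructure story the next paragraph describes.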

Running Spark and Hadoop in the same cluster also cuts down on the overhead of maintaining separate systems, and it lets your Big Data operations scale out simply by adding nodes.

Who’s Using Spark?
Once you have your Big Data cluster in place, you’ll be able to do lots of interesting things with it. Spark is used in genome sequencing analysis and in digital advertising. A major credit card company uses it to match thousands of transactions at once
for possible fraud, and Cisco does something similar with a cloud-based security product, spotting possible hacking before it turns into a major data breach. Geneticists use it to match genes to new medicines.

Conclusion
Apache Spark builds on Hadoop and goes beyond it, adding in-memory, interactive, and stream processing capabilities. The MapR distribution is the only one that offers everything you need right out of the box to enable real-time data processing.

For a more in-depth view into how Spark and Hadoop benefit from each other, read chapter four of the free interactive ebook: Getting Started with Apache Spark: From Inception to Production, by James A. Scott.

More Stories By Jim Scott

Jim has held positions running Operations, Engineering, Architecture and QA teams in the Consumer Packaged Goods, Digital Advertising, Digital Mapping, Chemical and Pharmaceutical industries. Jim has built systems that handle more than 50 billion transactions per day and his work with high-throughput computing at Dow Chemical was a precursor to more standardized big data concepts like Hadoop.
