Is Fabric Computing the Future of Cloud?

Fabric Has Little to Do With Our Often Used Manufacturing or Supply Chain Analogies

The term fabric computing is rapidly gaining popularity, but currently mostly within the hardware community. In fact, according to a recent report, over 50% of attendees at the recent Datacenter Summit had implemented, or were in the process of implementing, fabric computing. Time to take a look at what fabric computing means for software and for (cloud) computing as a whole.

Depending on which dictionary you choose, you can find anywhere between two and seven meanings for "fabric." Etymology-wise, it comes from the French fabrique and the Latin fabricare, and the Dutch fabriek actually means factory. But in an IT context, fabric has little to do with our often-used manufacturing or supply chain analogies; instead it relates much more closely to fabric in its meaning of cloth, a material produced (fabricated) by weaving fibers.

If we check our handy Wikipedia for fabric computing, we get:

Fabric computing or unified computing involves the creation of a computing fabric consisting of interconnected nodes that look like a 'weave' or a 'fabric' when viewed collectively from a distance.[1]

Usually this refers to a consolidated high-performance computing system consisting of loosely coupled storage, networking and parallel processing functions linked by high bandwidth interconnects ...



In the context of data centers it means a move from having distinct boxes for handling storage, network and processing towards a fabric in which these functions are much more intertwined or even integrated. Most people started to notice the move to fabric or unified computing when Cisco began including servers inside its switches, partly in response to HP including more and more switches in its server deals. Cisco's UCS (Unified Computing System), and its bigger sibling, VCE, are the first hardware examples of this trend (although inside the box you can still distinguish the original components).

One reason to move to such a fabric design is that by moving data, network and compute closer together (integrating them) you can improve performance. Juniper's recent QFabric architecture announcement is another similar example. But, the idea of closer integration of data, processing, and communication is actually much older. In some respects, we may even conclude IT is coming full circle with this trend.

Let me explain.

Many years ago I spoke to Professor Scheer, founder of IDS Scheer and a pioneer in the field of Business Process Management (BPM). (Disclosure: years later IDS Scheer became part of my former employer: Software AG.) He spoke about how - in the old days of IT - data and logic were seen as one. Literally! If - while walking with your stack of punch cards to the computer room (back then it was a computer the size of a room, not a room with a computer in it) - you dropped your stack of punch cards, both data and logic would be in one pile on the floor. You would spend the rest of your afternoon sorting them again. There was just one stack: first the processing/algorithm logic, and then the data. Scheer's point was that just like we figured out after a while that data did not belong there and we moved it to its own place (typically a relational database), we should now separate the process flow instructions from the algorithms and move these to a workflow process engine (preferably of course his BPM engine). All valid and true - at that time.

But not long after, object-oriented programming became the norm, and we started to move data back together with the logic that understood how to handle that data, treating the two as objects. This of course created a new issue: getting these objects to perform in even a remotely acceptable way, since we used relational databases to store or persist the data inside them. You could compare this to disassembling your car into its original pieces every night in order to put it in your garage. Over the years the industry figured out how to do this better, in part by creating new databases which, design-wise, looked remarkably similar to the (hierarchical) databases we used back in the days of punch cards.


And now, under the shiny new name of fabric computing, we are moving all of these functions back into the same physical box.

But this is not the whole story - there is another revolution happening. As an industry we are moving from using dedicated hardware for specialized tasks to using generic hardware with specialized software instead. For example, a software virtualization layer can simply emulate a specific piece of hardware.

Or, look at a firewall: traditionally it was a piece of dedicated hardware built to do one thing (keeping disallowed traffic out). Today, most firewalls are software-based. We use a generic processor to take care of that task. And we're seeing this trend unfold with more equipment in the data center. Even switches, load balancers and network-attached storage are becoming software-based ("virtual appliance" seems to be the preferred marketing buzzword for this trend).
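
To make this concrete: stripped of the real-world complexity, the core job of a firewall is something any general-purpose processor can run as plain code. The sketch below is purely illustrative - the rule format and packet fields are made up for the example, not taken from any actual firewall product:

    # Minimal sketch of "firewall as software": ordinary code on a generic CPU
    # deciding whether to allow or drop traffic. Rule format and packet fields
    # are invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        src_ip: str
        dst_port: int

    # Allow web traffic from anywhere, SSH only from a (hypothetical) admin subnet.
    RULES = [
        {"dst_port": 443, "src_prefix": "",        "action": "allow"},
        {"dst_port": 22,  "src_prefix": "10.0.0.", "action": "allow"},
    ]

    def filter_packet(pkt: Packet) -> str:
        """Return the action of the first matching rule; default-deny otherwise."""
        for rule in RULES:
            if pkt.dst_port == rule["dst_port"] and pkt.src_ip.startswith(rule["src_prefix"]):
                return rule["action"]
        return "drop"  # default-deny, just like its hardware predecessor

    print(filter_packet(Packet("10.0.0.5", 22)))     # allow
    print(filter_packet(Packet("203.0.113.7", 22)))  # drop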

Using software is more efficient than having loads of dedicated hardware, and we can't ignore the fact that software, because of its completely different economic and management characteristics, has numerous inherent advantages over hardware. For example, you can copy, change, delete and distribute software, all remotely, without having to leave your seat, and even do so automatically. You'd need some pretty advanced robots to do that with hardware (if it could be done at all today).
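
As a toy illustration of that manageability point (the image name and host names below are hypothetical), "shipping" a fresh copy of a virtual appliance is little more than copying a file - something a short script can do unattended across as many targets as you like:

    # Toy illustration: distributing a software appliance is just copying files,
    # which a script can do remotely and automatically. Paths and host names are
    # hypothetical stand-ins.
    import os
    import shutil

    os.makedirs("images", exist_ok=True)
    os.makedirs("staging", exist_ok=True)

    GOLDEN_IMAGE = "images/firewall-appliance.qcow2"
    open(GOLDEN_IMAGE, "wb").close()  # stand-in for a real appliance image

    for host in ["rack1-node03", "rack2-node17"]:
        # In practice this copy would travel over the network (scp, an API call, ...);
        # locally it is a single library call.
        shutil.copy(GOLDEN_IMAGE, f"staging/{host}-firewall.qcow2")
        print(f"Staged appliance copy for {host}")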

So how do these two trends relate to cloud computing?

By combining the idea of moving things that need to work together closer together (the idea of fabric) with the idea of doing that in software instead of hardware (which gives us the economics and manageability of software), we can create higher-performance, lower-cost and easier-to-manage clouds.

Virtualization has been on a similar path. First we virtualized servers, then storage and networking, but all remained in their separate silos. Now we are virtualizing all of it in the same "fabric." This means that managing the entire stack gets simpler, with one tool to define it, make it work and monitor it. And that's something that should make any IT pro smile.
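
To sketch what "one tool to define it" could look like - a deliberately simplified illustration, not any real orchestration API - imagine compute, storage and networking captured in a single declarative definition that one piece of code walks through:

    # Deliberately simplified sketch of a single definition for the whole stack.
    # The resource model and provision() function are invented for illustration,
    # not taken from any real orchestration tool.
    fabric = {
        "compute": [{"name": "web", "vcpus": 4, "ram_gb": 16},
                    {"name": "db",  "vcpus": 8, "ram_gb": 64}],
        "storage": [{"name": "db-vol", "size_gb": 500, "attach_to": "db"}],
        "network": [{"name": "front", "cidr": "10.0.1.0/24", "members": ["web"]},
                    {"name": "back",  "cidr": "10.0.2.0/24", "members": ["web", "db"]}],
    }

    def provision(definition: dict) -> None:
        """Walk one declarative definition and 'create' every resource in it."""
        for kind, resources in definition.items():
            for res in resources:
                print(f"provisioning {kind}: {res['name']}")

    provision(fabric)

The point is not the syntax, but the fact that the entire stack lives in one definition that can be versioned, copied and monitored like any other piece of software.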

In my next post, I'll share my thoughts on why I think this approach has the power to change IT as we know it, based on some of my own epiphanies.

More Stories By Gregor Petri

Gregor Petri is a regular expert or keynote speaker at industry events throughout Europe and wrote the cloud primer “Shedding Light on Cloud Computing”. He was also a columnist at ITSM Portal, contributing author to the Dutch “Over Cloud Computing” book, member of the Computable expert panel and his LeanITmanager blog is syndicated across many sites worldwide. Gregor was named by Cloud Computing Journal as one of The Top 100 Bloggers on Cloud Computing.

Follow him on Twitter @GregorPetri or read his blog at blog.gregorpetri.com
