Can a Cloud IP Address Be Damaged Goods?

The problem is the scarcity of IP addresses

The elasticity of cloud computing is a wonderful idea. You can get an instance of a networked computer exactly when you need it, and you pay only for the time you actually use it. But while the virtual memory and hard disk are a “clean slate” created specifically for you, the IP address assigned to your instance may have been previously used by a spammer, and it may already be on a spam blacklist. In an extreme case, a whole IP address range can be marked as a source of spam. And this is exactly what happened to Amazon’s EC2: “Go Daddy blocks links to EC2”.
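
To make the reputation problem concrete, here is a minimal sketch of the check that mail servers perform against DNS-based blacklists (DNSBLs). The list name (zen.spamhaus.org is one widely used list) and the test address are illustrative, not specific to EC2:

```python
import socket

def is_blacklisted(ipv4, dnsbl="zen.spamhaus.org"):
    """Check an IPv4 address against a DNS-based blacklist (DNSBL).

    The DNSBL protocol: reverse the address's octets and query them
    under the list's zone. Any A-record answer means "listed";
    NXDOMAIN means "not listed". Example query for 127.0.0.2:
    2.0.0.127.zen.spamhaus.org
    """
    reversed_octets = ".".join(reversed(ipv4.split(".")))
    try:
        socket.gethostbyname(f"{reversed_octets}.{dnsbl}")
        return True
    except socket.gaierror:
        return False

# 127.0.0.2 is the conventional always-listed test entry for DNSBLs.
# (Some lists refuse queries from public resolvers, so results vary.)
print(is_blacklisted("127.0.0.2"))
```

Running a check like this against a freshly assigned instance address is one way to spot “damaged goods” before they hurt your deliverability.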

The problem is the scarcity of IP addresses: Amazon.com doesn’t have enough addresses to give every user a fresh new IP address with each new instance. And the solution to this problem is Internet Protocol version 6 (IPv6):

The very large IPv6 address space supports 2¹²⁸ (about 3.4×10³⁸) addresses, or approximately 5×10²⁸ (roughly 2⁹⁵) addresses for each of the roughly 6.5 billion (6.5×10⁹) people alive today. In a different perspective, this is 2⁵² addresses for every observable star in the known universe – more than seventy nine billion billion billion times as many addresses as IPv4 (2³²) supports.
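
The arithmetic in the quote is easy to reproduce; a quick sketch:

```python
# Verifying the numbers quoted above; Python integers are
# arbitrary-precision, so 2**128 is computed exactly.
total = 2 ** 128                    # size of the IPv6 address space
print(f"{total:.1e}")               # ~3.4e+38 addresses

per_person = total / 6.5e9          # split among ~6.5 billion people
print(f"{per_person:.1e}")          # ~5.2e+28, roughly 2**95

print(f"{total // 2 ** 32:.1e}")    # 2**96 ~ 7.9e+28 times IPv4's space
```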

This means that there will be enough IP addresses not only for elastic clouds but also for PDAs, cell phones and other IP-based clients. On the other hand, it will make spam blacklists irrelevant, since every piece of spam can come from a different IP address: “If the earth were made entirely out of 1 cubic millimeter grains of sand, then you could give a unique IPv6 address to each grain in 300 million planets the size of the earth”.
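
To see why per-address blacklists break down, consider that even a single /64, the standard IPv6 subnet size, holds 2⁶⁴ addresses. A quick check with Python’s standard ipaddress module (the documentation prefix 2001:db8::/64 stands in for a real allocation):

```python
import ipaddress

# Even one /64, the standard prefix for a single subnet, holds 2**64
# addresses. The documentation prefix 2001:db8::/64 is illustrative.
net = ipaddress.ip_network("2001:db8::/64")
print(net.num_addresses)             # 18446744073709551616
print(net.num_addresses == 2 ** 64)  # True
```

A spammer who rotates to a new address for every message would never exhaust even one subnet, so reputation systems would have to track whole prefixes rather than individual addresses.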


More Stories By Roman Stanek

Roman Stanek is a technology visionary who has spent the past fifteen years building world-class technology companies. Currently Founder & CEO of Good Data, which provides collaborative analytics on demand, he previously co-founded NetBeans, now a part of Sun Microsystems and one of the leading Java IDEs, and then Systinet, now owned by Hewlett-Packard and the leading SOA Governance platform on the market.
