HGST Joins Open Compute Project To Help Define Best Practices For Datacentre Storage

Leading Storage Provider Works with Facebook and Industry Partners to Help Define and Implement Efficient Tiered Storage Strategies that Reduce Total Cost of Ownership for Corporate and Cloud Datacentres

Open Compute Summit (Booth C7), 17 January 2013 - HGST (formerly Hitachi Global Storage Technologies and now a Western Digital company, NASDAQ: WDC) today announced that it has joined the Open Compute Project, an initiative launched by Facebook in 2011 to increase technology efficiency and reduce the environmental impact of datacentres.

With the explosion of data resulting from mobile devices, Internet services, social media and business applications, corporate, cloud and big data customers are constantly looking for ways to reduce their storage infrastructure costs and improve their bottom line. The Open Compute Project applies open-source software principles to the hardware industry to drive the development of the most efficient computing infrastructures at the lowest possible cost. HGST will contribute its expertise toward defining storage solutions that deliver the required performance and density while achieving a low total cost of ownership (TCO), reflected in metrics such as cost-per-terabyte, watts-per-TB, TB-per-system-weight and TB-per-square-foot.

"Demand for storage is booming as IT managers strive to handle the avalanche of new data being generated by cloud datacentres, Big Data analytics, social networking, HD video and millions of mobile devices," said Brendan Collins, vice president of product marketing at HGST. "As a strategic drive supplier and consultant to Facebook and in collaboration with the Open Compute Project, we're defining best practices in the storage industry to afford end-users greater capital savings, operational efficiencies and energy conservation in the datacentre."

The fourth Open Compute Summit is January 16-17, 2013, at the Santa Clara Convention Center - 5001 Great America Parkway in Santa Clara, Calif. As a Summit sponsor, HGST will be showcasing its Ultrastar™ 4TB enterprise-class HDD, the world's first 4TB enterprise-class hard drive, which provides space-efficient, high-performance, low-power storage for traditional enterprises as well as for the explosive big data and cloud/Internet markets where storage density, watt-per-gigabyte and cost-per-GB are critical parameters. The 4TB Ultrastar 7K4000 family raises the bar with a five-year limited warranty and a 2.0 million hours MTBF specification, resulting in a 40 percent lower annualised failure rate (AFR) than enterprise drives rated at 1.2 million hours MTBF. As a leader in enterprise-class SAS SSDs, HGST will also be showcasing its Ultrastar enterprise-class SSDs that meet the performance, capacity, endurance and reliability demands of today's Tier 0, mission-critical datacentre applications.
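The "40 percent lower annualised failure rate" claim follows from the usual rough conversion between MTBF and AFR. A minimal sketch of that arithmetic, assuming continuous 24/7 operation (8,760 power-on hours per year; the exact duty-cycle assumptions behind HGST's rating are not stated in this release):

```python
# Approximate annualised failure rate (AFR) from an MTBF rating,
# assuming a drive runs 24/7 (8,760 power-on hours per year).
HOURS_PER_YEAR = 24 * 365

def afr_percent(mtbf_hours: float) -> float:
    """AFR ~= annual power-on hours / MTBF, expressed as a percentage."""
    return HOURS_PER_YEAR / mtbf_hours * 100

ultrastar = afr_percent(2_000_000)   # 2.0 million hours MTBF
typical = afr_percent(1_200_000)     # 1.2 million hours MTBF

print(f"2.0M-hour MTBF drive AFR: {ultrastar:.2f}%")   # 0.44%
print(f"1.2M-hour MTBF drive AFR: {typical:.2f}%")     # 0.73%
print(f"Reduction: {(1 - ultrastar / typical) * 100:.0f}%")  # 40%
```

The 2.0M-hour rating works out to roughly 0.44% AFR versus 0.73% for a 1.2M-hour drive, which is the 40 percent reduction cited above.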

About HGST
HGST (formerly known as Hitachi Global Storage Technologies or Hitachi GST), a Western Digital company (NASDAQ: WDC), develops advanced hard disk drives, enterprise-class solid state drives, innovative external storage solutions and services used to store, preserve and manage the world's most valued data. Founded by the pioneers of hard drives, HGST provides high-value storage for a broad range of market segments, including Enterprise, Desktop, Mobile Computing, Consumer Electronics and Personal Storage. HGST was established in 2003 and maintains its U.S. headquarters in San Jose, California. For more information, please visit the company's website at http://www.hgst.com.

One GB is equal to one billion bytes, and one TB equals 1,000 GB (one trillion bytes). Actual capacity will vary depending on operating environment and formatting.
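The footnote above uses decimal units, while operating systems often report capacity in binary units (TiB), which is one reason a "4TB" drive appears smaller once attached to a host. A rough illustration, ignoring formatting overhead (which, as the footnote notes, also reduces usable capacity):

```python
# A "4TB" drive marketed in decimal bytes, re-expressed in the binary
# tebibytes (TiB) many operating systems report. Formatting overhead
# is ignored here.
decimal_bytes = 4 * 10**12      # 4 TB as marketed: 4 trillion bytes
tib = decimal_bytes / 2**40     # the same byte count in TiB

print(f"{decimal_bytes:,} bytes")  # 4,000,000,000,000 bytes
print(f"= {tib:.2f} TiB")          # ~3.64 TiB
```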

###

Contact:
Caroline Sumners
HGST
Office: +44 2392459719
[email protected]

Keira Anderson
Porter Novelli
Office: +44 20 7853 2289
[email protected]

More Stories By RealWire News Distribution

RealWire is a global news release distribution service specialising in online media. The RealWire approach focuses on delivering relevant content to the receivers of our clients' news releases, because influence can only be achieved through relevance.
