The Stack Archive

Google envisages new hard disk format design for data centres

Thu 25 Feb 2016

Google has called on technology manufacturers to consider developing new hard drives, intended primarily for data centre use, which abandon the traditional 3.5” form factor in favour of taller designs.

In a research paper [PDF], Eric Brewer – a professor at UC Berkeley and VP of Infrastructure at Google – argues for a revised form factor. He contends that in data centre infrastructure an individual hard disk is always part of a larger collective, and that a taller, more capacious unit would reduce points of failure and open new opportunities to optimise HDD design, free of the assumptions and limitations that produced the current standard.

‘The current 3.5” HDD geometry was adopted for historic reasons – its size inherited from the PC floppy disk. An alternative form factor should yield a better TCO overall. Changing the form factor is a long term process that requires a broad discussion, but we believe it should be considered. Although we could spec our own form factor (with high volume), the underlying issues extend beyond Google, and developing new solutions together will better serve the whole industry, especially once standardized.’

Widening the platter is impractical, the paper argues: a larger diameter increases available storage but lowers Input/Output Operations Per Second (IOPS), because of the longer journey the read head must make to reach a given sector. Brewer’s group therefore proposes increasing the height of the standard HDD – currently around one inch for 3.5” disks and up to 15mm for 2.5” drives – in order to fit more platters per unit, an economical approach that amortises the cost of packaging, printed circuit boards and the drive’s motor and actuator across greater capacity.

Reducing the platter’s diameter naturally increases IOPS through shorter seek times, at a cost in storage capacity, and the paper argues that the correct solution could be a combination of the ‘stacked higher’ model proposed and a new, smaller HDD with improved seek performance, acting as a rapid-response buffer between the network and the less frequently accessed bulk storage in a data centre.
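The trade-off described above can be sketched numerically. The following toy model – in which every constant is an illustrative assumption, not a figure from the paper – treats capacity as roughly proportional to recordable platter area and average seek time as proportional to platter radius:

```python
# Toy model of the platter-size trade-off: all constants here are
# illustrative assumptions, not figures from Google's paper.

def relative_capacity(radius):
    """Capacity scales roughly with recordable area of the platter."""
    inner = 0.25 * radius  # assume the innermost quarter is unusable (hub)
    return radius**2 - inner**2

def relative_iops(radius, rotational_latency=1.0):
    """IOPS falls as average seek distance (assumed ~radius) grows."""
    avg_seek = radius  # seek time assumed proportional to radius
    return 1.0 / (avg_seek + rotational_latency)

for r in (0.5, 1.0, 2.0):  # smaller, baseline, larger platter
    print(f"radius x{r}: capacity x{relative_capacity(r)/relative_capacity(1.0):.2f}, "
          f"IOPS x{relative_iops(r)/relative_iops(1.0):.2f}")
```

Under this simple model a smaller platter trades capacity for IOPS, while stacking more platters restores capacity without lengthening seeks – which is exactly the combination the paper proposes.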

The paper also suggests that the Bit Error Rate (BER) targets on current commercial HDDs are stricter than data centre usage requires, given that data is mirrored across multiple drives:

‘[Since] data of value is never just on one disk, the bit error rate (BER) for a single disk could actually be orders of magnitude higher (i.e. lose more bits) than the current target of 1 in 10¹⁵, assuming that we can trade off that error rate (at a fixed TCO) for something else, such as capacity or better tail latency.’
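The arithmetic behind that claim can be illustrated with a deliberately naive replication model, assuming bit errors strike independently across replicas – an assumption real systems (with correlated failures and periodic scrubbing) do not fully satisfy:

```python
# Naive illustration of relaxing per-disk bit error rate (BER) under
# replication. Assumes errors are independent across replicas, which
# real deployments violate; illustrative only.

def effective_loss_rate(per_disk_ber, replicas):
    """Probability that a given bit is unreadable on every replica."""
    return per_disk_ber ** replicas

target = 1e-15   # current single-disk target cited in the paper
relaxed = 1e-12  # hypothetical disk three orders of magnitude worse

# With three replicas, even the relaxed disk loses a bit everywhere
# far less often than the single-disk target.
print(effective_loss_rate(relaxed, 3) < target)
```

The point is not the exact numbers but the shape of the trade: replication multiplies exponents, so a modest number of copies leaves enormous headroom to relax the per-disk target in exchange for capacity or latency.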

IBM’s 1962 Model 1311 hard disk used platters measuring a whopping 14 inches, in an enclosure comparable in size to a washing machine, though massive increases in areal density – as predicted by Kryder’s Law – quickly made such dimensions obsolete. Seagate created the first 5.25-inch disk in 1980, though the 5.25” format ended its life with the Quantum Bigfoot in the late 1990s.

