Modular reasoning: why DC standardisation makes sense
Mon 14 Jan 2019 | Jack Pouchet
Data centre builds will increasingly comprise standardised, modular building blocks configured to address special power and thermal requirements
Much like snowflakes, no two data centres are identical. Although they may be home to similar IT equipment, the business needs of the owners and operators of the equipment along with legal requirements, financial considerations and business risk will often drive, if not dictate, certain key elements and properties of a data centre build. Hence the end build may be radically different despite close physical proximity and similar total power requirements.
However, there is growing interest in some form of standardisation for obvious reasons. If handled properly, standardisation can reduce equipment costs, shorten delivery and deployment timelines and simplify service and maintenance.
But never forget, the key tenet of the data centre industry is availability. If standardised equipment and design choices work in service of increasing availability, they will be considered. If customised approaches do more to prevent downtime, they will be the choice. And make no mistake: there are several approaches that can help data centres achieve the desired level of availability.
Even designs that seem to offer some level of standardisation typically come with dozens, if not hundreds, of options. For example, the Uptime Institute Tier rating system and the building designs it represents offer various drawings that look similar. In reality, the exact approach taken and the equipment deployed within the build can lead to millions of different combinations and permutations.
Despite the complexity of business requirements, we do see a continuing move to standardisation of build at various sub-assembly levels, such as skidded power, power rooms, power containers/pods/modules and similar approaches to cooling assemblies. Even the “white space” can be prefabricated in off-site manufacturing facilities with all fibre, power, fire detection/suppression, access control, etc., installed and fully tested in a factory environment.
This modular standardisation or component standardisation within equipment is increasingly common, and with good reason. This approach yields shorter production times, reduced costs, reduced complexity, easier service, and when executed properly, a more robust design.
Data centre demand is accelerating on a global basis and although many clients know roughly how much capacity they want to add, and when, they frequently don’t know where until late in the decision-making process.
So, standardisation of build and/or design may facilitate a quicker path to move-in and go-live dates, while also standardising to some degree on operational practices post move-in. As with standardisation in any industry, the benefits include shortened timelines, reduced costs and consistent manufacturing and construction practices that improve quality control.
Generally speaking, it is the large colocation providers, cloud players, hyperscale and large internet providers, and multi-national telecom providers who are driving this demand. This is where virtually all innovation in the data centre space starts, eventually trickling down to smaller environments.
While there is not really resistance to the calls for standardisation, everyone is being judicious in how they go about it. The industry is moving in fits and starts toward increased power densities and thermal demands at the rack level, and those trends to some degree run counter to the notion of standardisation.
Whether it is direct immersion, liquid-to-the-chip, liquid-to-the-rack or other cooling approaches, these higher-density, hotter configurations require unique design and build considerations.
Looking forward, we can expect both an increase in data centre build complexity and standardisation. There will be a plethora of standardised, modular building blocks that can be configured to address special power and thermal requirements. We also expect an increase in the number of regionally adapted builds in order to meet the growing data centre demand outside the benign climates of the northern and southern latitudes.
These new data centres will utilise a vast array of digital controls and automation in order to run most efficiently, with adaptive circuitry for resiliency, redundancy, and to free up stranded capacity – all of which will be enabled by extensive use of machine learning and eventually artificial intelligence.
- Jack Pouchet, Vice President, Market Development at Vertiv
Tags: data center, data centre, modular, prefab, standardisation