After developing its immersed cooling system for maritime data centers, Asperitas changed plans and set its sights on deploying the solution where it could have a greater global impact. Now, with its focus on terra firma, this clean-tech scale-up wants to paint the data center industry green.
Data centers are a critical feature of the current digital economy, providing vital services to government, industry and the rest of society. In this era of big tech and big data, the demand for computing has never been greater. To perform these functions, however, the facilities require vast amounts of electricity, not only to power the IT but also to keep it cool. From expensive refrigeration to the buzz of noisy blowing fans, numerous techniques are used for data center cooling. The problem: they’re terribly inefficient. This is where Asperitas, a Haarlem-based scale-up, looks to offer its alternative of what it refers to as immersed computing.
Back in 2014, Asperitas co-founders CEO Rolf Brink and CFO Markus Mandemaker were developing the core immersion-cooling technology for small, modular data centers at sea. Of course, building these types of facilities on ocean-going vessels is no small feat. First, there are logistical issues like constant movement, temperature changes and high humidity – all of which can wreak havoc on electronics. But perhaps the biggest hindrance is that maritime digitization lags several years behind what’s found on land.

By 2015, the founders decided it was time to change course, as they believed their immersion-based cooling system could be adapted for the most demanding data centers. “They realized that the same solution, but in a different package, would have a tremendous impact on data centers around the world and by focusing on land, they expected a much shorter time to market,” explains Maikel Bouricius, marketing manager of Asperitas. “They changed their focus toward a solution specifically designed for the highest-level data centers, like enterprise IT, cloud and HPC.”
Immersion cooling
Data hubs use a number of methods to keep components at optimal temperatures. Traditional data centers cool their IT by moving air: water is chilled in a cooling tower to about 28 °C and then pumped through a heat exchanger, where air is blown across it and cooled. The water, now at 32-34 °C, is either discarded or pumped back to the cooling tower to be chilled and reused. The cooled air is forced through the servers to lower the temperature of the electronics.
In terms of efficiency, this process is relatively poor. The energy needed to continuously chill the water, as well as to power industrial airflow over the servers, adds significantly to the total power consumption, not to mention the maintenance required to keep the system running. Moreover, air cooling is not sufficient for high-performance CPUs, which can draw 300 W each, let alone upcoming alternatives like FPGAs, which could use even more energy. “By skipping the entire airflow step, we can save big on power consumption,” highlights CTO Merijn Koster.
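To put that overhead in perspective, the industry’s usual yardstick is power usage effectiveness (PUE): total facility power divided by the power that actually reaches the IT. The figures in the sketch below are illustrative assumptions, not Asperitas or industry measurements, but they show how dropping chillers and fans moves the needle.

```python
# Illustrative PUE comparison. The load figures are assumptions made for
# the sake of the arithmetic, not measured Asperitas or industry data.

def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# Hypothetical 1 MW IT load: conventional air cooling (chillers, fans)
# versus immersion cooling with only dry coolers and pumps.
air_cooled = pue(it_kw=1000, cooling_kw=400, other_kw=80)   # ~1.48
immersion = pue(it_kw=1000, cooling_kw=60, other_kw=80)     # ~1.14

print(f"air-cooled PUE: {air_cooled:.2f}")
print(f"immersion PUE:  {immersion:.2f}")
```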
The Asperitas immersed computing system works in a substantially different manner. First and foremost, aside from the lid that provides access to the unit, the system has no moving parts. IT-filled cassettes, packed with CPUs or GPUs depending on server requirements, are submerged in a dielectric liquid optimized by Royal Dutch Shell for this specific application. The liquid is produced through a gas-to-liquid process and the IT is cooled by natural convection. Because the electronics are submerged, all the heat generated by the IT is captured in the liquid. In fact, at the same volume and temperature, this immersion method can absorb roughly 1500 times more heat energy than air.
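That factor of roughly 1500 follows from the volumetric heat capacities of the two media. The fluid properties below are typical textbook values for a GTL-type dielectric oil, assumed here for illustration rather than taken from Shell’s datasheet.

```python
# Back-of-the-envelope check of the heat-capacity claim.
# Property values are typical figures, not vendor specifications.

# Air at roughly room temperature and atmospheric pressure
air_density = 1.2        # kg/m^3
air_cp = 1005.0          # J/(kg*K)

# A GTL-type dielectric immersion fluid (assumed typical values)
oil_density = 800.0      # kg/m^3
oil_cp = 2100.0          # J/(kg*K)

# Volumetric heat capacity: energy absorbed per m^3 per kelvin
air_vol_capacity = air_density * air_cp   # ~1.2 kJ/(m^3*K)
oil_vol_capacity = oil_density * oil_cp   # ~1.7 MJ/(m^3*K)

ratio = oil_vol_capacity / air_vol_capacity
print(f"heat absorbed per volume, liquid vs. air: ~{ratio:.0f}x")  # ~1400x
```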
Another standout feature of the system’s efficiency is its ability to deliver usable energy as a byproduct. Inside the unit, on each side, is a specially designed convection drive; as water flows through it, it cools the surrounding oil. Instead of requiring chilled water of around 28 °C, as air-based systems do, the Asperitas solution runs at full load and full speed with a standard maximum water input temperature of around 45 °C, so there’s no need for a chiller to cool the water. That makes the system climate independent, even with the high outdoor temperatures found in data center regions like Houston and Singapore.
More interesting, however, is that the output water temperature lies between 55 and 60 °C – the same temperature that’s used for heating houses. “60 °C water is what 90 percent of the Netherlands uses to warm their houses. It’s also used by hotels to heat pools and guest rooms. There are a number of practical applications,” exclaims Koster. “This is a real game-changer, sort of the holy grail for data centers, especially when you consider the current halt on new facilities opening in Amsterdam.” With heat reuse as one of its focal points, Asperitas expects that new policies and regulations could further drive the shift toward sustainable systems such as theirs.
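In practice, the amount of reusable heat is set by the water flow and the temperature lift from roughly 45 °C in to 55-60 °C out. The sketch below estimates the flow needed to carry a given heat load; the 24 kW load per unit is an assumption used purely for illustration.

```python
# Estimate the water flow needed to export one unit's heat for reuse.
# The 24 kW heat load per immersion unit is an assumed figure.

heat_load_w = 24_000           # W captured by one unit (assumption)
t_in_c, t_out_c = 45.0, 57.5   # water in/out temperatures from the article (degC)
cp_water = 4186.0              # J/(kg*K), specific heat of water

delta_t = t_out_c - t_in_c                       # ~12.5 K temperature lift
mass_flow = heat_load_w / (cp_water * delta_t)   # kg/s
litres_per_min = mass_flow * 60                  # 1 kg of water ~ 1 litre

print(f"required flow: {mass_flow:.2f} kg/s (~{litres_per_min:.0f} l/min)")
```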

Data center in a box
Normally, cooling systems require a very large, complex and expensive build-out, but this is another way Asperitas’ solution contrasts with conventional methods. At about 2 meters wide, 1 meter high and half a meter deep, the self-contained unit can fit within almost any existing infrastructure, even in a closet. Essentially, all that’s needed is a water line and electricity. “Our solution drastically simplifies data center design. There’s no forced cooling, no adiabatic cooling, none of that. Simply a dry cooler, some pipes and power,” describes Koster. “Our solution is essentially a data center in a box.”
What this system lacks in size, it aims to make up for in performance. In fact, its suitability for high-density deployments is perhaps one of its biggest strengths. In conventional air-cooled systems, each 19-inch cassette, or U, can hold around 4 GPUs at maximum density, as space is needed around all the components and boards to allow for airflow. In the liquid system, that airstream is no longer needed, so component density is limited only by the physical space available.
Koster: “By optimizing heat sinks and enhancing the layout and design, we’ve been able to fit as many as 12 GPUs in a single 19-inch U-frame. That’s over 1,200 W. If you add it up, that’s 288 GPUs on a single rack, running at around 29 kW, which is one of the best examples of what you can do with immersion.”
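Koster’s rack figures hang together arithmetically; the per-GPU power in the sketch below is simply inferred from the quoted totals, not a measured number.

```python
# Reconstructing the quoted rack-density figures.
# Per-GPU power is inferred from the totals Koster mentions, not measured.

gpus_per_u = 12
gpus_per_rack = 288
rack_power_kw = 29.0

watts_per_gpu = rack_power_kw * 1000 / gpus_per_rack   # ~100 W per GPU
watts_per_u = gpus_per_u * watts_per_gpu               # ~1200 W per U
u_per_rack = gpus_per_rack / gpus_per_u                # 24 filled U per rack

print(f"~{watts_per_gpu:.0f} W per GPU, ~{watts_per_u:.0f} W per U, "
      f"{u_per_rack:.0f} U per rack")
```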
Asperitas’ solution can also be attractive for applications that require high utilization of a single processor, like fintech or specific HPC workloads. Because there’s no risk of overheating in this immersion-cooled setup, every single CPU or GPU can even run beyond its specified maximum performance. Bouricius: “We’ve been conducting tests on an AMD Epyc CPU showing clock rates of up to 3.7 GHz, which is significantly higher than the highest standard spec rate of 2.9 GHz.”

Mainstream adoption
So far, Asperitas has maintained a focus on Europe, having already deployed several projects in the Netherlands, Germany, France and the UK. But going forward, the clean-tech scale-up is thinking global. “Our mission is to enable data centers that are sustainable and high performing, anywhere in the world they’re needed,” voices Bouricius. “Not only in Iceland or Sweden, where energy is cheap and available, but also in Mexico or Vietnam.”
Koster: “Hand in hand with this international focus, our aim is to get immersion cooling into the mainstream – both on the data center and IT manufacturer side. The system works great, but it can work even better with more optimization.”