Rémi Bouzel
Scientific Director

How CATALYST helps Qarnot's distributed data center increase its flexibility

June 8, 2020 - Green IT

Qarnot has been involved in the CATALYST project to investigate how IT workload management can be used to control energy consumption, and therefore heat production.

Qarnot's distributed data center

At Qarnot, we use computing jobs to produce heat in regular housing buildings, according to the occupants' heating demand. Qarnot owns and operates a fully distributed data center, which can be seen as a set of edge data centers where heat is fully reused.

The picture above shows one of these housing buildings in Bordeaux, equipped with QH1 computing heaters; it is one of Qarnot's distributed edge DCs.

Energy consumption in data centers is obviously linked to server activity: first directly, through the servers' own usage, and second indirectly, through the cooling needed to keep the hardware in acceptable environmental conditions, i.e. at a sufficiently low temperature. Electricity consumption and waste heat production are directly related. This was very well described by Sadi Carnot in his "heat-engine" theory, which can easily be generalized to the IT industry. The French physicist's "second law of thermodynamics", applied to ICT, is presented in the following figure.

In other words, when a server runs a workload, especially a CPU-intensive one, heat is produced: almost all of the electricity it consumes becomes heat, which is pure waste for the IT industry. At the server level, managing heat and managing power consumption can therefore be done with the exact same technique: managing IT workload execution.
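To make this concrete, here is a minimal sketch in Python, with purely hypothetical power figures, of how a site's heat output can be estimated directly from its servers' electrical draw, since nearly all of it ends up as heat.

```python
# Rough illustration: at an edge site, almost all electrical power drawn by the
# servers ends up as heat delivered to the building.
# All figures below are hypothetical, for illustration only.

SERVER_POWER_W = {"idle": 60.0, "full_load": 200.0}  # hypothetical per-server draw

def site_heat_output_kw(n_servers: int, load: str = "full_load",
                        conversion_efficiency: float = 0.98) -> float:
    """Estimate the heat delivered by a site, assuming ~all electricity becomes heat."""
    electrical_kw = n_servers * SERVER_POWER_W[load] / 1000.0
    return electrical_kw * conversion_efficiency

# Example: 20 computing heaters running CPU-intensive jobs
print(f"Heat delivered: {site_heat_output_kw(20):.1f} kW")
```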

The CATALYST project

The goal of the CATALYST project is to manage the electrical power consumption of data centers at a regional level, in order to adapt to network constraints such as electricity demand, heat demand, and renewable energy availability. The first possibility we investigated was to reduce the servers' usage intensity; the second was simply to stop the IT activity.

To achieve this goal, it has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 768739.

Acting on the servers' usage intensity helps, but cannot solve the problem entirely. If the DC has control over the hardware, it is possible to adjust the CPUs' frequency and thus reduce performance. This works while keeping a limited computing capacity available, but its acceptance depends heavily on the actual workload and on the SLAs in place. It is also worth noting that even when idle, a server draws about 30% of its maximum power. Because this floor is significant, acting on power consumption through usage intensity alone offers limited leverage and can only be used in a limited set of cases.
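The sketch below illustrates this limitation with a simple linear power model and illustrative numbers: even with full control over CPU frequency, only the share of power above the idle floor can be recovered, whereas shutting the server down recovers everything.

```python
# Why throttling alone has limited leverage: even a fully throttled (idle)
# server still draws roughly 30% of its maximum power, so only the remaining
# ~70% can be shed without switching the machine off.
# Numbers are illustrative, not measured values.

IDLE_FRACTION = 0.30  # idle draw as a fraction of maximum power

def max_savings_w(p_max_w: float, current_load: float, shutdown: bool = False) -> float:
    """Power that can be shed from one server.

    current_load: current utilisation in [0, 1]; power is modelled as a linear
    interpolation between the idle floor and the maximum draw.
    """
    p_now = p_max_w * (IDLE_FRACTION + (1 - IDLE_FRACTION) * current_load)
    p_floor = 0.0 if shutdown else p_max_w * IDLE_FRACTION
    return p_now - p_floor

p_max = 200.0  # W, hypothetical server
print(f"Throttling a busy server frees {max_savings_w(p_max, 1.0):.0f} W")
print(f"Shutting it down frees {max_savings_w(p_max, 1.0, shutdown=True):.0f} W")
```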

We then investigated how to act on power consumption by stopping IT servers. We deployed our pilot in one housing building and used priority levels to simply kill low-priority jobs, which can be seen as spot instances. Using this technique, a site (one of the edge data centers) can announce a certain amount of power consumption that it is able to shed on demand. This works fine but is quite brutal. For Qarnot, the problem is twofold: the server is no longer available, and heat is no longer produced, so the inhabitants' comfort could be affected. This solution is effective and quite reactive, but its drawbacks are too significant for it to be applied in every situation.
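As an illustration, here is a minimal sketch of the selection step, with a hypothetical job model rather than Qarnot's actual scheduler: kill the lowest-priority jobs until the requested power reduction for the site is met.

```python
# Minimal sketch (hypothetical job model) of the "spot instance" approach:
# kill low-priority jobs until the requested power reduction is reached.

from dataclasses import dataclass

@dataclass
class Job:
    job_id: str
    priority: int    # lower value = lower priority, killed first
    power_w: float   # power currently drawn by this job

def select_jobs_to_kill(jobs: list[Job], target_reduction_w: float) -> list[Job]:
    """Greedily pick the lowest-priority jobs until the target reduction is met."""
    killed, freed = [], 0.0
    for job in sorted(jobs, key=lambda j: j.priority):
        if freed >= target_reduction_w:
            break
        killed.append(job)
        freed += job.power_w
    return killed

jobs = [Job("render-42", 0, 180.0), Job("cfd-7", 2, 200.0), Job("ml-train-3", 1, 190.0)]
for job in select_jobs_to_kill(jobs, target_reduction_w=350.0):
    print(f"killing {job.job_id} (frees {job.power_w:.0f} W)")
```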

Migrating IT workload from one DC to another

To mitigate these drawbacks while still acting on DC power consumption, the CATALYST partners designed and developed a solution to migrate IT workload from a sending data center (DC_1) to a remote receiving one (DC_2), within a DC federation. Using this technique, it is possible to reduce power consumption in DC_1 and move the corresponding power consumption (and the heat produced along with it) to DC_2.

This component is the CATALYST Migration Controller. Its goal is to leverage the location of Virtual Machines (VMs) to control energy consumption. It is able to set up a secure connection between DCs and perform a live migration of the VMs. Once the VMs have been transferred, it maintains seamless access for the client. The global deployment diagram is presented hereafter.
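The sketch below outlines this flow with a hypothetical `FederationClient` interface, stubbed for illustration and not the actual CATALYST API: open a secure tunnel towards the receiving DC, live-migrate each VM, then update routing so client access stays seamless.

```python
# Hedged sketch of the migration flow: hypothetical, stubbed orchestration
# interface on the sending DC's side, not the real Migration Controller API.

class FederationClient:
    """Hypothetical wrapper around the sending DC's orchestration layer (stubbed)."""

    def open_secure_tunnel(self, remote_dc: str) -> str:
        # e.g. set up a VPN/TLS tunnel towards the receiving DC
        print(f"opening secure tunnel to {remote_dc}")
        return f"tunnel-to-{remote_dc}"

    def live_migrate(self, vm_id: str, tunnel: str) -> None:
        # hand the VM over while it keeps running
        print(f"live-migrating {vm_id} over {tunnel}")

    def update_routing(self, vm_id: str, remote_dc: str) -> None:
        # repoint the client-facing endpoint to the VM's new location
        print(f"routing client traffic for {vm_id} to {remote_dc}")

def migrate_workload(client: FederationClient, vm_ids: list[str], remote_dc: str) -> None:
    tunnel = client.open_secure_tunnel(remote_dc)
    for vm_id in vm_ids:
        client.live_migrate(vm_id, tunnel)
        client.update_routing(vm_id, remote_dc)  # access stays seamless for the client

migrate_workload(FederationClient(), ["vm-17", "vm-23"], remote_dc="DC_2")
```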

This feature is integrated into a marketplace shared among the DCs participating in the federation. The marketplace matches IT workload that needs to be relocated with DCs that can offer available servers, and the corresponding energy. To fully benefit from a migration, the sending server must actually be shut down, otherwise its idle power consumption remains. This requires control over the hardware, which is not possible in colocation DCs, for example.
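To illustrate the matching step, here is a minimal sketch with a hypothetical data model, not the actual CATALYST marketplace schema: relocation requests from sending DCs are paired, first-fit, with capacity offers from receiving DCs.

```python
# Illustrative matching step for the shared marketplace.
# The data model below is hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class RelocationRequest:
    sender_dc: str
    workload_id: str
    power_kw: float  # power (and heat) that would move with the workload

@dataclass
class CapacityOffer:
    receiver_dc: str
    available_kw: float
    reason: str      # e.g. "renewable surplus" or "district heat demand"

def match(requests: list[RelocationRequest], offers: list[CapacityOffer]):
    """Greedy first-fit matching of relocation requests to capacity offers."""
    matches = []
    for req in requests:
        for offer in offers:
            if offer.available_kw >= req.power_kw:
                offer.available_kw -= req.power_kw
                matches.append((req.workload_id, req.sender_dc, offer.receiver_dc))
                break
    return matches

requests = [RelocationRequest("DC_1", "vm-17", 3.0)]
offers = [CapacityOffer("DC_2", 10.0, "renewable surplus")]
print(match(requests, offers))  # [('vm-17', 'DC_1', 'DC_2')]
```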

Use cases

This relocation can be used in many situations. For example, facing high electricity demand, DC_1 may be asked to reduce its power consumption; to do so, it can offer IT workload for relocation on the marketplace. Receiving DCs, in turn, can offer hardware capacity for various reasons, for instance surplus renewable energy or heat demand in a district heating network.

After acceptance, the Migration Controller opens a secure connection, moves the IT workload, and tracks the migration in a blockchain. This tracking is used for the payment of the resource usage agreed on the marketplace. The component is also responsible for moving the IT load back to the original DC_1. More technical details about this component can be found in the corresponding deliverable, "D3.3: Federated DCs Migration Controller".
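As a toy illustration of the tracking step (the real component is specified in deliverable D3.3), the sketch below appends migration events to a simple hash-chained ledger, recording the energy figure on which the agreed payment can be based.

```python
# Simplified, toy model of tracking migrations in a hash-chained ledger for
# later payment settlement; not the actual CATALYST blockchain component.

import hashlib
import json
import time

def append_migration_record(chain: list[dict], workload_id: str,
                            sender_dc: str, receiver_dc: str, energy_kwh: float) -> dict:
    """Append a migration event whose hash covers the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "workload_id": workload_id,
        "sender_dc": sender_dc,
        "receiver_dc": receiver_dc,
        "energy_kwh": energy_kwh,   # basis for the payment agreed on the marketplace
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return record

ledger: list[dict] = []
append_migration_record(ledger, "vm-17", "DC_1", "DC_2", energy_kwh=12.5)
append_migration_record(ledger, "vm-17", "DC_2", "DC_1", energy_kwh=0.0)  # move back
```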

Regarding the exploitation of this kind of mechanism, we can imagine new business models emerging that mix IT markets with energy markets, where IT workload becomes a commodity just like electricity or heat. For Qarnot computing, the project outcomes will be used for edge purposes, at the building level, either for local smart-grid energy management such as demand response, or for balancing load between edge sites.

Finally, a recent blog article from Google reports that they are investigating techniques similar to the ones being developed in the CATALYST project.

Written by Nicolas Sainthérant
