
KTH trains Deep Learning models on Qarnot platform

by Rémi Bouzel - April 2, 2020 - Testimony

A few months ago, we were contacted by students from KTH Royal Institute of Technology, who asked us to support their autonomous car project. We are glad to support such an innovative project from one of the leading engineering schools in Europe and hope they will win the competition!

You can read their testimony below:

As part of the KTH Formula Student team, we are building a driverless open-wheel car to compete in FSG 2020 (Formula Student Germany). One of the important aspects of building such a car is perceiving the surroundings, e.g. detecting obstacles, people, roads, etc.

For the competition, we needed to detect traffic cones of different colors and sizes. Deep Learning (DL) approaches to perception have seen tremendous success in recent years, so we decided to build our own deep learning solution for detecting cones, based on the YOLOv3 neural network architecture. To perform well, the network has to be trained to reliably detect cones, i.e. their position and color.
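To illustrate what "position and color" means in practice: a detector in the YOLOv3 family typically outputs, per image, a list of bounding boxes, each with a class label and a confidence score. The snippet below is a hypothetical post-processing step, not the team's actual code; the class names and threshold are our assumptions.

```python
# Hypothetical post-processing of YOLO-style detections.
# Each detection: (class_label, confidence, (x, y, w, h)), where (x, y)
# is the box centre and (w, h) its size, all in pixels.
CONE_CLASSES = {"blue_cone", "yellow_cone", "orange_cone"}  # assumed labels

def filter_cones(detections, min_confidence=0.5):
    """Keep only detections that are cones and sufficiently confident."""
    return [
        (label, conf, box)
        for (label, conf, box) in detections
        if label in CONE_CLASSES and conf >= min_confidence
    ]

detections = [
    ("blue_cone", 0.91, (120, 340, 30, 55)),
    ("person", 0.88, (400, 200, 80, 180)),    # not a cone
    ("yellow_cone", 0.42, (510, 360, 28, 50)),  # below threshold
]
print(filter_cones(detections))
# → [('blue_cone', 0.91, (120, 340, 30, 55))]
```

In a real pipeline the raw network output would first be decoded and run through non-maximum suppression before a filter like this is applied.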

The biggest challenge in developing such a solution is access to large-scale compute resources for training; our general-purpose laptops aren't powerful enough. This is where we used the services provided by Qarnot.

Qarnot provided us with advanced computing hardware and plenty of RAM, and enabled us to run our solution in the cloud inside a Docker container. One advantage is that once the system is set up, we don't have to worry about new hardware and drivers. Uploading our data was really smooth with their web interface.

Training a Deep Learning model requires many iterations with different configurations to achieve the best performance. With Qarnot's services, these iterations were quick: we could train our model faster, which saved a lot of time. Additionally, we could access intermediate results and training progress from anywhere, without having to be at a particular physical location, which made training much more convenient.
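The iterate-over-configurations workflow the team describes can be sketched with a toy example. The "model" below is a single parameter fit by gradient descent, not the team's network; the point is only the loop structure: train once per configuration (here, per learning rate), then keep the best result.

```python
# Toy illustration of training one model under several configurations
# (here: learning rates) and keeping the best. The "model" is a single
# parameter w minimising the loss (w - 3)^2; a real run would train a
# network such as YOLOv3 instead, one cloud task per configuration.

def train(learning_rate, steps=100):
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3.0)       # derivative of (w - 3)^2
        w -= learning_rate * grad  # gradient descent step
    return w, (w - 3.0) ** 2       # final weight and final loss

configs = [0.001, 0.01, 0.1]
results = {lr: train(lr) for lr in configs}
best_lr = min(results, key=lambda lr: results[lr][1])
print(best_lr)  # prints the learning rate with the lowest final loss
```

Because the configurations are independent, each one can run as its own cloud task, which is what makes this kind of sweep a good fit for a platform like Qarnot.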

