Ariane Robineau
Head of 3D

Create your own 3D model with photogrammetry

August 13, 2020 - 3D

3D meshing is fascinating. Think of the dozens of hours needed to design 3D characters or assets for the entertainment industry, pushing them so close to life that it is sometimes hard to tell illusion from reality. But think also of architecture and engineering, where 3D models are absolutely necessary to design, plan or analyze the structure of the final product. Archaeology, land surveying, insurance and forensics also have an interest in reproducing reality for conservation, observation or analysis purposes.

What if it were possible to quickly and easily turn real subjects into almost perfect (and, more importantly, accurate) 3D meshes, from nothing more than a set of pictures?

This is what photogrammetry can do with the appropriate equipment!
Did you know that Infinity Ward used this method on Call of Duty: Modern Warfare to model assets such as corpses (which are, in reality, dressed-up developers from the team)?

In this article, we'll introduce photogrammetry and detail the method used to create a 3D model. We'll then show you how to transform an object from your daily life into an impressively realistic 3D model.

All you will need is:

  • a Qarnot account;
  • a camera (the one on your smartphone can be enough);
  • the subject you want to model.


Photogrammetry

History of photogrammetry

Photogrammetry (from the Greek "photo-" (φωτός = light), "-gramma" (γράμμα = recording, drawing, carving) and "metron" (μέτρον = measure)) dates back to the middle of the 19th century. Aimé Laussedat, a pioneer of the discipline, developed it from 1849 onwards, when he first applied it to the façade of the Invalides in Paris. Nadar, the famous French aerial photographer, also understood the value of the technique from his balloon's point of view in the 1860s, and helped develop it further.

Left: Aimé Laussedat. Right: Gaspard-Félix Tournachon, also known as Nadar.


Though it was invented in France, the method became very popular in Germany, where it was quickly developed and industrialized. With the development of aviation between the two world wars, photogrammetry became a key tool for producing accurate maps of geographic areas for both civilian and military applications. With the advent of computer vision, photogrammetry finally evolved into what we know today.

The method

Photogrammetry needs a dataset of pictures of a given subject to be able to create a 3D mesh from it. Knowing the camera's characteristics (especially the details of its optics) can be important in some cases to optimize the computation and to better interpret the pictures (depth, deformations, etc.).

In the Meshroom software, the first step of the photogrammetry process is to apply the SIFT algorithm (Scale-Invariant Feature Transform) to the picture dataset. This algorithm analyses the images and detects common features, or keypoints, that are invariant to rotation, translation and scale. The next step is to find images that look at the same areas of the subject, to optimize the rest of the computation. Then comes the Feature matching step: it takes two images as input and compares them to find common points, iterating over the whole dataset.
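To make this concrete, here is a minimal sketch of keypoint detection and matching using OpenCV's SIFT implementation. This is our own illustration, not Meshroom's internal code, and the image file names are placeholders for two overlapping pictures from your own dataset.

```python
# pip install opencv-python
import cv2

# Load two overlapping pictures of the subject (placeholder file names).
img1 = cv2.imread("photo_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and compute their descriptors in each image.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors between the two images (2 nearest neighbours each),
# then keep only unambiguous matches using Lowe's ratio test.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} reliable matches between the two pictures")

# Draw the surviving matches, as in the figure below.
vis = cv2.drawMatches(img1, kp1, img2, kp2, good, None,
                      flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
cv2.imwrite("matches.jpg", vis)
```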

An example of the feature matching result.

 

Structure from Motion, the following step, is very important: it uses the results of Feature matching to create a 3D point cloud. This is where the camera's optical details matter, as the camera positions are deduced from the dataset, and triangulation between image pairs and cameras is used to find the 3D position of each feature.
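To illustrate the geometry involved, here is a small self-contained sketch, again with OpenCV rather than Meshroom's code: it builds a synthetic two-camera scene, estimates the relative camera pose from the 2D correspondences alone, and triangulates the matches back into 3D. All the intrinsics and poses are made-up values.

```python
# pip install opencv-python numpy
import cv2
import numpy as np

# Synthetic scene: 20 random 3D points seen by two cameras with known
# intrinsics K. In a real SfM pipeline the 2D points would come from
# feature matching, and the relative pose would be unknown.
rng = np.random.default_rng(0)
pts3d = rng.uniform([-1, -1, 4], [1, 1, 6], (20, 3))
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

# Camera 1 at the origin; camera 2 shifted and slightly rotated.
R2, _ = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))
t2 = np.array([[-0.5], [0.0], [0.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R2, t2])

def project(P, X):
    # Project 3D points X (N, 3) with a 3x4 camera matrix P.
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:]

pts1, pts2 = project(P1, pts3d), project(P2, pts3d)

# Estimate the relative pose from the 2D correspondences alone, then
# triangulate the matches back into 3D, as Structure from Motion does.
E, _ = cv2.findEssentialMat(pts1, pts2, K)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
P2_est = K @ np.hstack([R, t])
Xh = cv2.triangulatePoints(P1, P2_est, pts1.T, pts2.T)
X = (Xh[:3] / Xh[3]).T  # back to Euclidean coordinates

# The translation is only recovered up to scale, so the reconstructed
# cloud matches the original shape but not its absolute size.
print("triangulated points (up to scale):\n", X[:3])
```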


Once each feature has been pinpointed and given a set of 3D coordinates, Depth-Map estimation is performed in parallel for all the cameras that were resolved during the Structure from Motion step. This algorithm estimates the distance of the surfaces from the camera, producing a depth map (also called a Z-buffer) in which an estimate of the depth value of each pixel is stored in shades of gray.
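Meshroom computes its depth maps with a multi-view approach on the GPU; as a much simpler illustration of the same idea, here is a sketch using OpenCV's semi-global block-matching stereo algorithm. It assumes you already have a rectified stereo pair saved as left.png and right.png (hypothetical file names).

```python
# pip install opencv-python
import cv2

# Load a rectified stereo pair (placeholder file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# For each pixel, search along the same row of the other image and
# measure the horizontal shift (disparity) of the best match.
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = stereo.compute(left, right).astype("float32") / 16.0  # fixed-point to float

# Disparity is inversely proportional to depth; normalize it to shades
# of gray to obtain an image similar to the depth map shown below.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("depth_map.png", vis.astype("uint8"))
```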

 

A depth map of several 3D objects.

 

The depth maps are filtered and, after the 3D point cloud has been completed with the depth maps' data, a Meshing step is applied to the whole point cloud to create a dense geometric surface representation of the subject. The result is filtered once again and Texturing happens: each pixel is given an average color value computed from the pictures it comes from. UV maps (2D textures) are created from these data, producing patterns that can then be mapped onto the subject for 3D visualization.
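As an illustration of the meshing idea (Meshroom uses its own Delaunay-based meshing, so this is only an analogous sketch), here is how a dense point cloud can be turned into a surface with the Open3D library's Poisson reconstruction; point_cloud.ply is a placeholder file name.

```python
# pip install open3d
import open3d as o3d

# Load a dense point cloud (placeholder file name).
pcd = o3d.io.read_point_cloud("point_cloud.ply")

# Surface reconstruction needs oriented normals at each point.
pcd.estimate_normals()

# Poisson reconstruction fits a smooth surface to the oriented points;
# a higher depth gives a finer mesh at a higher computational cost.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=8)

o3d.io.write_triangle_mesh("mesh.obj", mesh)
print(mesh)  # prints the vertex and triangle counts
```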

 

Meshing and texturing of a little cow. (Credits: metalbyexample)

 

The 3D model can then be cleaned up in a modelling software to remove possible imprecisions, and the model is ready!

You can find a very complete and detailed explanation of the photogrammetry pipeline on the AliceVision website if you want more information on any of the steps mentioned above.

Meshroom on Qarnot tutorial

This is the part where you finally get to use the method! You’ll need to follow a few steps to be able to compute your first 3D model from simple photos.

Creating your own dataset

Let's start with the fun part: the photo shoot! This step is very important, as the quality of your final 3D model directly depends on your dataset's accuracy. You'll need a camera; the one on a smartphone can be enough. The subject you work on should be neither too shiny nor made of glass, as both can be tricky for the software. Choose a place where the light is diffuse, once again to avoid bright spots on the model. When you're all set up, take a series of photos of your subject (a minimum of 40 is a good start, but there is no upper limit), rotating around it and making sure that you capture all the details as well as the overall shape.

This is an example of the way we did it on a 3D-printed model of our computing boiler, the CuteB1 (inspired by the original QB1):

Our photogrammetry setup

If your camera isn’t good enough or if you don’t have the time, our CuteB1 dataset is available here.
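Before launching the computation, it can be worth sanity-checking your dataset. Here is an optional little script (our own suggestion, not part of the official workflow) using the Pillow library to count the pictures and report their resolutions; adjust the path and extension to wherever your photos actually are.

```python
# pip install Pillow
from pathlib import Path
from PIL import Image

# Folder layout used in the next section (hypothetical local path).
dataset = Path("Qarnot_photogrammetry_example/dataset/photogrammetry")

# Count pictures per resolution (adjust "*.jpg" to your file extension).
sizes = {}
for picture in sorted(dataset.glob("*.jpg")):
    with Image.open(picture) as img:
        sizes[img.size] = sizes.get(img.size, 0) + 1

total = sum(sizes.values())
print(f"{total} pictures found (40+ recommended)")
for (w, h), count in sizes.items():
    print(f"  {count} pictures at {w}x{h}")
```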

Using Qarnot to run Meshroom

Once this is done, the next step is to create a Qarnot account. We offer 50 hours of computation on your first subscription, which should be more than enough for this example.

Now that you have everything, let’s start the computation!

  • First, create a Qarnot_photogrammetry_example folder on your computer. Inside it, create a dataset folder that contains a photogrammetry folder, and import all your pictures into the latter. This layout, paired with the following script, will automatically synchronize all of your dataset's pictures with the Qarnot platform.

Once this is done, let’s use the Qarnot Python SDK to launch the distributed calculation.

  • Save the following script as run.py in your Qarnot_photogrammetry_example folder (a commented sketch of such a script is shown after these steps). In this script, you need to enter the Qarnot token linked to your account (you can find it here) to use our platform.

  • In the Qarnot_photogrammetry_example folder, follow these steps to set up a Python virtual environment. Then make the script executable by typing chmod +x run.py in your terminal (under Linux), and run it with ./run.py.
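The original run.py is not reproduced here, but below is a minimal sketch of what such a script can look like with the Qarnot Python SDK. The "meshroom" profile name and the bucket names are assumptions to adapt to your account, and MY_SECRET_TOKEN stands for the token mentioned above.

```python
#!/usr/bin/env python3
# pip install qarnot
import qarnot

# Connect to the Qarnot platform with the token from your account page.
conn = qarnot.Connection(client_token="MY_SECRET_TOKEN")  # paste your token

# Upload the pictures: sync the local dataset folder to an input bucket.
input_bucket = conn.create_bucket("photogrammetry-input")
input_bucket.sync_directory("dataset")

# Bucket that will receive the textures and the .obj model.
output_bucket = conn.create_bucket("photogrammetry-output")

# Create the task; "meshroom" is an assumed profile name, and a single
# instance is enough since this example does not use parallel computing.
task = conn.create_task("meshroom-cuteb1", "meshroom", 1)
task.resources.append(input_bucket)
task.results = output_bucket

# Submit, wait for completion and download the results locally.
task.run(output_dir="results")
print("Task finished with state:", task.state)
```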

You can then see the task's details on Tasq by clicking on your task. When it's finished, the results (a set of textures and a 3D model in the .obj format) are accessible in the bucket tab of Tasq.

To visualize it, you can use one of several 3D visualization programs. We used Blender, as it is open source and performs very well. In Blender, go to File > Import > Wavefront (.obj), select your .obj file, and clean it up a bit if needed.
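If you prefer scripting the import, the same thing can be done from Blender's Python console (a sketch for Blender 2.8x; the file path is a placeholder for your own result):

```python
import bpy

# Import the photogrammetry result (placeholder path to your .obj file).
bpy.ops.import_scene.obj(filepath="results/texturedMesh.obj")

# List what was just imported (imported objects are left selected).
for obj in bpy.context.selected_objects:
    print(obj.name, len(obj.data.vertices), "vertices")
```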
Here is the result!

What's next?

The process followed for this "photogrammetrisation" was very simple and didn't involve parallel computing. Parallelization is something that could be applied to Meshroom, notably at the Depth-Map estimation step. The AliceVision framework is open source and entirely nodal: all the resources are available if you want to give it a try!

We hope you enjoyed this tutorial! Should you have any questions, or if you wish to use our platform for heavier computations (we can provide state-of-the-art resources on demand), don't hesitate to contact us.
