BD: We are here at FMX with the team from Mikros that created AliceVision. A few days ago, they released AliceVision’s main application, Meshroom, a full-featured stand-alone photogrammetry solution. Why are you releasing it as open-source?
Benoit Maujean: Since the beginning of the project in 2010, we have wanted it to be open-source and able to be integrated into other software. We chose the Mozilla Public License v2, so if you make modifications to the public code, you have to share them back. At the same time, we want to maximize partnerships with both academic institutes and industry without any licensing problems.
BD: Your program seems really well suited to educational use: the user has the option of simply pushing a button, but you also literally expose the different steps of the internal workflow through a node graph. This way the user can even optimize the process, for example by using a custom base mesh. Did you have educational usage in mind?
Fabien Castan: Yes, on the website we try to explain the high-level concept behind each step and reference the publications we are using. We are working with four different research labs. Their ambition is to build student projects on top of it: all the relevant building blocks are implemented, and they can combine them to create a specific research environment.
Benoit Maujean: They can even use Matlab to test their algorithms. It might not be optimized, but they can test their algorithms in the context of the global pipeline. They can add their own nodes in Meshroom and see how an algorithm fits with the rest of the pipeline, because everything before and after it stays constant. We are doing that with our PhD student, who is testing algorithms for analysing the materials and lighting of a scene. He is able to test them in the context of the global reconstruction.
Fabien Castan: And that is also the idea of having a tweakable pipeline. Advanced users have access to low-level parameters to see how the program reacts and to analyse it. You can even interact with it directly in the UI and compare results side by side without going to the command line.
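The tweakable node graph described above can be illustrated with a minimal sketch. This is not the actual Meshroom API; the class and step names below are invented for illustration. The idea is that each step declares its parameters, and tweaking one value lets you re-run and compare results while everything before and after stays fixed.

```python
# Hypothetical sketch of a node-graph step with exposed low-level
# parameters -- NOT the real Meshroom API, just an illustration:
# each node declares tweakable parameters and a processing function.

class Node:
    def __init__(self, name, params, func):
        self.name = name
        self.params = dict(params)   # tweakable, as in a UI panel
        self.func = func

    def run(self, data):
        return self.func(data, **self.params)

def run_pipeline(data, nodes):
    # Run each step in order, feeding its output to the next node.
    for node in nodes:
        data = node.run(data)
    return data

# A toy two-step chain standing in for "feature extraction" and
# "matching"; the data here is just a list of scores.
extract = Node("FeatureExtraction", {"threshold": 0.5},
               lambda pts, threshold: [p for p in pts if p >= threshold])
match = Node("FeatureMatching", {"max_pairs": 3},
             lambda feats, max_pairs: feats[:max_pairs])

points = [0.2, 0.9, 0.6, 0.4, 0.8, 0.7]
print(run_pipeline(points, [extract, match]))   # → [0.9, 0.6, 0.8]

# Tweak a single parameter and re-run to compare results directly:
extract.params["threshold"] = 0.75
print(run_pipeline(points, [extract, match]))   # → [0.9, 0.8]
```

The point of the sketch is the workflow, not the algorithms: swapping one parameter re-runs the affected step while the rest of the graph is untouched, which is what makes side-by-side comparison cheap.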
Yann Lanthony: It is important to say that at first we did not have Meshroom as it is now. When we were working on an algorithm and wanted to test the impact of parameters, we had to manually write command lines, make sure to write an output, change parameters, and so on. You get mixed up with all those terminals on your screen and lose track of which one is which. When we had the very first version of Meshroom, which was basically just a graph editor for ourselves, it changed things rapidly. We said goodbye to the terminals. Meshroom is a development tool for us as well.
BD: For a casual user there is just one button, and for experts there is extremely fine-grained control of every step and parameter. TDs and larger studios seem to be in the target market for this tool.
Fabien Castan: We are TDs at Mikros, so that is our everyday job. People in production use the one-button click; some more advanced users tweak things on their own. But when people have trouble with a dataset that does not work as expected, we can analyse the console output and see how Meshroom reacts. That allows us to improve the algorithms: we can see that the problem lies in a certain step and that a parameter is critical, because on this dataset it reacts in a certain way and changes the result. So, for instance, we might then want to compute that setting automatically for this kind of use-case.
Yann Lanthony: A recent example of this is a series of shots of actors’ faces on set. The one-button-click result was not so great because of the subtle movements of the actors’ faces during the shooting, which resulted in offsets during reconstruction. We were able to adjust some parameters to eliminate as many of those outliers as possible by applying advanced denoising.
Benoit Maujean: Regarding the TDs – the fact that it is all Python behind the scenes is very useful. The best example is the integration with render farms. That was one of the first things that was expected, the fact that you can use other workstations around you or the render farm computers to parallelize reconstruction.
Fabien Castan: The submission to render farms can be customized by TDs as well because there is an abstraction layer. We already have people working with us who are using different render farms.
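The abstraction layer mentioned here can be sketched in Python, which the interview notes is what Meshroom runs on behind the scenes. The class names below are invented for illustration, not Meshroom's actual submitter API: the idea is that the pipeline talks to an abstract submitter, and a studio TD subclasses it once for their particular farm.

```python
# Hypothetical sketch of a render-farm abstraction layer -- the class
# names are invented for illustration, not Meshroom's actual API.
# The pipeline only sees the abstract Submitter; TDs plug in their
# own farm backend by implementing submit().

from abc import ABC, abstractmethod

class Submitter(ABC):
    @abstractmethod
    def submit(self, tasks):
        """Schedule the pipeline's tasks; return results or job ids."""

class LocalSubmitter(Submitter):
    """Fallback: run every task sequentially on this workstation."""
    def submit(self, tasks):
        return [task() for task in tasks]

class FakeFarmSubmitter(Submitter):
    """Stand-in for a studio farm: records what would be queued."""
    def __init__(self):
        self.queue = []
    def submit(self, tasks):
        self.queue.extend(tasks)
        return [f"job-{i}" for i in range(len(tasks))]

# Toy tasks standing in for pipeline steps:
tasks = [lambda: "features", lambda: "matching", lambda: "meshing"]
print(LocalSubmitter().submit(tasks))     # → ['features', 'matching', 'meshing']
print(FakeFarmSubmitter().submit(tasks))  # → ['job-0', 'job-1', 'job-2']
```

The design point is that switching from a workstation to a farm is a one-line change of submitter, while the task graph itself stays identical.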
BD: Meshroom currently takes photos as sources, and in the future it will also ingest LIDAR data. How about video?
Yann Lanthony: We currently have a tool that extracts interesting keyframes from videos, such as frames that are not too close together and not too blurry. It is not integrated into Meshroom yet, so for now we run this process in the background; it is ready, but not yet exposed in the high-level UI.
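The two selection criteria just mentioned – frames that are sharp enough and far enough apart – can be sketched in a toy form. This is not the actual AliceVision keyframe tool: each "frame" below is just a 1-D brightness row, sharpness is scored as the variance of its first differences (real implementations score images, e.g. via the variance of the Laplacian), and the thresholds are made up.

```python
# Toy sketch of keyframe selection from a video: keep frames that are
# (a) sharp enough and (b) not too close to the last kept frame.
# Each "frame" is a 1-D brightness row; sharpness is the variance of
# its first differences. Thresholds are invented for illustration.

def sharpness(frame):
    diffs = [b - a for a, b in zip(frame, frame[1:])]
    mean = sum(diffs) / len(diffs)
    return sum((d - mean) ** 2 for d in diffs) / len(diffs)

def select_keyframes(frames, min_sharpness=1.0, min_gap=2):
    keyframes, last_kept = [], -min_gap
    for i, frame in enumerate(frames):
        if i - last_kept >= min_gap and sharpness(frame) >= min_sharpness:
            keyframes.append(i)
            last_kept = i
    return keyframes

frames = [
    [10, 10, 11, 10],    # blurry: almost flat
    [0, 50, 5, 60],      # sharp
    [0, 55, 0, 50],      # sharp, but too close to frame 1
    [20, 20, 21, 20],    # blurry
    [5, 80, 10, 70],     # sharp and far enough from frame 1
]
print(select_keyframes(frames))   # → [1, 4]
```

Blurry frames have small, smooth brightness changes and thus low gradient variance, so they are rejected; the minimum-gap rule then thins out near-duplicate sharp frames.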
BD: In the presentation you showed an example use-case where you first 3D-scanned a movie set and then used that data for the camera solve in a tracking shot. The result was a shot where the camera moved through the point cloud of the previous reconstruction. A process that opens up a lot of possibilities.
Fabien Castan: Yes, that is pretty interesting, but not production-ready yet. Currently, the part that is production-ready is the reconstruction of the movie set. But we are working on both an offline solution for high-quality camera tracking and realtime camera tracking for augmented reality on the shooting set for previz. The offline camera tracking can rely on natural features and markers, but the previz solution requires markers. We have developed a new marker type called CCTag, which is robust to challenging conditions such as motion blur, shallow depth of field, low lighting, and partial occlusion.
BD: You have close partnerships with academia ...
Benoit Maujean: Yes, we are working with people in research labs, but the objective is also to test in a production context – to be robust and usable in the everyday life of movie production.
Fabien Castan: Currently, we have partnerships with academics and hardware manufacturers, but we are also very open and interested in collaborations with other industries. Photogrammetry is not limited to the postproduction area but can be used for many things. It is also the foundation of augmented reality, both in realtime and offline.
This interview was first released for the magazine Digital Production.