
CeRVIM Seminar, Adrien Gruson, April 23, 2026, 10 a.m., PLT-1120

  • Apr. 12
  • 2 min read

Last updated: 5 days ago

Embedding Deep Learning Inside Rendering Pipelines


Assistant Professor

Department of Software Engineering and Information Technology

École de technologie supérieure (ÉTS)

Thursday, April 23, 2026

Time: 10:00 a.m., Room: PLT-1120


Talk abstract:

Light transport simulation is a fundamental class of methods for producing controllable and physically accurate photorealistic images. At their core, these methods simulate the propagation of light through a 3D scene, where each interaction obeys physical laws. However, they are expensive to run and can produce high variance, especially in complex lighting scenarios. Recently, with the rise of machine learning, there has been a push to bypass light transport simulation entirely and directly produce the output image. This comes at the cost of difficult generalization, costly training, and limited control over the output. Instead, in this presentation, I will introduce my current research on preserving the accuracy of light transport algorithms while accelerating them using deep learning. In particular, I will describe neural parameterization methods that enable more adaptive strategies for rendering photorealistic images [Josse et al., SIGGRAPH Asia 2025; Benoist et al., Eurographics 2026]. I will also show that neural primitives, embedded directly inside the rendering pipeline, can make existing non-neural methods more practical while providing guarantees on the output quality [Shah et al., EGSR 2024].
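The trade-off the abstract describes, physically accurate Monte Carlo estimators whose variance depends on how well the sampling strategy matches the integrand, can be sketched with a toy one-dimensional example. This is purely illustrative and not from the talk: the integrand, the sampling density, and all function names are invented here to show why adaptive sampling strategies matter.

```python
import math
import random

def estimate_uniform(f, n):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]
    using uniform samples."""
    return sum(f(random.random()) for _ in range(n)) / n

def estimate_importance(f, n):
    """Importance-sampled estimate using the density p(x) = 2x,
    drawn via inverse-CDF sampling (x = sqrt(u)). This density is a
    better match for integrands concentrated near x = 1."""
    total = 0.0
    for _ in range(n):
        x = math.sqrt(random.random())  # sample from p(x) = 2x
        total += f(x) / (2.0 * x)       # weight each sample by 1 / p(x)
    return total / n

# A peaked integrand, loosely analogous to a sharp lighting feature.
# Its exact integral over [0, 1] is 1/5.
f = lambda x: x ** 4

random.seed(0)
print(estimate_uniform(f, 10_000))     # both converge to ~0.2, but the
print(estimate_importance(f, 10_000))  # importance estimate has lower variance
```

Both estimators are unbiased and converge to the same answer; only the variance differs. Neural parameterizations of the kind the abstract mentions can be viewed as learning such sampling densities adaptively, which preserves the correctness guarantees of the underlying simulation.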


Bio:

Adrien Gruson is an Assistant Professor at École de technologie supérieure (ÉTS) in Montréal, where he leads research on physically based and differentiable rendering. He holds a Ph.D. from Université de Rennes 1 (France, 2015) and completed postdoctoral appointments at the University of Tokyo with Prof. Toshiya Hachisuka and at McGill University with Prof. Derek Nowrouzezahrai. His work focuses on hybrid algorithms that embed data-driven components into principled light transport simulation, as an alternative to end-to-end deep learning approaches. More broadly, his research explores how new scene representations (3D Gaussian splatting, implicit surfaces) and differentiable rendering can be leveraged for practical inverse problems such as monocular face reconstruction as well as for creative tools.



The presentation will be given in English and the slides will be in English.


All are welcome!
