CS&E Colloquium: Mohamed Elgharib

This week's speaker, Mohamed Elgharib (Max Planck Institute for Informatics), will give a talk titled "Neural Reconstruction and Rendering: An Implicit Perspective".

Abstract

Digitising the world around us is of increasing importance, with applications in extended reality, movie and media production, telecommunications, video games, medicine, robotics, and many more. The vast majority of existing works use explicit means of representing scenes, such as meshes and point clouds. While these representations produce very good results, processing them with deep learning still has limitations for 3D reconstruction and rendering. Only recently has a new class of scene representations emerged, known as implicits. Unlike explicits, implicits represent the scene via continuous fields. They are 3D by design and are formulated using neural networks, which makes them very suitable for neural reconstruction and rendering. In this talk, I will discuss implicit scene representations for the important problem of relighting. This is a challenging problem, as it requires careful extraction and manipulation of scene intrinsics. We will show how implicit scene representations bring important benefits to the state of the art, such as producing 3D-consistent relightings. We will also show how the continuous nature of implicits allows editing the full image, such as relighting the full human head including the scalp hair. We believe that implicit scene representations can positively impact neural reconstruction and rendering, moving us a few steps closer to the ultimate goal of fully digitising our world.
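To make the notion of an implicit representation concrete, the sketch below shows a coordinate-based MLP in PyTorch: a continuous field that maps any 3D point to colour and density, in the spirit of NeRF-style neural fields. This is a minimal illustrative example, not the speaker's actual model; all names, layer sizes, and outputs are assumptions.

```python
# Minimal sketch of an implicit scene representation: a coordinate-based MLP
# that maps a continuous 3D point to colour and density (NeRF-style field).
# All sizes and names are illustrative assumptions, not the speaker's model.
import torch
import torch.nn as nn


def positional_encoding(x: torch.Tensor, num_freqs: int = 6) -> torch.Tensor:
    """Map raw coordinates to sin/cos features so the MLP can fit high-frequency detail."""
    freqs = 2.0 ** torch.arange(num_freqs, dtype=x.dtype, device=x.device)
    angles = x[..., None] * freqs                      # (..., 3, num_freqs)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)                   # (..., 3 * 2 * num_freqs)


class ImplicitField(nn.Module):
    """A continuous field: query any 3D point, get back (RGB colour, density)."""

    def __init__(self, num_freqs: int = 6, hidden: int = 128):
        super().__init__()
        in_dim = 3 * 2 * num_freqs
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),      # 3 colour channels + 1 density
        )
        self.num_freqs = num_freqs

    def forward(self, points: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        feats = positional_encoding(points, self.num_freqs)
        out = self.mlp(feats)
        rgb = torch.sigmoid(out[..., :3])      # colour in [0, 1]
        density = torch.relu(out[..., 3:])     # non-negative volume density
        return rgb, density


# Usage: the representation is 3D by design, so any point in space can be queried.
field = ImplicitField()
pts = torch.rand(1024, 3)          # 1024 random points in the unit cube
rgb, density = field(pts)
print(rgb.shape, density.shape)    # torch.Size([1024, 3]) torch.Size([1024, 1])
```

Because the field is a single continuous function of position, edits such as relighting apply everywhere the field is defined rather than only to a fixed set of mesh vertices or points.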


Biography

Mohamed Elgharib is a Research Group Leader at the Max Planck Institute for Informatics. His areas of expertise are computer vision, computer graphics, machine learning, and artificial intelligence. His work focuses on building digital models of our world to enable novel applications in extended reality (VR/AR) and beyond. His topics of interest include 3D scene modeling and reconstruction, deep generative modeling, neural rendering, 3D pose estimation, and relighting. His work usually involves a heavy machine learning and deep learning component through supervised, self-supervised, or unsupervised learning. He has worked with different types of data, including monocular RGB, multiview RGB, audio, depth, and even biologically inspired, neuromorphic sensors such as event cameras. Mohamed Elgharib has co-authored more than 40 peer-reviewed publications, holds three granted US patents, and has collaborated with a wide spectrum of academic and industrial institutes. Some of his publications have been featured in media outlets such as BBC News and MIT News, others have won awards such as the Best Paper Award Honourable Mention at BMVC 2022, and a start-up was largely inspired by one of his publications.

Start date: Monday, April 24, 2023, 11:15 a.m.
End date: Monday, April 24, 2023, 12:15 p.m.