Deep Single-Image Relighting Based on Intrinsic Decomposition

Author: Yixiong Yang

Supervisors: Maria Vanrell & Hassan A. Sial

Presentation time: 9:15

Virtual Room: 3.2

Abstract:

Scene relighting from a single image aims to generate a new image of the same scene under a different target light provided as input. In this project, we explore different image-to-image neural networks and physical properties that can be used to solve single-image relighting. We study how network training is affected by three physical constraints: (a) the intrinsic decomposition of the input image into shading and reflectance; (b) the reflectance consistency of a set of images of the same scene under different light conditions; and (c) the explicit estimation of the input light properties. All the proposed architectures have been trained on a new version of the synthetic SID dataset. The quantitative evaluation does not yet allow us to state clear conclusions, since we need further analysis of the results and tests on additional datasets. However, from a qualitative point of view our approach shows very promising results, suggesting that adding physical constraints makes the networks outperform a single encoder-decoder architecture.
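Constraints (a) and (b) above can be expressed as training losses. The following is a minimal NumPy sketch under the usual Lambertian assumption (image = reflectance × shading, element-wise); the function names are illustrative and not taken from the project's code:

```python
import numpy as np

def intrinsic_reconstruction_loss(image, reflectance, shading):
    # Constraint (a): the predicted reflectance and shading should
    # recompose the input image under the Lambertian model I = R * S.
    recon = reflectance * shading  # element-wise product, shading broadcast over channels
    return float(np.mean((image - recon) ** 2))

def reflectance_consistency_loss(reflectances):
    # Constraint (b): reflectance is a light-invariant scene property,
    # so predictions across relit versions of one scene should agree.
    # `reflectances` has shape (num_lights, H, W, 3).
    mean_r = np.mean(reflectances, axis=0)
    return float(np.mean((reflectances - mean_r) ** 2))

# Toy check: a perfectly decomposed image gives zero reconstruction loss.
rng = np.random.default_rng(0)
R = rng.random((4, 4, 3))   # reflectance (per-channel)
S = rng.random((4, 4, 1))   # shading (single channel, broadcast)
I = R * S
print(intrinsic_reconstruction_loss(I, R, S))          # 0.0
print(reflectance_consistency_loss(np.stack([R, R])))  # 0.0
```

In practice both terms would be added, with weighting factors, to the network's main relighting loss.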
