Janis Postels

I am a PhD candidate in the Computer Vision Lab at ETH Zurich under the supervision of Luc Van Gool (ETH Zurich) and Federico Tombari (Google).

My research primarily focuses on uncertainty quantification in deep neural networks. I have also worked on neural compression algorithms and generative models for various data modalities.

Email  /  CV  /  Google Scholar  /  Twitter  /  Github  /  LinkedIn

profile photo
Publications

Implicit Neural Representations for Image Compression
Yannick Struempler, Janis Postels, Ren Yang, Luc Van Gool, Federico Tombari
arXiv, 2021
paper

We propose a neural image compression algorithm that leverages implicit neural representations and meta-learning.

Go with the Flows: Mixtures of Normalizing Flows for Point Cloud Generation and Reconstruction
Janis Postels, Mengya Liu, Riccardo Spezialetti, Luc Van Gool, Federico Tombari
International Conference on 3D Vision, 2021
paper
code

We mitigate drawbacks of prior normalizing-flow-based generative models for point clouds by introducing a mixture of normalizing flows.

The OOD Blind Spot of Unsupervised Anomaly Detection
Matthäus Heer, Janis Postels, Xiaoran Chen, Ender Konukoglu, Shadi Albarqouni
Medical Imaging with Deep Learning, 2021
paper

We demonstrate the vulnerability of recent approaches based on VAEs for unsupervised anomaly detection in medical images.

On the Practicality of Deterministic Epistemic Uncertainty
Janis Postels, Mattia Segu, Tao Sun, Luc Van Gool, Fisher Yu, Federico Tombari
arXiv, 2021
paper

We show that the uncertainty predicted by a recent family of uncertainty estimation methods, which treat the weights of a neural network deterministically, is poorly calibrated.

Variational Transformer Networks for Layout Generation
Diego Martin Arroyo, Janis Postels, Federico Tombari
Conference on Computer Vision and Pattern Recognition, 2021
paper

We introduce a generative model for layouts based on VAEs and attention layers and demonstrate its strong inductive bias.

The Hidden Uncertainty in a Neural Network's Activations
Janis Postels, Hermann Blum, Yannick Strümpler, Cesar Cadena, Roland Siegwart, Luc Van Gool, Federico Tombari
arXiv, 2020
paper

We demonstrate that the uncertainty of a neural network can be quantified using the distribution of its hidden representations.

Sampling-free Epistemic Uncertainty Estimation Using Approximated Variance Propagation
Janis Postels, Francesco Ferroni, Huseyin Coskun, Nassir Navab, Federico Tombari
International Conference on Computer Vision (ORAL), 2019
paper
code

We propose a method to estimate the uncertainty of Bayesian neural networks in a single forward pass by applying error propagation.

Link to Template