I am a PhD student in machine learning under the supervision of Max Welling at the QUVA lab of the University of Amsterdam, and a Research Associate at Qualcomm AI Research.
Previously, I obtained master's degrees in theoretical physics at the University of Cambridge and in AI at the University of Amsterdam. I wrote my thesis on Causal Imitation Learning as a visiting scholar at the Robotic AI and Learning Lab at UC Berkeley, supervised by Sergey Levine.
My research interests are in deep learning with strong inductive priors.
Some of these priors are about symmetries. I’ve built neural networks that perform message passing on meshes, tackling the gauge symmetry of choosing how to orient the convolutional kernel. This work has been applied to modelling blood flow through arteries. I’ve also worked on the symmetries of graphs and of lattice quantum field theories.
Other priors are about the causal structure of the data. Some problems in imitation learning and reinforcement learning cannot be solved even with infinite data, but only by incorporating prior knowledge of the causal structure. Sometimes this involves learning causal representations of variables and mechanisms from data.
I’m a strong believer that the machine learning community can greatly benefit from better languages for designing and communicating the priors in its networks. Applied category theory can be that language: its abstractions can create the necessary cohesion between the diverse priors used in the field and provide inspiration for where to go next. I’ve written about how it can be used to generalize equivariance in geometric deep learning. If you’re curious, I’ve co-organized a course on the topic.
Mathis Gerdes*, Pim de Haan*, Corrado Rainone, Roberto Bondesan, Miranda CN Cheng:
Learning Lattice Quantum Field Theories with Equivariant Continuous Flows
Under review [arxiv]
Johann Brehmer*, Pim de Haan*, Phillip Lippe, Taco Cohen:
Weakly supervised causal representation learning
NeurIPS 2022 [arxiv]
Pim de Haan*, Maurice Weiler*, Taco Cohen, Max Welling:
Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric graphs
ICLR 2021 (spotlight) [openreview]
Julian Suk, Pim de Haan, Phillip Lippe, Christoph Brune, Jelmer M. Wolterink:
Mesh convolutional neural networks for wall shear stress estimation in 3D artery models
STACOM 2021 [arxiv]
Pim de Haan, Taco Cohen, Max Welling:
Natural Graph Networks
NeurIPS 2020 [arxiv]
Pim de Haan, Dinesh Jayaraman, Sergey Levine:
Causal Confusion in Imitation Learning
NeurIPS 2019 (oral, 0.5% of submissions) [arxiv]
Luca Falorsi, Pim de Haan, Tim R. Davidson, Patrick Forré:
Reparameterizing Distributions on Lie Groups
AISTATS 2019 (oral) [arxiv]
Pim de Haan*, Luca Falorsi*:
Topological Constraints on Homeomorphic Auto-Encoding
NeurIPS 2018 workshop on Integration of Deep Learning Theories [arxiv]
Luca Falorsi*, Pim de Haan*, Tim R. Davidson*, Nicola De Cao, Maurice Weiler, Patrick Forré, Taco S. Cohen:
Explorations in Homeomorphic Variational Auto-Encoding
ICML 2018 workshop on Theoretical Foundations and Applications of Deep Generative Models [arxiv]