I am a PhD student in machine learning under the supervision of Max Welling at the QUVA lab of the University of Amsterdam, as well as a Research Associate at Qualcomm AI Research.
Previously, I obtained master's degrees in theoretical physics at Cambridge and in AI at Amsterdam. I wrote my thesis on Causal Imitation Learning as a visiting scholar at the Robotic AI and Learning Lab at UC Berkeley, supervised by Sergey Levine.
Have a look at my Resume.
My research interests are in deep learning with strong inductive priors.
Some of these priors are about symmetries. I’ve built neural networks that perform message passing on meshes, tackling the gauge symmetry that arises from the choice of how to orient the convolutional kernel. This work has been applied to model blood flow through arteries. Furthermore, I’ve worked on the symmetries of graphs and of lattice quantum field theory.
Other priors are about the causal structure of the data. Some problems in imitation learning and reinforcement learning cannot be solved even with infinite data, but only by incorporating prior knowledge of the causal structure. Sometimes this involves learning a causal representation of variables and mechanisms from data.
I’m a strong believer that the machine learning community can greatly benefit from better languages for designing and communicating the priors in its networks. Applied category theory can be that language: its abstractions can create the necessary cohesion between all the diverse priors used in the field and provide inspiration for where to go next. I’ve written about how it can be used to generalize equivariance in geometric deep learning. If you’re curious, I’ve co-organized a course on the topic.