Weiyang Liu
72076 Tübingen
Germany
I am a machine learning researcher at the Max Planck Institute for Intelligent Systems, Tübingen. During my PhD, I was fortunate to be supervised by Adrian Weller and Bernhard Schölkopf. As a member of the advising team at MeshCapade, I also work closely with Michael J. Black. Previously, I spent wonderful times at Georgia Tech, Google Brain, Nvidia Research, and MERL.
I work primarily on principled modeling of inductive bias in learning algorithms. My research seeks to understand how inductive bias affects generalization, and to develop "light-yet-sweet" learning algorithms: (i) light: conceptually simple in methodology and easy to implement in practice, (ii) sweet: having clear intuitions and non-trivial theoretical guarantees.
Over the years, I have always found myself fascinated by geometric invariance, symmetry, and structure (causality, discrete knowledge), and by how they can serve as guiding principles for generalization. Recently, I have begun rethinking inductive bias in the era of foundation models and have developed a deep interest in large language models and generative modeling across visual, textual, and physical domains. More specifically, my current research focuses on (i) developing principled adaptation algorithms for foundation models, and (ii) understanding how LLMs perform reasoning and eliciting that reasoning in formal/verifiable scenarios (math/symbolic reasoning).
I hold two principles in my research: (i) insight must precede application, and (ii) everything should be made as simple as possible, but not simpler.
Note: I have openings for (remote) interns and PhD students (2026 intake). Feel free to reach out to discuss.