Jonas Geiping

Independent Research Group Leader, ELLIS Institute & MPI-IS


Hi, I’m Jonas. I am starting as a research group leader in Tübingen, where I’m building a group for safety- and efficiency-aligned learning (🦭). Before this, I spent time at the University of Maryland and the University of Siegen.

I am mostly interested in questions of safety and efficiency in modern machine learning. These topics raise a number of fundamental machine learning questions that we still do not understand well – such as the principles of data poisoning, the subtleties of watermarking for generative models, privacy questions in federated learning, or adversarial attacks against large language models. Can we ever make these models “safe”? Is every language model API an invitation for the user to jailbreak it and do what they want?

Further, I am interested in questions about the efficiency of modern AI systems, especially large language models. How efficient can we make these systems? Can we train strong models with little compute? Can we extend the capabilities of language models with recursive computation? How do these questions impact the safety of these models?

In short:

  • Safety, Security and Privacy in Machine Learning
  • Efficient Machine Learning (especially for NLP)
  • Trustworthy AI
  • Deep Learning as-a-Science

Incoming PhD Students:

If you are interested in these topics, feel free to reach out for more information! I’m currently hiring through several PhD programs.

For more information, be sure to check out the openings page.

Selected Publications


  1. Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models
    Liam H. Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojciech Czaja, Micah Goldblum, and Tom Goldstein
    In International Conference on Learning Representations, Feb 2023
  2. Cramming: Training a Language Model on a Single GPU in One Day
    Jonas Geiping, and Tom Goldstein
    In Proceedings of the 40th International Conference on Machine Learning, Jul 2023
  3. A Watermark for Large Language Models
    John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, and Tom Goldstein
    In Proceedings of the 40th International Conference on Machine Learning, Jul 2023
  4. Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models
    Gowthami Somepalli, Vasu Singla, Micah Goldblum, Jonas Geiping, and Tom Goldstein
    In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Jul 2023
  5. Tree-Ring Watermarks: Fingerprints for Diffusion Images That Are Invisible and Robust
    Yuxin Wen, John Kirchenbauer, Jonas Geiping, and Tom Goldstein
    arXiv:2305.20030 [cs], May 2023