Jonas Geiping

Independent Research Group Leader ELLIS Institute & MPI-IS


Tübingen, Germany

ELLIS Institute

Maria-von-Linden Straße 2

Hi, I’m Jonas. I am an ML researcher in Tübingen, where I lead the research group for safety- & efficiency-aligned learning (🦭). Before this, I spent time at the University of Maryland and the University of Siegen.

I am constantly fascinated by questions of safety and efficiency in modern machine learning. These topics raise a number of fundamental machine learning questions that we still do not understand well. In safety, examples include the principles of data poisoning, the subtleties of watermarking for generative models, privacy in federated learning, and adversarial attacks against large language models. Or, more generally: Can we ever make these models “safe”, and how do we define this? Are there feasible technical solutions that reduce harm?

Further, I am interested in the efficiency of modern AI systems, especially large language models. How efficient can we make these systems? Can we train strong models with little compute? Can we extend the capabilities of language models with recursive computation? How do efficiency modifications impact the safety of these models, and vice versa?

In short:

  • Safety, Security and Privacy in Machine Learning
  • Efficient Machine Learning (especially in NLP)
  • Trustworthy AI
  • Deep Learning as-a-Science

Incoming PhD Students:

If you are interested in these topics, feel free to reach out for more information! I’m admitting students on a yearly basis through the following PhD programs:

For more details, make sure to read the openings page carefully.

Selected Publications

2024

  1. Coercing LLMs to Do and Reveal (Almost) Anything
    Jonas Geiping, Alex Stein, Manli Shu, Khalid Saifullah, Yuxin Wen, and Tom Goldstein
    arxiv:2402.14020[cs], Feb 2024
  2. Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text
    Abhimanyu Hans, Avi Schwarzschild, Valeriia Cherepanova, Hamid Kazemi, Aniruddha Saha, Micah Goldblum, Jonas Geiping, and Tom Goldstein
    In Proceedings of the Forty-first International Conference on Machine Learning, Jul 2024
  3. Transformers Can Do Arithmetic with the Right Embeddings
    Sean Michael McLeish, Arpit Bansal, Alex Stein, Neel Jain, John Kirchenbauer, Brian R. Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, Jonas Geiping, Avi Schwarzschild, and 1 more author
    In The Thirty-eighth Annual Conference on Neural Information Processing Systems, Sep 2024

2023

  1. Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models
    Liam H. Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojciech Czaja, Micah Goldblum, and Tom Goldstein
    In International Conference on Learning Representations, Feb 2023
  2. Cramming: Training a Language Model on a Single GPU in One Day
    Jonas Geiping, and Tom Goldstein
    In Proceedings of the 40th International Conference on Machine Learning, Jul 2023
  3. A Watermark for Large Language Models
    John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, and Tom Goldstein
    In Proceedings of the 40th International Conference on Machine Learning, Jul 2023