Bethge Lab AI Research Group
The Bethge Lab is an AI research group at the University of Tübingen focused on advancing agentic, lifelong learning systems inspired by human cognition. The group emphasizes open-ended, data-centric machine learning, scalable compositional learning, and multi-modal foundation models that enable rapid retrieval, reuse, and integration of knowledge for flexible, scalable learning.
Overview
- Mission: Develop autonomous, adaptable AI systems capable of knowledge acquisition, planning, and reflection, mirroring the open-ended nature of human learning.
- Core philosophy: Open-ended evaluation, scalable compositional learning, and data-centric approaches to machine learning.
- Research scope: From foundational model evaluation to mechanistic interpretability and human–machine collaboration, with an emphasis on generalization over time and across tasks.
Research Focus Areas
- Open-ended model evaluation & benchmarking
  - Post-dataset era evaluation: models operate on evolving data/tasks, with considerations for safety, domain contamination, and compute costs.
  - Lifelong/continuous benchmarking: tools and concepts for transparent, scalable evaluation and continual scientific model building (see the sketch after this list).
- Language Model Agents
  - Autonomous thinking, communication, and reasoning systems.
  - Applications include theorem proving, automated scientific discovery, and web information aggregation for near-term predictions under uncertainty.
- Lifelong compositional, scalable, and object-centric learning
  - Reusability of past experiences for future tasks.
  - Compositional representations and object-centric perception as building blocks for scalable lifelong learning.
  - Development of benchmarks and methods that merge compositionality with practical lifelong learning.
- Modeling brain representations & mechanistic interpretability
  - ML models for neural data analysis to understand distributed processing in neural populations.
  - Digital twins and detail-on-demand models of brain areas (retina, visual cortex).
  - Tools for interpreting, comparing, and understanding representations and computations in neural networks.
- Attention in Humans and Machines
  - Studying human attention to improve attention mechanisms in ML.
  - Benchmarking across modalities (image, video) for saliency, scanpath prediction, and eye movements in VR.
- AI sciencepreneurship and startups
  - Exploring scalable, economically feasible AI solutions with real-world impact.
  - Collaboration with startups (e.g., Maddox AI, Black Forest Labs).
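To make the lifelong/continuous benchmarking idea above concrete, here is a minimal, hypothetical sketch (not Bethge Lab code): a toy harness that re-evaluates a model on every task seen so far whenever a new task arrives, so older tasks are never retired and scores stay comparable over time. All names (Task, evaluate, lifelong_benchmark, model_fn) are illustrative placeholders.

```python
# Toy illustration of a "lifelong benchmark" loop; all names are placeholders.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Task:
    name: str
    examples: List[Tuple[str, str]]  # (input, expected output) pairs

def evaluate(model_fn: Callable[[str], str], task: Task) -> float:
    """Fraction of a task's examples the model answers correctly."""
    correct = sum(model_fn(x) == y for x, y in task.examples)
    return correct / len(task.examples)

def lifelong_benchmark(model_fn: Callable[[str], str],
                       task_stream: List[Task]) -> Dict[str, List[float]]:
    """Each time a new task arrives, re-run all tasks seen so far,
    producing a score history per task (old tasks are never retired)."""
    history: Dict[str, List[float]] = {}
    seen: List[Task] = []
    for new_task in task_stream:
        seen.append(new_task)
        for task in seen:
            history.setdefault(task.name, []).append(evaluate(model_fn, task))
    return history

if __name__ == "__main__":
    tasks = [
        Task("copy", [("a", "a"), ("b", "b")]),
        Task("upper", [("a", "A"), ("b", "B")]),
    ]
    # A trivial "identity" model: perfect on the copy task, fails the upper task.
    print(lifelong_benchmark(lambda x: x, tasks))
```

The per-task score histories returned by such a loop are what make evaluation transparent over time: regressions on earlier tasks remain visible as new tasks are added.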
Broader Context & Partnerships
- Collaboration with researchers such as Felix Wichmann, Alexander Mathis, Ralf Engbert, and Christoph Teufel.
- Affiliation with ELLIS (European Laboratory for Learning and Intelligent Systems).
- Engagement with education and outreach initiatives through BWKI (Bundeswettbewerb für Künstliche Intelligenz) and IT4Kids.
Examples of Work & Impact
- Open-ended benchmarking concepts enable transparent evaluation across evolving tasks.
- Development of language model agents for advanced reasoning and web-based information synthesis.
- Exploration of lifelong, object-centric learning to build scalable AI that can accumulate knowledge over time.
- Mechanistic interpretability research toward understanding how neural populations compute and learn.
How to Engage or Learn More
- Explore published work and ongoing projects under the Bethge Lab umbrella.
- Follow collaborative opportunities with industry partners and academic collaborators.
- Participate in, or learn from, interdisciplinary efforts bridging neuroscience, cognitive science, and machine learning.
Features and Capabilities
- Open-ended evaluation and scalable lifelong benchmarking for AI systems
- Multi-modal foundation models supporting rapid retrieval, reuse, and compositional integration of knowledge
- Language model agents capable of autonomous reasoning, theorem proving, and web-based information synthesis
- Lifelong, compositional, and object-centric learning frameworks
- Mechanistic interpretability and neural data analysis tools for understanding brain-like computations
- Attention modeling in humans and machines to improve ML attention mechanisms
- AI sciencepreneurship and startup collaborations
- Partnerships with research networks and outreach initiatives (ELLIS, BWKI, IT4Kids) to broaden impact