Gregor Betz

Professor of Philosophy of Science

Karlsruhe Institute of Technology, DebateLab

Biography

Gregor Betz is professor of philosophy of science at the Karlsruhe Institute of Technology. He has been studying the limits of scientific prediction, especially in economics and climate science, the role of values in science, the requirements of democratic scientific policy advice, and the ethics of climate engineering. He has developed a formal theory and computational models of argumentative debate, and has applied these methods to clarify key concepts in epistemology, to interpret classical texts, to assess the consensus- and truth-conduciveness of debate, and to improve the teaching of critical thinking.

For several years now, Gregor has been pursuing computational philosophy projects at the intersection of NLP and AI, working with so-called large language models.

In 2023, Gregor founded Logikon AI, a startup that applies critical thinking methods to improve generative AI.

Check our blog for updates about recent computational philosophy projects.

Interests
  • Artificial Intelligence
  • Philosophy of Science
  • Computational Philosophy
  • Argumentation
  • Democracy
Education
  • M.A. Philosophy, Political Science, Mathematics, 2002

    Freie Universität Berlin

  • Dr. phil. (Ph.D.) Philosophy, 2004

    Freie Universität Berlin

  • Dr. habil. (Habilitation) Philosophy, 2008

    Freie Universität Berlin

AI Highlights

Pioneering work on Chain of Thought (previously termed “Thinking Aloud”), later picked up by Wei et al., who demonstrated its effectiveness with GPT-3. ↪︎

Critical Thinking for Large Language Models: Some of the first systematic studies of LLMs’ argumentation skills, demonstrating that LLMs can learn and extrapolate inference schemes, and that they can be turned into multi-step meta-reasoners that logically analyse argumentation. ↪︎ ↪︎

LLM-based multi-agent simulations (the first of their kind according to this review), showing that LLM-based agents can be placed in conversational settings to study models of multi-agent debate and natural language opinion dynamics. ↪︎

Design of a fallacy detection task that has been accepted into the BIG-Bench Hard (BBH) evaluation suite and remains difficult for SOTA LLMs to solve.

Bridging the gap between AI and formal epistemology by showing that continuous reflective equilibration allows LLMs to become epistemic agents that hold logically and probabilistically coherent beliefs and can consistently learn from novel evidence. ↪︎ ↪︎

Featured Projects

  • DebateLab [link]
  • Argdown [link]
  • Making Reflective Equilibrium Precise (SNF-DFG) [link]
  • Argumentation at School (DFG Network Fund) [link]

Contact

Karlsruher Institut für Technologie

Institut für Philosophie

Building 09.20

Douglasstraße 24

76133 Karlsruhe

Germany