Gregor Betz is professor of philosophy of science at the Karlsruhe Institute of Technology. Since the release of GPT-2 in 2019, he has been pursuing computational philosophy projects at the intersection of NLP and AI that involve large language models. (See highlights below.)
Before turning to GenAI, Gregor studied the limits of scientific prediction, especially in economics and climate science, the role of values in science, requirements of democratic scientific policy advice, and the ethics of climate engineering. He has developed a formal theory and computational models of argumentative debate and applied these methods to clarify key concepts in epistemology, to interpret classical texts, to assess the consensus- and truth-conduciveness of debate, and to improve the teaching of critical thinking.
Moreover, in 2023 Gregor founded Logikon AI, a startup that applies critical thinking methods to improve generative AI.
Check our blog for updates about recent computational philosophy projects.
M.A. Philosophy, Political Science, Mathematics, 2002
Freie Universität Berlin
Dr. phil. (Ph.D.) Philosophy, 2004
Freie Universität Berlin
Dr. habil. (Habilitation) Philosophy, 2008
Freie Universität Berlin
Pioneering work on Chain of Thought — previously termed “Thinking Aloud” — that was later picked up by Wei et al., who demonstrated its effectiveness with GPT-3. ↪︎
Critical Thinking for Large Language Models: Some of the first systematic studies of LLMs’ argumentation skills, demonstrating that LLMs can learn and extrapolate inference schemes and can be turned into multi-step meta-reasoners that logically analyse argumentation. ↪︎ ↪︎
LLM-based multi-agent simulations (the first of their kind according to this review), showing that LLM-based agents can be placed in conversational settings to study models of multi-agent debate and natural language opinion dynamics. ↪︎
Design of a fallacy detection task that has been accepted into the BIG-Bench Hard (BBH) evaluation suite and remains difficult for SOTA LLMs.
Bridging the gap between AI and formal epistemology by showing that continuous reflective equilibration allows LLMs to become epistemic agents that hold logically & probabilistically coherent beliefs and can consistently learn from novel evidence. ↪︎ ↪︎
Descartes' »Meditationen über die Grundlagen der Philosophie«. Ein systematischer Kommentar. Stuttgart: Reclam 2011. [Amazon]
Debate Dynamics: How Controversy Improves Our Beliefs. Synthese Library. Dordrecht: Springer 2012. [SpringerLink] [Illustrative movie]
“Are climate models credible worlds? Prospects and limitations of possibilistic climate prediction”, European Journal for Philosophy of Science 5, 2015, 191–215. [link]
“Truth in evidence and truth in arguments without logical omniscience”, British Journal for the Philosophy of Science 67, 2016, 1117–1137. [link]
“Critical Thinking for Language Models”, with Kyle Richardson and Christian Voigt, in: Proceedings of the 14th International Conference on Computational Semantics, 2021. [link]
Argumentationsanalyse. Stuttgart: Metzler 2020. [Amazon] [Website]
See the full list of publications.
Karlsruher Institut für Technologie
Institut für Philosophie
Bldg. 09.20
Douglasstraße 24
76133 Karlsruhe
Germany