Large Language Models and Theory of Mind: An Analysis with Indirect Speech Acts
Agnese Lombardi
Alessandro Lenci

Abstract
Large Language Models (LLMs) have shown strong performance on natural language tasks, sparking claims that they may possess Theory-of-Mind (ToM) abilities. Since ToM is often considered essential for pragmatic understanding, tasks involving Indirect Speech Acts (ISAs)—which require interpreting context and implied meaning—are used to assess it. However, some argue that LLMs might succeed in these tasks through pattern recognition or idiomatic knowledge rather than genuine mentalizing. To probe this, the authors introduce a new benchmark based on False-Belief Tasks, designed to test whether LLMs (and humans) adjust their interpretations according to the speaker’s beliefs, thereby revealing whether LLMs’ success reflects true ToM capabilities or merely surface-level associations.
Date
Feb 21, 2024 12:00 AM
Event
Th-XPRAG – IGG49 Pre-conference Workshop
Location
IUSS Pavia, Linguistics Department