In machine learning, the term stochastic parrot is a disparaging metaphor, coined by Emily M. Bender and colleagues in 2021, that frames large language models (LLMs) as systems that statistically mimic text without genuine understanding.
Subsequent research and expert commentary, including large-scale benchmark studies and analysis by Geoffrey Hinton, have challenged this metaphor by documenting emergent reasoning and problem-solving abilities in modern LLMs.
In their paper, Bender et al. argue that LLMs probabilistically link words and sentences together without regard to meaning, and are therefore mere "stochastic parrots". According to the machine learning professionals Lindholm, Wahlström, Lindsten, and Schön, the analogy highlights two vital limitations:

1. LLMs are limited by the data they are trained on and are simply stochastically repeating the contents of their datasets.
2. Because their outputs are only recombinations of training data, LLMs cannot tell whether what they are saying is incorrect or inappropriate.
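The purely statistical generation the metaphor targets can be illustrated with a toy model. The sketch below is a minimal bigram Markov chain, not anything from Bender et al.'s paper or an actual LLM architecture; the function names and toy corpus are illustrative. It produces locally plausible word sequences from co-occurrence counts alone, with no representation of meaning, which is the "parroting" the analogy points to (modern LLMs are vastly more sophisticated neural predictors, which is the crux of the dispute below).

```python
import random
from collections import defaultdict

def train_bigram_model(corpus: str) -> dict[str, list[str]]:
    """Record, for each word, every word that follows it in the corpus."""
    words = corpus.split()
    successors = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        successors[current_word].append(next_word)
    return successors

def generate(successors: dict[str, list[str]], seed: str, length: int = 10) -> str:
    """Sample a sequence by repeatedly drawing a random successor.

    The output reflects only co-occurrence statistics in the training
    text; nothing about meaning or truth is modeled.
    """
    word, output = seed, [seed]
    for _ in range(length):
        candidates = successors.get(word)
        if not candidates:  # dead end: this word never appears mid-corpus
            break
        word = random.choice(candidates)  # repetition acts as frequency weighting
        output.append(word)
    return " ".join(output)

corpus = "the parrot repeats the words the parrot has heard before"
model = train_bigram_model(corpus)
print(generate(model, seed="the"))  # e.g. "the parrot repeats the words the parrot has heard before"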
Critics have countered that this argument rests on two common logical fallacies: (1) a straw man that portrays all language-model research as naïve word-string mimicry, and (2) a false dichotomy that casts systems as either stochastic parrots or fully sentient AIs, ignoring evidence of intermediate reasoning capabilities.
Lindholm et al. noted that, with poor-quality datasets and other limitations, a learning machine might produce results that are "dangerously wrong".