AI “Hallucinations” as a New Form of Epistemic Mistake
DOI: https://doi.org/10.47850/RL.2025.6.4.93-98

Keywords: AI hallucinations, epistemic error, stochastic parrot, theory of judgment, epistemic responsibility, philosophy of artificial intelligence

Abstract
This article proposes reinterpreting hallucinations in language models not merely as a byproduct of AI’s statistical nature, but as a novel form of epistemic error: an active process that generates false beliefs disguised as reliable knowledge. The author critiques the dominant “stochastic parrot” metaphor, which reduces AI output to passive repetition of data devoid of truth claims, and demonstrates that AI outputs function as judgments, i.e., as sources of knowledge. Recognizing hallucinations as errors shifts the focus from narrow technical fixes to questions of epistemic responsibility, structural flaws in knowledge production, and the social consequences of trusting machine-generated “judgments.”
References
Kant, I. (1964). Collected Works: In 6 vols. Vol. 3: Critique of Pure Reason. Moscow: Mysl'.
Barassi, V. (2024). Toward a Theory of AI Errors: Making Sense of Hallucinations, Catastrophic Failures, and the Fallacy of Generative AI. [Online]. Harvard Data Science Review (Special Issue 5). Available at: https://www.researchgate.net/publication/383732733_Toward_a_Theory_of_AI_Errors_Making_Sense_of_Hallucinations_Catastrophic_Failures_and_the_Fallacy_of_Generative_AI (Accessed: 10.10.2025).
Bender, E. M., Gebru, T., McMillan-Major, A., Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), 3-10 March 2021, Virtual Event, Canada. New York, NY: ACM. Pp. 610-623. DOI: 10.1145/3442188.3445922.
Dennett, D. C. (1987). The Intentional Stance. Cambridge, MA: MIT Press.
Mackie, J. L. (1977). Ethics: Inventing Right and Wrong. London: Penguin.
Matilal, B. K. (1986). Perception: An Essay on Classical Indian Theories of Knowledge. Oxford: Clarendon Press.
Šekrst, K. (2024). Chinese Chat Room: AI Hallucinations, Epistemology and Cognition. Studies in Logic, Grammar and Rhetoric. University of Białystok. Vol. 69. No. 1. Pp. 365-381. DOI: 10.2478/slgr-2024-0029.
Herrmann, D. A., Levinstein, B. A. (2025). Standards for Belief Representations in LLMs. Philosophy and Machine Learning. Vol. 35. No. 5. DOI: 10.48550/arXiv.2405.21030.
Wachter, S., Mittelstadt, B., Russell, C. (2024). Epistemic Challenges of Large Language Models. AI & Society. Vol. 39. No. 1. Pp. 55-72.
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.