AI “Hallucinations” as a New Form of Epistemic Mistake

Authors

  • Aleksandr Shevchenko, Institute of Philosophy and Law SB RAS (Novosibirsk)

DOI:

https://doi.org/10.47850/RL.2025.6.4.93-98

Keywords:

AI hallucinations, epistemic error, stochastic parrot, theory of judgment, epistemic responsibility, philosophy of artificial intelligence

Abstract

This article proposes reinterpreting hallucinations in language models not merely as a byproduct of AI’s statistical nature, but as a novel form of epistemic error: an active process that generates false beliefs disguised as reliable knowledge. The author critiques the dominant “stochastic parrot” metaphor, which reduces AI output to passive repetition of data devoid of truth claims, and demonstrates that AI outputs are functionally equivalent to judgments as sources of knowledge. Recognizing hallucinations as errors shifts the focus from narrow technical fixes to questions of epistemic responsibility, structural flaws in knowledge production, and the social consequences of trusting machine-generated “judgments.”

Author Biography

Aleksandr Shevchenko, Institute of Philosophy and Law SB RAS (Novosibirsk)

Doctor of Philosophical Sciences, Leading Researcher

References

Kant, I. (1964). Collected Works: in 6 vols. Vol. 3: Critique of Pure Reason. Moscow: Mysl'.

Barassi, V. (2024). Toward a Theory of AI Errors: Making Sense of Hallucinations, Catastrophic Failures, and the Fallacy of Generative AI. [Online]. Harvard Data Science Review, (Special Issue 5). Available at: https://www.researchgate.net/publication/383732733_Toward_a_Theory_of_AI_Errors_Making_Sense_of_Hallucinations_Catastrophic_Failures_and_the_Fallacy_of_Generative_AI (Accessed: 10.10.2025).

Bender, E. M., Gebru, T., McMillan-Major, A., Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), 3-10 March 2021, Virtual Event, Canada. ACM, New York, NY, USA. Pp. 610-623. DOI: 10.1145/3442188.3445922.

Dennett, D. C. (1987). The Intentional Stance. Cambridge, MA: MIT Press.

Mackie, J. L. (1977). Ethics: Inventing Right and Wrong. London: Penguin.

Matilal, B. K. (1986). Perception: An Essay on Classical Indian Theories of Knowledge. Oxford: Clarendon Press.

Šekrst, K. (2024). Chinese Chat Room: AI Hallucinations, Epistemology and Cognition. Studies in Logic, Grammar and Rhetoric. University of Białystok. Vol. 69. No. 1. Pp. 365-381. DOI: 10.2478/slgr-2024-0029.

Herrmann, D. A., Levinstein, B. A. (2025). Standards for Belief Representations in LLMs. Philosophy and Machine Learning. Vol. 35. No. 5. DOI: 10.48550/arXiv.2405.21030.

Wachter, S., Mittelstadt, B., Russell, C. (2024). Epistemic Challenges of Large Language Models. AI & Society. Vol. 39. No. 1. Pp. 55-72.

Published

2025-12-22

How to Cite

Shevchenko A. A. (2025). AI “Hallucinations” as a New Form of Epistemic Mistake. Respublica Literaria, 6(4), 93–98. https://doi.org/10.47850/RL.2025.6.4.93-98