Tredinnick, Luke and Laybats, Claire (2025) Epistemic decay: generative artificial intelligence and the recombination of culture. Business Information Review, 42 (3). pp. 148-151. ISSN 1741-6450
Generative Artificial Intelligence is a transformational technology that augurs profound socio-cultural change on a scale that may ultimately surpass the impact of the Internet and the World Wide Web. But although offering clear benefits and opportunities, its rise has also been met with anxiety about its near- and long-term effects. We have previously addressed in Business Information Review, for example, the impact of generative technologies on professional roles (Tredinnick, 2017) and the ethical implications of artificial intelligence (Laybats and Tredinnick, 2024). There has also been widespread alarm at the growing use of AI in the creative industries (Amankwah-Amoah et al., 2024; Bender, 2025), particularly advertising, publishing and the media. In addition, apocalyptic fears attend the prospect of a coming technological singularity, the point at which machines will surpass human intelligence, initiating a snowball effect of ever-increasing machine capabilities and, ultimately, dominance (Shanahan, 2015).
Some of these perceived risks are no doubt overstated; while significant challenges and some structural transformation will accompany the wider use of generative technologies, there will also be new opportunities and emerging markets. However, one potential risk has garnered less attention despite being perhaps the most immediate of them all. Generative artificial intelligence may be contributing to a gradual erosion of the epistemic foundations of our technologically and scientifically dependent culture. This possibility arises not from its apparent ability to create new knowledge, nor from the quality and reliability of the outputs it produces, but from the ways in which generative applications have become implicated in a progressive recirculation of material culture. Successive generations of generative technologies may bring improved accuracy and fewer hallucinations, but these iterative improvements may have little or no impact on the problem of epistemic decay. This editorial explores the profound threat posed by generative artificial intelligence to our long-term understanding of what we believe we know, and what steps we can take to mitigate those risks.