Introduction
As artificial intelligence (AI) systems advance at unprecedented speed, a troubling paradox emerges: while machines become more capable, the average human engagement with knowledge, critical thinking, and digital literacy is arguably stagnating—or worse, regressing. This blog post addresses the disjunction between the rise of AI and what we might provocatively call the “rise of stupidity,” arguing that humanity’s potential will be squandered unless we collectively embrace a culture of co-learning with AI, rather than outsourcing intellect to it.
The Rise of AI
From deep learning breakthroughs in image recognition (Krizhevsky et al., 2012) to the emergence of large language models like GPT-4 (OpenAI, 2023), AI is becoming a general-purpose technology—a "GPT" in the economic sense (Brynjolfsson et al., 2018)—with transformative potential across sectors. Its capacity for pattern recognition, reasoning, and synthesis is no longer confined to narrow tasks; AI is now co-authoring scientific papers (Nature, 2023), diagnosing rare diseases (Topol, 2019), and writing code with surprising accuracy (Chen et al., 2021).
Yet, as AI systems grow more autonomous, their users—human beings—are often engaging with them passively. Instead of using these tools to augment cognition, many rely on them in a way that displaces thinking altogether. This is not an issue with the technology itself, but with its adoption and the surrounding educational culture.
The Rise of “Stupidity”
“Stupidity” in this context is not a moral failing but a systemic one. It manifests in poor media literacy, the decline of reflective thought, and a preference for convenience over understanding. Sparrow et al. (2011) found that people remember information less well when they expect it to remain available through search engines, a phenomenon dubbed the “Google effect” or digital amnesia. With LLMs, this cognitive offloading may be amplified: users may trust authoritative-sounding text regardless of its factual correctness, sliding into epistemic complacency (Bender et al., 2021).
Furthermore, social media platforms—algorithmically optimized for engagement, not accuracy—have produced echo chambers that reward emotional rather than analytical responses (Pariser, 2011). When such trends converge with AI tools that can generate plausible but fabricated content, the result is not collective intelligence, but epistemic inflation: more information, less understanding.
AI as a Mirror and a Mentor
Humans will not become more intelligent simply by coexisting with AI. Intelligence is not absorbed by proximity; it must be developed through interaction. To thrive, we must adopt a mindset of co-learning—treating AI not as an oracle, but as a partner in reasoning. This is especially true in education, where AI can scaffold critical thinking if used correctly (Luckin et al., 2016), or undermine it if used merely to complete tasks faster.
In their 2020 book, Holmes et al. observed that students who engaged with intelligent tutoring systems reflectively, questioning feedback and asking why, showed greater learning gains than those who followed AI-generated prompts blindly. Similarly, in data science and medicine, practitioners who treat AI as a second opinion rather than a substitute tend to make more accurate decisions (Rajpurkar et al., 2017).
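To make the "second opinion, not substitute" pattern concrete, here is a minimal sketch of a triage workflow. All names, labels, and the confidence threshold are hypothetical; the point is only the structure: the human judges first, and the AI's output changes the workflow solely by flagging disagreement for closer review, never by overruling the human automatically.

```python
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    human_label: str    # the practitioner's own judgment, made first
    ai_label: str       # the model's suggestion, consulted second
    ai_confidence: float

def triage(case: Case, threshold: float = 0.9) -> str:
    """Route a case based on whether the AI second opinion agrees."""
    if case.ai_label == case.human_label:
        return "accept"        # two independent opinions agree
    if case.ai_confidence >= threshold:
        return "review"        # confident disagreement: take another look
    return "accept_human"      # low-confidence dissent: human judgment stands

print(triage(Case("c1", "pneumonia", "pneumonia", 0.95)))  # accept
print(triage(Case("c2", "normal", "pneumonia", 0.97)))     # review
print(triage(Case("c3", "normal", "pneumonia", 0.40)))     # accept_human
```

The design choice is the asymmetry: agreement and low-confidence dissent both leave the human in charge, and the model's only power is to prompt reflection. That is the co-learning stance in miniature.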
(R)Evolution or Extinction of Thought?
The danger of widespread cognitive outsourcing is not merely a loss of skill, but a redefinition of agency. If we rely on AI to make judgments we no longer understand, we abdicate responsibility. As philosopher Luciano Floridi warns, AI ethics is not just about what machines do, but what they make us become (Floridi et al., 2018). If AI rises while public reasoning falls, we may soon have a population governed by systems it cannot interrogate—a digital feudalism where knowledge is centralized and opaque.
Conclusion
The future is not AI versus human intelligence, nor AI instead of human intelligence, but AI with human intelligence. To resist the drift into passive stupidity, we must embed AI literacy in education, cultivate skepticism over blind trust, and normalize dialogue with machines as a cognitive process, not a shortcut.
AI alone won’t make us smarter.
Only smarter use of AI will.
Only by learning with AI—through collaboration, critique, and curiosity—can we avoid the dystopia of brilliant machines serving disengaged minds. The rise of AI can be the rise of human intelligence, but only if we choose it to be.
References
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623).
Brynjolfsson, E., Rock, D., & Syverson, C. (2018). Artificial intelligence and the modern productivity paradox: A clash of expectations and statistics. NBER Macroeconomics Annual, 33(1), 1–63.
Chen, M., Tworek, J., Jun, H., Yuan, Q., de Oliveira Pinto, H. P., Kaplan, J., ... & Zaremba, W. (2021). Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28, 689–707.
Holmes, W., Bialik, M., & Fadel, C. (2020). Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Center for Curriculum Redesign.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 1097–1105.
Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson Education.
OpenAI. (2023). GPT-4 technical report. arXiv preprint arXiv:2303.08774.
Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. Penguin UK.
Rajpurkar, P., Irvin, J., Zhu, K., Yang, B., Mehta, H., Duan, T., ... & Ng, A. Y. (2017). CheXNet: Radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv preprint arXiv:1711.05225.
Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776–778.
Topol, E. (2019). Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books.