By R. Esi Asante (PhD)
Many people have developed the habit of relying on generative artificial intelligence (AI) to do almost everything for them.
The overreliance on AI for creative work, education, academic tasks and business activities, rather than on critical thinking, is worrying, especially looking to the future. It has become an addiction for some, and many are surrendering their critical thinking abilities.
For instance, people rely on AI advice for financially risky and ethically sensitive decisions, which sometimes leads to undesired results. This is especially prevalent in scenarios where the advice contradicts available information and participants’ own convictions (Klingbeil et al., 2024).
Students at all levels of education, as well as the current corporate workforce, have developed the habit of relying on AI tools, from search engines to, more recently, ChatGPT with its seemingly limitless capabilities, to do most things: preparing assignments, essays, proposals, emails and projects, and even making important life decisions.
It is becoming unusual to rely on one’s own cognitive faculties to navigate the world. According to Dedyukhina (2025), people find themselves progressively handing more work to generative AI, believing that it saves time and makes them smarter. This is evident in the automation of work processes and procedures, which admittedly has its benefits.
With AI’s ability to assist with a wide range of tasks, such as writing, research, data-driven decision-making, and data analysis, researchers find themselves uploading almost everything to AI to do the work. In the corporate and business context, AI has taken over the analysis of complex texts, among other tasks.
This overreliance on AI is alarming. Its tendency to make people cognitively lazy and dependent on AI for everything is disturbing, given that our intelligence quotient (IQ) and cognitive and critical thinking skills, such as memory, attention, and problem-solving, are suffering.
It is evident that generative AI is becoming more intelligent, mimicking human intelligence and, according to some research, surpassing it in certain tasks.
Thomas (2024) observed that as AI robots become smarter and more dexterous, the same tasks will require fewer humans. While AI is estimated to create 97 million new jobs by 2025, many employees will not have the skills these technical roles require and could be left behind if companies do not upskill their workforces. It is also important to recognise that the use of AI is not without risks.
Recent research reveals a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading.
Younger participants exhibited higher dependence on AI tools and lower critical thinking scores than older participants (Gerlich, 2025). Another researcher puts it this way: the more we use generative AI, the more we outsource our cognitive abilities to it. In other words, we use it as a brain crutch, at the cost of no longer being able to process information ourselves (Dedyukhina, 2025). This article takes a critical look at the cost of using AI tools to the detriment of human faculties, the consequences of overreliance, and recommendations to reduce the tendency.
Critical Thinking and Cognitive Offloading
Critical thinking is the ability to analyse, evaluate, and synthesise information to make informed decisions (Halpern, 2010). It involves thinking clearly and rationally, understanding logical connections between ideas, evaluating arguments, and identifying inconsistencies in reasoning (Ennis, 1987).
It is vital for scholarly achievement, professional competence, and informed citizenship; its components, including problem-solving, decision-making, and reflective thinking, are essential for thriving in multifaceted and dynamic settings (Halpern, 2010).
Cognitive offloading involves the use of external tools and agents to reduce cognitive load. It improves efficiency by freeing up cognitive resources.
In some cases, however, extensive reliance on external tools, particularly AI, may reduce the need for deep cognitive involvement, potentially affecting critical thinking (Risko and Gilbert, 2016).
While it reduces mental strain, cognitive offloading can impair cognitive development and critical thinking (Fisher, 2011), leading to a decline in internal cognitive abilities.
Findings by Gerlich (2025) generally show that frequent use of AI is negatively correlated with critical thinking skills, that is, the ability to analyze, evaluate, and synthesize information to make reasoned decisions. Frequent users tend to focus on where to find information rather than on the information itself, which worsens memory retention, problem-solving ability, and independent decision-making.
Younger users (17-25 years) have been found to be more susceptible: they use AI more frequently and score lower on critical thinking.
Dedyukhina (2025) contended that the factors predicting cognitive decline, in order of importance, include the frequency of AI tool use (the more use, the higher the risk) and users’ education level (the higher the level, the lower the risk). Other factors include a lack of deep-thinking activities; reliance on AI for decisions, where individuals ask AI about everything; and attitudes, for example the belief that AI saves time, which makes people more likely to use it.
Thus, at a time when cognitive abilities are critically needed to stay relevant, we are unlearning how to think. Are we actually outsourcing our cognitive abilities to AI? The consequences may be dire in the near future.
The Cost and Consequences of Overreliance on AI for the Future
Studies have shown that relying on AI for advice, decisions and suggestions can damage the human mindset. As critical thinking is lost to overreliance, AI will begin to shape the way humans think and live their lives, and even to predict outcomes (Kalezix, 2024).
It is a fact that AI offers opportunities and the potential for a new technological revolution; self-learning computer algorithms, together with the generative AI tools widely available since 2022, have brought a new wave of digitalisation and a reliance on AI for everything.
AI Influences Cognitive Abilities
The analytical dimension of critical thinking involves breaking down complex information into simpler components to understand it better.
AI tools such as data analytics software and machine learning algorithms can enhance analytical capabilities by processing vast amounts of data and identifying patterns that might be difficult for humans to detect (Bennett and McWhorter, 2021).
However, there is a risk that over-reliance on AI for analysis may undermine the development of human analytical skills. Individuals who depend too heavily on AI to perform analytical tasks may become less proficient at engaging in deep, independent analysis.
Cognitive Offloading
Cognitive offloading through AI tools involves delegating tasks such as memory retention, decision-making, and information retrieval to external systems. This can enhance cognitive capacity by allowing individuals to focus on more complex and creative activities.
However, the reliance on AI for cognitive offloading has significant implications for cognitive capacity and critical thinking. This may lead to a reduction in cognitive effort, fostering what some researchers refer to as ‘cognitive laziness’ (Carr, 2010).
It could also lead to a decline in individuals’ ability to perform these tasks independently, potentially reducing cognitive resilience and flexibility over time (Sparrow and Wegner, 2011), and could erode essential cognitive skills such as memory retention, analytical thinking, and problem-solving.
Reduced Problem-Solving Skills
Recent studies highlight the growing concern that while AI tools can significantly reduce cognitive load, they may also hinder the development of critical thinking skills (Zhai et al., 2024). Krullaars et al. (2023) reported that overreliance on AI tools for academic tasks led to reduced problem-solving skills, with students demonstrating lower engagement in independent cognitive processing.
These findings underscore the need for a balanced approach to AI integration in educational contexts, ensuring that cognitive offloading does not come at the expense of critical thinking development. While AI tools can improve basic skill acquisition, they may not foster the deep analytical thinking required for applying these skills in novel or complex situations (Firth et al., 2019).
Over-reliance on AI for learning can hinder the development of critical thinking skills, as students become less adept at engaging in independent thought.
Intelligent tutoring systems (ITSs), which simulate one-on-one tutoring through AI algorithms, have been shown to improve learning outcomes, particularly in STEM fields (Koedinger and Corbett, 2006).
However, these systems may contribute to cognitive offloading, where students rely on the system to guide their learning rather than engaging actively with the material.
Loss of Human Influence
In some parts of society, overreliance on AI could result in the loss of human influence and functioning. AI is now used in every sector; in health care, for instance, its use could result in reduced human empathy and reasoning.
Routine use of AI for creative endeavours, for example, could also diminish human creativity and emotional expression, while excessive interaction with AI systems could reduce peer communication and social skills, effects already evident in many families (Thomas, 2024).
Available studies on over-reliance on AI dialogue systems report trends where users accept generated outputs, including AI hallucinations, without validation. This overdependence is exacerbated by cognitive biases, in which judgments deviate from rationality, and by mental shortcuts, leading to uncritical acceptance of AI-generated information (Gao et al., 2022; Grassini, 2023).
Additionally, most AI systems are trained on databases with inherent prejudices, causing users to regard biased outputs as objective. This leads to misplaced trust in AI, which tends to tilt analyses and interpretations, further entrenching the biases (Xie et al., 2021).
Over-reliance on unverified AI outputs can cause misclassification and misinterpretation, posing a significant risk that can culminate in research misconduct, including plagiarism, fabrication, and falsification.
Dempere et al. (2023) highlighted the risks associated with embedding AI dialogue systems in higher education, such as privacy violations and illegal data use.
Generally, the dangers of artificial intelligence include automation-spurred job loss, deepfakes, privacy violations, algorithmic biases caused by bad data, socioeconomic inequality, market volatility, weapons automation and uncontrollable self-aware AI. There is also a lack of transparency and explainability, social manipulation through algorithms, social surveillance with AI technology, and the weakening of ethics and goodwill because of AI (Thomas, 2024).
The late Pope Francis warned against the potential misuse of AI in his message for World Peace Day, noting that it could “create statements that at first glance appear plausible but are unfounded or betray biases.” He stressed how this could bolster campaigns of disinformation, distrust in communications media and interference in elections, ultimately increasing the risk of “fueling conflicts and hindering peace” (Pope Francis, January 2024).
Recommendations to Reduce the Tendency
The results by Gerlich (2025) highlight the potential cognitive costs of reliance on AI tools, emphasising the need for educational strategies that promote critical engagement with AI technologies. The danger is that AI and robots are learning very fast, are getting smarter than humans, and may develop adverse motives, seeking to take control of humans.
According to Thomas (2024), this is no longer science fiction but a serious problem that could arrive soon, and it is important for organisations to begin to think about what to do. Researchers have suggested the following ways to mitigate the risks of AI generally: developing legal regulations; establishing organisational AI standards and discussions on monitoring algorithms; guiding technology with humanities perspectives; and reducing biases.
When we begin to think that ChatGPT can do the work for us and rely on it solely, we become cognitively lazy and, with time, may lose our cognitive abilities just when they are most needed.
This poses a critical risk to the human race and could cause workforce disruption as the critical skills needed for the future workforce disappear. There is a need to keep human cognitive abilities intact rather than outsource them to technology, and to take deliberate action to reduce, and perhaps curb, these tendencies.