Artificial Intelligence (AI) is rapidly changing our world and will continue to do so. For us as teachers, AI offers new tools to enhance our teaching and professional development. More importantly, though, AI has significant and far-reaching implications for what and how we should teach, and we must reflect on these implications. As we enter a new industrial revolution, how can we equip the next generation with the discernment and skill required to navigate and contribute to this new world?
Understanding AI’s emergence
Artificial Intelligence is simply the development of computer systems to perform tasks we usually associate with human intelligence. For example, human intelligence enables us to solve problems, understand language and speech, learn from patterns in data, make decisions, understand what we perceive with our senses, and so on. A human looks at the world around them and can easily make observations such as “the blue car is driving down the road, while a pedestrian is waiting to cross at the traffic light”. Such a seemingly simple observation is not an easy task for a computing device such as a computer or a phone.
When we endow a computing device with the ability to perform such tasks through software and systems we develop, we describe this as “artificial intelligence” or AI. AI is embedded in many of the systems we use every day.
For example, if we use Google Maps or a similar tool to find directions to a location, chat online with the customer service system of our telephone provider, use a tool that translates text from one language to another, or if our phone or word-processing application autocorrects our writing or predicts text for us to type, we are interacting with AI technology.
Recently, AI has gone through a resurgence, driven by rapid progress on problems long considered extremely difficult. Among the developments affecting us most directly are "large language models", which have enabled chatbots (for example, ChatGPT or Google Bard) that can answer questions and hold highly realistic, often very useful conversations with humans. More broadly, this type of AI falls into the category of "Generative AI", or AI that can produce high-quality content such as text, graphics and video. This new wave presents significant opportunities for enhancing the way we teach and learn.
For example, I recently asked Google Bard, "how can I help students develop an intuitive understanding of homogeneous transform matrices?" I got some very useful suggestions for better teaching this mathematical concept with applications in computer graphics and robotics. With image-generation tools, we can generate content that makes our teaching materials more visually engaging.
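To give a sense of the concept itself, here is a minimal sketch (my own illustration, not output from any AI tool) of a 2D homogeneous transform in Python with NumPy: a single 3x3 matrix that combines a rotation and a translation and is applied to a point expressed in homogeneous coordinates.

```python
import numpy as np

def homogeneous_transform_2d(theta, tx, ty):
    """Build a 3x3 homogeneous transform: rotate by theta (radians), then translate by (tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [c, -s, tx],
        [s,  c, ty],
        [0,  0,  1],
    ])

# Rotate a point 90 degrees about the origin, then shift it 2 units along x.
T = homogeneous_transform_2d(np.pi / 2, 2.0, 0.0)
point = np.array([1.0, 0.0, 1.0])   # (x, y) written in homogeneous coordinates
print(T @ point)                    # -> approximately [2., 1., 1.]
```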
When we teach online, we can also benefit from AI tools that can track the progress of individual students, providing automated grading, feedback and assessment. Such tools are beginning to emerge in well-known learning management systems.
AI tools can also help students learn. I recently asked Google Bard to compute the linear velocity of a wheel of radius 5cm rotating at three revolutions per second and to write a computer program to print all odd numbers from 1 to 100. It executed both tasks excellently and even gave a step-by-step explanation of its solution to the former problem, which would be very helpful for a student striving to understand a concept.
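For readers curious what solutions to these two tasks look like, here is a short sketch of the standard approach (my own, not Bard's actual output): the wheel's linear velocity follows from v = r × ω, and the odd numbers can be printed with a simple loop.

```python
import math

# Linear velocity of a wheel: v = r * omega, with omega in radians per second.
radius_m = 0.05                     # 5 cm expressed in metres
omega = 3 * 2 * math.pi             # 3 revolutions per second -> rad/s
print(f"v = {radius_m * omega:.3f} m/s")   # roughly 0.942 m/s

# Print all odd numbers from 1 to 100.
for n in range(1, 101, 2):
    print(n)
```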
Bringing AI awareness to our classroom
However, it is essential to understand that AI tools, as with all technology, are imperfect. In this age of AI, it is more important than ever to help students develop their skills in critical thinking, discernment, fact-checking and cross-checking information. AI tools can generate very plausible-sounding content which may not necessarily be accurate. For example, when I asked Google Bard to give me a list of recent textbooks on "autonomous robots", it gave me a list of great-sounding books by authors known to be experts in the field. However, when I searched for those books, I found that several did not exist.
AI can also give biased information based on the data that it is trained with. When I asked an image generation tool to generate images of “a doctor in Ghana” it created pictures of only male doctors. In generating an image of “a nurse in Ghana”, all the images it created contained only female nurses.
And when AI works well, there may still be concerns about its use. I asked Canva, which has a generative AI tool, to show an image of “floods inside the Cape Coast Castle in Ghana”, and it generated a very realistic image which was completely fake. This would be a concern if I wanted to use that image to spread false information.
Furthermore, when I asked Google Bard to "write an essay analysing the current economic situation in Ghana", it came up with a 783-word essay with good content. If a student submits such an essay as their own work for an assignment, this would clearly be cheating, and the grade received would not reflect the student's abilities but rather the capabilities of an AI system.
Against this backdrop, these new developments in AI, though exciting, can also be challenging and worrying for teachers and educational institutions. How do we know that our students are doing the work themselves and hence learning, rather than simply relying on content generated by AI tools? Are those of us who consume content from AI tools, including teachers, applying our critical thinking skills? Do we believe information just as it is shared by an AI tool? If students see a video or an image of something, do they automatically believe what they see or hear, or do they know how to look for cues that may help them understand whether something is real or an AI-generated fake?
If students simply use AI tools as a shortcut to complete assignments and tasks, this severely limits their learning and ultimately creates larger problems in their work when they graduate. As teachers, we know that every assignment we give students has learning objectives, and they only achieve them by doing the work and going through the process themselves. Just as we would not want them to simply copy a solution from a classmate, neither do we want them to copy a solution from an AI tool.
Being able to understand what they are being taught, as an outcome, enables students to apply acquired knowledge in many ways within and beyond their discipline. This is why this new age requires us to go back to basics, think carefully about what problems students may face as they engage the world, and design our teaching material and projects to help them develop the skills and knowledge to tackle these problems meaningfully.
Being explicit with students about what they are learning and why, and discussing with them how to use the tools to enhance rather than defeat the learning objectives, helps motivate them to be more intentional about their education. It also enables us to emphasise the ethical posture we are helping students develop as we establish the current and future implications of taking credit for work they have not done.
Complementing this approach with learning assessments designed to have students demonstrate both skill and understanding – such as project-based assessments with demonstrations, oral exams and exams that probe understanding – helps refocus students on what they are learning rather than what AI tools can produce.
Preparing students for the world of AI
Most importantly, we must prepare our students for a world where AI is prevalent. AI is already being applied with impressive results in domains as varied as agriculture, medicine, and manufacturing, enhancing the efforts of professionals in these fields. Such developments are only going to accelerate.
In a world in which AI is prevalent, our students must have the knowledge and skills to participate not only as consumers or users of this technology but as creators and shapers of it, applying it in creative ways to address some of the most pressing problems facing society, and helping our leaders formulate the relevant policies to guard against potentially harmful applications. That means our students need much more awareness of computer science and its subdisciplines of machine learning and AI, and we need to develop a critical mass of innovators with expertise in these areas.
At Ashesi, we continue to work to set an example for this. We are taking steps each year to help our students increase their awareness, introducing new courses and programmes focused on these critical skill areas.
Our third-decade strategic plan includes the rollout of a new Master’s programme in Intelligent Computing Systems that will bring advanced machine learning and AI education to students in the programme. We also encourage and support students to pursue research in these areas, contributing locally relevant knowledge to the world’s understanding of how AI can be leveraged to address challenging problems.
However, as we teach students how to harness this new world’s technology, we must also intentionally help them develop an understanding of what it means to be human. They must be able to enhance and apply those human qualities, such as empathy, critical thinking, ethics, common sense and creativity, that are not easily replicated by artificial intelligence. We must train our students to always ask good, probing questions, to think for themselves and to investigate the facts of a matter.
We must help them become more culturally aware and understand their roles and responsibilities within their different communities. We must help them value ethics and understand the harm that could come from using AI tools in unethical ways. Educating their minds and hearts can help our students confidently navigate this new world and create new tools that advance economic and social well-being in Africa and the world.
The author is Head of Computer Science and Information Systems at Ashesi University, where she teaches Artificial Intelligence, Robotics, Data Structures, Algorithm Design, and Programming. Watch her speak about how Ashesi is contributing to AI research and education in Africa here: https://ashe.si/atindaba2023