Educators’ worry about AI boils down to the concept of the “Goldilocks zone.” A learning task should be neither too challenging nor too simplistic, but just right, fitting within the learner’s zone of proximal development. It is something the learner can at first solve only with help, but eventually internalizes and can solve on their own. The concern is that AI, in its current form, might be overstepping this boundary, solving problems on behalf of learners instead of challenging and guiding them. It is like the rookie teacher who keeps solving problems for students and rewriting their papers, and then wonders why they have not learned anything. I want to acknowledge that this concern is insightful and grounded in both theory and the everyday practice of teachers. However, the response to it is not simple. AI cannot be dismissed or banned on the basis of this critique alone.
First, there is the question of which skills are truly worth learning. This is the most profound, fundamental question of all curriculum design. We know that certain basic procedural skills fall out of use, and learners leapfrog them to free up time for more advanced skills. Dividing long numbers by hand, for example, used to be a critical procedural skill; given the ubiquity of calculators, it is no longer worth the time. There is a legitimate, and sometimes passionate, debate over whether the mechanics of writing is such a basic procedural skill that can be delegated to machines. I do not want to prejudge the outcome of this debate, although I am personally leaning towards a “yes,” assuming that people will never go back to fully manual writing. The real answer will probably be more complicated: it is likely that SOME kinds of procedural knowledge will remain fundamental and others will not, and we simply do not have enough empirical data to make that call yet. A similar debate is whether the ability to manually search and summarize research databases is still a foundational skill, or whether we can trust AI to do that work for us. (I am old enough to remember professors insisting students go to the physical library and look through physical journals.) This debate is complicated by the fact that AI engineers are still struggling to solve the hallucination problem. There is also a whole separate debate on authorship that is not specific to education but affects us as well. The first approach, then, is to rethink what is worth teaching and learning, and perhaps focus on skills that humans are really good at and AI is not. In other words, we reconstruct the “Goldilocks zone” for a different skill set.
The second approach centers on the calibration of AI responses. This is not widely implemented yet, but the potential exists. Imagine an AI that acts not as a ready-solution provider but as a coach, presenting tasks calibrated to the learner’s individual skill level. It is something like an AI engine with training wheels, both limiting the engine and enabling the user to grow. This approach would require educational AI modules programmed to adjust to each user’s level. Item Response Theory in psychometrics can guide us in building such models, although I am not aware of any robust working implementation yet. Once the Custom GPT feature starts working better, it is only a matter of time before creative teachers build many such models.
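To make the calibration idea concrete, here is a minimal sketch of how Item Response Theory could drive task selection. It uses the standard two-parameter logistic (2PL) model; the item pool, its parameters, and the 70% target success rate are illustrative assumptions of mine, not part of any existing product:

```python
import math

def p_correct(theta, a, b):
    """2PL IRT model: probability that a learner with ability theta
    answers an item with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def pick_next_task(theta, items, target=0.7):
    """Pick the task whose predicted success rate is closest to the
    target: hard enough to stretch the learner, easy enough to succeed.
    This is the 'Goldilocks zone' expressed as a number."""
    return min(items, key=lambda it: abs(p_correct(theta, it["a"], it["b"]) - target))

# Hypothetical item pool with hand-picked (a, b) parameters.
items = [
    {"name": "recall a fact",  "a": 1.2, "b": -1.0},
    {"name": "apply a rule",   "a": 1.0, "b": 0.0},
    {"name": "synthesize",     "a": 0.8, "b": 1.5},
]

# For a learner of modest ability, the tutor proposes an easier task.
print(pick_next_task(0.2, items)["name"])
```

In a real system, the ability estimate `theta` would itself be updated after every response (as adaptive tests do), so the "training wheels" loosen as the learner grows.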
Both approaches underscore the importance of not dismissing AI's role in education but rather fine-tuning it to enhance learning. AI is here to stay, and rather than fearing its overreach, we should harness its capabilities to foster more advanced thinking skills.
These are conversations we cannot shy away from. It is important to apply some sort of theoretical framework to this debate, so it does not deteriorate into a shouting match of opinions. Vygotskian, Brunerian, or any other framework will do. Vygotsky was especially interested in the use of tools in learning, and AI is just a new kind of tool. Tools are not all created equal, and some are better than others for education. The ultimate question is what kind of learning tool AI is, and whether we should adjust the learning, adjust the tool, or do both.
AI in Society
This blog is connected to my role as the head of the National Institute on AI in Society at California State University Sacramento. However, the opinions and positions expressed here are mine and do not represent the university's opinions or positions.