AI in The Future of Learning

Will there be a digital Aristotle in your laptop?

Aristotle’s Metaphysics begins: “All men by nature desire to know.” Curiosity, and the satisfaction we feel from learning, are core to humanity. AI is tantalizing, bursting with new capabilities and hyped to solve every human need. The world’s pace of change is accelerating, and every person needs to learn better and faster. What roles can AI play? Global productivity growth and the self-esteem of billions depend on enabling continuous learning for everyone, everywhere.

Charting trends in AI, it is likely that this rapidly growing technology will increasingly help us meet the fundamental needs of learners and instructors. AI has tremendous potential to democratize access to an exceptional education.

Learners (that’s all of us): AI can soon provide essential advances for in-person and remote learning.

Instructors: you will not be replaced by a cyber-teacher for a very long time, if ever.

Entrepreneurs: opportunities abound if you have the discipline to focus on meeting the needs of Learners and Instructors. The pursuit of a perfect cybernetic tutor, a digital Aristotle, is an unbounded problem offering continuous chances for innovation.  

All solutions start with understanding the problem to solve. What is Learning? 

What is Learning? 

There are fundamental traits common to all successful human learning. The renowned neuroscientist Stanislas Dehaene detailed recent studies of the human brain in his 2019 book, “How We Learn.” They support many of the empirical concepts of learning developed by educators over the past century. Dehaene describes four pillars of successful learning. Professional educators are very familiar with them:

1. Attention to salient information: The Learner can attend to the job of learning.  

2. Active Mental/Emotional Engagement: The Learner isn’t going through the motions, she is excited, engrossed, and experimenting with the content. Master Learners take risks. Children are frequently Master Learners.

3. Prompt Error Feedback: Quick, constructive, and relevant coaching that rewards risk-taking.

4. Consolidation: Performing meaningful, correct practice. We may not have realized it in secondary school, but there was a purpose behind all of that homework.

Attending, engaging, and practicing are hard work! How do the most successful learners do it? In interviews with expert instructors in diverse settings (1), another pillar appears. It is a mindset that supports our native passion for learning with both the belief that learning can happen and the perseverance to make it happen. Carol Dweck’s theory of the Growth Mindset and Angela Duckworth’s concept of Grit explain this trait of keen Learners. Cultivating it requires a gifted Instructor.

The connection between Learner and Instructor shapes each of us (parents are Instructors, after all). Learners and Instructors share a symbiotic relationship. Tools for learning, including AIs, must serve both. The experts interviewed highlighted empirically proven and widely-taught requirements and techniques of master Instructors. The following Table links the five pillars of learning with specific Instructor roles.

Required for Learning, and the Role of The Instructor in meeting it:

A Learning Mindset: Desire to Learn, Belief in the ability to Learn, Perseverance.
– Promote a Growth Mindset: constant personal reinforcement that the Learner can and will learn.
– Support development of Grit. Example: demanding and supportive interactions.

Focus (Attention) on salient information and actions.
– Create an environment that is inviting, safe, comfortable, and with few distractions.
– Monitor attention to salient information and discern underlying causes for lack of attention.

Active Mental/Emotional Engagement. Taking risks. (Young learners are Piaget’s “little scientists,” always experimenting.)
– Assess readiness and present material at just the right level: challenging, but not frustrating.
– Explicit teaching with alternating periods of explanation and hands-on experimenting: I do, we do, you do.
– Set specific, relevant goals.
– Encourage risk-taking, pushing for full use of the Learner’s ability.
– Vary the forms of assignment and instruction.
– Monitor engagement: know when the Learner is frustrated, bored, distressed, or distracted, and bring him/her back on track.

Prompt Feedback that is Relevant and Constructive.
– Explain errors; don’t just mark right/wrong and give a grade.
– Reward risk-taking. Identify achievements and provide positive, constructive, relevant feedback. Don’t just say ‘Good job,’ explain what was good and why.
– Diagnose causes for errors and provide appropriate guidance.

Consolidation: Gaining Automaticity.
– Design meaningful, relevant practice both in class and outside. Assess progress toward specific learning goals.

Five Pillars of Learning and the Instructor’s role in supporting them

When Will AI Meet The Needs?

How might AI meet these fundamental Learner-Instructor needs? These are early days, but AI is already delivering meaningful benefits as an assistant to the Instructor. Newly emerging AI algorithms aim to address more complex needs, particularly attention and motivation. Some roles, especially where the Instructor judges the Learner’s intent and engages in dialogue about ideas, remain too distant to predict. 

Here And Now: Mostly Incremental

Most current AI learning tools are administrative and tutoring aides, an incremental step forward from non-AI software. Microsoft Azure’s Question Bot uses natural language processing (NLP) to scan questions from large lecture cohorts and assign them to the appropriate teaching assistant. Azure’s QnA Maker gets trained to recognize the relationship between questions and answers. The tool builds a knowledge base as the course is repeated, and after several iterations the Question Bot generates responses to students’ questions.
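To make the mechanics concrete, here is a minimal, hypothetical sketch of a course question bot in Python. It is not the actual Azure QnA Maker or Question Bot API; it simply matches an incoming question against a small knowledge base with TF-IDF similarity, answers when the match is confident, and otherwise routes the question to a teaching assistant (the sample questions, threshold, and TA addresses are invented for illustration).

```python
# Hypothetical sketch of a course "question bot": answer from a growing
# knowledge base when confident, otherwise route to a teaching assistant.
# (Illustrative only; not the actual Azure QnA Maker / Question Bot API.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Knowledge base built up as the course is repeated: question -> answer.
knowledge_base = {
    "When is problem set 2 due?": "Problem set 2 is due Friday at 5 pm.",
    "Is the midterm open book?": "Yes, the midterm is open book and open notes.",
}
tas = {"logistics": "ta_logistics@example.edu", "content": "ta_content@example.edu"}

vectorizer = TfidfVectorizer().fit(list(knowledge_base))

def handle_question(question: str, threshold: float = 0.6) -> str:
    """Return a stored answer if a similar question exists, else route to a TA."""
    kb_questions = list(knowledge_base)
    vectors = vectorizer.transform([question] + kb_questions)
    scores = cosine_similarity(vectors[0], vectors[1:])[0]
    best = scores.argmax()
    if scores[best] >= threshold:
        return knowledge_base[kb_questions[best]]             # answer directly
    topic = "logistics" if "due" in question.lower() else "content"
    return f"Forwarded to {tas[topic]} for a human answer."   # route to a TA

print(handle_question("When is pset 2 due?"))
```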

When used correctly, these bots assist Instructors in spending more time on roles in the Table above that an AI cannot fill. However, the temptation exists to just serve more students at the same level of quality and with fewer Instructors.

Carnegie Learning’s Mathia and MathiaU advance the field. They tackle a more sophisticated role: presenting math concepts at just the right level for an individual Learner, challenging but not frustrating. These tools act like tutors, leading the Learner through lessons and sending real-time updates that alert the student’s human Instructor when additional live teaching is needed.
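The adaptive piece can be sketched simply. The toy class below is not Carnegie Learning’s algorithm, just an illustration of the idea under stated assumptions: difficulty steps up after a run of correct answers, steps down after a run of misses, and the human Instructor is flagged when the Learner appears to be stalling.

```python
# Minimal sketch of adaptive difficulty selection, in the spirit of (but not
# the actual algorithm behind) AI math tutors: raise the bar after consecutive
# successes, ease off after consecutive misses, and alert the human Instructor
# when the Learner stalls.
from collections import deque

class AdaptiveTutor:
    def __init__(self, levels: int = 10, start: int = 3):
        self.level = start
        self.max_level = levels
        self.recent = deque(maxlen=4)   # sliding window of recent outcomes

    def record(self, correct: bool) -> str:
        self.recent.append(correct)
        if len(self.recent) == self.recent.maxlen:
            if all(self.recent):                  # mastery: raise difficulty
                self.level = min(self.level + 1, self.max_level)
                self.recent.clear()
            elif not any(self.recent):            # frustration: ease off, flag teacher
                self.level = max(self.level - 1, 1)
                self.recent.clear()
                return "alert_instructor"         # request live teaching
        return "continue"

tutor = AdaptiveTutor()
for outcome in [True, True, True, True, False, False, False, False]:
    print(tutor.level, tutor.record(outcome))
```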

A smart companion to Mathia and MathiaU is in development. Researchers at Carnegie Mellon University’s (CMU’s) Human-Computer Interaction Institute want to enable Instructors to design their own customized AI math tutors. This partnership with an AI lets an Instructor use his/her own best methods and examples to meet an individual Learner’s needs. It is analogous to training a teaching assistant in best practices, with the advantage that the assistant is available to every Learner all the time. 

Nascent Capability: Understanding Affect and Providing Motivation

Beyond CMU’s AI tutor designer, systems are emerging from R&D labs to monitor attention and to provide motivational coaching. 

Affective computing aims to understand your mood, including attention level. A bevy of companies is using deep learning to judge your affect. They use facial expressions (Affectiva, Hanwang Education, Hikvision, Emotient – acquired by Apple), vocal patterns (Cogito, CompanionMx, Sensay Analytics, Vocalis Health), skin conductance (Imotions), and even brain waves (BrainCo).  

Affectiva’s deep learning algorithms can judge concentration, confusion, boredom, anger, distraction, joy, apathy, and more. Using this ability, an AI could alert an Instructor not only that a Learner is inattentive, but also why. It could address a critical missing feature in contexts like remote learning and MOOCs and thereby help Instructors to improve and differentiate content.  
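As a rough illustration (not the Affectiva SDK or any vendor’s product), a classroom tool might reduce a stream of per-frame affect scores into an Instructor-facing message that suggests not just that a Learner has drifted, but why; the emotion labels and thresholds below are assumptions.

```python
# Hypothetical sketch: turn per-frame affect estimates (e.g., from a facial
# expression model) into an Instructor alert with a likely reason.
# Labels and thresholds are illustrative assumptions, not any vendor's API.
from statistics import mean

def summarize_affect(frames: list[dict], window: int = 30) -> str:
    """frames: per-frame dicts of affect scores in [0, 1]."""
    recent = frames[-window:]
    avg = {k: mean(f[k] for f in recent) for k in recent[0]}
    if avg["concentration"] > 0.6:
        return "Engaged: no action needed."
    if avg["confusion"] > 0.5:
        return "Disengaged: likely confused; consider re-explaining the concept."
    if avg["boredom"] > 0.5:
        return "Disengaged: likely bored; consider raising the challenge."
    return "Disengaged: cause unclear; check in with the Learner."

frames = [{"concentration": 0.2, "confusion": 0.7, "boredom": 0.3}] * 30
print(summarize_affect(frames))
```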

In leveraging new technology, the “Why” matters much more than the “What.” Two recent affective computing trials highlight what goes wrong when Instructor and Learner needs are not the core motivation. These cases also show the pitfalls of not considering ethical issues like privacy and bias. 

Several public schools in China use video-based affective computing systems from Hikvision and Hanwang Education. They continuously monitor every student. Algorithms discern whether a student is paying attention to the Instructor, talking to a classmate, writing, sleeping, or even just looking at nothing. These systems do what every teacher already does: where is the added value? 

The products generate regular reports, shared with parents, detailing students’ daily activity. The students, fearful of their parents’ ire, focus their attention on looking like they are paying attention. These trials are tragically wasted opportunities.

A robust trial of affective computing could detail which students are deeply engaged, off-track, or indifferent, and suggest why. The Instructor could experiment with modifications to content, environment, delivery, etc., with instant feedback. And without robo-notes to parents.

Many students and their families were not asked permission for the monitoring. Like many deep learning applications, affective computing remains vulnerable to training data bias. For example, two learners from different cultures with identical expressions can register differently, one thoughtful and the other angry. For all of its potential in education, affective computing is young and needs diligent application design. 

Another emerging class of AI solutions requiring careful introduction is motivation tools. The Nudge Engine from Humu is being applied to corporate training, a field that requires changing learned behaviors in successful, mature professionals. They must unlearn as much as they must learn. All of the precepts of human learning apply, especially the Growth Mindset and Grit.

Humu was co-founded in 2017 by CEO Laszlo Bock, previously Senior Vice President of People Operations at Google for nearly 11 years. The Nudge Engine uses the concept of Nudges, for which Richard Thaler won the 2017 Nobel Prize in Economics. It sends precisely timed reminders for behavior-changing actions like, “Ask your team for input on this decision,” or “Speak up about the issue in the next team meeting.” Aggregated nudges can influence complex behaviors like promoting diversity and sustaining morale during COVID-19. Humu focuses on solving a fundamental learning need, the “Why,” not just showing off the technology.
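The underlying idea is simple to sketch. The snippet below is not Humu’s Nudge Engine, just a hypothetical illustration of attaching small, precisely timed prompts to an upcoming event and tagging each with the larger behavior it supports.

```python
# Illustrative sketch only (not Humu's Nudge Engine): schedule small, precisely
# timed prompts around an event and tag each with the behavior-change goal it
# supports, so nudges can be aggregated toward complex outcomes.
import datetime as dt
from dataclasses import dataclass

@dataclass
class Nudge:
    send_at: dt.datetime
    message: str
    goal: str          # the larger behavior the nudge supports

def nudges_for_meeting(meeting_start: dt.datetime) -> list[Nudge]:
    """Generate timed prompts around a team meeting."""
    return [
        Nudge(meeting_start - dt.timedelta(minutes=30),
              "Ask your team for input on today's decision.", "inclusive decisions"),
        Nudge(meeting_start + dt.timedelta(minutes=5),
              "Speak up about the open issue early in the meeting.", "psychological safety"),
    ]

for n in nudges_for_meeting(dt.datetime(2021, 3, 1, 10, 0)):
    print(n.send_at, "->", n.message, f"[goal: {n.goal}]")
```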

When Will An AI Critique This Post? 

Another elusive pillar of learning for AI is delivering relevant, constructive feedback. It is especially hard for non-fiction writing like persuasive essays, analyses, and business plans. Unlike understanding attention and supporting motivation, there are no solutions available, and the path to developing them is unclear.

It is a severe hole. A Learner’s most impactful moments happen in intense discourse with the Instructor: vetting ideas, arguing perspectives, explaining facts, and finally achieving mutual understanding. It is a defining human activity. 

The need exceeds today’s widely used Automated Essay Scoring (AES) software, which MOOCs heavily leverage. The asymmetry of one Instructor lecturing to 5,000 Learners doesn’t support relevant, customized feedback on the essays those Learners submit back to the Instructor. AES feedback is typically limited to a grade. The argument goes that a grade is better than nothing, but grades alone offer minimal learning benefit. Learning happens only when the Learner understands and practices the error corrections in a detailed critique.

A detailed 2019 study of AES noted that Learner feedback beyond a single number score is sorely lacking. Former MIT professor Les Perelman has led notable efforts against so-called essay “robo-scoring” systems; his students have gamed the tools into giving top grades to gibberish. IBM’s Project Debater will never be hacked by gibberish.

Photo by Sebastian Herrmann on Unsplash https://unsplash.com/@officestock

As Deep Blue topped Garry Kasparov in chess, Watson defeated Ken Jennings in Jeopardy, and AlphaGo conquered Ke Jie in Go, Project Debater aims to topple a human world champion. The contest: debate. Debating requires forming a persuasive argument on a topic, immediately critiquing an opponent’s case, and making a compare-and-contrast summary of evidence to persuade judges in a final statement. In February 2019, after seven years of development, Project Debater dueled a human champion, Harish Natarajan, on the resolution: “Should we subsidize preschools?” Project Debater competed brilliantly, but the live audience of 800 in San Francisco voted Natarajan the winner. A key reason: emotion.

You can listen to the debate here. The AI generated thoughtful narrative and logic. While it spoke with human-like intonation and pronunciation, Project Debater’s delivery was dry and ironically mechanical. 

Cognitive scientist Matt Botvinick, Director of Neuroscience Research at Google’s DeepMind, remarked this month that human-agent interaction is still a frontier for AI research: “Basically, we don’t know how to make an AI with a warm personality. We don’t know how to engineer that.”

Among Aristotle’s three artistic proofs of ethos, pathos, and logos, pathos eludes AI.

Still, the existence of a purpose-built AI that can articulately critique an original 4-minute essay and simulate a brief, reasoned discussion with a world-class human intellect is notable. But will that ability be available to Learners and Instructors in U.S. History MOOCs in five years? No. 

David Ferrucci, the principal investigator and team leader for IBM’s Watson, commented in a 2019 interview about the pitfalls of forecasting the growth of purpose-built AIs: “There really was this mistake that we repeatedly made about AI where we look at accomplishments on these narrow tasks, and we draw that line and say ‘wow, if you can do that it means you can do this.’” Repurposing Project Debater into “Project Essay Reader” will not be straightforward.

Beyond the Horizon: The AI Thought Partner

Elemental Cognition, Ferrucci’s new company, has the audacious goal of creating a cybernetic thought partner. His vision is an AI that can speak your language, that can coach you like an expert on how to reason about a topic. He believes such an AI will have a mental model of language, will “understand” language instead of treating it as data and finding statistical correlations. When can that happen?

Ferrucci doesn’t venture to guess. He notes that even defining the measurement criteria for such a tool can be richly debated. When asked when the thought partner, “CLARA,” (Collaborative Learning and Reading Agent), might be a product, he says it will be 2022 before he might even be able to give a date. 

Exceptional writing evokes strong emotion and imagery. AI experts cannot honestly predict when an AI Instructor will discuss with a Learner the depth of the opening sentence of Joan Didion’s Slouching Towards Bethlehem, “The center was not holding.”

“When” Is Not Only About Technology: Timing Will Be Market-Driven

The fragmentation of the learning solutions market significantly impacts solution development. Geography, culture, subject, level, and context all matter, and each permutation may need a unique narrow AI. It is still easier for a team with AI expertise to make money somewhere else. For example, one of the most successful uses of affective computing to date is Amazon Go. As Jeff Hammerbacher, formerly of Facebook, famously told Businessweek in 2011, “The best minds of my generation are thinking about how to make people click ads; that sucks.”

Unbounded Opportunity 

Despite the challenges of young technology and market fragmentation, unmet needs that impact every person, everywhere, for their entire lives offer abundant opportunities. Techniques and methods that can evolve over decades, regularly delivering new benefits, are attractive investments. Waiting for the technologies to be perfect only guarantees that someone else gets there before you.

Now is the time to experiment with affective computing solutions and motivation tools, especially for the burgeoning rolls of remote and independent Learners. Carnegie Learning’s early success with AI assessment algorithms that finely tune content for individual Learners makes additional investment enticing. Because today’s AES tools mainly spit out a number grade, there may be, as Ke and Ng suggest, low-hanging fruit: formatting and presenting the tools’ sub-scores to the Learner could be far more relevant than a grade alone.
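As a sketch of that low-hanging fruit (the sub-score names, thresholds, and advice below are hypothetical, not any vendor’s rubric), an AES wrapper could de-emphasize the single number and surface prioritized, readable guidance instead:

```python
# Sketch of surfacing AES sub-scores as readable feedback rather than a single
# grade. Sub-score names, thresholds, and advice are illustrative assumptions.
SUBSCORE_ADVICE = {
    "thesis":       "State your main claim in one clear sentence early on.",
    "evidence":     "Support each claim with a specific example or source.",
    "organization": "Use one idea per paragraph and add transitions between them.",
    "mechanics":    "Proofread for grammar, spelling, and punctuation.",
}

def feedback_report(subscores: dict[str, float]) -> str:
    """Turn AES sub-scores (0-1) into prose feedback, weakest areas first."""
    overall = sum(subscores.values()) / len(subscores)
    lines = [f"Overall: {overall:.0%} -- but focus on the items below, not the number."]
    for name, score in sorted(subscores.items(), key=lambda kv: kv[1]):
        if score < 0.7:
            lines.append(f"- {name} ({score:.0%}): {SUBSCORE_ADVICE[name]}")
    return "\n".join(lines)

print(feedback_report({"thesis": 0.8, "evidence": 0.5, "organization": 0.65, "mechanics": 0.9}))
```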

Chris H.L. Wicher, former Chief Cognitive Scientist at KPMG, once said to me, “Just using AI to cut costs by providing the same service with less headcount is a plan for going out of business.” Applying that broad industry wisdom to education, only using AI to teach students at the same level of quality with fewer instructors is misguided. Business success always comes back to meeting customer needs. AI efforts not focused on the fundamental process of how humans learn will wander as short-lived tactical solutions. 

Future Resolve: Don’t Be Afraid of Big Ambitions

David Ferrucci urges AI researchers to let their imagination drive their expectations, to be unafraid to talk about what they truly want to achieve. In the case of AI for Learning, let us dream about a perfectly matched Instructor for every Learner, a personal Aristotle for everyone. 

Accept that we don’t know when, if ever, an AI can achieve that ambition. In the meantime, which will be a very long time, embrace the uncertainty and plan seriously for AIs to be trusted partners of Instructors and Learners at all levels. This unbounded problem offers limitless opportunities for impact and innovation.

Cover Image from Gery Wibowo on Unsplash: https://unsplash.com/@gergilgad

Acknowledgements

(1) Mark Watson, MA, is CEO of ABI Wellness, enabling victims of chronic brain injury to learn to regain cognitive function. Howard Eaton is Founder/Director of the Eaton Education Group, a series of Y schools in Canada applying principles of neuroscience in the instruction of children and adults with learning disabilities. He is the author of two books on educating persons with learning disabilities. Alexandra Dunnison, M.Ed-Harvard, has thirty years of experience teaching at multiple levels with particular expertise in literacy and numeracy.

Special thanks also to Dr. Michael Housman, Co-Founder and Chief Data Science Officer of RapportBoost.AI, and Chris Wicher, former VP-Engineering for the Watson project at IBM.
