In Neal Stephenson’s 1995 science fiction novel The Diamond Age, readers meet Nell, a young girl who acquires an extraordinarily advanced book, The Young Lady’s Illustrated Primer. The book is not the usual static collection of text and images but a deeply immersive tool that can converse with the reader, answer questions, and personalize its content, all in the service of teaching and motivating a young girl to become a strong, independent person.
Such a device, even after the introduction of the Internet and tablet computers, remained in the realm of science fiction—until now.
Artificial intelligence, or AI, took a giant leap forward with the November 2022 introduction of ChatGPT, an AI technology capable of producing remarkably creative responses and sophisticated analysis through human-like dialogue. It has sparked a wave of innovation that suggests we may be on the brink of an era of interactive, super-intelligent tools, not unlike the book Stephenson dreamed up for Nell.
Sundar Pichai, CEO of Google, calls artificial intelligence “deeper than fire or electricity or anything we’ve done in the past.” Reid Hoffman, founder of LinkedIn and current partner at Greylock Partners, says, “The power to make positive change in the world is about to get the biggest boost it’s ever had.” And Bill Gates said that “this new wave of artificial intelligence is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the cell phone.”
Over the past year, developers have released a dizzying array of AI tools that can generate text, images, music and video without the need for complex coding, but simply in response to instructions given in natural language. These technologies are
advancing rapidly, and developers are introducing capabilities that would have been considered science fiction just a few years ago. AI also raises pressing ethical questions about bias, appropriate use, and plagiarism.
In education, this technology will affect how students learn, how teachers work, and ultimately how
we structure our education system. Some educators and leaders are anticipating these changes with great enthusiasm. Sal Khan, founder of Khan Academy, went so far as to say in a TED talk that AI has the potential to bring about “probably the biggest positive
transformation that education has ever seen.” But others warn that AI will enable the spread of misinformation, facilitate cheating in school and college, destroy what remains of privacy, and cause massive job losses. The challenge is to harness
the positive potential while avoiding or mitigating the harm.
What is Generative AI?
Artificial intelligence is a branch of computer science that focuses on creating software capable of mimicking behaviors and processes we would consider “intelligent” if exhibited by humans, including reasoning, learning, problem solving, and the exercise of creativity. AI systems can be applied to a wide range of tasks, including language translation, image recognition, autonomous vehicle navigation, cancer detection and treatment, and, in the case of generative AI, creating content and knowledge rather than simply searching for and retrieving it.
Foundation models in generative AI are systems trained on a large set of data to learn a broad knowledge base that can then be adapted to a range of different, more specific purposes. This training method is self-supervised, meaning the model learns by finding patterns and relationships in the data it is trained on.
Large language models (LLMs) are foundation models trained on enormous amounts of text data. For example, the training data for OpenAI’s GPT models consisted of web content, books, Wikipedia articles, news articles, social media posts, code snippets, and more. OpenAI’s GPT-3 models were trained on a staggering 300 billion “tokens,” or parts of words, using more than 175 billion parameters to shape the model’s behavior—nearly 100 times more data than the company’s GPT-2 model used.
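To make the idea of “tokens” concrete, the short sketch below splits a sentence into the word pieces a GPT-style model actually sees. It assumes OpenAI’s open-source tiktoken library; exact token counts vary by model.

```python
# A minimal sketch of tokenization, assuming the open-source tiktoken
# library (pip install tiktoken). Token counts differ from word counts.
import tiktoken

# cl100k_base is the encoding used by the GPT-3.5/GPT-4 family of models.
encoding = tiktoken.get_encoding("cl100k_base")

text = "Generative AI creates content rather than simply retrieving it."
token_ids = encoding.encode(text)

print(len(text.split()), "words")                  # word count for comparison
print(len(token_ids), "tokens")                    # tokens, i.e., parts of words
print([encoding.decode([t]) for t in token_ids])   # the individual pieces
```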
By analyzing billions of sentences in this way, LLMs develop a statistical understanding of language: how words and phrases tend to combine, what topics tend to be discussed together, and what tone or style is appropriate in different contexts. This allows them to generate human-like text and perform a wide range of tasks, such as writing articles, answering questions, or analyzing unstructured data.
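The toy example below illustrates that statistical idea in miniature: it simply counts which word tends to follow which in a tiny, made-up corpus. Real LLMs learn far richer patterns with neural networks, but the underlying intuition is the same.

```python
# A toy illustration of learning "which words tend to follow which"
# from example sentences. This is only a bigram counter, not how a
# transformer works, but it conveys the statistical intuition.
from collections import Counter, defaultdict

corpus = [
    "students learn best with timely feedback",
    "students learn best with regular practice",
    "teachers give timely feedback to students",
]

next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

# Which word is most likely to follow "learn" in this tiny corpus?
print(next_word_counts["learn"].most_common(1))  # [('best', 2)]
```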
LLMs include OpenAI’s GPT-4, Google’s PaLM, and Meta’s LLaMA. These LLMs serve as the “foundations” for AI applications. ChatGPT is built on top of GPT-3.5 and GPT-4, while Bard uses Google’s Pathways Language Model 2 (PaLM 2) as its foundation.
Some of the most famous applications are:
ChatGPT 3.5. The free version of ChatGPT released by OpenAI in November 2022. It was only trained on data up to 2021, and although it is very fast, it is prone to inaccuracies.
ChatGPT 4.0. The latest version of ChatGPT, which is more powerful and accurate than ChatGPT 3.5, but also slower and requires a paid account. It also has advanced capabilities through plugins that enable it to interact with content from websites, perform more complex mathematical functions, and access other services. A new code interpreter feature gives ChatGPT the ability to analyze data, create charts, solve math problems, edit files, and even develop hypotheses to explain data trends.
Microsoft Bing Chat. A version of Microsoft’s Bing search engine enhanced with OpenAI’s ChatGPT technology. It can browse websites and offer source citations with its results.
Google Bard. Google’s AI generates text, translates languages, writes various types of creative content, and writes and debugs code in more than 20 different programming languages. The tone and style of Bard’s responses can be fine-tuned to be simple, long, short, professional, or casual. Bard also uses Google Lens to analyze images uploaded with prompts.
Anthropic Claude 2. A chatbot that can generate text, summarize content and perform other tasks, Claude 2 can analyze texts of approximately 75,000 words – roughly the length of The Great Gatsby – and generate responses of more than 3,000 words. The model is built using a set of principles that serve as a kind of “constitution” for AI systems, with the goal of making them more useful, fair, and harmless.
These AI systems are improving at a remarkable rate, including in how well they perform on assessments of human knowledge. OpenAI’s GPT-3.5, which was released in March 2022, only managed to score in the 10th percentile on the bar exam, but GPT-4.0, introduced a year later, made a significant leap, scoring in the 90th percentile. What makes these achievements particularly
impressive is that OpenAI did not specifically train the system to take these exams; the AI was able to come up with the correct answers on its own. Similarly, Google’s medical AI model significantly improved
its performance on U.S. Medical Licensing Exam practice questions, with its accuracy rate jumping to 85 percent in March 2023 from 33 percent in December 2020.
These two examples make one wonder: if AI continues to improve so rapidly, what will these systems be able to achieve in the next few years? What’s more, new studies challenge the assumption that AI-generated responses are outdated or sterile. In the case of Google’s AI model, doctors preferred long-form AI responses to those written by their physician colleagues, and non-medical study participants rated the AI responses as more helpful. Another study found that participants preferred the responses of a medical chatbot to those of a physician and rated them significantly higher not only for quality but also for
empathy. What will happen when “empathetic” AI is used in education?
Other studies have examined the reasoning capabilities of these models. Microsoft researchers suggest that the newer systems “exhibit more general intelligence than previous AI models” and come “strikingly close to human-level performance.” While some observers question these conclusions, AI systems show a growing ability to generate coherent and contextually appropriate responses, make connections between different pieces of information, and engage in reasoning processes such as inference, deduction, and analogy.
Despite their amazing capabilities, these systems are not without
their drawbacks. Sometimes they produce information that sounds convincing but is irrelevant, illogical, or completely false—an anomaly known as a “hallucination.” Performing certain mathematical operations is another area of difficulty for AI. And while these systems can generate well-crafted and realistic text, understanding why the model made particular decisions or predictions can be challenging.
The importance of well-designed prompts
Using generative AI systems like ChatGPT, Bard and Claude 2 is relatively easy. You only need to enter a request or task (called a prompt) and the AI generates a response. Properly constructed prompts are essential to getting useful results from generative AI tools. You can ask generative AI to analyze text, find patterns in data, compare opposing arguments, and summarize an article in a variety of ways (see the sidebar for examples of AI prompts).
One challenge is that after years of using search engines, people have been conditioned to phrase questions in a certain way. A search engine is like a helpful librarian who takes a specific question and points you to the most relevant sources for possible answers. The search engine (or librarian) does not create anything new but efficiently retrieves what is already there.
Generative AI is closer to a capable intern. You instruct a generative AI tool through prompts, as you would an intern, asking it to complete a task and produce a product. The AI interprets your instructions, considers the best way to carry them out, and creates something original or performs a task to fulfill your directive. The results are not pre-made or stored somewhere; they are produced on the fly, based on the information the intern (the generative AI) has been trained on. The outcome often depends on the precision and clarity of the instructions (prompts) you provide. A vague or ill-defined prompt can cause the AI to produce less relevant results. The more context and direction you give it, the better the result will be. What’s more, the capabilities of these AI systems are being enhanced by multifunctional add-ons that equip them to browse websites, analyze data files, or access other services. Think of it as giving your intern access to a group of experts to help complete your tasks.
One strategy when using a generative AI tool is to first tell it what kind of expert or persona you want it to “be.” Ask it to be an expert management consultant, an experienced teacher, a writing tutor, or a copy editor, and then give it an assignment.
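For developers scripting the same idea against a chat model’s API, a rough sketch might look like the example below, which sets a persona in a system message before handing over the assignment. It assumes OpenAI’s Python SDK and an API key in the environment; the model name and wording are illustrative, not a recommendation.

```python
# A minimal sketch of role-setting before an assignment, assuming
# OpenAI's Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable. Model choice and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # First, tell the model what kind of expert to "be."
        {"role": "system",
         "content": "You are an experienced high school writing tutor."},
        # Then give it the assignment.
        {"role": "user",
         "content": "Offer three specific suggestions to strengthen this "
                    "thesis statement: 'Social media is bad for teenagers.'"},
    ],
)

print(response.choices[0].message.content)
```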
Prompts can also be constructed to make these AI systems perform complex and multi-step operations. For example, say a teacher wants to create an adaptive curriculum—for any subject, any grade, in any language—that personalizes examples for students based on their interests. She wants each lesson to end with a short answer or multiple choice quiz. If the student answers the questions correctly, the AI teacher should move on to the next lesson. If the student answers incorrectly, the AI must explain the concept again, but using simpler language.
Until recently, designing this kind of interactive system would have required a relatively complex and expensive software program. With ChatGPT, however, simply giving these instructions in a prompt produces a working learning system. It’s not perfect, but remember that it was created practically for free, with only a few lines of English as instructions. And nothing on the education market today can generate nearly limitless examples to connect a lesson concept to student interests.
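The prompt itself does all the work. Below is one illustrative way such an instruction might be phrased; the subject, student interest, and quiz format are placeholders a teacher would swap for her own.

```python
# One illustrative way to phrase the adaptive-tutoring prompt described
# above. A teacher could paste the text directly into ChatGPT; the subject,
# student interest, and quiz length here are placeholders.
TUTOR_PROMPT = """
You are a patient tutor. Teach me fractions one short lesson at a time,
using examples drawn from basketball, my favorite interest.
End each lesson with a three-question multiple-choice quiz.
If I answer every question correctly, move on to the next lesson.
If I miss any question, re-explain the concept in simpler language,
then quiz me again before moving on. Begin with lesson 1.
"""

print(TUTOR_PROMPT)
```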
Chained prompts can also help focus AI systems. For example, an educator might prompt a generative AI system to first read a practice guide from the What Works Clearinghouse and summarize its recommendations. Then, in a follow-up prompt, the teacher can ask the AI to develop a set of classroom activities based on what it just read. By selecting the source material and using the right prompts, the educator can anchor the generated responses in evidence and high-quality research.
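A scripted version of that chaining might look like the sketch below, where the second request builds on the model’s answer to the first. It assumes OpenAI’s Python SDK; the ask() helper and the practice_guide.txt file holding the pasted-in guide are hypothetical stand-ins, not features of any product.

```python
# A minimal sketch of chained prompting: the second request builds on the
# model's answer to the first. Assumes OpenAI's Python SDK; ask() and
# practice_guide.txt are hypothetical stand-ins.
from openai import OpenAI

client = OpenAI()
history = []

def ask(prompt: str) -> str:
    """Send a prompt while keeping earlier turns, so the model retains context."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# Step 1: ground the model in source material chosen by the educator.
guide_text = open("practice_guide.txt").read()  # pasted-in practice guide
summary = ask("Summarize the key recommendations in this practice guide:\n\n"
              + guide_text)

# Step 2: build on the grounded summary from step 1.
activities = ask("Based only on those recommendations, draft three classroom "
                 "activities for a middle school math lesson.")
print(activities)
```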
However, like a newly hired intern learning the ropes in a new environment, AI makes the occasional mistake. Such fallibility, while inevitable, underscores the critical importance of maintaining strict oversight of AI output. Monitoring not only acts as a crucial check on accuracy but also becomes a vital source of real-time feedback for the system. Through this iterative process of refinement, an AI system can significantly reduce its error rate and improve its performance over time.
Using AI in Education
In May 2023, the US Department of Education released a report titled Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. The department held listening sessions in 2022 with more than 700 people, including educators and parents, to gauge their views on AI. The report notes that “constituents believe action is needed now to get ahead of the expected rise of AI in education technology—and want to roll up their sleeves and start working together.” People expressed concern about “potential future risks” with AI, but also believed that “AI can enable
education priorities to be achieved in better ways, at scale and at lower costs.”
AI can serve—or already serves—in several teaching and learning roles:
Student assistants. AI’s ability to hold human-like conversations opens up possibilities for adaptive learning tools or learning assistants that can help explain difficult concepts to students. AI-based feedback systems can offer constructive criticism of students’ writing, helping students improve their writing skills. Some research also suggests that certain types of prompts can help children generate more fruitful questions about what they are learning. AI models can also support personalized learning for students with disabilities and provide translation for English language learners.
Teaching assistants. AI can handle some of the administrative tasks that keep teachers from investing more time with their peers or students. Early uses include automating routine tasks such as drafting lesson plans, creating differentiated materials, designing worksheets, developing quizzes, and researching ways to explain complex academic material. AI can also provide educators with recommendations to meet student needs and help teachers reflect on, plan, and improve their practice.
Parent assistants. Parents can use AI to generate individualized education plan (IEP) service request letters or request that a child be evaluated for gifted and talented programs. For parents choosing a school for their child, AI can serve as an administrative assistant by mapping school options within driving distance of home, generating application timelines, collecting contact information, and the like. Generative AI can even create bedtime stories with evolving plots tailored to the child’s interests.
Administrator assistants. Using generative AI, school administrators can draft a variety of communications, including parent materials, newsletters, and other community engagement documents. AI systems can also help with the difficult tasks of organizing class or bus schedules, and they can analyze complex data to identify patterns or needs. ChatGPT can perform sophisticated sentiment analysis, which can be useful for gauging school climate and analyzing other survey data.
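As a rough illustration of that last use, the sketch below asks a chat model to label open-ended survey comments. It assumes OpenAI’s Python SDK; the comments and label set are invented examples, not a prescribed method.

```python
# A rough sketch of sentiment analysis on open-ended survey comments,
# assuming OpenAI's Python SDK. The comments and labels are invented examples.
from openai import OpenAI

client = OpenAI()

comments = [
    "My kids finally feel safe and welcome walking into school.",
    "Communication from the front office has been frustrating this year.",
]

for comment in comments:
    result = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Classify each school-climate survey comment as "
                        "positive, negative, or neutral. Reply with the label only."},
            {"role": "user", "content": comment},
        ],
    )
    print(comment, "->", result.choices[0].message.content)
```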
Although the potential is great, most teachers have yet to use these tools. A survey by Morning Consult and EdChoice found that while 60 percent of teachers said they had heard of ChatGPT, only 14 percent had used it in their free time and only 13 percent had used it at school. It is likely that most teachers and students will engage with generative AI not through the underlying platforms themselves but rather through AI capabilities embedded in the software they already use. Learning providers such as Khan Academy, Varsity Tutors, and Duolingo are experimenting with GPT-4-powered tutors that are trained on datasets specific to these organizations and that provide individualized learning support with additional safeguards to protect students and improve the experience for teachers.
Google’s Tailwind project is experimenting with an artificial intelligence notebook that can analyze students’ notes and then develop learning questions or provide learning support through a chat interface. These features may soon be available in Google Classroom, potentially reaching half of all US classrooms. Brisk Teaching is one of the first companies to build a portfolio of AI services designed specifically for teachers – differentiating content, creating lesson plans,
providing feedback to students, and serving as an AI assistant to streamline workflow between different applications and tools.
Curriculum and training material providers can also include AI assistants for instant help and training tailored to the companies’ products. One example is edX Xpert, a ChatGPT-based learning assistant on the edX platform. It offers instant, personalized academic and customer support for online learners worldwide.
Regardless of the ways in which AI is used in classrooms, the primary task of policymakers and education leaders is to ensure that the technology serves sound instructional practice. As Vicki Phillips, CEO of the National Center on Education and the Economy, writes, “We need to think not only about how technology can help teachers and learners improve what they do now, but also about what it means to enable new ways of teaching and learning to flourish alongside AI applications.”
Shockwaves and innovation: How nations around the world are tackling AI in education
Other countries are quickly adopting artificial intelligence in schools. Lessons from Singapore, South Korea, India, China, …
The rapid development of artificial intelligence, especially generative AI (which is trained to analyze large amounts of data and can create original content), has taken US schools by surprise. Partly due to concerns about student cheating, many districts have adopted restrictive policies limiting the use of AI in schools.
I wondered how countries outside the US are dealing with these shockwaves, how they are using AI more broadly to improve education, and what lessons US schools can learn from their approaches.
I found that other developed countries share concerns about student cheating, but are moving quickly to use AI to personalize education, improve language lessons, and help teachers with everyday tasks like grading. Some of these countries are in the early stages of training teachers to use AI and developing curriculum standards for what students should know and be able to do with the
technology.
Several countries began positioning themselves a few years ago to invest in AI in education to compete in the fourth industrial revolution.
Singapore’s Smart Nation strategy, for example, aims to
position the country as a world leader in AI by 2030 by bringing together researchers, government, and industry. One goal is to help teachers personalize and improve education for each student, especially those with special needs, using AI-enabled learning companions that provide personalized feedback and motivation, automated grading systems, and machine learning that identifies how each student responds to classroom materials and activities.
One of the most promising applications of generative AI in education is the ability to adapt learning to the needs of individual students, and there is a clear trend towards personalization in the education strategies of other countries.
South Korea has implemented artificial intelligence-based systems to adapt homework and assignments based on students’ educational levels and “learning tendencies and behaviors.” Each child will have a personalized tutor with artificial intelligence and access to an online learning platform, allowing teachers to focus on social-emotional and hands-on lessons. The education minister says these changes are needed to allow public schools, which currently emphasize rote learning, to provide the same type of personalized and deeper learning that private schools offer. He envisions a future in which assessments occur during the normal course of daily tasks rather than an end-of-course exam.
In India, tech company Embibe uses AI to clarify complex math and science concepts. Students can use a smartphone to scan a passage from a textbook, and the app uses 3-D images to help with visualization. AI is also being used in India to predict student performance, enabling early intervention.
These countries are also investing heavily in AI teacher-training programs and national curriculum requirements. Singapore recently announced a national initiative to build AI literacy among students and teachers to ensure they understand the risks and benefits of the technology. By 2026, training on AI in education will be offered to teachers at all levels, including those still in preparation.
South Korea is investing heavily in student training as well. By 2025, the country aims to include AI courses in its national curriculum at all grade levels, starting with high school. KERIS, a division of the Korean Ministry of Education, is designing and piloting extensive teacher development around AI and other technologies. The ministry’s Center for the Future of Education provides model classrooms where visitors can experience the use of modern technology in education.
Finland, long admired for its high-quality, teacher-centered education system, has embraced AI with a bold national commitment to educate its citizens through free online courses. Approximately half of its schools use the ViLLE platform to provide students and teachers with instant feedback and analysis of student work.
In China, the government has invested heavily—through tax breaks and other incentives—in tools like the adaptive learning platform Squirrel AI that rely on large-scale data sets and surveillance cameras. Most of these products focus heavily on improving performance on standardized tests so that students whose families can afford them can get ahead. In countries like China, however, ethics, fair access, privacy, and other such issues are not high priorities.
In contrast, Finland’s AI in Learning, a collaboration between a multidisciplinary group of international researchers and companies, aims to promote the equity and quality of learning locally and globally. The project produced a number of research papers on the ethical use of AI in education and how these technologies can improve teaching and learning. Its members are designing and testing a smart digital system that assesses student health and sends feedback to students and teachers.
Evidence matters greatly right now, and many of these countries are investing in research to inform the effective use of new tools as well as to guide regulatory efforts. The AICET Research Centre, hosted by AI Singapore and funded by the Smart Nation and Digital Government Office, is working with the Ministry of Education to launch projects that will improve the education system. As part of a five-year plan called AI@NIE, Singapore’s National Institute of Education will invest in research and innovation on using artificial intelligence for education.
Early guidelines are being developed in the US, but they are coming late. Although the Department of Education recently published a strong policy document and the American Federation of Teachers issued a resolution on AI, the European Union issued guidelines two years ago and recently updated its proposed AI regulations. Japan recently released guidelines and selected a few schools to try them out while the government considers which regulations make the most sense.
The US government could start by adding considerations for the use of AI to its National Educational Technology Plan, such as how states and districts should best minimize the risks and maximize the opportunities. Providing guidance on how best to prepare teachers and students for the coming tsunami would help connect the dots in a broader strategy to make America’s students prepared and competitive in the AI economy.
The push that other countries are making for radically personalized education is also one that the US should take seriously. Too many people in education policy dismiss
personalized learning as a holdover from a largely failed effort in the early 2000s, but others recognize that AI can harness high-quality curricula in revolutionary ways, tailoring instruction in real time to each student’s specific level.
It’s time to up our game on this front and embrace AI-enabled personalized learning tools like Khanmigo, in part by investing in a national research program in partnership with companies to learn whether and how AI can accelerate learning. Finally, we owe it to teachers and districts to offer a comprehensive AI curriculum as soon as possible. This is too important an undertaking to be left to schools, districts, or even states to figure out on their own.
As the US continues to explore the potential of generative artificial intelligence in education, the federal government and states must accelerate their efforts to compete and thrive in an AI-driven world. In addition, education leaders,
researchers, and technology developers will need to collaborate and share best practices and policies to unlock new frontiers in education and equip students with the skills and knowledge needed to succeed in an increasingly complicated world.
How AI is changing learning in schools – Yahoo Finance
Cutler also commented on the concerns of other education companies, … And one of those companies trying to harness AI in education is Paper.
… technology to fast food chains, artificial intelligence is now everywhere. It has even found its way into schools, with some use cases already becoming a problem for educators. Paper founder Phil Cutler describes his tutoring service, which uses AI to “[help] students build a plan that ultimately gives them direction and guidance to be the best student they can be and to be motivated in their academics.” Cutler highlights Paper’s features that teachers can use to track student progress while the generative AI adapts along with its users.
How to Effectively Implement AI to Transform the Future of Education – Forbes
In addition to applying AI to students’ educational journeys, educational
institutions have the opportunity to use AI-powered insights to…
It is safe to say that some recently launched generative artificial intelligence (AI) tools, like ChatGPT, Bard, and DALL-E 2, have suddenly shone a brighter light on AI and its applications across industries.
At the same time, these tools have raised concerns about job losses and the credibility of content produced by the creative industry, but I will focus on concerns about their impact on the education industry.
As the adoption of technology entered the educational space, there were already questions about whether technology would help or hinder the educational process. There is also growing
concern about whether greater access to resources through the Internet, digital learning management systems, and generative AI tools could stifle creativity and intellectual curiosity and make students and educators too dependent on technology.
The reality is that AI is fueled by human intelligence and data, and therefore will be as useful or harmful as people want it to be. In the right hands and with the right set of guardrails,
the positive uses of AI in education can be endless.
Below, I will outline some ways to effectively integrate AI as a tool to achieve improved outcomes in the education industry, while mitigating concerns around
plagiarism, increased reliance on technology, and loss of critical thinking skills.
Use the latest AI applications.
Educational institutions have the opportunity to make learning more engaging by adopting modern tools such as ChatGPT and DALL-E 2. For example, educators can assign projects that encourage students to create specific works of science, art or mathematics based on their interaction with these tools. This creates a way for students to understand difficult concepts in a fun and engaging way.
OpenAI’s recently announced GPT-4 also demonstrates the immense potential of AI in education with its enhanced capabilities. GPT-4 can act as a virtual tutor, guiding students with real-time feedback that lets them practice concepts and learn along the way.
Likewise, there are artificial intelligence tools that can help students learn better in subjects like math, programming, and science. For example, Google’s Minerva can effectively solve quantitative problems or deal with scientific questions.
Educators can also take advantage of AI-powered immersive experiences using AR/VR technology, allowing students to experience past and future events right from their desks, such as the lives of famous historical figures or the evolving topography of a particular region.
Make informed academic decisions through AI-powered feedback and analysis.
In addition to applying AI to students’ educational journeys, educational institutions have the opportunity to leverage AI-powered insights for better student and faculty engagement. Likewise, AI applications naturally extend to organization-driven learning initiatives for tomorrow’s workforce.
Educational institutions and organizations have the opportunity to harness the power of generative AI tools and large language models, thereby improving learning. Tools like GPT-4 not only enable iterative learning, but also allow educators to track patterns of students’ learning journeys and modulate courses based on their pace and engagement.
In the context of training and upskilling employees, institutions and course providers can also use AI to generate content for online courses. AI can clearly chart learners’ progress and help personalize the experience with the power of data and grading models built around attendance, participation, assignment submissions, and more.
Additionally, there are chatbots that can enable seamless communication with participating students while clearing coursework queries and confusion. AI apps like Khanmigo offer personalized help and guidance to learners.
Use AI to manage plagiarism.
While AI tools are capable of both designing and solving complex, interactive tasks, educators need to empower students to use AI not as a means to reproduce ideas, but to creatively think of new ones.
Amid the many benefits of generative AI tools like ChatGPT and DALL-E, educators are right to keep concerns about plagiarism at the forefront of their minds. The emergence of generative AI tools has created new challenges for educators in managing plagiarism because of their ability to generate well-sequenced text, code, and other creative content that can be indistinguishable from human work.
Fortunately, AI itself can be a cure for plagiarism. There are several AI-based tools available to educators today. AI tools such as optical character recognition (OCR) can scan photos and printed materials and check for replication in any form.
Additionally, deep scanning tools like Winston AI can check for content authenticity. These tools go beyond identifying duplicate content and also check for content that is similar in intent or meaning.
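As a generic illustration of checking for similarity in meaning rather than exact wording (not a description of how Winston AI or any other commercial detector works internally), the sketch below compares two sentences with open-source sentence embeddings.

```python
# A generic sketch of flagging similarity "in intent or meaning" rather than
# exact wording, using the open-source sentence-transformers library
# (pip install sentence-transformers). An illustration only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

student_text = "The French Revolution was driven mainly by economic hardship."
source_text = "Economic distress was the primary cause of the French Revolution."

embeddings = model.encode([student_text, source_text])
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()

# Scores near 1.0 suggest the passages express the same idea,
# even though they share few exact words.
print(f"Semantic similarity: {similarity:.2f}")
```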
Be aware of the challenges of using AI to improve education.
However, as AI continues to evolve, we cannot blindly embrace the technology without truly understanding its limitations.
On the one hand, the costs of building reliable AI models can be prohibitive. Furthermore, we cannot turn a blind eye to some glaring issues that have created an urgent need around the world to regulate AI. These include AI-driven biases that can be more costly to the learning process than the cost of AI integration itself if not addressed strategically. Similarly, data security and privacy remain major concerns for both developers and adopters of AI.
As the industry works to address these issues, it can explore workarounds that include adopting AI-powered tools and technologies. This approach allows for incremental steps towards adopting the technology without completely revolutionizing the AI-driven education system.
Ultimately, it comes back to how educators choose to take advantage of emerging new technologies that are here to stay. Education is an industry poised for AI-driven disruption, and how decision-makers in this space choose to turn challenges into opportunities will help define the future of AI-powered education.
Should schools ban or integrate generative AI into the classroom? – Brookings Institution
Public schools are taking different approaches to addressing the effects of generative AI on education, from banning to embracing AI tools.
Education 2.0: The Rise of Writing Services and AI Writing Tools: Helpful or Harmful? – Outlook India
Today, artificial intelligence (AI) writing tools and online essay services are reshaping the academic landscape. But as with any revolution, there is also …
How Teachers Should Approach the Age of Artificial Intelligence (Opinion) – Education Week
The recent explosion in artificial intelligence and the ensuing AI arms race will be one of the biggest changes in education since …
Quizlet’s AI learning tools think I’m a bad student – The Verge
Quizlet is one of many educational platforms that incorporate generative AI to facilitate learning, despite teachers’ dismay when ChatGPT …
SETDA dialogues with state leaders on AI back-to-school considerations – EdTech Magazine
Presenters at the State Educational Technology Directors Association (SETDA) Emerging Trends Forum talk about AI in the context of future careers, …
Bringing AI Literacy to High Schools – Stanford HAI, Stanford University
Stanford education researchers collaborated with teachers to develop
classroom-ready AI resources for high school teachers in a variety of subjects …