No doubt you’ve heard of ChatGPT (Chat Generative Pre-trained Transformer) and the way it has quickly been shaking up our world. As an emerging technology, many are wondering how its arrival will affect all areas of life, including jobs, education, and the quality of human thinking. An AI language model created by OpenAI and released in November 2022, ChatGPT was trained on massive amounts of text data to learn to respond naturally to text prompts. As the bot’s capabilities grow, many are excited to see how it will develop and enhance our world; others are afraid of its effects. Either way, there’s no doubt that its emergence will have a huge impact on numerous sectors, with higher education among the biggest.
What’s the current conversation?
While some universities and public schools across the U.S. have already chosen to ban the use of ChatGPT, many institutions recognize that there’s no way to avoid this growing tool. Others argue that even if it could be avoided, it’s better to embrace it as a tool to elevate productivity and thinking. In response to the notion that ChatGPT and other AI bots will destroy human creativity and critical thought, writing faculty at Duke University put it simply: “The calculator didn’t destroy math.” In the same article, Duke writing faculty explain that ChatGPT can be a powerful tool for elevating overall productivity and performance, one capable of “performing rudimentary tasks so [human] writers can focus on higher-level concerns.”
Cynthia Alby, a professor of teacher education in Georgia, writes in an article published by Magna Publications: “We already use so many other things to check our writing (Google to fact-check, MS Word and Grammarly to check grammar/spelling, thesaurus to find the ideal word, synonyms when we want to sound different/more accurate/better); we look online for example resources when we need to find an example of a good version of what we are writing. Is this cheating?” Alby argues that using AI as a tool to elevate our work, as the calculator elevated mathematical processes, can take human productivity to the next level. Letting ChatGPT do preliminary work while humans focus on the more creative, advanced aspects of writing could save time and energy and support higher productivity.
Even among those who believe ChatGPT can be a powerful tool, there are concerns particular to higher education. As new information is shared constantly, what are some of the concerns in today’s conversation?
Many professors and teachers, particularly in writing courses and other courses where ChatGPT can be used to produce class work, are concerned about students taking direct credit for the bot’s work and replacing learning with a ChatGPT conversation. Few would deny this is a major concern, especially in higher education, but many experts offer ideas to work around the problem of bot-induced academic dishonesty. How?
- Through making policies. Preventive measures help define how students should and should not use ChatGPT in the classroom or for class work. Illinois’s Center for Innovation in Teaching & Learning (CITL) emphasizes the importance of talking about academic integrity in class. These policies can be reinforced by including sections in syllabi and by openly discussing how ChatGPT can be used productively, rather than harmfully, in an educational setting.
- Through AI-detection technology. OpenAI created its AI-Classifier in response to widespread fears that ChatGPT would fuel academic dishonesty at all levels. It is designed to take a piece of text and rate it on a gradient scale (“very unlikely, unlikely, unclear, possibly, likely”) to predict whether the piece was AI-produced. However, the tool is not perfectly accurate, so it should not be relied upon alone. In an article published by the Sydney Morning Herald, Jan Leike, head of OpenAI’s alignment team, warned that the AI-detection technology is “not foolproof,” is “imperfect,” and “shouldn’t solely be relied upon when making decisions.” The tool also requires a minimum amount of text to run, and the longer the piece, the more accurate the prediction.
As this AI-detection tool continues to develop and other services (like Turnitin) build their own AI detection, some experts believe that detection is a losing battle. Ahmed Elbanna, Associate Professor of Civil & Environmental Engineering, predicts that it’s “definitely a losing game to rely on just detection. We probably need to instead evolve our response to how we are going to leverage the tool.”
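Under the hood, a classifier like this can be thought of as mapping a model’s score (its estimated probability that a text is AI-written) onto the five-step scale described above. The following is a minimal sketch, assuming purely illustrative thresholds that are not OpenAI’s actual cutoffs:

```python
def label_ai_likelihood(score):
    """Map a model's estimated probability that a text is AI-written
    (0.0 to 1.0) onto the classifier's five-step scale.
    The threshold values below are illustrative assumptions only."""
    if score < 0.10:
        return "very unlikely"
    elif score < 0.45:
        return "unlikely"
    elif score < 0.90:
        return "unclear"
    elif score < 0.98:
        return "possibly"
    return "likely"

print(label_ai_likelihood(0.05))  # very unlikely
print(label_ai_likelihood(0.99))  # likely
```

Note that in this sketch the “unclear” band is deliberately wide, echoing Leike’s warning that such tools are imperfect and should not be the sole basis for a decision.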
- Through curriculum adjustments. One way to keep pace with the evolution of AI is to evolve our curriculum. Sara Shrader, Director of Online Learning at the College of Applied Health Sciences, explains, “someone will always cheat or find a way to cheat. Banning ChatGPT or trying to detect AI-written text is not going to stop people from cheating. Instead, we have to adjust our curriculum to help the plagiarism problem.” Shrader says this is a call for instructors to adopt a more holistic, authentic learning model in the classroom, and to rethink how academics, particularly assessment, happens. “Working in College of Applied Health Sciences, the idea of ‘applying’ knowledge is in the name. My colleagues and I are advocating for more project-based learning (applied learning) and authentic assessments. In other words, we are pushing for creative assessments that can’t be easily completed by a bot.”
In Student Life, an independent newspaper at Washington University, Ed Fournier, director of Washington University’s Center for Teaching and Learning, suggests a similar approach: changing curriculum so that the bot cannot replace student work and, with it, student learning. When he asked ChatGPT to solve “complex, creative problems,” it did not perform as well as when he asked it to recall pieces of information. “I tried to think of more creative questions that asked for more analysis and comparison. It cranks out text, but the answers were not nearly as convincing and would not have been full point answers.”
Tawnya Means, Assistant Dean for Educational Innovation & Chief Learning Officer at the Gies College of Business, shares another approach to adjusting curriculum and assessment. “We can now focus more on the process instead of just the product,” she explains, “for example, producing a series of documents as an assignment instead of just one document. I can engineer a prompt into ChatGPT that can solve whatever question I’m providing as an assessment, then produce multiple drafts, and then the final product. As an instructor, I could look at it from beginning to end and give feedback from the process of learning instead of just the end product.”
For more practical tips on adjusting curriculum, CITL also explains how to “Re-think current writing-based assignments” and “Use alternative assignments/assessments.”
While plagiarism tops the list of concerns for many who are skeptical of AI, and even for many who love it, it is far from the only one.
H. Chad Lane, an associate professor of educational psychology in the College of Education, discusses privacy and ChatGPT in a College of Education article, stating that “it’s a benefit/cost question we have to ask. There’s always the risk that someone with access misuses your data because it’s sitting on a server somewhere. But in a lot of cases, we have some pretty robust mechanisms for privacy and safety. For example, de-identifying data, using IDs rather than names.”
Another common concern is that people will rely too heavily on AI bots for information, which could put more misinformation into circulation. ChatGPT learns how to write and respond from what it has read, but that does not mean it can fully discern what is true and what is not. In an age of widespread misinformation and human bias, even amidst truths, a bot that learns from whatever published data is available is a potential nightmare. In the same article, Lane explains: “It’s critical to teach kids that AI systems are driven by human data. Anything ChatGPT tells you is derived from a knowledge base from human content. So, it could be wrong. As long as kids realize what it tells them is not 100 percent accurate, it’s not an oracle, that’s good. Teach healthy skepticism. That’s a good thing in general.” To combat the spread of misinformation, Lane says, it is critical to teach people to use ChatGPT as a tool, not to see it as the answer.
CBS News has also highlighted the danger of ChatGPT as a source of information. In a news report, Timnit Gebru, an AI researcher specializing in the ethics of AI, warns that we should be terrified of ChatGPT if we are going to trust every word it says as true. “Experts worry that people will use ChatGPT to flood social media with phony articles that sound professional, or bury congress with grassroots letters that sound authentic.” The concern about ChatGPT spreading both deliberate and inadvertent misinformation arises when ChatGPT becomes the source of answers instead of a tool to elevate human productivity.
How are people in higher education currently using (or planning to use) ChatGPT?
Many experts and educators suggest using ChatGPT as a tool instead of an end. What are some examples of ways we can use ChatGPT in higher education as a tool to enhance teaching and learning instead of replacing it?
Antonio Hamilton, Graduate Teaching Assistant at the Center for Writing Studies, says that as the app continues to develop, it can be an interesting tool for English as a Second Language (ESL) students. Especially in the moments they aren’t with a teacher, at home or during late nights, it can be a great help with translation. Hamilton explains that for ESL students and English-language novices, ChatGPT’s main benefit is speed. It can help them understand writing assignments and allow them to write in their own languages before having ChatGPT or other automated systems translate their writing into English. These programs might create a smoother transition into English writing environments and norms.
Ahmed Elbanna, Associate Professor in Civil & Environmental Engineering, says that people could learn to prompt the chatbot to write code. After the bot generates code, instructors can ask students to correct it, change it, or use it as part of a bigger assignment. Elbanna also suggests using ChatGPT as a brainstorming partner for a variety of assignment types. “You can seek suggestions or ideas. Although it is limited by the data it knows, and though it will give you better suggestions about topics where it has access to more data, it is still powerful. It will build hierarchical layers of knowledge as you converse with it.”
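A hypothetical version of the exercise Elbanna describes might look like this: the instructor shares a bot-generated function with a subtle flaw and asks students to find and fix it. The function and its bug below are invented for illustration, not drawn from any actual assignment:

```python
# A "bot-generated" averaging function. As originally generated, it
# had no guard clause, so it crashed with ZeroDivisionError on an
# empty list. Students are asked to spot and fix that edge case.
def average(values):
    if not values:  # the students' fix: handle empty input explicitly
        return 0.0
    return sum(values) / len(values)

print(average([80, 90, 100]))  # 90.0
print(average([]))             # 0.0
```

Exercises like this flip the dynamic: instead of the bot doing the student’s work, the student critiques and improves the bot’s work.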
Mike Tissenbaum, Assistant Professor in Curriculum & Instruction, notes that AI is very good at quickly processing patterns in both qualitative and quantitative data, which can help in a variety of settings: in the classroom, in research, and beyond. He also suggests that AI can be a partner in teaching itself. He says, “[AI] can also be used to make suggestions for grouping students for optimal collaborations and discussions based on their understanding of the content. And it can provide students the materials or prompts to scaffold discussions.”
Tawnya Means agrees, explaining that ChatGPT “can be a partner in teaching. If you don’t have time to, for example, write ‘smarter’ learning objectives, or develop rubrics with detailed feedback for students, instructors can have the bot do it for them, then focus their work on interacting with students, giving personal feedback to the students—bringing value as a human in that way.”
The initial impact of ChatGPT has been massive, yet some are already predicting greater implications for the future.
While testing out a couple of different AI generators, Tawnya Means found she was eventually able to combine the tools she was using to create an animated image that read a script from a different text generator. “With AI, you can create something that doesn’t exist right now,” she explains. “So it’s important to think about what AI can create—not only what kind of essays, but what else it can develop that doesn’t even exist yet but may become a step in a new direction.”
Ted Underwood, Professor of English and Associate Dean for Academic Affairs at the School of Information Sciences, also believes that it’s important to think 10 years into the future. “This will be pretty disruptive. Ten years from now, these students will be getting jobs in places where ChatGPT and other AI generators are stronger than they are right now. We need to prepare our students for jobs in the next decade. A lot of our courses currently are training them to recombine existing ideas and textual evidence. But that kind of recombination is easy to automate in the future. So we should now teach them more than that if we really want to be preparing them for the future. We should be thinking creatively about what students will be doing then, and then prepare them to live in a world where this is everywhere.”