(Revised: 8/4/23)
Introduction
In late November of 2022, OpenAI made ChatGPT, a chatbot built on its large language model (LLM) GPT-3.5, widely available to the public for the first time. Within two months it had become the fastest-growing consumer application in history, with an estimated 100 million users. Since that release, numerous advancements have been made, dozens of additional products have been released, and many more people are using artificial intelligence (AI) for an increasing number of tasks. While worries about plagiarism drove much of the early conversation in academic settings, it has become clear that these tools will influence our work well beyond those initial concerns. AI will impact teaching and learning in profound ways.
Purpose of the document
This document is being shared to (a) inform us about AI, (b) identify the opportunities and challenges it presents, and (c) situate the technology within our work as educators and researchers. The intention is clear communication and collaboration as we begin to learn how AI will impact our work. The goal is to educate and inspire discussion.
A significant challenge for us will be resistance to change. While most people have now heard of ChatGPT and LLMs, aside from early adopters, few have tried or regularly make use of these tools. It is important to recognize the need to learn about these systems, their potential impact on the fields our students are entering, and their impact in our own disciplines. The rapidly evolving landscape of AI warrants our thoughtful exploration now. Passive observation risks leaving us outpaced and ill-prepared to leverage these powerful tools for teaching and learning.
The DLSI will make every attempt to provide up-to-date information. However, this is a rapidly changing landscape. To remain current, bookmark and refer often to the DLSI’s collection: AI, LLMs, and Higher Education. It is also a good idea to develop your own resources centered on your discipline and area of expertise. Preserve a link to this document, as it will also be updated as we learn more. (Note: Be aware that many papers in the “AI, LLMs, and Higher Education” collection are hosted on arXiv, an open-access preprint server. Many of these are working papers and not yet peer-reviewed. Proceed with caution.)
What is AI?
Artificial intelligence (AI) is a specialization within computer science that focuses on creating systems capable of performing tasks that traditionally require human intelligence. These tasks include learning from experience, understanding natural language, recognizing patterns, and decision-making.
Large Language Models (LLMs) such as ChatGPT, Bard, and Claude are types of AI that have been trained on extensive text data. They generate human-like text by predicting the next word in a sequence based on the previous words. This makes them capable of answering questions, writing essays, summarizing texts, and translating languages. However, it is important to note that they can occasionally “hallucinate” (produce outputs that seem plausible but may not be accurate or factual). Therefore, their output must be consistently checked for accuracy.
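To make this concrete, the brief sketch below shows how one might ask an LLM for a summary programmatically rather than through the chat interface. It is only an illustration: it assumes the openai Python package as it existed in mid-2023, a placeholder API key, and the gpt-3.5-turbo model, and the same caution about verifying output applies.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; keep real keys out of shared documents

passage = (
    "Large language models generate text by predicting the next word in a "
    "sequence, based on patterns learned from very large collections of text."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind the free tier of ChatGPT
    messages=[{"role": "user", "content": "Summarize in one sentence: " + passage}],
)

# The reply is plausible-sounding text, not verified fact, so check it for accuracy.
print(response.choices[0].message["content"])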
AI is not limited to textual tasks. Tools like DALL-E and Midjourney use AI to create images that can rival actual photographs. Furthermore, AI can generate realistic speech and videos, leading to the creation of “deep fakes” (deceptively convincing manipulations of audiovisual data). Given the prevalence of such technology, it’s essential to verify the authenticity of images, audio, or video encountered on the internet.
While these tools offer exciting possibilities for enhancing and automating many aspects of our work in higher education, they also present significant challenges. For faculty, these challenges include (a) understanding how to effectively integrate these tools into the classroom, (b) addressing potential ethical, privacy, and validity concerns, and (c) navigating the changes AI will bring to traditional teaching roles and methods. It’s important to approach these technologies with both an open mind to their potential and a critical eye to their implications. We will have to change many of our practices.
How do these models work?
You are probably already interacting with AI without even realizing it. When you ask Siri or Google Assistant a question, that’s AI. When Netflix recommends a show you might like, that’s AI. When your bank calls to let you know someone may have used your credit card without your permission, AI figured that out, and probably even placed the call to you. The spam filter in your email uses AI to identify and remove unwanted messages. These are just a few examples of how AI has become integrated into our daily lives.
LLMs work by analyzing the context of a given input (like a statement or a question) and predicting the most likely subsequent word or phrase. They make these predictions based on patterns observed in the vast quantities of text data they’ve been trained on. They are not conventional software programs, but rather sophisticated text prediction algorithms. When connected to the internet, LLMs can access external information and databases to enhance their outputs, making them significantly more robust.
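As a toy illustration of this idea (and only an illustration: real models use neural networks trained on billions of words, not simple counts), the short Python sketch below “predicts” a next word by tallying which words follow which in a tiny sample of text.

from collections import Counter, defaultdict

# A tiny sample of text; real models learn from billions of words.
sample_text = "the student writes the essay and then the student revises the draft"
words = sample_text.split()

# Count which words tend to follow which.
following = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often observed after `word` in the sample."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))     # -> "student", the most common follower in this sample
print(predict_next("writes"))  # -> "the"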
There are no user manuals, and their capabilities are both impressive and largely unknown, raising serious ethical and, indeed, existential questions. Be that as it may, it is also clear that the rapid deployment and adoption of these tools means not only that LLMs are here to stay, but that it is now impossible to identify where AI-generated output begins and ends.
Understanding the LLM Landscape
Soon after the release of ChatGPT, OpenAI updated the underlying model from GPT-3.5 to GPT-4, a significant improvement that led to more accurate responses and fewer hallucinations. Microsoft invested $10 billion in the technology and has since integrated it into its Bing search engine, connecting the LLM to the Internet. Further, the company plans to fully integrate this technology into its entire Office 365 suite (Microsoft 365 Copilot). Google followed suit with the release of Bard, based on its own LLM, PaLM 2 (also able to access the Internet), along with similar plans to integrate LLM/AI technology into all of its productivity tools (Docs, Sheets, Gmail, etc.).
OpenAI still offers free access to ChatGPT (GPT-3.5) and subscription access to GPT-4. Subscribers are currently gaining access to web-browsing capabilities (now in cooperation with Microsoft and using Bing) and numerous plugins that allow the model to interact with other web services and databases. In July, OpenAI made “Code Interpreter” available to Plus subscribers, giving the model a general-purpose toolbox for solving problems and creating yet another leap in the capabilities of this LLM.
Anthropic has recently begun making its LLM, Claude, publicly available.
Numerous additional tools built on these technologies are also appearing daily. These come in the form of web pages, plugins for existing LLMs, and smartphone apps. It is also important to note that some of these tools come and go, as companies may make them available for a short time for beta testing, then restrict access. Some tools discussed in this document may not be available when you look for them.
AI in higher education: Potential applications
In 2016, a Georgia Institute of Technology professor developed Jill Watson, an early question-answering chatbot, and deployed it in an online discussion forum for a graduate computer science course. In fact, empirical studies on AI in education go back to 1993. The recent leap in capabilities, capacity, and accessibility is now rapidly accelerating significant shifts in our educational spaces. Here are just a few potential applications:
Writing Assistance: AI can provide valuable assistance in the writing process. For instance, it can offer real-time grammar and style corrections, suggest improvements for clarity and conciseness, and even help with citation management. LLMs can suggest or sharpen thesis statements, generate outlines for essays or research papers based on a given topic, provide suggestions for argument development, or offer paraphrasing assistance to avoid plagiarism. This can be particularly helpful for students who are developing their academic writing skills or for whom English is a second language.
That said, the writing process itself is about to be disrupted in a significant way. Google and Microsoft are integrating LLMs into Docs and Word so that every blank page will begin with an option to get writing assistance from AI.
Coding: AI can write and troubleshoot computer code in many programming languages. One can create useful programs, even with no knowledge of coding, simply by telling the AI what you want to do, asking whether and how it can help, and then troubleshooting together (see the sketch following this list).
Research Assistance: AI can automate literature reviews and analyze data. It can also help find patterns and insights in large datasets that would be difficult for us to detect alone.
Content Creation: AI can assist in creating educational content tailored to the needs of individual students or instructors, making learning more engaging and effective. For example, it can generate quiz questions based on course material or create summaries of complex texts. It can create case studies, lesson plans, and suggest activities to engage learners. Here are some additional ways one might use AI in class. Further, creative prompting can lead to some very useful learning tools, as exemplified here.
Virtual Tutoring: Large language models can serve as virtual tutors, answering students’ questions and explaining concepts in varying levels of complexity. They can guide students through problem-solving processes. They are available 24/7, providing help whenever students need it.
Personalized and Adaptive Learning: AI can adapt to individual preferences and pace. It can identify areas where a student is struggling and provide additional resources or exercises, providing a personalized learning experience that can lead to better understanding and retention.
Translation and Language Learning: AI is very good at translation and can be prompted to explain grammar rules, tutor, and quiz language learners.
Automated Grading: AI may be able to automate some grading tasks, such as following rubrics, analyzing the quality of writing, and scoring multiple-choice exams. It can also provide instant feedback to students.
Predictive Analytics: AI can analyze student data to predict outcomes such as dropout rates and performance, helping institutions take proactive measures to improve student success.
Administrative Tasks: AI can automate various administrative tasks such as scheduling, emailing, and responding to frequent student inquiries.
Mental Health Support: There are numerous anecdotal examples of AI chatbots providing mental health support to people and this area is being studied more broadly (working paper).
Accessibility: AI can help make education more accessible for students with disabilities. For example, it can transcribe lectures for deaf or hard-of-hearing students, or read out text for visually-impaired students.
Career Counseling: AI can analyze a student’s interests, skills, coursework, and performance to suggest potential career paths and the steps needed to achieve them. It could also provide information about relevant internships, job openings, and graduate programs.
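To illustrate the coding scenario mentioned above, the sketch below shows the kind of small, working program an LLM will typically draft from a plain-English request. It is a hypothetical example, not a prescribed workflow: the request wording and the file-listing task are simply illustrations of what a non-programmer might ask for.

# Illustrative only: the kind of program an LLM might produce from the request
# "Write a Python script that lists the files in a folder and shows their sizes."
from pathlib import Path

def list_file_sizes(folder="."):
    """Print each file in `folder` along with its size in kilobytes."""
    for path in sorted(Path(folder).iterdir()):
        if path.is_file():
            size_kb = path.stat().st_size / 1024
            print(f"{path.name}: {size_kb:.1f} KB")

if __name__ == "__main__":
    list_file_sizes()  # reports on the current working directory

A user with no programming background could run this, paste any error message back into the chat, and ask the model to fix it; that is the “troubleshooting together” described above.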
Challenges and considerations in using AI
We are facing several imminent, interconnected challenges that will require investigation and collaboration.
Data Privacy Concerns: With the rapid acceleration of AI use, vast amounts of data are being collected and analyzed. As a result, data privacy should be a top priority. Infringements can have serious consequences, including legal penalties and loss of trust among students and faculty. Maintaining data privacy will become increasingly complex and important. While many of these AI tools offer privacy settings, given the potential risks and complexities surrounding data security, it’s currently recommended to avoid entering personally identifiable or sensitive information into these systems.
Ethical Considerations: As AI begins to play a larger role in higher education, ethical considerations, especially around bias and decision-making, become more critical. AI systems and use of AI should be fair and transparent, and ensure that no students are disadvantaged. The implications of AI decision-making on students’ lives, particularly in areas like admissions and grading, should be carefully considered. Several other ethical considerations should be discussed among faculty and with students.
Technical Expertise: There are no user manuals for these tools and their capabilities are shifting and developing rapidly. This makes our collective knowledge and experiences invaluable. We should actively collect and share our insights to foster our understanding of these tools’ use in the classroom and in our academic work. ChatGPT now allows sharing of prompts and chats. A developing collection of prompts will be archived here. You can forward prompts that you develop or find useful to the DLSI for possible inclusion.
Simultaneously, it is crucial for our institution to recognize the need for sustained investment in staff development and infrastructure. As AI technologies advance, our ability to leverage them effectively will depend on both our shared expertise and institutional support for ongoing technical skill development.
Steps you can take now
Do Nothing: You can do nothing and assume that this will work itself out or that it is not as significant a development as some believe. However, many of your students (whether they admit it or not) will be using AI to facilitate their work in your classes. This will be difficult to police, because tools that claim to detect AI don’t work (see below), and it is easy (and more effective) to get significant assistance from AI in a way that doesn’t trigger text detectors. With some students using AI and others not, an inequitable learning situation will arise in your classes. It will become difficult to assess whether or not all of your students are actually learning what you intend.
Educate Yourself: There are many ways to learn about AI. The DLSI’s Newsletter (beginning with Volume 2 Issue 1) and AI, LLMs, and Higher Education are good places to start, but there are also online courses and presentations that are easy to find. Penn/Wharton professor Dr. Ethan Mollick has been at the forefront of using AI in academia, as well as thoughtfully considering its implications. He is worth following on X, and he also publishes “One Useful Thing” regularly on Substack. Pay attention to both the opportunities and the ethical issues that these tools present. Prepare to discuss both with your colleagues and your students.
Use AI: It will take between 5 and 10 hours of hands-on use to begin to get a sense of these tools’ capabilities and behavior. Most people try a single prompt and are disappointed when the output is not what they had hoped for, because they make some common errors: these are not designed to be Google-like answer machines. Interacting with AI, like ChatGPT, is a bit like having a conversation. The more specific and clear your questions or prompts are, the better the AI can respond, but don’t stop at the first question! Engaging in a back-and-forth dialogue helps the AI understand the context better and allows you to explore the topic more deeply. One helpful analogy is to imagine that you were just given a personal intern, and it is time to onboard them. Additionally, use multiple tools and compare their output, and use them for as many of your daily tasks as possible to see where they fit. There is more on engaging with LLMs below.
Attend Workshops and Presentations: Look for DLSI sponsored workshops and/or ask the DLSI for a department or individual consultation. Engage with one another within your schools and departments. Look for online, often free, distance workshops.
Read Academic Literature: There are links to academic sources on AI, LLMs, and Higher Education, but it would be wise to begin reviewing the literature in your field to understand how colleagues are using AI. (Note: Be aware that many papers on the DLSI site are hosted on arXiv, an open-access preprint server. The URL is noted at the top. Many of these are working papers and not yet peer-reviewed. Proceed with caution.)
Address AI use in courses and programs
We need to learn how to work with AI and how to integrate it into our courses and programs, taking advantage of the teaching and learning opportunities it provides. It is not possible to simply state that the technology is off-limits. Your students will be using AI tools, whether they admit it to you or not. It will be necessary to address AI use as a course begins. This means adding a syllabus statement that addresses AI use and potential academic integrity violations, but also teaching your students about the tools (or learning together) so that everyone has equitable access.
The approach taken here seems to be a reasonable starting point.
Here is a sample syllabus statement: This semester you will learn about emerging AI tools and will be expected to make use of them to facilitate your work. I will expect you to make this use transparent by including your prompts, a description of how you used the output, and an acknowledgement of the support of AI. You are solely responsible for adhering to La Salle University’s academic integrity policy. Submitting work that is not your own is a violation of that policy. If you have questions about academic integrity and your use of AI, please ask me to clarify.
AI-generated-text detectors, such as Turnitin or GPTZero, are NOT accurate or effective (see also this) and have now been shown to be biased against non-native English speakers. They produce far too many false positives. With just a little bit of effort and by working interactively with AI, these detectors are easily defeated (and doing so actually produces better results for the user). It is virtually impossible to tell where AI begins and ends, especially given its imminent integration into our most widely used productivity tools. In addition, LLMs like ChatGPT are NOT able to identify whether text was created by AI. These systems cannot be used for detection.
This raises a significant issue for dissertations, theses, and capstone or culminating projects. AI must be considered in any program that has a culminating research/writing experience, and a plan for incorporating its use must be developed, as students will need guidance on how to incorporate and acknowledge AI use.
Style manuals, such as that of the American Psychological Association (APA), provide guidance on citing the use of ChatGPT; however, this advice misses several significant points and reflects a critical misunderstanding and misuse of the technology. First, the least helpful way to use LLMs is to copy and paste text they produce from a simple query. Such text is often of lower quality, not in the voice of the author, may contain hallucinations, and cannot be reproduced or returned to. Second, these style guides don’t require the inclusion of the version or type of LLM, and models and versions are quite different. Finally, they don’t cover the more useful scenarios of LLM use, such as (a) feedback on an outline, (b) style guidance, (c) ideas, (d) help simplifying or adding detail, (e) help with description, and (f) adding examples, among many other creative uses. Supporting the citation of LLMs in this way could make it appear as if the information provided is authoritative and able to be returned to at a later date.
Perhaps a more helpful strategy is to supply the prompts entered into an LLM, along with a brief discussion of how the resulting output was used. Including this information in the product would allow those prompts to be reproduced. One might also include a statement of acknowledgement, much like acknowledging the input of a colleague. In any case, it will be important to consider if, when, and how the use of LLMs is acknowledged in the work produced with their assistance.
Conclusion
The role of AI in our personal and professional lives will increase and impact us in both exciting and profoundly challenging ways. We are at a point where it is necessary not only to understand AI, but also to engage with it actively. Consider how you will incorporate AI into your work with students. Experiment with the tools available, join discussions and debates, and immerse yourself in the literature. Meeting this new challenge depends on our active participation and collaboration. Start with one tool from the list below, try it out, and share your experience.
Above all, this moment provides us with an opportunity, as Rosenzweig wisely suggests, to, in spite of AI, “rethink how we assess our students and how we define academic success in a system that seems poised to incentivize relying on machine-generated writing…” Donahoe further challenges us to ask several fundamental questions about our teaching practices, including, “What structural conditions would need to change in order for AI to empower, rather than threaten, teacher and learners?”
AI tools
This is far from an exhaustive list. New applications and websites are coming online daily. Many are free to try and use, and some provide more for a fee. Here I’ve highlighted the most significant and widely available tools. The list begins with the basic transformer models.
Please note: While ChatGPT, Bing, Bard, and Claude do similar things, they do have different “personalities” and abilities. It is instructive to use more than one system, comparing the results. While hallucinations (the tendency to generate outputs that may seem plausible but are not accurate or factual) are being reduced across the board, they still happen regularly, so it is critical that all output is verified.
Large Language Models
ChatGPT: Free access to ChatGPT 3.5, with a free iOS app. A Plus subscription provides access to GPT-4, which is significantly better at understanding prompts and provides more comprehensive output with fewer hallucinations. It can also access web browsing and plug-ins, and make use of “Code Interpreter.”
Bing Chat: Free to access. Connects OpenAI’s GPT technology to the Internet. Bing has three modes: creative, balanced, and precise; creative mode makes use of GPT-4. Microsoft plans to integrate this technology into its Microsoft 365 suite of tools.
Bard: Google’s chatbot, built on its PaLM 2 LLM, integrated with Google Search, and soon to be integrated into Gmail and the rest of the Google suite of tools.
Claude: Anthropic’s LLM. As of this writing, one can request access.
How to use the tools
Simply begin by typing a prompt or question into the input field. The more detailed the input, the more likely it is that you’ll receive a satisfactory response. If your prompt is not clear, the AI might not know exactly what you want. As you engage in a back-and-forth with the AI, you will want to clarify your point or ask the AI to clarify its response. Think of your first prompt as setting the scene. Build your next prompts on this foundation. Multi-step prompts also work. It is often useful to save chats so that you can return to them and build on the topic. If you need to change topics, begin a new chat.
For example, you might begin by prompting: “I’m planning an interactive class activity to help my students understand the impacts of climate change on global biodiversity. The class is composed of 30 students and we have a 90-minute class period. Can you suggest an engaging group activity that involves research, discussion, and presentation?” ChatGPT will offer a response. As you engage, guide the conversation from there. You might ask ChatGPT to adjust the complexity, the amount of time, the kind of activity, the level of difficulty, whatever makes sense for your situation. You may need to clarify or ask your question in a different way. Use follow-up questions to dig deeper or nudge the AI to be more creative.
Do not think of these tools as search engines or answer machines. The more you use them, the better you will understand how they behave and what they can and cannot do. This is a useful guide. As of this writing, ChatGPT 4 seems to be the best of the bunch at understanding nuance and detail in prompts. Its output tends to be more comprehensive, especially with more involved tasks. Bing, with access to the Internet, can be directed to look for existing information and work with documents and websites. Anthropic’s Claude is particularly useful when examining lengthier PDFs. Each model has its own strengths and weaknesses and requires experience to understand how best to employ it. However, it remains critical that all output is verified.
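For those comfortable with a little scripting, the sketch below shows the same back-and-forth, context-building pattern through OpenAI’s API rather than the chat window. It is a minimal sketch, assuming the openai Python package as of mid-2023, a placeholder API key, and the gpt-3.5-turbo model; the prompts themselves simply reuse the classroom example above.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# The opening prompt sets the scene, just as in the chat window.
messages = [{"role": "user", "content": (
    "I'm planning an interactive class activity on the impacts of climate change "
    "on global biodiversity. The class has 30 students and a 90-minute period. "
    "Can you suggest a group activity involving research, discussion, and presentation?")}]

reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
suggestion = reply.choices[0].message["content"]

# Keep the model's answer in the running conversation so follow-ups build on it.
messages.append({"role": "assistant", "content": suggestion})
messages.append({"role": "user", "content":
                 "Simplify this for first-year students and trim it to 60 minutes."})

refined = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(refined.choices[0].message["content"])

Because the full message history is sent with each request, the follow-up builds on everything said so far, which is why iterative refinement works better than one-shot queries.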
The video series produced by Wharton Interactive is a helpful introductory guide.
Image Generation
DALL-E: OpenAI’s image generation tool. It can be used as a stand-alone site and is also integrated into Bing; prompt Bing by asking it to create an image, along with a detailed description of what you want.
Midjourney: Currently the model most capable of creating photorealistic images.
Web Applications
ChatPDF: Upload a PDF, then use AI to summarize and/or query the document. Microsoft’s Bing can also do this: open a PDF in the browser window, then ask questions about the document. ChatGPT Plus has plug-ins that allow this functionality as well.
Connected Papers: Enter an academic paper. AI then finds connected and related papers and presents them in a visual overview.
Elicit: Uses AI to search academic literature based on a research question.
Semantic Scholar: AI-powered research tool for scientific literature.