
Teaching Matters Newsletter September 2023: Five conundrums from the ChatGPT series


In March-May 2023, Teaching Matters responded to the Artificial Intelligence (AI) furore with a theme called ‘Moving forward with Chat GPT’↗️. This series of eight blog posts aimed to start a wider conversation about how to move forward with ChatGPT in Higher Education ethically, positively, and critically. Our site statistics reflected strong engagement with the series: four of the eight posts were among the 10 most-read posts of the last three months.

Image Source: Mdisk, Adobe Stock

With their introductory post↗️, Jenny Scoles, Josephine Foucher and Tina Harrison invited us to join the conversations and debates that AI technology has stirred up in the world of university learning and teaching. Their post reflected on the existential challenge generative AI poses to the very process of learning and to academic culture. While the posts in the series offered varied perspectives, it is important to highlight the consensus that there is no point in trying to restrict or ban the use of generative AI. Rather, the advent of powerful generative AI technology should be grasped as an opportunity to boost AI literacy in the curriculum and to give students and staff a nuanced understanding of these technologies, so that they are equipped to use them effectively both within the university and beyond.

In this newsletter, you will find five conundrums identified through the ‘Moving forward with Chat GPT’↗️ series:

  • Rethinking our academic culture,
  • Grappling with the difference between AI and human intelligences,
  • Disrupting our teaching and assessment practices,
  • Honouring transparency and honesty,
  • Embedding ethics in and beyond the classroom.

These are followed by our regular features: Collegiate Commentary, In Case You Missed It (ICYMI), and Coming Soon at Teaching Matters! If you'd like to keep up with Teaching Matters, make sure to sign up to our Newsletter Mailing List↗️.

Five Conundrums from the ChatGPT series

Conundrum 1: Rethinking our academic culture

Image by Gerd Altmann, Pixabay↗️, CC0

The first conundrum posed by generative AI is that it forces us to rethink our academic culture. For instance, Dr Donna Murray argued in her post, ‘Are we assuming all students will cheat?’↗️, that it would be more productive to shift our thinking from assuming students will cheat towards equipping them with the tools to become active members of their academic community. When it comes to citing academic work, for example, she invites us to educate our students about the underlying reason for the practice, as an act of mutual respect for each other’s intellectual property, rather than focusing on negative injunctions such as ‘Don’t plagiarise’. She also urges us to look more deeply into the root causes of academic misconduct, such as the recent pandemic, which has left many students insecure about their learning. Donna summarises:

“I don’t think all students would cheat, or even that students would view cheating as less serious than we might have when we were students. However, it is important to acknowledge the huge pressures students face now and think about ways to release some of this pressure”.

In a similar vein, Tracey Madden, in her post ‘Whose essay is it anyway?’↗️, encourages us to reflect on what makes edtech tools like ChatGPT so appealing, arguing that the circulation of such tools says a lot about the pressure students are under to write efficiently and rapidly. Tracey interrogates why and how writing has been framed as a product rather than honoured as a process, and urges the academic community to reflect on the influence of edtech marketing on students. She adds:

“If you are a confident writer and generate text on a topic in which you are an expert, you have what you need to critique this. This is not equivalent to students generating text on a topic in which they are not expert. And if we are not writing in our first language, the confidence to reject suggestions from a writing app goes down. Where is the impartial advice?"

This idea aligns with Dr Vassilis Galanos’ post, ‘ChatGPTeaching or ChatGPCheating? Arguments from a semester with large language models in class (Part 1)’↗️, which highlights our turbocapitalist culture and the frantic ‘questing for excellence’ that drives students and academics to resort to shortcut-enabling tools like generative AI. Vassilis challenges the academic community to rethink our assessment culture. He asserts:

“It is telling of an academic culture that demands from students to write more essays, blogposts, and other written exams (read: “produce more text”) in shorter time intervals, and educators, teaming up with mighty digital plagiarism detectors, to assess them at ever-growing speed - at least, if they want to squeeze some time to 'produce more text' for their academic development (journal articles, research reports, other administrative documents)…I would challenge students and educators altogether to compose essays and design learning objectives and forms of assessment that demonstrate sufficient inventiveness that cannot be mimicked by machines”.

Conundrum 2: Grappling with the difference between AI and human intelligences

Image by 51581, Pixabay, CC0

In her blog post↗️, Master’s student Irene Xi offers a crucial reminder of the distinction between artificial and human intelligence. AI functions generatively: it produces a succession of guesses based on the absorption of large data sets, much as experiential learning works, rather than on an understanding of knowledge. Conversely, she argues, human intelligence is the ability to put knowledge to use:

“A person can generalise conceptual information to address issues in unanticipated areas after they have fully grasped it. It should be admitted that only humans have the ability to learn, understand, and then use newly acquired knowledge by fusing it with some skills.”

The difference between human and artificial intelligence becomes apparent in the quality of output each generates. Several blog posts in this series remind us that these technologies are only as powerful as the content and data fed into them. This reflects the ‘Garbage in, garbage out’ tech adage (flawed input leads to flawed output: the technologies’ responses are only as good as the questions or content we give them), as explained by Shelagh Green in her post ‘Can ChatGPT get you a job?: Opportunities and challenges using AI in recruitment’↗️.

She informs and cautions student users about the potential benefits and common pitfalls of using AI tools in job searches. Shelagh explains:

“Speaking to one recruiter in the tech sector recently who include significant text based assessment in their recruitment, they saw ChatGPT being used by applicants as soon as it was launched. Tell-tale signs included factual inaccuracies, an absence of context in answers and the consequence of prompt engineering, where the response is only as good as the instruction or request made”.

This highlights the current limitations of generative AI software, nudging us towards redesigning assignment prompts and courses accordingly. Vassilis shared in his blog post↗️ that some students found it so time-consuming to correct the erroneous answers ChatGPT gave when writing their essays that they opted out of using the technology altogether. It all crystallises into the distinction between how human intelligence learns to appraise and use artificial intelligence, and vice versa.

Conundrum 3: Disrupting our teaching and assessment practices

Image by fotogestoeber, Adobe Stock

A third conundrum relates to ChatGPT’s disruptive nature: its existence and circulation make us reckon with our teaching and assessment practices. How do we foster creativity and redesign assessments in a way that activates critical thinking? Vassilis makes a compelling case in the blog post ‘ChatGPTeaching or ChatGPCheating? Arguments from a semester with large language models in class (Part 1)’↗️ for human error as the pillar of a creative spirit. Vassilis shares:

“Before ChatGPT’s release, I have marked many student essays (and have been paid to produce many reports) that could have passed the test for being produced by a machine if inspected in light of LLMs.”

“Generative AI reminds us that errors, as defined by learning expectations, are signs of creativity; while creativity often occurs by errors…. To be statistically predictable and replicable is to be average. To get an “average” score in Edinburgh’s social science (65/100) is to write statistically probable essays.”

More positively, ChatGPT’s rise could rearrange how we organise our time in class, freeing up space to focus on more interesting tasks. Prof Adam Stokes shares a concrete illustration in the blog post, “Generative Artificial Intelligence – ban or embrace?”↗️:

“In one specific course that I taught this year, we embraced ChatGPT and I would ask the tutorial questions directly to ChatGPT live in-front of the class….it was able to perform much of the work that I would have normally done as a teaching academic, leaving me either to act simply as a text-to-speech module, or indeed to be able to add more value to the class than simply solving equations. For the rest of the lecture course, I changed my tutorial style to add value over and above that which was afforded by ChatGPT. I was able to spend time drawing diagrams, discussing papers from the literature, hosting whole-class discussions etc… The use of the AI tool enabled a higher-quality teaching experience for the face-to-face component of the class”.

In that same post↗️, Prof Tim Drysdale argues that generative technologies disrupt fairness in assessment practices, since plagiarism detection tools are unreliable at identifying AI-generated text.

"[AI]…does not make for an even playing field amongst students, because there would be clear disadvantages on offer for those choosing to act with integrity..."

Similarly, Tracey↗️ questions the ethics of educators using generative technology to design assessments while expecting students not to use the same tools, a stance she sees as contradictory and counterproductive. She argues:

"but if you wrote my assessment question with AI, why can’t my AI write the answer (and why don’t we get another AI to mark it)? If we want to talk about ethics, perhaps we need to focus on the humans behind this".

Jenny, Josephine and Tina, in their introductory post↗️, discuss how the software’s use will soon become quotidian, and how this must push educators to come up with innovative assessment methods beyond the traditional essay, or even to rethink assessment altogether.

Conundrum 4: Honouring transparency and honesty

Image by blackdiamond67, Adobe Stock

As a fourth conundrum, we highlight the importance of honesty and transparency around AI usage, framed as part of the safe and effective use of AI tools rather than as a move towards blanket restriction.

Further, in her post, ‘Can ChatGPT get you a job?: Opportunities and challenges using AI in recruitment’↗️, Shelagh Green, Careers and Employability Manager, argues that the growing use of generative edtech in job applications makes it more important than ever for students to be transparent and honest about using ChatGPT’s language-processing capabilities during recruitment processes.

On a different register, Vassilis, in the blog post ‘ChatGPTeaching or ChatGPCheating? Arguments from a semester with large language models in class (Part 2)’↗️, offers a creative way for students to be transparent about using ChatGPT in their essays: he invites them to incorporate ‘model cards’ that highlight and reflect on the technology’s benefits and limitations:

“While I was thinking over the Winter break how to incorporate-yet-deconstruct ChatGPT in class, I thought to ask students to use it actively as a helper, but also write a short “model card” in which they assess the algorithmic output in terms of originality of content, biases, and quality of references. I conducted my own experiments in advance, using some of my course’s discussion questions as prompts, to explore multiple glitches”.

Good practice examples such as this emphasise not only the importance of honesty and transparency around AI usage, but also the need to create a supportive environment in which students can learn and demonstrate their learning.

Conundrum 5: Embedding ethics in and beyond the classroom

Image by magele-picture, Adobe Stock

A final and crucial conundrum raised in the series relates to the question of how we use generative AI technologies ethically. Several of the blog posts in the series reference a shocking TIME investigation by Billy Perrigo↗️, which revealed that the painstaking work of labelling toxic content was outsourced to workers in Kenya, who were severely underpaid for highly traumatic work.

In response, in her post ‘Whose essay is it anyway?’↗️, Tracey compels us to think about our part in enabling such exploitative practices, which underpin these technologies’ success:

“Have we considered to what extent are we training this technology from interacting with it? Is it ‘free’ because we are providing the labour? What are we creating?”
“Apps like these could be seen as an attempt to increase edtech use by marketing directly to students, avoiding the oversight there would be for edtech provided by the institution. Recognising that education is being increasingly privatised, this is a way that students could be used as the means to further this. Is this what we want? Resistance is not futile.”

To finish on the ethics of using such technology, Tim Drysdale and Adam Stokes conclude with the observation that students tend to most enjoy courses that prepare them for the world of work, so it is the responsibility of teachers to weave generative AI into the very fabric of the course and make its ethical use part of the academic grammar.

These conversations resonate strongly with the five principles of AI use recently published by the Russell Group of universities: New principles on use of AI in education↗️. These principles help shape institution- and course-level work to support the ethical and responsible use of generative AI, focusing on AI literacy, student support, ethical use of AI, equal access, academic rigour, integrity, and collaboration.

This series↗️ was a first attempt to kick-start an ongoing conversation about the future of our teaching and learning practices in the context of a rapidly changing AI landscape that, with the advent of ChatGPT, challenges the core principles of knowledge making and production. We hope to continue this conversation with a follow-up series as the landscape continues to evolve at an accelerating pace. In the meantime, stay tuned for Teaching Matters’ podcast on ChatGPT, which will feature discussions between students and academics about the history of generative technologies, the ethics around their use, and how to rethink our academic culture in a way that incorporates them critically.

Resources on Artificial Intelligence

A blog post by Cate Denial: "ChatGPT and all that follows": https://catherinedenial.org/blog/uncategorized/chatgpt-and-all-that-follows/↗️

A journal article: "Comparing Student and Generative Artificial Intelligence Chatbot Responses to Organic Chemistry Writing-to-Learn Assignments" by Watts et al., 2023.

And credit to Mary Jacob (Aberystwyth University) for gathering these resources.

Collegiate Commentary



with Sue Beckingham, Associate Professor (Learning and Teaching) at Sheffield Hallam University

While Teaching Matters primarily showcases University of Edinburgh teaching and learning practice, our core values of collegiality and support also extend beyond our institution, inviting a wider, international community to engage in Teaching Matters. In this feature, we ask colleagues from beyond the University to provide a short commentary on ‘Five things↗️...’, and share their own learning and teaching resource or output, which we can learn from.

Sue's Commentary on "Five conundrums from the ChatGPT series"

The open launch of ChatGPT in November 2022 has without doubt opened a floodgate of questions about its use. It quickly emerged that this was just one of many generative AI (GenAI) tools: Microsoft and Google have competed to release their own versions, alongside a swathe of others offering tools not only to generate text but also to create images, presentations, and videos, and to debug code. Suffice it to say that when we talk about generative technology, ChatGPT will be just one of a number of tools available in our digital toolbox.

Many concerns have been raised, ranging from academic integrity, copyright, misinformation, disinformation, and bias, through the unethical labour practices involved in training these large language models to refuse inappropriate requests, to the environmental impact of developing and using the tools. On the flip side, many have been waxing lyrical about the potential of these technologies for productivity (saving time) and performance (improving quality).

Whilst these debates are illuminating, and indeed helpful, it is evident that many educators feel completely overwhelmed. Having facilitated several webinars in the last year, I have seen that we come to this with very different experiences: polls conducted during these sessions indicate a 50:50 split, with half of participants having never used GenAI. We will all benefit, staff and students alike, from engaging in ongoing CPD on what GenAI is, how it can be used, and how it shouldn’t be used in the context of learning and teaching.

We need to engage in conversations with our students about appropriate and ethical use of GenAI and must not make assumptions that all students will know how it works and what the shortfalls are.

So if we are to use these technologies and prepare staff and students to do so safely, the five conundrums identified here are an excellent starting point. Here are some of my thoughts to add to the discussion.

1. Rethinking our academic culture

We need to reconsider what we are assessing and why. There is potential to bring GenAI into formative assessment to scaffold assessment for learning, with students documenting and reflecting on the process and developing valuable skills along the way.

Whilst we know that every new technology, from the printing press to modern-day digital technology, has the potential to be a disruptor, it is important to acknowledge that GenAI output needs to be fact-checked. Nature (24 January 2023↗️) made it clear that LLM tools should not be cited or attributed as an author. Furthermore, whilst attempts have been made to put guard rails in place, these should not be assumed to be secure, and inappropriate data may be presented. Sam Altman, CEO of OpenAI, the company that created ChatGPT, stated in an interview that although the new version (GPT-4) was “not perfect”, it had “scored 90% in the US on the bar exams, and a near-perfect score on the high school SAT math test. It could also write computer code in most programming languages” (Guardian, 17 March 2023↗️). Conversely, he also highlighted concerns, including the perpetuation of disinformation.

Supporting our students to develop critical fact-checking skills is vital, so that they can learn to identify misinformation (inaccurate information) and disinformation (information deliberately intended to cause malicious damage).

2. Grappling with the difference between AI and human intelligences

We need to look at and discuss with our students the ethical, legal, and social implications of using GenAI. Privacy, GDPR compliance↗️, and copyright infringement are all concerns. Claude↗️, for example, is described as an AI assistant: it allows you to upload PDFs and ask it to summarise them. Consideration needs to be given to who holds copyright ownership of such a document (see Copyright policies of academic publishers↗️). The environmental impact is another area to discuss, and one that needs human solutions.

3. Disrupting our teaching and assessment practices

Universities provide their students with Microsoft Office. If we are to use other technology, we need to be sure there is equitable access. If you are considering introducing GenAI tools in learning and teaching, it is vital that students are directed to tools that are free to access. Whilst Microsoft 365 Copilot promises enticing enhancements, how many will be able to afford to commit to $30 per user, per month↗️?

4. Honouring transparency and honesty

We need to be transparent about how we are using these tools. As Tracey Madden points out in her post↗️, if academics are going to use generative technologies (which they are), then why wouldn’t we teach our students to use such tools to enhance their productivity and performance? Whilst academics might use these tools to draft or enhance learning outcomes, assessment briefs and criteria, and class activity outlines, what uses might we consider that would be beneficial for students?

5. Embedding ethics in and beyond the classroom

The conversation around ethics needs to take place in the classroom, with ground rules established, discussed, and agreed. How might these be shared and developed further for the benefit of all? The University of Sydney has worked with students as partners to develop a useful resource Supporting students to use AI responsibly and productively↗️.

At my own university, we have updated the Academic Conduct regulations and provided new guidance for our students. Given the fast pace of developments in this area, I am sure we will continue to update guidance. The regulations now explicitly refer to artificial intelligence:

Contract cheating/concerns over authorship: This form of misconduct involves another person (or artificial intelligence) creating the assignment which you then submit as your own. Examples of this sort of misconduct include: buying an assignment from an ‘essay mill’/professional writer; submitting an assignment which you have downloaded from a filesharing site; acquiring an essay from another student or family member and submitting it as your own; attempting to pass off work created by artificial intelligence as your own. These activities show a clear intention to deceive the marker and are treated as misconduct.

New guidance provides examples of how generative artificial intelligence might be used. For example:

  • Answering questions where answers are based on material which can be found on the internet.
  • Drafting ideas and planning or structuring written materials.
  • Generating ideas for graphics, images, and visuals.
  • Reviewing and critically analysing written materials to assess their validity.
  • Helping to improve your grammar and writing structure – especially helpful if English is a second language.
  • Experimenting with different writing styles.
  • Getting explanations.
  • Debugging code.
  • Getting over writer’s block.

However, the guidance also highlights the limitations and drawbacks of using AI. Whilst these tools are easy to use, it is important to remember that they can provide misleading or incorrect information.

They can also offer shortcuts that reduce the need for critical engagement, a key to deep and meaningful learning; students need to be aware of the difference between reasonable use of such tools and the point at which their use might be regarded as a way of avoiding necessary thinking.

The guidance also emphasises that artificial and human intelligence are not the same. AI tools do not understand anything that they produce, nor do they understand what the words they produce mean when applied to the real world.

To support this, we are also working on a new online academic integrity mini module that can be embedded within a course or signposted as a self-directed activity. This video aimed at students talks about ChatGPT and academic integrity↗️.


About Sue: Sue Beckingham is an Associate Professor (Learning and Teaching), a National Teaching Fellow, Principal Lecturer in Digital Analytics and Technologies, and a Learning and Teaching Portfolio Lead at Sheffield Hallam University. She is also a Certified Management and Business Educator, a Senior Fellow of the Higher Education Academy, a Fellow of the Staff and Educational Development Association, and a Visiting Fellow at Edge Hill University. Her research interests include social media for learning, digital identity, groupwork, and the use of technology to enhance learning and teaching; she has published and presented this work nationally and internationally as an invited keynote speaker. She is a co-founder of the international #LTHEchat 'Learning and Teaching in Higher Education Twitter Chat↗️' and the Social Media for Learning in HE Conference @SocMedHE.

In case you missed it (ICYMI)

In June-July 2023, Teaching Matters featured a Hot Topic series, Showcasing the Edinburgh Futures Institute↗️. Check out the most popular blog post from this series: Welcome June-July’s Hot Topic series: Showcasing the Edinburgh Futures Institute (EFI)↗️ by Mike Bruce, Education Development Manager in EFI.

In July-August 2023, Teaching Matters ran a series on the Learning & Teaching Conference 2023↗️, showcasing reflections and experiences from the presenters and participants who attended the conference. You can access the keynote talks and presentations on the 2023 Conference Website.

Currently, Teaching Matters is featuring two series: Student Partnership Agreement↗️ (Hot Topic) and Embedding enterprise in the curriculum↗️ (Learning and Teaching Enhancement Theme).


Do check out the latest Institute for Academic Development Practical Strategies workshops, including 'Getting started in Course Design' and 'Helping your students learn'. Book a place on the website: Practical strategies series.

Coming soon at Teaching Matters

🎨 Join Our Student Illustration Competition! Unleash Your Creative Spirit 🖌️

Are you a budding artist with a passion for illustration? Do you dream of showcasing your talent to the world? Look no further! We invite all students to participate in our exciting Student Illustration Competition and let your imagination run wild.

Teaching Matters, The University of Edinburgh

Upcoming blog and podcast themes

Up next, Teaching Matters is set to feature two new series: 10 years of MOOCs and Student-staff co-design in building an undergraduate course. Stay tuned to Teaching Matters↗️ for extra posts and podcasts on widening participation, AI, and well-being. More to follow!

Please get in touch if you would like to contribute to one of these blog series or podcasts: teachingmatters@ed.ac.uk↗️.

Recent podcast series:

Podcast in Education, a conversation with Emily O'Reilly & Andrew Strankman: Episode 1↗️ and Episode 2↗️.

Want to keep in touch?

Sign up for our email mailing list!

If you would like to contribute to Teaching Matters, we'd love to hear from you: teachingmatters@ed.ac.uk↗️