Generative AI, ChatGPT, and the Impact on Postgraduate Work

This session, recorded on 21st April 2023, examines the impact of ChatGPT, and other developments in AI, on postgraduate education.

The panel discuss how generative AI, including ChatGPT, might challenge and enhance postgraduate work: in particular, writing as assessment, learning and digital capabilities in an increasingly digital world, and the ethical challenges that disruptive technologies bring.


Featuring:

Professor Philippe De Wilde (Professor of Artificial Intelligence, University of Kent)

Dr Elena Forasacco (Senior Teaching Fellow, Imperial College London)

Jorge Freire (Senior Learning Designer, Universidade Católica Portuguesa)

Sarah Hussain (PhD Candidate in Bionics and Applications of AI in Prosthetic Devices, Queen Mary University of London)

Chaired by:

Professor Janet De Wilde (Director of the Queen Mary Academy, Queen Mary University of London, and Vice Chair, UKCGE)

Large language models, new tools for graduate education

We are seeing an explosion of interest in ChatGPT and similar software. The excitement is palpable among our students. After years of being (rightly) constrained by anti-plagiarism rules, here is a tool that will help them with the pain and frustration of writing.

ChatGPT is a large language model (LLM), a huge piece of software trained on a large corpus of text. There are three main players in LLMs. Google has produced BERT, LaMDA and Bard. OpenAI, in partnership with Microsoft, has produced GPT-2, 3 and 4, ChatGPT, Bing Chat and DALL-E (for images). The third focus of activity is in China, where LLMs such as Wu Dao (悟道) and Ernie Bot are developed by Baidu and the Beijing Academy of Artificial Intelligence, amongst others. Chinese LLMs form part of an increasingly sophisticated software environment helping Chinese-speaking students at English-medium universities.

The strength of current LLMs varies, but as an order of magnitude they have around one trillion parameters and are trained on about one trillion words (‘one million Shakespeares’). They need about 500,000 processors to run, and one training update costs from $10M to $100M. The machine learning algorithm prevents LLMs from memorising text verbatim, which allows them to perform well on new text prompts. They predict the next word from approximately 1,500 preceding words. They are stochastic parrots, and not so different from a very well-read but dull human being.
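To make ‘predicting the next word’ concrete, the short sketch below loads the freely available GPT-2 model through the Hugging Face transformers library (a choice of tooling assumed here purely for illustration, not one discussed in the session) and continues a prompt one sampled token at a time:

  # Minimal sketch of autoregressive next-word prediction, assuming the
  # Hugging Face `transformers` and `torch` packages are installed.
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")
  model.eval()

  prompt = "A literature review should begin by"
  input_ids = tokenizer(prompt, return_tensors="pt").input_ids

  # Generate 20 further tokens, each predicted from the tokens that precede it;
  # sampling from the predicted distribution is what makes the output 'stochastic'.
  with torch.no_grad():
      output_ids = model.generate(
          input_ids,
          max_new_tokens=20,
          do_sample=True,
          top_k=50,
          pad_token_id=tokenizer.eos_token_id,
      )

  print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

GPT-2 is orders of magnitude smaller than the trillion-parameter models described above, but the mechanism is the same: condition on the preceding context and predict the next token.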

LLMs are being steadily improved and integrated with search engines that cover a larger field of data. They offer PhD students scope for help with literature review, style improvement and finding computer code. They can also give advice on how to deal with PhD supervisors, family, even an empty fridge! And all this without plagiarism. I’m sold.
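As an illustration of the kind of help described above, the sketch below asks a ChatGPT-class model to improve the style of a sentence via OpenAI’s Python library. The pre-1.0 ChatCompletion interface, the model name and the placeholder API key are all assumptions for the example; none were specified in the session.

  # Hypothetical example of using an LLM for style improvement, assuming the
  # `openai` Python package (pre-1.0 interface) and a valid API key.
  import openai

  openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

  response = openai.ChatCompletion.create(
      model="gpt-3.5-turbo",
      messages=[
          {"role": "system", "content": "You are an academic writing assistant."},
          {"role": "user", "content": "Rewrite this sentence in a clearer academic style: "
                                      "'The results what we got was significant.'"},
      ],
  )

  print(response["choices"][0]["message"]["content"])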

References:

LaMDA: Language Models for Dialog Applications, R. Thoppilan and 59 co-authors, arXiv:2201.08239.

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? E. M. Bender, T. Gebru, A. McMillan-Major and S. Shmitchell, FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 610–623.

Gabriel Tarde, Les lois de l’imitation, 1895, Alcan, Paris.

Generative AI, ChatGPT, and the Impact on Postgraduate Work

ChatGPT is taking an important place within the HE educational system, whether we like it or not. Banning the use of ChatGPT may not be the right solution, since we cannot stop students from using it. Incorporating ChatGPT into the HE system looks like the most suitable way forward. If this is the direction we take, as educators we will need to adjust our assessment systems and teaching approaches accordingly.

A possible suggestion is to shift the focus of assessment from the product to the process applied to obtain it. Currently, our assessments and marking indicate the quality of assignments (e.g. essays); however, those assignments do not verify the students’ learning. The use of ChatGPT will further emphasise this paradox: the quality of assignments will improve, and students will achieve high grades with very little effort. As some HE educators have already suggested, in the near future we might allow the use of ChatGPT for assignments; in that case, the suggestion is to evaluate the “question” asked of ChatGPT rather than its final product, so that we assess the process applied by students. ChatGPT produces suitable “answers” only when the “questions” are clear and precise. Students will need to use their own knowledge to create those “questions”, and educators can therefore gauge the students’ learning from the “questions”.

Since ChatGPT is a stochastic parrot that lacks personality and “critical thinking”, another suggestion for HE educators is to work with students to enhance their critical thinking and prompt them to apply it while writing about their projects (e.g. in the literature review). ChatGPT might prepare a perfect literature review, but only students, with their critical thinking, will add that “personality” to it (e.g. when connecting the gaps in knowledge identified in the literature review to their own projects).

As Aristotle said, “man is a rational animal”: we should remember this, apply our rationality and support students in doing the same… and ChatGPT will become a useful tool for us and for students.

The Online postgraduate learner: digital learning, generative AI and disruption

The past three years have presented a series of challenges to Higher Education (HE), of which generative Artificial Intelligence (AI) is the latest. Out of all the sociocultural and technology-driven pressures, why has generative AI impacted HE so strongly? Without ignoring the paradigm-shifting technical achievement that generative AI represents, from a digital learning perspective there might be three main causes for this impact: a lack of innovation and quality in assessment design; a refusal to improve asynchronous learning and assessment practice; and slowness in adapting curriculum writing, teaching and assessment practice to the contexts in which students currently live.

These factors, together with HE’s lack of agility to change and an over-reliance on synchronous campus teaching, created a gap that contract cheating, and now generative AI, has filled. Digital learning here should be read as:

  • being part of formal teaching, learning and assessment;
  • running on core platforms, and ruled by policies and ethical expectations (academic integrity, for example);
  • and also how postgraduate students live, learn and generate assessment with technology outside of formal sessions and supervision, using their own selection of tools and approaches.

The growth in distance postgraduate taught programmes (more than 3,000 today in the UK) and the adoption of blended learning practice (81% of UK campus courses are offered with some blended learning elements), allied to the continuous integration of generative AI into more software, mean that banning access to the technology is not an option. The social and market needs to develop the digital capabilities of our postgraduate students also demand a forward-looking approach.

Addressing generative AI’s negative impact on assessment production is harder because it affects parts of learning covered by shared remits in HE: information literacy, academic integrity, digital learning, digital capabilities, and teaching and assessment practice. Only by joining the skills of professional services and academic colleagues can we be agile and skilled enough to deal with predictable disruptions and work through cycles of horizon scanning, synthesis and action. And only through co-development approaches to policy updating, a co-design approach to course development, and teaching, learning and assessment practice that fully explores and links the three levels of digital learning listed above can we create the conditions for effective, ethical learning and assessment that explores the benefits and overcomes the challenges of generative AI.

References:

Ahsan, K., Akbar, S. and Kam, B. (2021). Contract cheating in higher education: a systematic literature review and future research agenda. Assessment & Evaluation in Higher Education, pp.1–17. doi:https://doi.org/10.1080/026029….

Bovill, C., Cook-Sather, A., Felten, P., Millard, L. and Moore-Cherry, N. (2015). Addressing potential challenges in co-creating learning and teaching: overcoming resistance, navigating institutional norms and ensuring inclusivity in student–staff partnerships. Higher Education, 71(2), pp.195–208. doi:https://doi.org/10.1007/s10734-015-9896-4.

Cotton, D.R.E., Cotton, P.A. and Shipway, J.R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, pp.1–12. doi:https://doi.org/10.1080/147032….

JISC (2019). Jisc digital capabilities framework: The six elements defined. [pdf] Available at: http://repository.jisc.ac.uk/6611/1/JFL0066F_DIGIGAP_MOD_IND_FRAME.PDF [Accessed 23 April 2023].

Mantle, R. (2023). Higher Education Student Statistics: UK, 2021/22. Cheltenham: HESA.

Markauskaite, L., Marrone, R., Poquet, O., Knight, S., Martinez-Maldonado, R., Howard, S., Tondeur, J., De Laat, M., Buckingham Shum, S., Gašević, D. and Siemens, G. (2022). Rethinking the entwinement between artificial intelligence and human learning: What capabilities do learners need for a world with AI? Computers and Education: Artificial Intelligence, 3, p.100056. doi:https://doi.org/10.1016/j.caea….

McCormack, M. (2023). EDUCAUSE QuickPoll Results: Adopting and Adapting to Generative AI in Higher Ed Tech. [online] er.educause.edu. Available at: https://er.educause.edu/articl…–quickpoll–results–adopting–and–adapting–to–generative–ai–in–higher–ed–tech#fn2 [Accessed 23 Apr. 2023].

Mosley, N. (2023). Online postgraduate courses in UK higher education: What’s the current picture? Neil Mosley. Available at: https://www.neilmosley.com/blo…–postgraduate–courses–in–uk–higher–education–whats–the–current–picture [Accessed 23 Apr. 2023].

Storberg-Walker, J. and Torraco, R.J. (2004). Change and Higher Education: A Multidisciplinary Approach.

Voce, J., Walker, R., Chatzigavriil, A., Barrand, M. and Craik, A. (2022). 2022 UCISA Survey of Technology Enhanced Learning in Higher Education.