Ethical Considerations of Artificial Intelligence (AIEd) in the Academic Context: Balancing Innovation and Responsibility

Author: Lewis Morara
Lawyer, Allamano & Associates

2022 was a significant year for the advancement of artificial intelligence (AI).[1] The year closed with the emergence of ChatGPT. In the early weeks of 2023, Microsoft expressed interest in investing $10 billion in OpenAI, the company behind ChatGPT.[2] The investment aimed to expedite the widespread adoption of AI across industries,[3] in part by integrating ChatGPT into everyday tools such as the Microsoft Office suite.[4] This aligns with projections that revenue in the global AI market will grow at an annual rate of 19.6%, reaching $500 billion in 2023.[5] As AI becomes increasingly prevalent, there is a corresponding emphasis on regulatory measures. The events of 2022, including the Council of the EU's adoption of its common position on the AI Act in December[6], the United States' Blueprint for an AI Bill of Rights in October[7], the UK's AI Regulation Policy Paper in July[8], and China's enforcement of the Algorithmic Recommendation Management Provisions in March, have set a robust precedent for the future.[9]

Against this backdrop, the continuing notoriety and use of AI (specifically narrow AI) in the world of academia has elicited mixed reactions. Arguably, the benefits of AI in Education (AIEd) cannot be gainsaid. Muhammad Ali Chaudhry and Emre Kazim posit that AIEd encompasses four primary subdomains.[10]

Firstly, streamlining teachers' workload: AIEd strives to alleviate teachers' burden while ensuring that learning outcomes remain unaffected.[11] Secondly, tailored learning experiences: recognising the unique learning needs of individual students, AIEd aims to deliver personalised and contextualised learning experiences that cater to their specific backgrounds and circumstances.[12] Thirdly, transforming assessment approaches: AIEd seeks to revolutionise the assessment process by improving how well learners themselves are understood.[13] This entails not only evaluating what they know but also understanding how they learn and identifying effective pedagogical strategies for them. Lastly, intelligent tutoring systems (ITS): AIEd endeavours to create intelligent learning environments in which students can interact with advanced tutoring systems. These systems offer personalised feedback, engage in interactive exchanges, and facilitate a deeper understanding of specific subjects.[14]
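
To make the intelligent tutoring subdomain more concrete, the minimal sketch below shows the basic loop such systems automate: assess a learner's attempt and return personalised, hint-bearing feedback rather than a bare mark. It is a purely hypothetical illustration; the question bank, thresholds, and feedback wording are invented and are not drawn from any system cited above.

```python
# Minimal, hypothetical sketch of an intelligent-tutoring-style feedback loop.
# The question bank, thresholds, and feedback wording are invented for illustration.

from dataclasses import dataclass


@dataclass
class Question:
    prompt: str
    answer: float
    hint: str


QUESTIONS = [
    Question("What is 15% of 200?", 30.0, "Convert 15% to 0.15 and multiply by 200."),
    Question("What is 7 squared?", 49.0, "Squaring a number means multiplying it by itself."),
]


def give_feedback(question: Question, attempt: float) -> str:
    """Return personalised feedback rather than a bare right/wrong mark."""
    if attempt == question.answer:
        return "Correct. Try a harder variant next."
    if abs(attempt - question.answer) <= 0.1 * question.answer:
        return f"Close, check your arithmetic. Hint: {question.hint}"
    return f"Not quite, revisit the underlying concept. Hint: {question.hint}"


# A learner answers 28.0 to each question; the feedback adapts to how far off they are.
for q in QUESTIONS:
    print(q.prompt, "->", give_feedback(q, 28.0))
```

Real systems replace these hand-written rules with statistical models of the learner, but the pedagogical aim, feedback tailored to the individual attempt, is the same.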

With the numerous benefits stemming from the AIEd subdomains now seemingly ubiquitous, could AI be fuelling a rise in academic dishonesty, especially in Kenya? Based on the most recent data, ChatGPT has accumulated a user base of more than 100 million individuals, 59.67% of them male and 40.33% female as at 7 September 2023.[15] Additionally, the website is experiencing substantial traffic, with an impressive 10 billion all-time page visits and over 1 billion page visits every month.[16] This remarkable growth in user base and website traffic was accomplished within an unprecedented three-month timeframe, specifically from February 2023 to April 2023.[17] Kenya has in the recent past hit international headlines for a number of reasons. One of them is the global online essay mill industry, which has been on the rise in the country. What is colloquially referred to as 'academic writing' is, in essence, cheating.[18] Ironically, what is considered an academic infraction is now facing competition from AI: the 'academic writing' industry is being threatened as its 'clientele' opts for AI instead.[19] However, does the problem lie entirely with students?

Many higher education institutions do not currently account for generative AI in their policies on academic dishonesty. Who bears the ultimate responsibility? Should institutions go back to the drawing board? Should there be an outright ban on the use of generative AI in academia? Developers of generative AI such as ChatGPT are on the right track: OpenAI's own documentation makes provision for 'classifiers'.[20] Classifiers such as the OpenAI AI text classifier can help detect AI-generated content, but their effectiveness is not foolproof.[21] These tools can produce both false negatives, failing to recognise AI-generated content, and false positives, mistakenly flagging human-written content as AI-generated.[22] Moreover, students may quickly adapt and modify AI-generated content to avoid detection. It is also important to note that the OpenAI AI text classifier has a limited scope and is not designed to detect plagiarism from other sources, such as text copied from the internet.[23]
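
To illustrate the false positive and false negative problem in concrete terms, the short sketch below scores a toy detector against labelled sample submissions. Everything in it is hypothetical: the keyword-based detector, the sample sentences, and their labels are invented for illustration and bear no relation to the OpenAI AI text classifier or its actual behaviour.

```python
# Hypothetical illustration of detector error rates.
# The toy keyword detector and the labelled samples below are invented; they are
# NOT the OpenAI AI text classifier and do not reflect its behaviour.

samples = [
    # (submission text, was it actually AI-generated?)
    ("In conclusion, the policy should be reviewed annually.", False),        # human
    ("The exam was tough, but I revised from my own notes.", False),          # human
    ("This essay delves into the multifaceted implications of AIEd.", True),  # AI
    ("Teachers save time when routine marking is automated.", True),          # AI, reworded
]


def toy_detector(text: str) -> bool:
    """Flag text as 'AI-generated' if it contains stock, model-sounding phrases.
    Real classifiers are statistical, but they share these failure modes."""
    stock_phrases = ("in conclusion", "delves into", "multifaceted")
    return any(phrase in text.lower() for phrase in stock_phrases)


false_positives = sum(1 for text, is_ai in samples if toy_detector(text) and not is_ai)
false_negatives = sum(1 for text, is_ai in samples if not toy_detector(text) and is_ai)

print(f"False positives (human work wrongly flagged): {false_positives}")  # 1
print(f"False negatives (AI work that slips through): {false_negatives}")  # 1
```

Even this crude example shows how a fluent human writer can be wrongly flagged while lightly reworded machine output slips through, which is why detector output should inform, rather than decide, an academic integrity inquiry.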

Given these constraints, it is crucial to view classifiers or detectors as only a single component within a comprehensive approach to investigating content sources and evaluating academic integrity and plagiarism. It is vital to establish transparent guidelines in consultation with students to clarify acceptable practices and prohibited actions in their assignments.[24] By doing so, students can fully comprehend the boundaries and potential ramifications associated with incorporating model-generated content into their work.[25]

Higher institutions of learning should actively embrace and implement advancements that incorporate AI, whether generative or not, in academia. There are several key ways in which institutions can promote its adoption.[26] One is integrating AI into the curriculum, ensuring that students are equipped with the knowledge and skills required to understand and leverage AI technologies in their respective fields.[27] Another is research and innovation: faculty and students should be encouraged to engage in AI-related research and innovation, which can foster new ideas and applications within academia.[28] Institutions can provide support, resources, and funding for AI-driven research projects to advance knowledge and contribute to the development of AI technologies in academic contexts.[29]

At the heart of the conversation should be ethics: institutions must actively address concerns associated with AI, such as bias, privacy, and transparency.[30] Integrating discussions on ethics and responsible AI practices into academic programmes can ensure that students and researchers are aware of the ethical implications and can develop AI solutions with social and ethical considerations in mind.[31] By actively embracing and participating in the advancement of AI in academia, higher institutions of learning can empower their students, foster innovation, and stay at the forefront of the evolving educational landscape.[32]

[1] Holistic AI, The State of Global AI Regulations in 2023 <https://www.googleadservices.com/pagead/aclk?sa=L&ai=DChcSEwja2bnCv67_AhWW69UKHbzYAocYABABGgJ3cw&ohost=www.google.com&cid=CAASJuRomcScmam-jok8AIHRJXHx8elfKq7FjxRrDCG9AWXEDru4f9vK&sig=AOD64_1uaLuETYcIHGc5_ce4ZFOBOMzc-g&q&adurl&ved=2ahUKEwixj7TCv67_AhUwTaQEHXzIAVcQ0Qx6BAgLEAE> Accessed 20 September 2023.

[2] Nicolas Petit, ‘Law and Regulation of Artificial Intelligence and Robots: Conceptual Framework and Normative Implications’ 2017 Working paper < https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2931339> Accessed 20 September 2023.

[3] Ibid.

[4] Ibid.

[5] Supra note 1.

[6] The European Union Artificial Intelligence Act, COM/2021/206 final.

[7] The White House Office of Science and Technology Policy, ‘Blueprint for an AI Bill of Rights’ <https://www.whitehouse.gov/ostp/ai-bill-of-rights/> Accessed 20 September 2023.

[8] Department for Science, Innovation and Technology, ‘A pro-innovation approach to AI regulation’ <http://www.gov.uk/official-documents> Accessed 20 September 2023.

[9] Supra note 1.

[10] Chaudhry, M.A. and Kazim, E., ‘Artificial Intelligence in Education (AIEd): A High-Level Academic and Industry Note 2021’ AI and Ethics 2, 157–165 (2022) <https://doi.org/10.1007/s43681-021-00074-z> Accessed 20 September 2023.

[11] Ibid.

[12] Ibid.

[13] Ibid.

[14] Ibid.

[15] Demand Sage, 32 Detailed ChatGPT Statistics <https://www.demandsage.com/chatgpt-statistics/#:~:text=ChatGPT%20has%20over%20100%20million,by%20the%20end%20of%202023> Accessed 20 September 2023.

[16] Ibid.

[17] Ibid.

[18] BBC, The Kenyans who are helping the world to cheat < https://www.bbc.com/news/blogs-trending-58465189> Accessed 20 September 2023.

[19] The Standard, Kenya’s academic writing industry threatened as students opt for AI <https://www.standardmedia.co.ke/health/education/article/2001472075/kenyas-academic-writing-industry-threatened-as-students-opt-for-ai > Accessed 20 September 2023.

[20] OpenAI, ‘Academic dishonesty and plagiarism detection’ <https://platform.openai.com/docs/chatgpt-education/academic-dishonesty-and-plagiarism-detection> Accessed 20 September 2023.

[21] Ibid.

[22] Ibid.

[23] Supra note 20.

[24] Supra note 10.

[25] Ibid.

[26] Yandle, B., Meiners, R.E., Adler, J.H. and Morriss, A.P., ‘Bootleggers, Baptists, and E-Cigarettes’ (1 January 2015) Case Legal Studies Research Paper No 2015-3 <http://ssrn.com/abstract=2557691> Accessed 20 September 2023.

[27] Carlos Zednik, ‘Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence’ (2022) <https://arxiv.org/pdf/1903.04361> Accessed 20 September 2023.

[28] Supra note 10.

[29] Ibid.

[30] Cecil Yongo Abungu, ‘Democratic Culture and the Development of Artificial Intelligence in the USA and China’ (2021) 9(1) The Chinese Journal of Comparative Law 81–108.

[31] Laura Delponte, ‘European Artificial Intelligence (AI) Leadership, the Path for an Integrated Vision’ (2018) <http://www.europarl.europa.eu/supporting-analyses> Accessed 20 September 2023.

[32] Ibid.

About the Author:

Lewis Morara is a lawyer currently engaged at the prestigious law firm of Allamano & Associates, with a keen specialisation in, among many other practice areas, emerging technologies and artificial intelligence (AI). Combining his passion for law with an interest in AI, he has embarked on a journey to explore the profound impact of AI on legal systems and society.