Artificial Intelligence, Generative AI, and Ethics: An Educational Perspective

February 2024
Laney College
ASCCC Area B Representative

The following article is not an official statement of the Academic Senate for California Community Colleges. The article is intended to engender discussion and consideration by local colleges but should not be seen as the endorsement of any position or practice by the ASCCC.

Since late 2022, Artificial Intelligence (AI) technology and applications have been among the most discussed topics in education, public media, and private conversations. Interest in and concerns about using AI products in classroom settings are certain to increase in the coming years. One of the twelve actions in the California Community Colleges Chancellor’s Office (CCCCO) Vision 2030 calls for “Actively engaging with the impacts of generative AI on future teaching and learning” (CCCCO, 2023). The Academic Senate for California Community Colleges, the Faculty Association of California Community Colleges, and the CCCCO have partnered to present a series of free Zoom webinars to help colleges learn about defining and using generative AI (GenAI).

College students are already experimenting with generative AI products such as ChatGPT, Gemini, and DALL-E, with or without clear local classroom policies around the use of AI products. As colleges navigate this rapidly evolving technology, clearly defining AI and GenAI and examining ethical concerns around their use in education will help further meaningful discussions of AI use and applications in the community college system.

What is AI?

Broadly, AI is the field of technologies that try to mimic human behaviors and intelligence through machines, computer programs, or systems. The following image offers further detail on the types of AI (Artificial Intelligence Data Analytics, n.d.):

Image graphic showing types of Artificial Intelligence. The full text is transcribed below.

(Weak artificial intelligence (or Narrow AI, NAI) focuses on mimicking how humans perform basic actions of remembering and perceiving things and solving simple problems.
Artificial General Intelligence (AGI) aims to create intelligent machines indistinguishable from the human mind. If realized, an AGI machine would acquire an intelligence equal to humans.
Artificial Super-Intelligence (ASI) refers to “smart” systems with intelligence surpassing that of humans. To date, super AI remains entirely theoretical, though research is active.
Machine Learning (ML) is a branch of artificial intelligence (AI) that uses data and algorithms to imitate the ways humans learn, gradually improving the accuracy of the output or result.
Classic, or non-deep, ML depends on human intervention to learn. Human experts use a set of features to understand the differences between data inputs, usually requiring more structured data to learn.
ML, Deep Learning (DL), and neural networks (NNs) are all sub-fields of AI. The neural networks field is a sub-field of ML, and DL is a sub-field of neural networks.
Unlike ML, DL does not require human intervention to process data and allows us to scale ML more productively with larger datasets.)
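To make the distinction between classic machine learning and deep learning described above more concrete, the short Python sketch below shows a classic, non-deep ML workflow in which a human supplies both the features and the labels. It is a minimal illustration only: the spam-filtering task, feature values, and labels are invented for this example, and it assumes the open-source scikit-learn library is installed.

# A minimal sketch of classic (non-deep) machine learning, assuming the
# scikit-learn library is installed. The task, features, and labels are
# hypothetical and chosen by hand, illustrating the human intervention
# the image text describes.
from sklearn.linear_model import LogisticRegression

# Hand-crafted features for a toy "is this email spam?" task:
# [number of links, count of the word "free", message length in words]
X = [
    [8, 5, 40],   # spam-like message
    [7, 3, 25],   # spam-like message
    [0, 0, 120],  # ordinary message
    [1, 0, 200],  # ordinary message
]
y = [1, 1, 0, 0]  # labels supplied by a human (1 = spam, 0 = not spam)

model = LogisticRegression()
model.fit(X, y)                     # learn weights from the labeled examples
print(model.predict([[6, 4, 30]]))  # classify a new message; likely prints [1]

By contrast, a deep learning system would be given rawer data, such as the message text itself, and would learn its own internal features, which is why it scales more readily to very large datasets.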

Recent reports have shown that AI technologies have been employed in numerous industry sectors such as high tech, banking, health, education, and wildfire fighting. The graphics below show the progression and development of AI, which began in the 1950s:

Artificial Intelligence Development History Timeline graphic image showing eleven time points.

(Info Diagram, n.d.)

AI Timeline highlights: 1950, the Turing Test proposed by Alan Turing; 1956, the term AI coined; 1964, the pioneering chatbot ELIZA developed at MIT; 1971-1990, the AI Winter; 1997, IBM’s Deep Blue defeated Garry Kasparov in a chess match; 2009, Google built its first self-driving car for urban conditions; 2011, Apple’s Siri and IBM’s Watson introduced; 2020-2021, many AI products developed.

(Gupta, 2021)

DeepLearning.AI offers free Coursera courses for learning more about AI. One of these courses, “AI For Everyone” (Ng, n.d.), has a unit called “AI and Society” that explains both the possibilities and the current limitations of AI. While AI holds many opportunities thanks to sophisticated mathematics, computer algorithms, and coding techniques, bias can be introduced into current AI applications that are trained on text from the internet and on datasets with limited data. Thus, issues of AI bias and ethics have been raised by scientists (Allyn, 2023) and others, especially around GenAI models or products such as ChatGPT, Google Gemini, or Anthropic Claude.

What are GenAI Products?

GenAI describes algorithms that can create new content, including audio, code, images, text, simulations, and videos. A few well-known GenAI products are ChatGPT, Gemini, Claude 2, Midjourney for illustrations, and Stability AI for multimedia. Claims of bias in GenAI tools have arisen, as these tools may rely on biased datasets and skewed algorithms. In addition, disputes over the use of some GenAI models have emerged in recent legal cases involving Microsoft/OpenAI, Google, and Anthropic, alleging that copyrighted materials were used in AI models without proper permission or compensation. [1] These legal claims and issues provide educators with examples of potential GenAI misuses to be cognizant of when teaching or using AI in the classroom.
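For readers curious about how such products are used programmatically rather than through a chat interface, the sketch below shows a typical text-generation request using OpenAI’s Python library. The model name, prompt, and environment setup are illustrative assumptions only, and other GenAI providers offer broadly similar interfaces.

# A minimal sketch of calling a GenAI text model, assuming the openai
# Python package (version 1.x) is installed and an API key is available
# in the OPENAI_API_KEY environment variable. The model name and prompt
# are illustrative choices only.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # substitute any available chat model
    messages=[
        {"role": "user",
         "content": "Summarize the ethical concerns around generative AI in two sentences."}
    ],
)

print(response.choices[0].message.content)  # the generated text

Because the output is generated from patterns in the model’s training data, the same request can return different wording each time, and any biases or copyrighted material present in that training data can surface in the response, which is the crux of the legal disputes noted above.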

Ethics and AI

In general, “Ethics consists of the standards of behavior to which we hold ourselves in our personal and professional lives” (Byars & Stanberry, 2018). Ethical behavior is needed to drive professionalism in business and in the workplace. AI technologies have made people more aware of ethics in business and have prompted litigation involving ethics and AI.

According to an early publication by Daly et al. (2019), “AI ethics serves for the self-reflection of computer and engineering sciences, which are engaged in the research and development of AI or machine learning.” Those who follow the current, rapid development of AI may be struck by how readily available AI tools are for building exciting and better products, even as those products may generate negative impacts on individuals, as seen in recent labor strikes and lawsuits such as the one involving the Screen Actors Guild (SAG-AFTRA, 2023).

Since 2020, multiple governments and organizations, including UNESCO, the World Health Organization, the European Parliamentary Research Service, the U.S. White House, and California Governor Gavin Newsom, have issued guidance on and consideration of AI development, and terms such as Responsible AI, Ethical AI, AI for Everyone, and AI Essentials have been coined to address societal concerns. Options exist to mitigate ethical issues or to leave them to the proper agencies.

Generative AI and Academia

Since the late 2022 introduction of ChatGPT, GenAI has been part of students’ experience, with or without the approval of educators. Recent articles discussing GenAI tools have presented differing views on the subject. Reich (2023) states, “The quality of your writing is not just a measure of your ability to communicate; it is a measure of your ability to think.” Manning (2023) counters this view by stating, “A generative language model can suggest better wordings or hip, catchy phrases. Given one sample paragraph, it can generate 10 other possibilities, which a person might mine the best parts from, or just use them all to provide a variety of messages.” However, a noteworthy perspective from the University of California, Berkeley’s highly regarded GenAI blog states, “Generative AI is not a mere technological trend; it's a paradigm shift” (BerkeleyExecEd, n.d.).

Several universities and colleges have issued statements on how to incorporate ChatGPT and other GenAI tools into teaching and learning, and some are working behind the scenes to craft system-wide usage policies. Chapman University (n.d.) has compiled a list of various universities’ policies on AI. While academia makes progress on AI policy creation, students are already using these GenAI tools creatively and increasingly in their schoolwork and extracurricular activities. Many students view learning as taking whatever form helps them reach the immediate goal of finishing challenging assignments. Academia and educators, of course, emphasize analytical and practical understanding, as well as proper citation and attribution of ideas and wording taken from other sources, rather than merely finishing or submitting the required assignments, as stated in many course student learning outcomes.

Students entering the workforce are also expected to understand AI and to know how to use the technology. Educators may wish to consider how to engage students with AI-generated material to sharpen their critical thinking and other industry-needed skills. A paradigm shift in teaching and in institutions may need to be discussed and designed for. No perfect solutions have been developed, but educators need to “continue learning about the impact of ChatGPT and new AI tools and academic integrity” (The Wiley Network, 2023). Colleges can explore various resources to continue discussing how students, faculty, support staff, administrators, and researchers can best leverage the latest iterations of GenAI to support student success.

AI and the California Community Colleges

Currently, several California community colleges are developing AI courses, with certificates and degrees to follow that will provide training and options for the future workforce and for general education. Artificial Intelligence is one of the newly proposed disciplines currently being reviewed through the ASCCC Disciplines List process. Students earning these AI degrees and certificates can potentially enter the AI industry workforce immediately, with skills such as cleaning up the datasets used to train GenAI and testing or developing applications. Thus, colleges can train students to do the valuable work of shoring up ethical guardrails for GenAI development and use. With educators’ perspectives and expectations varied and evolving, students must continuously experiment with and apply new AI tools. Colleges should collaborate now on AI and GenAI curriculum development to ensure consistent and strategic AI course numbering and titling.

California Governor Gavin Newsom notes, “California is leading the world in GenAI innovation and research, and is home to 35 of the world’s top 50 Artificial Intelligence (‘AI’) companies” (Newsom, 2023). GenAI has started to shift the educational paradigm, allowing students to learn through different creative paths and opportunities. Educators need to explore new perspectives on the teaching goals and learning options they offer students and should perhaps take note of students’ learning capacity and access to the tools. The California Community Colleges system will need to prepare students to use AI effectively and responsibly and thus may need to embrace and support AI technologies and AI career pathways.

References

Allyn, B. (2023, May 28). “The godfather of AI” sounds alarm about potential dangers of AI. NPR.  

Artificial Intelligence Data Analytics. (n.d.). AI and Data Analytics Skills are in Demand. Bay Area Community College Consortium and California Community Colleges.

BerkeleyExecEd. (n.d.). Artificial Imagination: The Rise of Generative AI.

Byars, S. & Stanberry K. (2018). Business Ethics.

California Community Colleges Chancellor’s Office. (2023). Vision 2030.

Chapman University. (n.d.). Artificial Intelligence in Higher Education.

Daly, A., Hagendorff, T., Hui, L., Mann, M., Marda, V., Wagner, B., Wang, W., & Witteborn, S. (2019, June 28). Artificial Intelligence Governance and Ethics: Global Perspectives.

Gupta, J. (2021). AI Timeline. Machine Learning/AI/Data Science.

Info Diagram. (n.d.). Artificial Intelligence Development History Timeline. Financial Decks.

Manning, C. (2023). “The Reinvention of Work.” Generative AI: Perspectives from Stanford HAI. Stanford University.

Newsom, G. (2023, September 6). Executive Order N-12-23. CA.Gov.

Ng, A. (n.d.). AI for Everyone. DeepLearning.AI.

Reich, R. (2023). “In Education, a ‘Disaster in the Making.’” Generative AI: Perspectives from Stanford HAI. Stanford University.

SAG-AFTRA. (2023, November 8). Tentative Agreement Reached.

The Wiley Network. (2023, September 18). Higher Ed’s Next Chapter, 2023-2024: Four Trends Reshaping the Learning Landscape. Wiley.


1. For more information on the legal issues involving AI, see the litigation database at https://blogs.gwu.edu/law-eti/ai-litigation-database/ and the article "Meet the Lawyer Leading the Human Resistance Against AI" at https://www.wired.com/story/matthew-butterick-ai-copyright-lawsuits-ope….