
Generative AI at IRSC: A Resource for Employees

Introduction

Artificial intelligence (AI) is changing the world as we know it. This is not hyperbole. The emergence and ubiquity of generative AI (GenAI) – a type of AI that creates new, human-like content based on prompts – is a transformative (and hence disruptive) technology of a magnitude similar to the printing press, the digital computer, and the internet. It is a marvel of human achievement that has been nearly a century in the making and is now impacting virtually every industry. It is changing how we work, create, and communicate.

The fast-paced nature of GenAI development and application makes it difficult to get a stable perspective on how this technology is changing our professions. Consequently, each of us must take responsibility for exploring the implications of GenAI for our own field, from finding use-cases and applications that make us more efficient to rethinking the underlying epistemologies and values of our disciplines.

Because GenAI is so disruptive, our immediate reaction may be to resist its impact on traditional approaches to teaching and scholarship. While such a reaction is understandable, it is not sustainable: GenAI is being integrated rapidly into virtually every area of modern life, and this means that educators have a responsibility to teach the AI literacies that will help college graduates succeed in the workforce, advanced academic programs, and the social sphere. If we don’t teach students how to use GenAI ethically and effectively, we encourage them – at least implicitly – to use it illicitly.

This resource is designed to help higher education professionals adjust to the changing landscape and utilize GenAI in their work. We encourage you to jump in and explore the ideas and applications. Don’t wait.

I. Understanding the Landscape

The term “Artificial Intelligence” refers to the broad field of computer science dedicated to creating systems capable of performing tasks that typically require human intelligence. Before the advent of AI, early computational efforts were focused on creating algorithms and machines that could perform specific tasks with precision, such as calculating complex mathematical problems or processing large volumes of data. AI proper emerged in the mid-20th century, primarily driven by the aspiration to create machines that could mimic human intelligence.

Generative AI (GenAI) is a specialized area within AI focused on creating new, human-like content in response to queries. This includes models capable of generating text, images, music, and other data types. At the core of GenAI are foundation models, which are large-scale, pre-trained models that serve as a base for various AI applications. These foundation models are trained on vast amounts of diverse data and can be fine-tuned for specific purposes.

Large Language Models (LLMs) are a type of foundation model specifically designed for natural language processing (NLP). They are characterized by their ability to generate coherent and contextually relevant text. Notable examples of LLMs include GPT-4 and LLaMA, which are based on transformer architectures (see below). These models leverage their extensive training on diverse text corpora to produce human-like text and are widely used in applications requiring advanced language understanding and generation.

The transformer is a foundational architecture in AI that has revolutionized natural language processing and generation. Unlike traditional models that process language sequentially, transformers utilize a mechanism called "attention," which allows them to weigh the importance of different words in a sentence, regardless of their position. This enables transformers to capture complex relationships and context more effectively. The architecture consists of layers of encoders and decoders, with encoders processing input data and decoders generating output data. This design allows transformers to handle large datasets and perform tasks such as translation, summarization, and text generation with remarkable accuracy and efficiency, making them integral to the development of advanced generative AI systems.
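For readers who want to see the "attention" idea concretely, here is a minimal NumPy sketch of the scaled dot-product attention step at the heart of transformers. The token count, dimensions, and random values are purely illustrative, not a production implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating, for numerical stability
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Each position computes importance weights over every other position."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # each row sums to 1: attention weights
    return weights @ V, weights          # output is a weighted mix of the values

# Toy "sentence" of 4 tokens, each represented by an 8-dimensional vector
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(X, X, X)  # self-attention: Q = K = V
```

Because every token attends to every other token in one step, word order and distance impose no sequential bottleneck, which is exactly the property the paragraph above describes.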

The video below covers the basics of Machine Learning and Deep Learning and the fundamental differences between them.

AI, Machine Learning, and Generative AI (GenAI): The Differences

To understand the place of Generative AI (GenAI) within the broader AI ecosystem, it's important to grasp the hierarchical relationship among AI, Machine Learning (ML), Deep Learning (DL), and GenAI.

As noted in the video above, AI is the broadest category, encompassing any machine capable of performing tasks that typically require human intelligence. This includes everything from simple rule-based systems to complex systems capable of learning and adapting.

Machine Learning is a subset of AI that involves the development of algorithms that allow computers to learn from and make predictions or decisions based on data. Traditional ML algorithms include decision trees, support vector machines, and k-nearest neighbors.
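To make "learning from data" concrete, here is a toy sketch of one of the traditional ML algorithms just mentioned, k-nearest neighbors, written in plain Python. The data points and labels are invented for illustration.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote of its k nearest training points.
    `train` is a list of ((x, y), label) pairs."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy labeled data: two well-separated clusters
train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
         ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]

print(knn_predict(train, (0.5, 0.5)))  # near the "A" cluster
print(knn_predict(train, (5.5, 5.5)))  # near the "B" cluster
```

Note that no rule for "A" versus "B" was ever written by hand; the prediction comes entirely from the labeled examples, which is the defining trait of Machine Learning.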

Deep Learning is a specialized subset of Machine Learning that utilizes neural networks with many layers (hence 'deep') to analyze large amounts of data. This approach is particularly effective for tasks such as image and speech recognition. Unlike traditional ML algorithms, DL models can automatically discover the representations needed for feature detection or classification directly from raw data. While traditional ML often relies on human-engineered features and can use both supervised (labeled data) and unsupervised (unlabeled data) learning, DL excels in handling raw data without requiring extensive manual feature engineering, making it highly versatile and powerful for complex tasks.
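The "many layers" idea can be sketched in a few lines of NumPy: raw input is passed through a stack of layers, each transforming the previous layer's representation. The weights here are random stand-ins (in a real network they would be learned from data), so this shows only the shape of the computation.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)  # a common nonlinearity between layers

def forward(x, layers):
    """Pass raw input through a stack of (weights, bias) layers."""
    for W, b in layers:
        x = relu(x @ W + b)  # each layer re-represents the previous output
    return x

rng = np.random.default_rng(1)
# Three stacked layers ("deep"): 16 -> 32 -> 32 -> 8 features
shapes = [(16, 32), (32, 32), (32, 8)]
layers = [(rng.normal(size=s) * 0.1, np.zeros(s[1])) for s in shapes]

raw = rng.normal(size=(5, 16))       # a batch of 5 raw input vectors
features = forward(raw, layers)      # the network's internal representation
```

Training consists of adjusting the weight matrices so that these final features become useful for the task, which is how DL discovers representations directly from raw data.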

Created by Michael Pelitera, Institute for Academic Excellence

GenAI, which uses DL, focuses on creating entirely new data that resembles the data it was trained on. It relies primarily on advanced techniques like Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). In GANs, two models, a generator and a discriminator, are engaged in a competitive process. The generator creates new data, while the discriminator evaluates it to determine how realistic it is. This back-and-forth process improves the quality of the generated data over time. Similarly, VAEs learn to encode input data into a compact representation and then decode it back to generate new data that is similar to the original. GenAI can produce new content, such as images, videos, text, or music, and is judged based on how realistic, coherent, and diverse the generated content is.
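The generator-versus-discriminator competition can be illustrated with a deliberately tiny sketch: both networks are reduced to single linear maps, and we compute only the discriminator's loss for one batch. Everything here (the shapes, the "real" data, the loss constant) is a toy assumption meant to show the structure of the objective, not a working GAN.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Hypothetical tiny models: single linear maps instead of deep networks
G = rng.normal(size=(4, 2))   # generator: 4-dim noise -> 2-dim "data"
D = rng.normal(size=2)        # discriminator: 2-dim sample -> realism score

real = rng.normal(loc=3.0, size=(8, 2))   # stand-in for a batch of real data
fake = rng.normal(size=(8, 4)) @ G        # generator output from random noise

def d_loss(real, fake):
    """Binary cross-entropy the discriminator minimizes:
    real samples should score near 1, generated samples near 0."""
    p_real = sigmoid(real @ D)
    p_fake = sigmoid(fake @ D)
    return -(np.log(p_real + 1e-9).mean() + np.log(1 - p_fake + 1e-9).mean())

loss = d_loss(real, fake)
```

Training alternates: the discriminator updates its weights to drive this loss down, while the generator updates its weights to drive it up, and that back-and-forth is what gradually improves the realism of the generated data.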

In sum (whew!), AI encompasses a broad field focused on creating machines capable of performing tasks that require human-like intelligence. Within AI, Machine Learning is a key subset that involves training algorithms to learn from data, employing techniques such as supervised learning with labeled data and unsupervised learning with unlabeled data. Deep Learning, a more advanced form of Machine Learning, utilizes neural networks with many layers to model intricate patterns. Generative AI, a type of Deep Learning, specializes in creating new, realistic data based on its training. A significant breakthrough in generative AI is the transformer architecture, which leverages a mechanism called 'attention' to evaluate the importance of different parts of the input data. This allows transformers to grasp and generate complex language and other forms of data effectively. The attention mechanism enables transformers to capture detailed relationships and context, making them highly efficient for tasks like text generation and translation.

The History of Artificial Intelligence

History is the purposeful explanation of how we got to the present moment, which can be helpful for understanding the impact and implications of technology (as well as people and events). The history of AI is no different, as it situates AI in the broader social and intellectual context of modern life. But that history is extraordinarily complex, drawing on fields as diverse as mathematics, linguistics, economics, and psychology (just to name a few).

Rather than attempting a comprehensive account of the history of AI here, we recommend you follow the link below to a history and timeline of AI that provides an overview of the key events and developments in artificial intelligence (from ancient Greece to the present). The article also discusses various milestones, including the creation of early automata, the introduction of programmable computers, and the rise of modern AI applications like virtual assistants and autonomous vehicles. The future of AI and its potential impacts on various industries are also explored.

Implications for Economic Growth, Productivity, and the Workforce

According to the report “The Potentially Large Effects of Artificial Intelligence on Economic Growth” by Briggs and Kodnani (Goldman Sachs Research), AI-driven growth can occur through partial automation of tasks and the creation of new tasks and industries. This could lead to increased economic output and growth across various sectors globally.

Barclays Research and IBM Institute for Business Value have studied the impact of AI on productivity. The results are optimistic regarding GenAI’s potential to increase growth. GenAI technology is accessible to a wide audience, allowing users to issue instructions without needing to learn special programming languages. These tools are applicable and usable across many disciplines and sectors.

The image “Most critical skills required of the workforce” shows a comparison of essential workforce skills across 2016, 2018, and 2023. Proficiency in STEM maintained a consistent 42% importance throughout the years. AI is not replacing these critical skills but rather augmenting and reshaping how they are applied in the workforce.

The image “The effects of technological advances on labor productivity” illustrates the impact of major technological innovations on labor productivity from 1760 to 2020. The chart features three significant technological eras: the Steam Engine Technology era, the Electrification era, and the Information and Communication Technology (ICT) era. The graph displays year-over-year percentage changes in labor productivity, showing both smoothed and actual output per hour data, with the smoothed series represented by a 9-year centered moving average. Data sources include Kendrick (1961), Syverson (2013), BLS, BoE, and Barclays Research.

The potential impact of GenAI on the workforce will depend largely on the identification of use-cases in specific industries, employee training, and access to the most powerful AI tools. A lot of energy is currently being devoted to understanding the implications of GenAI for business, and no doubt there are some exaggerated hopes (and fears) about how this technology will shape our future. However, according to two recent reports, the impact is going to be substantial.

Goldman Sachs, in its June 2024 report “Gen AI: Too much spend, too little benefit?”, suggests that GenAI is expected to significantly boost US labor productivity and GDP growth, potentially automating 25% of all work tasks and raising US productivity by 9% and GDP growth by 6.1% cumulatively over the next decade. And in a January 2024 report by Cognizant (in collaboration with Oxford Economics), the authors remark:

“What we learned has profound implications for workers—and for the future of work. Our model shows that in the next 10 years, most jobs (90%) could be disrupted in some way by generative AI. Everyone from administrative assistants to CXOs will be affected.” (Cognizant, “New Work, New World”)

To add to this emerging gestalt, NocodeAI has produced an inventory of possible use cases for GenAI across industries. You can find these use-cases at the link below:

GenAI Use Cases by Industry, Nocode.ai: https://www.nocode.ai/use-cases-by-industry/

Despite the variability and uncertainty of predictions about the impact of GenAI on various industries, it is reasonable to conclude that AI technologies will have an historic impact on the way we work, learn, and live. In practice, this means we need to think deeply about the types of skills needed to use AI effectively, or what we refer to as “AI literacies.” It should go without saying that the concept of AI literacy we develop today will need to be adapted as the technology changes and new applications emerge.

Drawing on the work of the Florida AI Learning Consortium (FALCON), we have developed a framework for AI literacy that articulates the skills and knowledge needed to apply AI and GenAI tools responsibly and creatively. This framework is a starting point for discussion, not a recipe to be followed. It reflects the literacies we need as higher education professionals as well as the skills our students will need when they graduate.

II. AI-Literacy

The term “AI literacy” refers to the knowledge and skills required to understand and apply AI technologies responsibly and effectively. As AI is a broad field, however, it is helpful to distinguish AI literacy from GenAI literacy in many contexts, as GenAI has a unique set of properties.

Based on current research into AI literacy frameworks, we divide AI literacy into four competencies which can then be narrowed to GenAI literacy:

  1. Knowledge: The understanding of fundamental AI concepts, terminology, and principles, including the ability to identify AI systems and evaluate their use and implications in broader historical and social contexts.
  2. Skills: The ability to engage and apply AI tools proficiently in different settings and to demonstrate competence in AI-supported tasks. This includes the ability to identify domain-specific applications and to use AI in support of innovation.
  3. Ethics: The evaluation of AI-related practices and behaviors for their moral implications, and the corresponding commitment to use AI technologies in the defense or advancement of human integrity, interests, and values.
  4. Integral Reasoning: The holistic, critical analysis and evaluation of AI technologies, their applications, and their limitations, as informed by 1-3 above and domain expertise.

Applying and Adapting AI Literacy

The preceding competencies are written in general terms so they can be applied and adapted to different domains. For example, “GenAI literacy” can be understood broadly by substituting “GenAI” and “GenAI technologies” for “AI” and “AI technologies,” respectively, in 1-4 above. One can then adapt the competencies by identifying more specific outcomes under each competency (e.g., to specify what college faculty should know and be able to do, or to specify what different student populations should know and be able to do).

Here are the current GenAI literacy outcomes that college employees should master:

GenAI: Knowledge-Related Outcomes
  • Articulate the historical development of AI and its major milestones
  • Define and explain key terms such as Machine Learning (ML), Deep Learning (DL), Artificial Neural Networks (ANNs), and Generative Artificial Intelligence (GenAI)
  • Explain the concept and source of AI bias and techniques for mitigation
  • Define AI interpretability and explain its importance
GenAI: Skills-Related Outcomes
  • Employ GenAI tools for content creation (e.g., images or text)
  • Craft effective prompts to generate desired outputs
  • Utilize meta-prompts (e.g., specifying tone, format, style, etc., of the desired outputs) to improve GenAI responses
  • Engage GenAI platforms iteratively to refine system outputs
  • Evaluate and select appropriate GenAI platforms for specific purposes
  • Integrate GenAI into existing workflows
  • Identify new use cases for GenAI application
  • Collaborate effectively with GenAI systems to enhance productivity
GenAI: Ethics-Related Outcomes
  • Adopt the principle of “Do no harm” in GenAI use cases
  • Evaluate GenAI platforms for fairness of outcomes that represent, influence, or impact different demographics
  • Mitigate bias in GenAI outputs
  • Identify potential privacy and security concerns in GenAI data collection and usage, and take steps to address them proactively
  • Articulate the moral implications of using GenAI in decision-making that affects others, and take the appropriate steps to mitigate negative outcomes
GenAI: Integral Reasoning-Related Outcomes
  • Critique GenAI-created output for accuracy, clarity, precision, and relevance, utilizing domain knowledge and GenAI where appropriate
  • Assess the appropriateness of using GenAI for specific tasks or decision-making processes, based on knowledge of GenAI's attributes and limitations
  • Compare and contrast the performance of different GenAI platforms for a given use case
  • Identify potential limitations or failure modes of GenAI systems in various contexts
  • Analyze the quality and representativeness of training data used in AI models
  • Critically examine GenAI-driven predictions or forecasts based on underlying assumptions and data
  • Evaluate and adjust AI literacy, and the associated outcomes, to suit the needs of different industries and purposes
Additional Considerations

AI literacy should not be thought of as a fixed framework or an algorithmic method for evaluating a person's competence with AI technologies. Rather, it provides a context or lens through which to view the complex engagement with AI. The essential knowledge and skills considered above will continue to evolve as the interactions between humans and technology open up new and unexpected applications, limitations, and opportunities.

Use the elements of AI and GenAI literacies considered here as heuristics, i.e., rules of thumb that can be adapted to different domains for different purposes. The competencies and outcomes above will be weighted differently for workforce readiness than for advanced academic programs; they will differ by industry and endeavor and for faculty versus students. This is why an essential component of this AI literacy framework is the ability to adapt and adjust competencies and outcomes to the needs of different professional purposes and industries.

III. Platforms and Effective Prompting

Choosing a GenAI Platform

When choosing a generative AI (GenAI) platform, consider a few critical aspects:

  • Capabilities: Ensure the platform supports the types of AI models you need, like text, image, or audio generation.
  • Ease of Use: Look for user-friendly interfaces and good documentation to facilitate integration and use.
  • Scalability: Consider whether the platform can scale as your needs grow.
  • Cost: Evaluate pricing models to fit your budget.
  • Community and Support: A strong community and responsive support can be invaluable for troubleshooting and improvement.
  • Compliance and Ethics: Ensure the platform adheres to relevant data privacy laws and ethical guidelines.
Platform Examples

There are thousands of GenAI platforms now available, many of them based on popular large language models such as GPT-4o. The list below is only a sampling. We recommend trying out more than one (all have a free version) and seeing how each meets your professional or personal needs.

Microsoft's Copilot is a GenAI platform that integrates seamlessly with other Microsoft products. It has access to the internet and offers citations with its output.

ChatGPT: ChatGPT is the creation of OpenAI and was introduced to the public on November 30, 2022. It is a versatile tool for natural language generation. The latest free version (GPT-4o) allows you to upload documents such as PDFs and Excel spreadsheets.

Adobe Firefly is a text-to-image generator. It is carefully constrained to avoid generating offensive content, has a "creative" feel to the images it produces, and is integrated into platforms such as Adobe Express.

Claude 3.5 Sonnet is the creation of Anthropic, and it excels at natural language generation. The paid version allows you to upload documents and work with Claude on real-time editing.

Gemini is Google's latest GenAI bot, which is described as "multi-modal," meaning it can work with different data forms (graphs, spreadsheets, natural language, etc.).

ResearchRabbit is a "citation-based literature mapping tool," which is specifically designed to support academic research. One starts by entering a "seed paper" or topic, and ResearchRabbit will find publications relevant to that paper or topic.

Grammarly is a GenAI platform that is specifically focused on providing writing assistance in a data-secure environment. From suggesting changes to tone and grammar to summarizing extended text, Grammarly is a healthy step into GenAI-augmented writing.

Perplexity is a GenAI platform that offers real-time information retrieval, different search modes (Quick Search and Pro Search), integration with multiple AI models, and citations, and emphasizes accuracy over creativity (to reduce hallucinations). It also allows you to upload attachments.

How to Write Effective Prompts

Crafting effective prompts is crucial for getting the most out of GenAI models. Prompting is an art, and practice is essential. In general, though, one should structure prompts around the following:

Task:

Be clear and specific about what you want the GenAI to achieve. Do you want it to write a poem, translate a document, answer a question, or generate creative text? The clearer the task, the better the model can tailor its response.

Format:

Specify the desired format of the output. Is it a list, an email, a news article, or a script? Providing a format helps the GenAI structure its response appropriately.

Tone & Voice:

Indicate the desired tone and voice for the output. Should it be formal, informal, humorous, serious, persuasive, etc.? Specifying this guides the model's language style.

Context:

Provide relevant background information or context for the task. This could include target audience, specific details, or references. The more context you provide, the more nuanced and relevant the GenAI's output can be.

Examples:

Providing examples of your desired output or other relevant information in a prompt can be helpful. Just as examples can be useful for deepening human understanding, providing illustrations in a prompt can help the GenAI platform produce what you are looking for.

Iterative Engagement:

Prompting typically requires what we call “iterative engagement,” i.e., the repeated revision of prompts and follow-up queries to get the results we want. If you don't succeed with your first prompt, try again. Change up the words. Offer clarifications. Provide examples.

Learning how to prompt specific platforms takes time and practice. Be patient. Be creative.
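The Task/Format/Tone/Context/Examples structure above lends itself to a simple template. Here is a small, hypothetical Python helper that assembles a prompt from those elements (the function name and sample values are our own invention, offered as a sketch):

```python
def build_prompt(task, fmt=None, tone=None, context=None, examples=None):
    """Assemble a prompt from the Task/Format/Tone/Context/Examples elements,
    skipping any element you choose not to specify."""
    parts = [f"Task: {task}"]
    if fmt:
        parts.append(f"Format: {fmt}")
    if tone:
        parts.append(f"Tone & Voice: {tone}")
    if context:
        parts.append(f"Context: {context}")
    if examples:
        parts.append(f"Examples: {examples}")
    return "\n".join(parts)

prompt = build_prompt(
    task="Summarize the attached meeting notes in five bullet points.",
    fmt="Bulleted list",
    tone="Professional and concise",
    context="The audience is department chairs who did not attend.",
)
print(prompt)
```

Even if you never script your prompting, drafting prompts in this labeled form is a useful discipline: each element you fill in removes a guess the model would otherwise have to make.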

Prompting Illustrations

The Science Report

Task: Summarize the key findings of a scientific study.

Format: Scientific report with introduction, methodology, results, and conclusion sections.

Tone & Voice: Objective and unbiased scientific language.

Context: Provide the title, abstract, and keywords of the scientific paper you want summarized.

Examples: Last year's report that I produced without the assistance of GenAI, saved as a PDF on my laptop.

Prompt: “Summarize the key findings of the scientific paper titled [insert paper title], focusing on the methodology, results, and conclusion sections. Use objective and unbiased language appropriate for a scientific report, and produce it in the style of last year's report titled [insert the title of the original report].”

Creative Writing

Task: Write a short, fictional story for young adults. It should be approximately 500 words, and its main characters should be a young wizard named Aric and a talking cat named Luna. The settings should be a magical forest and an ancient castle. The plot should be about Aric finding a hidden key to save his village from a dark curse.

Format: The story should have a clear beginning, middle, and end, and it should include paragraphs that include dialogue between the characters.

Tone & Voice: Whimsical and adventurous, written in a third-person narrative.

Context: The story takes place in a medieval-inspired fantasy world where magic is common. The target audience is young adults.

Examples: Style the story similar to those of J.K. Rowling or C.S. Lewis.

Prompt: “Write a short, fictional story in the fantasy genre, approximately 500 words long, aimed at young adults. The story should feature a young wizard named Aric and a talking cat named Luna. It should be set in a magical forest and an ancient castle. The plot should revolve around Aric's quest to find a hidden key to save his village from a dark curse. The story should have a clear beginning, middle, and end, and be written in the third-person narrative with a whimsical and adventurous tone. Use paragraphs and include dialogue between the characters. The style should be similar to the works of J.K. Rowling or C.S. Lewis, set in a medieval-inspired fantasy world where magic is common.”

Finding Connections

Task: To explore the possible connections between empirically validated psychological theories and the topic of a professional development workshop on communication. The theories should be related to communication, learning, and interpersonal relationships.

Format: A list of theories, each followed by a summary of its key principles, empirical evidence, and practical applications. Also include a list of peer-reviewed articles and books that support each of the theories.

Tone & Voice: The tone should be professional and academic, but accessible to a broad, educated audience.

Context: The workshop is designed for college employees interested in helping students develop better communication skills. The objective is to equip participants with the knowledge and practical strategies to support student communication.

Examples: An example could include a summary of a social learning theory with references drawn from the internet or sent by a psychology faculty member.

Prompt: “Identify relevant, empirically validated psychological theories that can inform a workshop for college faculty on helping students communicate more effectively. Focus on theories related to communication, learning, and interpersonal relationships. Provide a list of theories, each followed by a summary of its key principles, empirical evidence, and practical applications. The summaries should be written in a professional and academic tone, but accessible for educators, using the third-person informative voice. Use bullet points for clarity and include references to empirical studies. The workshop is designed to equip college employees with knowledge and practical strategies to support student communication. Examples of the desired outputs include summaries of Social Learning Theory, Self-Determination Theory, and Communication Accommodation Theory (attached).”

GenAI Use-Cases to Get Started

Analyzing Spreadsheets:

Identifying Trends and Patterns: GenAI can analyze large datasets in spreadsheets to identify trends and patterns that might be difficult for humans to see. For example, you could use GenAI to analyze sales data in a spreadsheet and identify which products are selling well in different regions or which marketing campaigns are most effective.

Analyzing Data: Imagine you have a spreadsheet with sales data for different products across various cities. GenAI can analyze this data and tell you:

  • Cities where a specific product has the highest sales.
  • Products that consistently underperform across all cities.
  • Potential correlations between product categories and city demographics.

Data Cleaning and Outlier Detection: GenAI can help clean messy spreadsheets by identifying missing data, inconsistencies, and formatting errors. It can also detect outliers that might skew your analysis.

Predictive Analytics: Some advanced GenAI tools can be used for predictive analytics. By analyzing historical data in spreadsheets, they can generate forecasts for future trends.
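The spreadsheet tasks above can also be done directly with code that a GenAI assistant might draft for you. Here is a short pandas sketch on an invented sales table: a grouped average to surface trends, and a simple z-score rule to flag outliers (the data, column names, and 2-standard-deviation threshold are all illustrative assumptions).

```python
import pandas as pd

# Hypothetical sales spreadsheet loaded as a DataFrame
# (in practice: df = pd.read_excel("sales.xlsx"))
df = pd.DataFrame({
    "city":    ["Miami", "Miami", "Tampa", "Tampa", "Orlando", "Orlando"],
    "product": ["A", "B", "A", "B", "A", "B"],
    "sales":   [120, 95, 80, 400, 110, 90],
})

# Trend/pattern spotting: average sales of each product in each city
by_city = df.groupby(["product", "city"])["sales"].mean().unstack()

# Outlier detection: flag rows more than 2 standard deviations from the mean
z = (df["sales"] - df["sales"].mean()) / df["sales"].std()
outliers = df[z.abs() > 2]
```

On this toy data the 400-unit row stands out as the lone outlier, the kind of anomaly worth investigating before it skews a forecast.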

Creating Charts and Graphs

You have a spreadsheet with customer satisfaction ratings for different products. GenAI can create a bar chart showing the average rating for each product, allowing you to see which products receive the highest customer satisfaction.

GenAI can be used to create interactive dashboards that allow you to explore your data in more detail. These dashboards can include charts, graphs, and other visual elements that update dynamically as you interact with the data.
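The satisfaction-ratings example above is the kind of task where you would ask a GenAI assistant for a short script rather than a finished image. A sketch of what that script might look like, using pandas and matplotlib on invented ratings (the data, filenames, and chart styling are assumptions):

```python
import matplotlib
matplotlib.use("Agg")  # render to a file without needing a display
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical ratings; in practice, read them from your spreadsheet
df = pd.DataFrame({
    "product": ["A", "A", "B", "B", "C", "C"],
    "rating":  [4.5, 4.0, 3.0, 3.5, 5.0, 4.5],
})

# Average rating per product, then a bar chart of the result
avg = df.groupby("product")["rating"].mean()
avg.plot(kind="bar", ylabel="Average rating", title="Customer satisfaction")
plt.tight_layout()
plt.savefig("satisfaction.png")  # a chart you could drop into a report
```

The saved chart makes the comparison immediate: the tallest bar is the product with the highest average satisfaction.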

Suggestion: Use the prompting strategies in the section above to work with ChatGPT-4o to produce instructions for building an interactive dashboard.

Researching and Writing a Grant Proposal with ChatGPT-4o

Writing a grant proposal in higher education is a multifaceted task that involves research, careful planning, and articulate writing. ChatGPT-4o can be an invaluable tool throughout this process, helping you brainstorm ideas, locate relevant information, structure your proposal, and refine your writing. Here’s a step-by-step guide on how to leverage ChatGPT-4o for this purpose:

Steps to Research and Write a Grant Proposal using ChatGPT-4o

1. Identifying Grant Opportunities:

  • Use ChatGPT-4o to locate potential grant opportunities relevant to your project.
  • Ask for summaries of grant requirements, deadlines, and eligibility criteria.

2. Brainstorming and Concept Development:

  • Generate ideas for your project, including objectives, methodologies, and outcomes.
  • Use ChatGPT-4o to refine these ideas and ensure they align with the grant’s goals.

3. Literature Review and Background Research:

  • Obtain summaries of relevant literature and research findings to support your proposal.
  • Ask ChatGPT-4o for insights on how to frame the background section of your proposal.

4. Writing the Proposal:

  • Develop a structured outline for your proposal.
  • Use ChatGPT-4o to draft each section, including the introduction, objectives, methodology, budget, and evaluation plan.

5. Refining and Editing:

  • Seek feedback from ChatGPT-4o on clarity, coherence, and persuasiveness.
  • Get assistance with editing for grammar, style, and adherence to guidelines.
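If you repeat step 4 often, the drafting can even be scripted. The sketch below assumes the third-party `openai` Python package and an `OPENAI_API_KEY` environment variable; the function names and prompt wording are our own illustrations, not an official workflow.

```python
def draft_section_prompt(section, project_summary):
    """Build the request text for one proposal section (step 4 above)."""
    return (
        f"Draft the {section} section of a higher-education grant proposal. "
        f"Project summary: {project_summary} "
        "Use a formal, persuasive tone and keep it under 400 words."
    )

def draft_section(section, project_summary):
    """Send the prompt to GPT-4o. Requires the `openai` package and an API key."""
    from openai import OpenAI  # third-party package, assumed installed
    client = OpenAI()          # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": draft_section_prompt(section, project_summary)}],
    )
    return resp.choices[0].message.content

# Usage (requires network access and an API key):
#   text = draft_section("methodology",
#                        "A flipped-classroom pilot to improve engagement.")
```

Whether you work in the chat interface or through a script like this, the drafting and the iterative refinement that follows are the same process.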
Detailed Illustration

I. Identifying Grant Opportunities:

Initial Prompt (getting us in the ballpark): “Help me find grant opportunities for a higher education project focused on improving student engagement through innovative teaching methods. Summarize the requirements, deadlines, and eligibility criteria for each.”

ChatGPT-4o Response: Here are some potential grant opportunities:

1. Title: NSF Improving Undergraduate STEM Education (IUSE)

  • Requirements: Focus on STEM education innovations, detailed project plan, evaluation metrics.
  • Deadline: November 1, 2024
  • Eligibility: Higher education institutions in the U.S.

2. Title: NEH Digital Humanities Advancement Grants

  • Requirements: Projects that enhance digital humanities, clear project objectives, collaboration with other institutions.
  • Deadline: January 15, 2025
  • Eligibility: U.S. higher education institutions, non-profits.

3. Title: Spencer Foundation Small Research Grants

  • Requirements: Research on education practices, under $50,000 budget, project timeline.
  • Deadline: March 1, 2025
  • Eligibility: U.S. and international higher education institutions.

The iterative engagement with ChatGPT-4o would continue...

II. Brainstorming and Concept Development:

Additional (n-th) Prompt: “I need ideas for a project to improve student engagement in higher education using innovative teaching methods. Can you suggest some objectives, methodologies, and expected outcomes?”

ChatGPT-4o Response:

Project Ideas:

1. Objective: Enhance student engagement through flipped classroom models.

  • Methodology: Implement pre-recorded lectures, interactive in-class activities, peer collaboration.
  • Expected Outcomes: Increased student participation, improved academic performance, higher satisfaction.

Observations:

The engagement with ChatGPT-4o would continue through the process of refining the key ideas, connecting them to the right funding opportunity, performing a literature review, and writing and editing the proposal. In addition, you might employ other platforms, such as ResearchRabbit, to deepen your understanding of the academic landscape, check ChatGPT-4o's output, and refine your ideas.

By following these steps and leveraging ChatGPT-4o for each phase of the grant proposal process, you can efficiently gather information, develop a compelling narrative, and refine your writing to increase the likelihood of securing funding.

Be sure that the funding agency does not prohibit the use of GenAI in this manner, and disclose your use of platforms such as ChatGPT-4o for those that allow it.

Other Use-Cases

Automated Onboarding and Training:

Use GenAI to create personalized learning and onboarding programs for new hires. GenAI can generate customized content based on the role and previous knowledge level of the new employee, thereby improving the effectiveness of training programs.

Content Creation and Updating:

GenAI can assist with the discovery of new connections among ideas, events, and data, thereby assisting in knowledge creation. GenAI can also be used to create marketing materials, social media posts, and even creative text formats like presentations or scripts. This can help businesses save time and resources while still producing high-quality content.

Information Retrieval:

GenAI models can be trained to search through vast amounts of data to find relevant information upon request. This can include searching through emails, documents, databases, and more, delivering precise information that matches the query.

Question Answering Systems:

Implement AI-driven Q&A systems that can understand and respond to employee queries in natural language. This can help in quickly providing answers to common questions, reducing the workload on human support staff and speeding up information retrieval.

Locate Experts:

GenAI systems can analyze the skills and work history of employees to identify subject matter experts within an organization. This helps in effectively routing queries and tasks to the most appropriate individuals.

Semantic Search and Clustering:

Enhance search systems with AI to perform semantic search, which understands the intent and contextual meaning of a query, rather than relying solely on keywords. Additionally, AI can cluster related documents and topics automatically, improving the organization of information.
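
As a simplified illustration of similarity-based ranking, the sketch below scores documents against a query using cosine similarity. Note the heavy simplification: the `vectorize` function here is just a bag-of-words count, whereas genuine semantic search relies on dense embeddings from a trained model; the sample documents and function names are our own illustrative choices.

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Toy stand-in for an embedding: a bag-of-words count vector.
    Real semantic search would use dense vectors from a trained model."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors (0.0 to 1.0)."""
    dot = sum(a[term] * b[term] for term in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rank_documents(query: str, docs: list[str]) -> list[tuple[float, str]]:
    """Return (score, document) pairs, best match first."""
    q = vectorize(query)
    return sorted(((cosine_similarity(q, vectorize(d)), d) for d in docs), reverse=True)

docs = [
    "grant deadlines for undergraduate STEM education",
    "campus parking policy update",
    "improving student engagement with flipped classrooms",
]
for score, doc in rank_documents("improving student engagement", docs):
    print(f"{score:.2f}  {doc}")
```

Swapping `vectorize` for calls to an embedding model is what turns this keyword-overlap ranking into true semantic search; the clustering mentioned above can be built on the same similarity scores.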

Enhance Creativity:

Music Composition: GenAI can compose music, helping musicians explore new styles, generate background scores, or even create sound effects.

Product Design: GenAI can be used to generate new product ideas and variations, assisting designers in the brainstorming and exploration phase.

Visual Art Creation: GenAI can create paintings, illustrations, and graphics, allowing artists to explore new directions and expedite the creative process.

Software Development:

Code Generation: GenAI can assist programmers by automatically generating code snippets based on natural language instructions.

Bug Detection: GenAI can be used to identify and fix bugs in software code.

And Many More...

IV. Evaluating GenAI Output

GenAI platforms can hallucinate. That is, they can provide output that is presented as true but which is actually false -- an inaccurate extrapolation from its training data in response to a prompt. From one perspective, this tendency is a good thing, at least for certain types of tasks: If one is using GenAI to brainstorm or write a fictional story, then the tendency to hallucinate is analogous to the human imagination. But in the context of finding facts, the same tendency is detrimental.

For this reason, an essential skill in using GenAI is the ability to effectively evaluate the output. This is a complex process, depending on what one is doing, and although a lot of effort is being put into preventing GenAI platforms from making mistakes, that does not relieve us of the responsibility to check the output. The time and energy one invests in evaluating output will depend on one's purpose: Using GenAI to produce legal briefs and review legal precedent would demand a rigorous evaluation (at least if one wants to keep one's job); a casual review of a topic for personal interest might not warrant as intensive an investigation.

Here are some helpful guidelines one can use in evaluating GenAI content.

Use GenAI to check GenAI

It can be helpful to use one platform to check the output of another. This can be done by feeding the output from one system into another and prompting it to check for factual inaccuracies, or it can be done by prompting multiple platforms with the same information and comparing the outputs for deviation. Of course, one can also check the output against reputable sources of information, such as online encyclopedias, peer-reviewed articles in reputable journals, etc.
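
As a rough sketch of the "compare outputs for deviation" idea, the following flags answers from different platforms whose wording diverges sharply. This is only a heuristic of our own devising: character-level similarity is a crude proxy for comparing claims, and a flagged pair still requires human fact-checking.

```python
from difflib import SequenceMatcher

def deviation_score(answer_a: str, answer_b: str) -> float:
    """Rough textual deviation between two answers:
    0.0 means identical wording; values near 1.0 mean very different."""
    return 1.0 - SequenceMatcher(None, answer_a, answer_b).ratio()

def flag_for_review(answers: list[str], threshold: float = 0.5) -> bool:
    """Flag a set of answers (one per platform) for human fact-checking
    if any pair of them diverges beyond the threshold."""
    for i in range(len(answers)):
        for j in range(i + 1, len(answers)):
            if deviation_score(answers[i], answers[j]) > threshold:
                return True
    return False
```

Two near-identical answers pass quietly, while answers that disagree substantially trigger a review flag; the 0.5 threshold is arbitrary and should be tuned to your tolerance for false alarms.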

Other Criteria for Evaluating GenAI Output

Accuracy and factual content: Fact-check. Verify the information provided by the GenAI against reliable sources. This is crucial, especially for factual topics like history, science, or current events.

Readability and coherence: The generated text should be clear, easy to understand, and logically structured.

Originality and creativity: Depending on the purpose, you might assess the creativity or originality of the content.

Relevance and focus:

Assess the fit with the prompt: See if the GenAI output aligns with the original instructions or prompt you provided.

Check for tangents or irrelevant information: Ensure the content stays focused on the topic and avoids spinning off into unrelated details.

Here are some additional tips:

Use multiple prompts: Try different prompts or rephrase your instructions to see how the GenAI output changes. This can reveal the model's flexibility and understanding of the task.

Consider the limitations of GenAI: These models are still under development, and their outputs can be imperfect. Be prepared to identify and correct errors or shortcomings.

Identifying Bias

What we mean by “bias”

There is a lot of talk about bias these days, and a fair amount of confusion. “Bias” refers to a distortion in our thinking, a distortion that is often self-serving and comes at the expense of others. Hence, bias conflicts with objective, fair, and conscientious thinking (what we might refer to as “rational thinking”).

We should distinguish between being biased and acting or speaking with bias. To say someone is biased is to suggest the person has an enduring tendency to distort information. By contrast, to notice that a person is acting or speaking with bias is to suggest that a particular instance of a person's behavior is biased (rather than the person him- or herself).

This is an important difference: The first is a judgment about the character of the person; the second is an observation about a specific event. We are often justified in making the second judgment, but we are seldom justified in drawing a broad conclusion about the type of person (or the type of thinking a person exhibits) based on the information available.

It is also helpful to distinguish bias from perspective. Perspective is the vantage point from which we observe something. Someone may view an issue from the perspective of a parent, or an educator, or a citizen, or an American. Different perspectives on an issue are like the different angles from which we view the Empire State Building from different parts of New York City: From each vantage point, the building looks different -- taller, shorter, darker, lighter, rectangular, square, etc.

There is nothing morally wrong about sharing a perspective, as long as one doesn't assume it's the only point of view. There is something morally wrong with biased behavior or speech, as this is a distortion of reality that negatively impacts others.

So we need to be clear on the difference between being biased and acting or speaking with bias, and we also need to be clear on the difference between sharing a perspective and distorting the facts.

Bias and GenAI

Like it or not, everyone acts with bias at one time or another. Moreover, all human beings are predisposed -- biologically -- to distort information in certain ways. These distortions evolved to keep us cautious (e.g., the tendency to overgeneralize, or to distinguish an “in-group” from an “out-group”) and to ensure our needs are met.

The reason for noting this point is that it is easy to fall victim to a particular kind of bias when we make judgments about bias, namely, assuming that it's the other person who has the problem, not me (“My thinking is fair and reasonable! It's the other person whose thinking is distorted.”). This is related to the fundamental attribution error: our tendency to excuse our own behavior while condemning the same behavior in others, even when the facts don't support that judgment.

What does this have to do with evaluating GenAI output? Quite a lot, actually. First, if we mistakenly believe that the goal is to avoid all bias, i.e., to be perfectly rational, objective thinkers, then we will create an unrealistic goal by which to judge ourselves and others, including GenAI.

The second reason this reflection is important is that a failure to understand bias and how it differs from perspective will translate into a distorted evaluation of GenAI output. For example, if a GenAI platform provides information that we disagree with but that represents a legitimate perspective on a topic, we may be inclined to dismiss the output as biased when it isn't. That's a mistake on our part, not on the part of GenAI.

In other words, in order for us to mitigate bias in GenAI output, we need to be able to mitigate it in our own thinking, which is not an easy task.

Types of GenAI Bias

As human beings act and speak with bias, it's not surprising that GenAI, which is trained on enormous amounts of human output, picks up and occasionally amplifies bias. Here are a few of the forms of bias that GenAI is prone to:

Five Forms of GenAI Bias

Training Data Bias

  • Definition: This occurs when the data used to train a model is not representative of the broader population or context in which the AI will be applied.
  • Explanation: If the training data is skewed or incomplete, the model will learn and reproduce those biases. For instance, an AI trained on a dataset with predominantly male job applicants might favor male candidates, perpetuating gender bias.
  • Underlying Reason: AI models learn patterns from the data they are trained on. If the data reflects historical biases or is unbalanced, the model will inherently learn and apply those biases.

Algorithmic Bias

  • Definition: Bias that arises from the design or functioning of the algorithm itself, often due to the choices made by developers during the creation of the model.
  • Explanation: Certain algorithms may inadvertently weigh some features more heavily than others, leading to biased outcomes. For example, an algorithm that prioritizes recency might favor newer data, ignoring potentially valuable older information.
  • Underlying Reason: The mathematical foundations and assumptions built into algorithms can introduce bias, especially if the design does not adequately account for diverse scenarios or fairness considerations.

Confirmation Bias in GenAI Development

  • Definition: The tendency of developers to focus on information or feedback that confirms their existing beliefs about the AI system, while neglecting or undervaluing information that contradicts it.
  • Explanation: Developers might train and fine-tune models based on a subset of feedback that supports their hypotheses, leading to a biased understanding of the model's performance and limitations.
  • Underlying Reason: Human developers are prone to their own cognitive biases, which can influence how they select data, interpret results, and make decisions during the development process.

Selection Bias

  • Definition: Occurs when the data used to train an AI model is not randomly sampled, leading to overrepresentation or underrepresentation of certain groups.
  • Explanation: If the data collection process inadvertently excludes certain demographics, the AI model will be less accurate for those groups. For example, a facial recognition system trained primarily on lighter-skinned individuals may perform poorly on darker-skinned individuals.
  • Underlying Reason: Practical constraints and historical practices in data collection can result in datasets that do not fully capture the diversity of the population, leading to biased model performance.

Interaction Bias

  • Definition: Bias that emerges from the way users interact with AI systems, which can reinforce existing biases in the model.
  • Explanation: User behavior and feedback can shape the ongoing learning of AI systems. For instance, if users disproportionately click on certain types of content, the AI might learn to prioritize similar content, amplifying biases present in user behavior.
  • Underlying Reason: AI systems often adapt based on user interactions. If user inputs are biased, the AI will adjust its behavior to align with those biases, creating a feedback loop that reinforces and potentially exacerbates the biases.
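
The feedback loop just described can be made concrete with a deterministic toy model (our own construction, not a description of any actual platform): two content items start nearly equal, each round new engagement is allocated in proportion to current popularity raised to a power greater than one, and the small initial edge compounds.

```python
def simulate_feedback_loop(rounds: int = 50, amplification: float = 1.2) -> dict[str, float]:
    """Toy model of interaction bias: each round, one unit of new engagement
    is split between items in proportion to current weight raised to a power
    greater than 1, so a small initial lead compounds (rich-get-richer)."""
    weights = {"A": 1.1, "B": 1.0}  # item A starts with a slight edge
    for _ in range(rounds):
        total = sum(w ** amplification for w in weights.values())
        for item in weights:
            weights[item] += weights[item] ** amplification / total
    grand_total = sum(weights.values())
    return {item: w / grand_total for item, w in weights.items()}

print(simulate_feedback_loop())  # item A's share grows past its initial ~52%
```

Because the amplification exponent exceeds 1, item A's share increases every round; with more rounds, the loop drifts toward monopoly, which is the amplification effect the definition above describes.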

Underlying Reasons for GenAI Proneness to Bias

Historical Data Bias: GenAI models are often trained on large datasets containing historical biases. These biases reflect societal prejudices and discriminatory practices embedded in the data.

Lack of Diverse Data: Many datasets lack sufficient diversity, leading to models that do not generalize well across different demographics or contexts.

Complexity of Human Language: Language models are particularly susceptible to bias because language itself is complex and laden with cultural, social, and individual biases.

Developer Bias: The biases of those who design, build, and train AI models can inadvertently influence the outcomes, perpetuating their own conscious or unconscious biases.

Feedback Loops: AI systems that learn from user interactions can develop biases based on the skewed input they receive, which can be further amplified over time.

Understanding these forms of bias is crucial for developing strategies to mitigate their impact, ensuring that AI and GenAI systems are fair, ethical, and more accurate in their applications.

Mitigating GenAI Bias

Training Data Bias

To address Training Data Bias in GenAI, it's essential to understand that the AI's knowledge and outputs are fundamentally shaped by the data on which it was trained. These data may contain historical, cultural, or demographic misinformation. When using GenAI in educational settings, be aware that it might have limited or skewed information about certain topics, cultures, or time periods.

Encourage students to question GenAI's sources and perspectives, especially when dealing with diverse or historically underrepresented subjects. Use GenAI's potential limitations as a teaching opportunity to discuss the importance of diverse data sources in technology and research. When exploring topics where bias might be particularly impactful, such as history or social sciences, supplement GenAI outputs with a variety of authoritative sources representing different viewpoints.

Consider comparing AI-generated content with current academic research or expert opinions to identify potential gaps or biases in the platform's knowledge base. By highlighting the impact of training data on AI outputs, you can foster integral thinking about information sources and the value of diverse perspectives in knowledge creation.

Algorithmic Bias

To address Algorithmic Bias, it's crucial to approach the GenAI's outputs with a critical eye, especially when dealing with sensitive topics or diverse populations.

Be aware that the AI may unintentionally perpetuate societal stereotypes or biases in its generated content. When using GenAI for educational materials or discussions, carefully review the outputs for potential biases related to race, gender, age, socioeconomic status, or cultural perspectives. Encourage students to question and analyze AI-generated content, fostering critical thinking skills.

Confirmation Bias in GenAI Development

To address Confirmation Bias in GenAI development, it's important to understand that the creators of these systems may unconsciously design or interpret GenAI outputs in ways that confirm their preexisting beliefs or hypotheses.

When using GenAI in educational settings, encourage students to approach the AI's responses with healthy skepticism, especially when dealing with complex or controversial topics. Teach them to look for alternative viewpoints or contradictory information, rather than accepting the AI's first response as definitive. Use GenAI as a starting point for discussions about the scientific method, critical thinking, and the importance of challenging one's own assumptions.

Create assignments that require students to deliberately seek out information that contradicts their initial hypotheses or GenAI's responses. By highlighting the potential for Confirmation Bias in GenAI development, you can help students become more discerning consumers of AI-generated information and better understand the human factors influencing technology development.

This awareness can also spark valuable discussions about objectivity in research and the role of diverse perspectives in creating more robust and unbiased AI systems.

Selection Bias

To mitigate Selection Bias, it's important to understand that GenAI's knowledge may be limited or skewed based on its training data. Diversify your prompts and topics when using the tool, and don't rely solely on its outputs for comprehensive information.

Be aware that the AI might have gaps in knowledge about certain cultures, historical periods, or specialized fields. When using GenAI for educational content or research, always cross-reference its information with reputable, diverse sources. Encourage students to do the same, teaching them to view GenAI as a starting point rather than a definitive source.

By being conscious of potential data limitations and actively seeking diverse perspectives, you can help counteract the effects of Selection Bias in GenAI outputs.

Interaction Bias

To mitigate Interaction Bias, it's important to be mindful of how you phrase queries and interpret responses. Strive for objectivity in your prompts, avoiding leading questions that might steer GenAI toward biased answers. Be aware of your own biases and preconceptions that could influence how you interpret the AI's outputs. Encourage students to do the same, teaching them to craft clear, neutral questions and to evaluate GenAI's responses critically.

Regularly vary the way you ask questions about the same topic to see if you get different perspectives. Be cautious of anthropomorphizing the AI or attributing human-like qualities to it, as this can lead to misinterpretation of its capabilities or intentions.

When using GenAI for classroom activities or discussions, create opportunities for students to reflect on their interactions with the platform being used, discussing how their questions might influence the answers they receive. By fostering an awareness of Interaction Bias, you can help students become more discerning users of AI technology and better critical thinkers overall.

V. GenAI in the Classroom

Before we get into the details of whether and how to incorporate GenAI in the college classroom, let's begin with the elephant in the room:

“Isn't it cheating if students use GenAI to complete their coursework?”

By definition, to cheat means to defraud or swindle. It can also mean to violate established rules or social conventions. Certainly, student usage of GenAI is cheating if it violates an explicit prohibition in our syllabus, but that judgment is little more than arbitrary unless we understand why one would establish such a prohibition in the first place.

There are, of course, different and rational reasons for banning GenAI from certain instructional contexts and practices, but there is also a tendency to foreclose new opportunities because we too quickly judge what counts as cheating based on an ethical standard tied to the printed word and traditional literacy. As the technology at the heart of knowledge and pedagogy changes, so too must our views on what counts as legitimate learning. That's exactly what's happening with GenAI: It's altering our understanding of the nature of learning, verification, human-technology collaboration, and knowledge itself.

In other words, our ideas about cheating must evolve as methods for teaching and learning change under the influence of GenAI. This isn't a bad thing, unless one views change as bad. And it doesn't mean that the valuable aspects of traditional teaching and learning will be thrown out. It means that we need to be open to the possibility that GenAI will augment teaching and learning in important and unexpected ways. Hence, what we need is experimentation with GenAI, the emergence of stable practices that support learning, and an adjusted concept of academic integrity that squares with these new practices. If we prohibit GenAI use altogether, or we do so on the basis of a commitment to an increasingly outdated set of practices, then we lose out on an exciting opportunity to help change education for the better.

As teachers, our job is to ensure learning, and the intersection between GenAI and learning is inevitable. Yes, if a student violates an explicit syllabus policy against using GenAI, then they are guilty of cheating. But is this judgment the result of a moral failing on the part of the student or the result of unnecessarily restrictive pedagogy? It's a fair question. The material below is intended to invite a thoughtful consideration of how GenAI might enhance and change the ways we teach and assess student learning, and hence how we think about cheating in a GenAI-augmented context.

“But where do I start?”

It's difficult to know where to begin when it comes to integrating GenAI into instruction. We've been teaching effectively for a long time using tools and techniques that are well-honed and central to our craft. The idea that some of these tools and techniques -- perhaps many of them -- must be adjusted can be disheartening.

Here's a practical recommendation: Just try something. If you haven't explored the current GenAI tools, look to the list above and pick one. Spend a little time playing around, trying different prompts, and evaluating the output. Remember, we are all in this together, faculty and students. If you feel you'd like to try the technology in the classroom, talk with your students about what they know -- and just as importantly, what they don't know. Be creative.

Whether you are just starting out or are steeped in AI, here is a resource that is easy to use and designed for educators by educators.

Theoretical Considerations

If you are interested in a more robust approach to integrating GenAI into instruction, you might consider adopting a framework such as the one developed by Jiahong Su and Weipeng Yang (2023). Their approach follows four basic steps, which are illustrated in the diagram to the right.

  1. Identify the desired outcomes
  2. Determine the appropriate level of automation
  3. Ensure ethical considerations
  4. Evaluate the effectiveness

These steps look obvious, but they are not. Identifying the desired outcomes, for example, opens a vast range of possible GenAI applications in the classroom, many of which may be constrained by preconceived notions about AI technology.

Identifying Desired Outcomes

Of course, it is legitimate to use GenAI to assist with traditional academic tasks, such as summarizing documents. But it is also important to explore ways in which GenAI may challenge and alter traditional learning goals and teaching methods.

In other words, the “desired outcome” here may be to have students engage GenAI platforms in playful and creative ways, rather than having them complete a traditional, well-defined task. That said, it is still important to identify our desired outcomes for GenAI applications as best we can, appealing to tradition as well as innovation.

Determining the Level of Automation

Obviously, determining the appropriate level of automation will depend on the first step. Hence, to the degree that the first step is complicated, so is the second. In redesigning assignments such as writing an essay, we may determine the level of automation based on the specific skills we want students to learn. For example, we may want them to brainstorm ideas with the help of GenAI but write an outline on their own (or vice versa), and hence our restrictions on using GenAI (automation) depend on how we prioritize aspects of the learning process.

But suppose we want to explore how students interact with GenAI more freely, e.g., by using the technology as they see fit to come up with a marketing strategy for a new product. In that case, we may wish to put fewer restrictions on the students’ use of GenAI and monitor the entire process (through student reports).

The level of automation is linked to our goals, and the creative exploration of how GenAI may be applied in instruction will probably require more automation rather than less – meaning that students should not have many restraints placed on their usage. No course will be designed entirely around this unrestrained usage, but there are good pedagogical reasons for creating spaces within a course where this type of playfulness is useful and relevant.

What about ethical considerations?

The challenge of ensuring that ethical considerations are met is related to a more basic challenge of understanding what ethical considerations are to begin with. There are different ethical frameworks, and those frameworks identify different issues as worthy of consideration. Is it a person’s intention that matters most when we consider whether an action is right or wrong, the consequences of their action, or something else? Do ethical considerations change with broader social changes (e.g., disruption to traditional practices and beliefs as the result of technology, for example), or are there fixed, universal ethical standards? Generally, people lack what we might call “ethics literacy,” which means that ethical questions about GenAI are obscure and confusing.

That said, there is no shortage of strong feelings about right and wrong when it comes to things like students using GenAI to write an essay or complete an exam. Yet our feelings are often reflective of longstanding social norms, personal desires, or habits rather than well-reasoned ethical principles. And these norms and habits are being disrupted by GenAI, which means our ethical intuitions might well be reflective of resistance to change rather than genuine acts of wrongdoing on the part of the student.

In the context of radical change, how do we get our moral bearings? What is our ethical North Star? We noted above in the section on AI literacy that the first ethical competency is to “do no harm.” We can build on this maxim by adopting a flexible and humanistic framework called “Care Ethics.” One important element of Care Ethics in the context of GenAI adoption is its directive to uphold the dignity and humanity of the person.

Care Ethics and GenAI: A Framework for Application

Core Principles of Care Ethics

Care Ethics, also known as the Ethics of Care (EoC), is a moral theory that emphasizes the importance of interpersonal relationships and the virtues of benevolence, compassion, and empathy. This framework contrasts with more traditional ethical theories that focus on abstract principles and rules, such as deontology and utilitarianism.

Among its other commitments, Care Ethics

  • Emphasizes the interdependence of people and the importance of relationships in ethical decision-making. Human agency is seen through the lens of our interconnectedness rather than individualism.
  • Elevates emotions and empathy as critical components of moral reasoning. Emotions are not seen as irrational but as vital to understanding and responding to the needs of others.
  • Recognizes that ethical decisions are context-dependent and must consider the specific circumstances and relationships involved. It rejects one-size-fits-all solutions in favor of tailored responses to unique situations.
  • Prioritizes responding to the needs of others, especially those who are vulnerable or dependent, in tangible ways.
  • Focuses on the quality of relationships and the mutual responsibilities they entail. It underscores the importance of nurturing trust, respect, and mutual concern through our decisions and actions.

Care Ethics thus places human beings at the center of the ethical sphere and prioritizes interpersonal connection, compassion, and well-being. From these commitments, we can derive the following guiding principles for using AI and GenAI in the higher education context:

  1. Prioritizing Student and Faculty Well-Being: Ensure that AI and GenAI applications do not hamper the emotional, mental, and physical well-being of students and faculty (beyond our natural stress-response to change).
  2. Fostering Inclusive and Respectful Relationships: Use AI and GenAI in a manner consistent with respect for the exchange of free ideas and the value of diverse perspectives.
  3. Context-sensitive Implementation: Implement AI solutions tailored to the specific needs and contexts of different educational environments, i.e., the unique characteristics of each program, course, and student population.
  4. Transparency and Accountability: Ensure that AI and GenAI adoption and use are transparent and accountable in their operations and decision-making processes.
  5. Encourage Benevolence: Employ AI and GenAI in ways that enhance human creativity, integral reasoning, and empathy.

Like the details about AI literacy, this AI-Ethics framework is malleable and should be adapted to the specific needs of different instructional areas.

For more detailed information on Care Ethics, see the link below:

Internet Encyclopedia of Philosophy

Remember: As educators, the most important relationships to cultivate and preserve are with our students. Engaging students as partners in applying and evaluating GenAI can be a catalyst for building healthy, meaningful bonds with those we are committed to serve.

Evaluating the Effectiveness of GenAI in the Classroom

How does one evaluate the effectiveness of GenAI use-cases in higher education? The answer depends on what our desired outcomes are and whether GenAI has assisted us in achieving those outcomes.

Mastery of discipline-specific skills and concepts: If GenAI is used to promote skills and concepts unrelated to GenAI (e.g., knowledge of a particular historical period or how to diagnose an illness), then we can use traditional assessments to measure student mastery. But we may also want to measure the impact of GenAI on the learning process itself, such as its potential to reduce students' time investment or use of resources (as well as such things as student satisfaction). Student surveys, in-class discussions, and other conventional tools may be used to collect data on these and other influences of GenAI on learning.

Mastery of AI Literacy: Measuring GenAI literacy as an outcome of GenAI applications in the classroom can be done using conventional assessment methods, such as quizzes, projects, or portfolios. We can get students to reflect on their GenAI use in various ways, including through dialogue, class discussion, and written reflections. There is nothing inherently problematic about assessing AI literacies unless our own knowledge as instructors lags behind that of our students, and this is reflected in the assessment tools we develop (which is a reason for remaining AI literate ourselves).

Unexpected Outcomes: Given the nature of GenAI, there will be outcomes of our instructional applications that we can't predict. Some may be advantageous and suggest new instructional practices or literacies, others may be deleterious. Documenting these unexpected outcomes, analyzing their connections to GenAI implementation, and attempting to replicate or prevent them (depending on whether they promote or inhibit learning, for example) will be an important part of the evaluation process. Proceed thoughtfully.

Plagiarism and Academic Integrity

We've already touched on the issue of student cheating, and we've suggested that the disruptive nature of GenAI requires that we rethink traditional ideas about academic integrity. However, part of the conversation about evaluating the effectiveness of GenAI implementation is inseparable from the question of whether students are using this technology ethically. For this reason, we recommend appending the following considerations to our earlier recommendations about AI ethics:

  1. Make sure your syllabi contain policies related to using GenAI both inside and outside the classroom (see the Assessment section below for resources). Provide reasonable, student-friendly justifications for your policies rather than blanket prohibitions or unexplained license for students to use GenAI as they see fit. Frame these policies in a way that doesn't stifle dialogue with your students about unanticipated applications, and be open to the possibility of using GenAI in assignment design and assessment. We want to promote AI literacy where possible.
  2. Enforce syllabus policies with compassion and an appropriate level of humility. Students are socialized in digital environments where concern for truth, intellectual property, and citation is largely absent, so they are habituated to prioritize other things (such as emotions, popularity, and influence). While, ideally, students should enter our classrooms knowing what counts as plagiarism and academic integrity, the reality is that many don't. And where GenAI is concerned, the lines between our own work and the work of machines can be blurry. Where possible, treat syllabus policies as a teaching moment; where not possible, demonstrate compassion in the enforcement of those policies (life is hard enough as it is).
  3. Be extremely careful in relying on GenAI detection tools to support accusations of cheating. These tools vary in their reliability, and they typically fail when students use some ingenuity or other AI tools designed to help them avoid detection. Rather than resorting to GenAI bans and detection tools, it might be worthwhile to show students how to use GenAI responsibly by integrating it into classroom activities and assignments.

Remember: The concept of academic integrity does not exist in a vacuum; it is linked to methods of teaching and scholarship. These methods change with changes in technology, which implies that what counts as “good behavior” will depend on the broader social and technological context in which we live and study. (To illustrate, think about how current ideas of intellectual property might look to members of a culture that views knowledge as a public resource, such as one that relies on oral storytelling rather than the printed word to hand knowledge down from one generation to the next). GenAI is shifting our understanding of intelligence, knowledge, intellectual property, and pedagogy. With this shift, our standards of academic integrity will change.

We recommend partnering with students and colleagues to adjust expectations about the responsible use of GenAI. New, stable expectations will emerge from a shared commitment to thoughtful applications.

Here are a couple of helpful resources for developing syllabus policies:

Data Security Concerns

The use of generative AI platforms like ChatGPT-4o in higher education brings up several data security concerns. One primary issue is the potential exposure of sensitive information. As employees and students interact with AI systems, there is a risk that personal data, academic records, and proprietary research could be inadvertently shared and stored by these platforms. Such data, if not adequately protected, could be susceptible to breaches, unauthorized access, and misuse. Additionally, the reliance on third-party AI providers raises questions about data sovereignty and control, as institutions may not have full visibility into where and how their data is stored and processed.

Third-party service providers such as Grammarly can provide a level of data security that helps to ensure sensitive information is stored and used appropriately. Where an enterprise-level approach to using GenAI is lacking, it is incumbent on everyone—every IRSC employee—to understand the potential issues and use GenAI in a manner that doesn’t expose proprietary or sensitive information.

Recommendations:

  1. Avoid Using GenAI for Sensitive Information: Refrain from using GenAI platforms for work-related processes that involve sensitive information such as student identification numbers, names, or any data that can be traced back to individuals. Alternatively, be sure to scrub any data you are using of personally identifying information.
  2. Be Mindful of the Lack of Anonymity: Using GenAI does not ensure anonymity; your prompts are linked to your IP address, and your data can be used (at least in principle) to tailor feedback to your particular tastes and interests. Prompt appropriately.
  3. Stay Informed and Trained: Participate in training on data security best practices and the ethical use of GenAI (IRSC employees have access to Percipio, which has training on AI and data security, e.g., “Impact of AI Models on Privacy and Data Security”; other trainings will be announced throughout the academic year).
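
The spirit of recommendation 1 can be sketched in a short script that scrubs text before it is pasted into a GenAI prompt. The patterns below (email addresses, U.S.-style phone numbers, and an assumed eight-digit student ID format) are illustrative assumptions only, not IRSC's actual identifier formats; any real workflow should be reviewed with your institution's data-security staff, and the output should still be checked by hand.

```python
import re

def scrub_pii(text):
    """Redact common PII patterns before sending text to a GenAI platform.
    The patterns are illustrative, not exhaustive; review output manually."""
    # Email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # U.S.-style phone numbers, e.g., (555) 123-4567 or 555-123-4567
    text = re.sub(r"\(?\d{3}\)?[\s.-]\d{3}[\s.-]\d{4}", "[PHONE]", text)
    # Student ID numbers (assumed format: eight consecutive digits)
    text = re.sub(r"\b\d{8}\b", "[STUDENT_ID]", text)
    return text
```

A scrubber like this is a safety net, not a guarantee; the safest practice remains keeping sensitive records out of prompts entirely.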

Learning Outcomes, Assessment, and Assignment (Re-)Design

The foregoing material naturally rolls up into considerations of program and course learning outcomes, assessing GenAI-augmented learning, and assignment design that supports traditional and AI-related literacies. Some of these topics have been explicitly covered above, but it may be useful to consolidate the relevant reflections here.

Learning Outcomes

As we well know, learning outcomes are what we expect students to know and be able to do at the end of a learning activity (an assignment, a course, or an academic program). Typically, courses have a set of discipline-relevant outcomes reflecting knowledge of our subjects. They also have outcomes that are relevant to general skills we expect students to develop over the course of their academic careers, such as critical thinking, written and oral communication, information literacy, etc.

Under the influence of GenAI, many of our learning outcomes will need adjustment to reflect GenAI literacies. There is a lot of flexibility around how to make these adjustments: Faculty might come up with a new list of GenAI-related outcomes to add to their current learning outcomes; they might adjust existing outcomes to reflect aspects of GenAI literacy; they might do some combination of both. Whatever one decides, the key is ensuring that learning outcomes evolve with changes to instructional practice, ideas about knowledge, and the skills students need to excel in the workforce or other academic programs.

Assessment

Assessing student learning is also going to change because of the influence of GenAI. How we assess learning is linked to our learning outcomes and teaching methods; as the latter change, so will the former. Moreover, we will almost certainly use GenAI platforms to assess student work (paying close attention to data security, privacy, and transparency). For example, it is fairly easy to build a GPT using OpenAI's GPT Creator to grade assignments according to rubrics, provide written feedback, and identify common challenges students may face that are not apparent to us. Follow the link below to a tutorial on how to build your own GPT (you will need the paid version to build a GPT, but students can access it without the paid version):

Below is a link to a prototype of a GPT that evaluates student work for mastery of a program learning outcome titled “Communication.” The GPT was trained on a rubric developed by faculty and then tuned using sample student artifacts (artificial data). The settings of OpenAI's GPT Creator allow the author to prevent the artifacts from being used to train ChatGPT-4o, which adds a layer of data privacy. (Student names and ID numbers should be removed from artifacts before they are reviewed, regardless.)
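
For those who prefer to experiment outside GPT Creator, the rubric-based grading idea above can also be sketched programmatically: assemble the rubric and the (de-identified) student artifact into a single grading prompt that could be sent to any chat-based model. Everything in this sketch (the function name, the 1-4 scale, the sample criterion) is an illustrative assumption, not the workgroup's actual rubric or tooling.

```python
def build_grading_prompt(rubric, artifact):
    """Assemble a grading prompt from a rubric.
    rubric: dict mapping criterion name -> description of mastery.
    artifact: de-identified student work (scrub PII first)."""
    lines = [
        "You are a grading assistant. Score the student artifact against "
        "each rubric criterion on a 1-4 scale and briefly justify each score.",
        "Rubric:",
    ]
    for name, description in rubric.items():
        lines.append(f"- {name}: {description}")
    lines.append("Student artifact (PII removed):")
    lines.append(artifact)
    return "\n".join(lines)
```

Keeping the rubric in a plain data structure makes it easy for faculty to revise criteria without touching the rest of the workflow, and the same caveats about verifying model output for accuracy and bias apply here as elsewhere.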

It is also easy to build GPTs that can serve as tutors to our students for specific classes and subjects, such as the Humanities Helper GPT built to support students in an introduction to the humanities class taught at Indian River State College.

(Try using the following prompt with Humanities Helper to learn about connections between ancient Greek tragedy and early Western philosophy: “Hi. Can you explain how ancient Greek tragedy, such as Sophocles' Antigone, is related to discussions of early Greek philosophy, e.g., that of Socrates and Plato?”)

If you would like to research the topic of GenAI and assessment, try using this custom GPT built specifically for academic research:

For more details about GenAI and assessment, the following report is quite helpful:

Here's a scholarly article worth reading, identified by Microsoft Copilot and published in the International Journal of Educational Technology: "A scoping review on how generative artificial intelligence transforms assessment in higher education." Here's the Scholar GPT summary of its findings:

The article reviews how generative AI, especially tools like ChatGPT, is transforming higher education assessment. It explores opportunities and challenges for students, teachers, and institutions. For students, GenAI offers perceived unbiased, immediate, and diverse feedback, and self-assessment, but raises academic integrity concerns. Teachers are encouraged to innovate assessment methods, improve AI literacy, and balance GenAI with traditional assessments. Institutions should redesign policies, promote interdisciplinary learning, and provide professional development for teachers. Overall, the review emphasizes the need to modernize assessment practices to align with generative AI's capabilities and implications.

Finally, if you'd like to go down the video-rabbit-hole, you can start with a reflection by the University of Birmingham's Dr. Tarsem Singh Cooner:

Assessing “Traditional” Learning

No doubt, there are aspects of teaching and learning that should be insulated from GenAI, especially when we need to be sure that students have a level of mastery they can apply in contexts without GenAI. Assessing student mastery may then require more familiar forms of assessment, such as in-class projects, proctored assessments, and student interviews.

It is important to explain to students why these forms of assessment remain important and how these tools are beneficial to the learning process. Clear statements in course syllabi about the appropriate uses of GenAI and the value of traditional assessment are essential, but it is also important to explain the rationale in assignment instructions and in class.

How much effort should we put into preventing cheating, and what tools should we use? This is a complex and difficult question. Current GenAI detection tools yield false positives, a point emphasized by Jose Antonio Bowen and C. Edward Watson in their book Teaching With AI: A Practical Guide to a New Era of Human Learning. The authors write:

“Even with good AI-detection software, you still need to consider their effectiveness and the implications of false positives. Even if the rate of false positives is low, are you comfortable with that level of collateral damage, and which student subpopulations it is most likely to hit? The time and effort that you and your institution will devote to finding AI is another consideration: most of us did not grow up wanting to be enforcement officers.” (p. 125)

Redesigning Assignments for GenAI Literacy

Assignment redesign requires a lot of thought. Here are some steps that may be helpful:

  1. Compare existing assignment learning outcomes to GenAI literacies and determine which GenAI outcomes to incorporate.
  2. Identify the elements and sequencing of the assignment that best support the revised/new learning outcomes, noting precisely where and how GenAI should be used by students.
  3. Determine how learning will be assessed (e.g., through an in-class exam, independent project, or ePortfolio).
  4. Test the assignment structure by using GenAI to complete the portions being assigned to students, and make adjustments as necessary.
  5. Provide a thorough explanation of how and why GenAI is to be used by students in the assignment instructions and/or supporting documentation. Offer sample prompts if appropriate, and remind students about the necessity of evaluating GenAI output for accuracy and bias.
  6. Recommend a GenAI platform to use for the assignment, and remind students about the dangers of including personal information in prompts (consider using OpenAI's GPT Creator to tailor a bot for the purposes of the assignment, thereby mitigating data privacy concerns and bias).
  7. Integrate the assignment into the course, adjust syllabus policies related to GenAI if necessary, and weight the assignment appropriately in the course's grading scheme (it may be worth making new assignments fairly low-stakes until one gets a sense of the effectiveness of the assignment design).

Here are some helpful resources to explore as you consider how to redesign assignments to accommodate GenAI-augmented learning:

University of Michigan: https://genai.umich.edu/resources/faculty/redesigning-assessments

Conclusions

Given the fast-paced changes we are seeing across industries as a result of GenAI's ubiquity and power, the conclusions to be drawn from discussions about AI, GenAI, and higher education must be about process and orientation as much as practices. The technology is evolving, and new applications are being marketed daily. Consequently, we must adjust our tolerance level for disruption and chaos, make peace with the reality of quick and disruptive change, and do the best we can to adapt in ways that emphasize the centrality and value of human interests.

GenAI is here to stay, and its capacity to produce human-like, novel responses to prompts will continue to improve. We must adapt to keep pace and to direct the use of GenAI in ways that remain consistent with our enduring values. Humility and curiosity will be foundational for healthy engagement with this technology, as will our willingness to engage students and colleagues in good-faith discussions about where we are and where we're headed.

We have more questions than answers, but that has always been the case at the cutting edge of change. We can pursue the answers together, one conversation at a time.

Online Resources

The 2024 AI Index Report, published by the Stanford Human-Centered Artificial Intelligence Institute (HAI): https://hai.stanford.edu/: The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence.

The Rundown AI is a digital platform that provides comprehensive updates and insights into the latest developments in artificial intelligence. It targets a broad audience, offering news and information on significant AI events, breakthroughs, and collaborations, such as those involving major tech companies like Apple, Microsoft, and Adobe. The website is a hub for AI enthusiasts and professionals looking to stay informed with the most current trends and advancements in the field. For more detailed information, you can visit their website at https://www.therundown.ai/

The Florida AI Learning Consortium (FALCON) is an association of Florida colleges and universities whose vision is to empower and transform higher education through the integration of AI, fostering a future where technology enhances learning, teaching, and research in a manner that is ethical, equitable, and forward-thinking.

The Content Authenticity Initiative is an industry-driven effort to reduce the problem of deep fakes and other forms of deception using AI. Here's the description from its website: “The Content Authenticity Initiative is a cross-industry community of over 3,000 members including civil society, media, and technology companies, founded by Adobe in 2019. Through education and free open-source tools based on the C2PA standard, our movement's goal is to make content authenticity and transparency scalable and accessible.”

GenAI Workgroup

Bodi Anderson (Co-Chair)

Larisa Ray (Co-Chair)

Bruce Fraser

Debbie Valentie

Sean Perry

Suchi Banerjee

Hans Sorenson

Mike Pelitera

Jason Litton

Troy Sheerer

Ricky Carroway

Jason Mastros

Peter Craft

© 2024 Indian River State College. All rights reserved. No part of this website may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law.

Disclaimer: Some information included above has been summarized, augmented, or rewritten using GenAI. All information is provided in good faith and in the interest of promoting thoughtful, independent engagement with AI and GenAI on the part of the user. The authors make no representation or warranty of any kind, express or implied, regarding the accuracy, adequacy, validity, reliability, or completeness of the information.

This resource was created by Dr. Bruce W. Fraser and Dr. Larisa Ray.

  • Dr. Bruce W. Fraser, Director of the Institute for Academic Excellence: (772) 462-7532 (bfraser@irsc.edu)
  • Dr. Larisa Ray, Master Instructor of Economics: (772) 462-7069 (lray@irsc.edu)