Generative AI Guidelines for Instruction

Possibilities and Limitations of Generative AI in the Classroom: UTA Guidelines for Using Generative AI in Instruction to Achieve Student Learning Outcomes (SLOs)

GenAI in the Classroom preview of PDF

Welcome to the UTA guidelines for the considered use of generative AI (Gen AI) in instruction. This resource is crafted to foster informed decisions about leveraging GenAI in teaching and learning within UTA’s diverse academic landscape. Our aim is to illuminate the spectrum of possibilities—from cautious restraint to enthusiastic adoption—always with a clear eye on how these technologies can serve or, at times, detract from achieving the specific Student Learning Outcomes (SLOs) in your courses.

By embracing a balanced perspective, this document endeavors to support instructors in making judicious choices about when and how to integrate GenAI into their pedagogy, as well as when to exclude it, in favor of methods that better align with their educational objectives. Whether considering a nuanced incorporation of GenAI tools or contemplating a comprehensive application, the guidelines within will help faculty navigate these decisions in alignment with UTA’s standards of academic integrity.

We encourage faculty to consult the Table of Contents to find discussions and recommendations most pertinent to their discipline and teaching goals. It is our hope that this guide will serve as a dynamic resource in their instructional toolkit, enabling them to tailor use of GenAI in ways that are most conducive to fulfilling the SLOs of each course taught at UTA.

Researcher Guidance for the Use of Artificial Intelligence in Research

robotic researcher owl running an experiment

Purpose

The use of Generative Artificial Intelligence (GAI) can be a powerful tool for helping members of the UTA research community be more effective, productive, and innovative in their work. At the same time, GAI can be used in a way that may result in unintended negative consequences or that are inappropriate to current academic norms. Uses of GAI in research may involve proposal preparation, progress reports, data/statistical analysis, graphic generation, etc. 

Many standards, regulations, and policies are being contemplated or actively developed at federal, state, or institutional levels as the use and impact of GAI evolves. This guidance is intended to help frame the known requirements in using GAI, the safeguards needed for its ethical and appropriate use, and how to utilize or properly reference GAI used in research.   


Guiding Principles – Allowable and Prohibited Use

Consistent with the UTA Advice on Generative AI, there are general and contextual understandings on the appropriate, shortcomings/potential risks, and prohibited use of GAI in research.  Fundamentally, there must be an understanding that AI is a Technology Tool and that there are safeguards or prohibitions on the use of AI as it relates to the privacy, security, and confidentiality of data GAI may use or contribute to in public domain or within a secure environment.  Therefore, in principle: 

  • Use and develop GAI tools in a manner that is ethical, transparent, and mitigates potential biases.  One should note the known biases of using AI.  For example, NIST’s report on Statistical, Human, and Systemic Biases in AI.  
    • Bias in data. Bias in data, and consequently bias in the AI system’s output, could be a major issue because Generative AI is trained on large datasets that you usually can’t access or assess, and may inadvertently learn and reproduce biases, stereotypes, and majority views present in these data. Moreover, many Generative AI models are trained with overwhelmingly English texts, Western images and other types of data. Non-Western or non-English speaking cultures, as well as work by minorities and non-English speakers are seriously underrepresented in the training data. Thus, the results created by Generative AI are definitely culturally biased. This should be a major consideration when assessing whether Generative AI is suitable for your research.
  • Use and develop GAI tools in a manner that promotes institutional and research integrity, including scientific rigor and reproducibility. Do not rely on GAI tools in the stead of your own critical thinking and sound judgment.
    • Ethical issuesData privacy is more complicated with Generative AI when you don’t know for sure what Generative AI does with your input data. Transparency and accountability about the Generative AI’s operations and decision making processes can be difficult when you operate a closed-source system.
    • Plagiarism. Generative AI can only generate new contents based on, or drawn from, the data that it is trained on. Therefore, there is a likelihood that they will produce outputs that are similar to the training data, even to the point of being regarded as plagiarism if the similarity is too high. As such, you should confirm (e.g. by using plagiarism detection tools) that Generative AI outputs are not plagiarized but instead “learned” from various sources in the manner humans learn without plagiarizing. 
  • Users of GAI are responsible and accountable for any actions or outcomes that result from their use and development of GAI tools.
    • AI hallucination. Generative AI can produce outputs that are factually inaccurate or entirely incorrect, uncorroborated, nonsensical or fabricated. These phenomena are dubbed “hallucinations”. Therefore, it is essential for you to verify Generative AI-generated output with reliable and credible sources.
Allowable Use
Recognizing AI is a Technology Tool for Research

In any use of GAI, there should be self-awareness and recognition that AI is a tool for research.  It is a tool of human design and use like any technology.  It does not supplant nor surpass human oversight or context of being an independent tool used by humans for its’ benefit.  Therefore, in principle:

  • Be alert to the potential for research misconduct (i.e., data falsification, data fabrication, and/or plagiarism) when using and developing GAI tools.  Knowing the methods and data collection use in GAI. 
  • Disclose use of GAI tools when appropriate or required (e.g., a journal that will accept a manuscript developed using GAI, provided such use is disclosed).
  • Ensure any experimental data used in connection with an GAI tool are accurate, relevant, legally obtained and, when applicable, have the consent of the individuals from whom the data were obtained.
  • Make sure you can clearly explain how any GAI tools you create were developed (e.g., describe the data and machine learning models or deep learning algorithms used to train a Large Language Model AI tool).
    • Prompt Engineering. The advent of Generative AI has created a new human activity – prompt engineering – because the quality of Generative AI responses is heavily influenced by the user input or ‘prompt’. There are courses dedicated to this concept (see our “other training” page). However, you will need to experiment with how to craft prompts that are clear, specific and appropriately structured so that Generative AI will generate the output with the desired style, quality and purpose.
    • Knowledge Cutoff Date. Many Generative AI models are trained on data up to a specific date, and are therefore unaware of any events or information produced beyond that. For example, if a Generative AI is trained on data up to March 2019, they would be unaware of COVID-19 and the impact it had on humanity, or who is the current monarch of Britain. You need to know the cutoff date of the Generative AI model that you use in order to assess what research questions are appropriate for its use.
  • Be mindful of how sampling bias in training data and difficulties in interpreting output can be significant roadblocks for the ethical and transparent usage of GAI.
  • Make sure any GAI tools you use or develop are subject to human oversight (e.g., humans are involved in the design, development, and testing of the tool).
  • Subject any GAI tools you develop with rigorous quality control measures (e.g., test for accuracy and reliability).
  • Exercise caution regarding vendor claims about GAI-enabled products, as definitions of GAI and how it is implemented may vary. GAI-enhanced products may not always outperform non-GAI alternatives.
    • Model Continuity. When you use Generative AI models developed by external entities / vendors, you need to consider the possibility that one day the vendor might discontinue the model. This might have a big impact on the reproducibility of your research. 
Privacy, Security and Prohibited Use
  • Privacy, Security, and Rules Apply to GAI Tools – be cognizant to identify and protect the privacy and security of individuals when using and developing GAI tools.
    • Security. As with any computer or online system, a Generative AI system is susceptible to security breaches and malicious attacks. For example, a new type of attack, prompt injection, deliberately feeds harmful or malicious contents into the system to manipulate the results that it generates for users. Generative AI developers are designing processes and technical solutions against such risks (for example, see OpenAI’s GPT4 System Card and disallowed usage policy. But as a user, you also need to be aware what is at risk, follow guidelines of your local IT providers, and do due diligence with the results that a Generative AI creates for you.
  • Do not use GAI tools when prohibited (e.g., a sponsor that does not allow use of GAI for peer review).
  • Do not provide or share intellectual property or confidential/sensitive data with GAI tools that incorporate users’ content into their publicly accessible models.
  • Report any potential data breaches or confidentiality lapses involving GAI tools to the appropriate UTA authority.
  • Any acquisition, deployment, or agreement involving a vendor or vendor product using GAI must follow UTA’s established legal, information security, audit, and procurement rules.
  • In general, student records subject to FERPA, health information, proprietary information, and any other information classified as Confidential or Controlled under UTA’s Data Classification Standards must not be used within public AI models.

Gen AI Tools of any sort cannot be used for any activity that would be illegal, unethical, fraudulent or violate any state or federal law or UTA or UT System policies.

Federal Agency Use of GAI and Requirements for Proposals

National Science Foundation (NSF)

The National Science Foundation recently announced in its “Notice to Research Community:  Use of Generative Artificial Intelligence Technology in the NSF Merit Review Process” that NSF reviewers are prohibited from uploading any proposal content or review records to non-approved GAI tools (they must be behind NSF’s firewall) out of concern for potential violations of confidentiality and integrity principles of the merit review process. Use of GAI in NSF proposals should be indicated in the project description. Specifically, it states: “Proposers are responsible for the accuracy and authenticity of their proposal submission in consideration for merit review, including content developed with the assistance of generative AI tools. NSF’s Proposal and Award Policies and Procedures Guide (PAPPG) addresses research misconduct, which includes fabrication, falsification, or plagiarism in proposing or performing NSF-funded research, or in reporting results funded by NSF. Generative AI tools may create these risks, and proposers and awardees are responsible for ensuring the integrity of their proposal and reporting of research results.”

National Institutes of Health (NIH)

NIH has issued a Notice: The Use of Generative Artificial Intelligence Technologies is Prohibited for NIH Peer Review Process along with a set of FAQs for the Use of Generative AI in Peer Review. Although NIH specifically prohibits GAI in the peer review process, they do not prohibit the use of GAI in grant proposals. They state an author assumes the risk of using an AI tool to help write an application, noting “[…] when we receive a grant application, it is our understanding that it is the original idea proposed by the institution and their affiliated research team.” If AI generated text includes plagiarism, fabricated citations or falsified information, the NIH “will take appropriate actions to address the non-compliance.”

Reference the Use of GAI in Proposals, Papers, and Reports

GAI should not be listed as a co-author, but the use of Generative AI should be disclosed in papers, along with a description of the places and manners of use. Typically, such disclosures will be in a “Methods” section of the paper. See the Committee on Publication Ethics’ Authorship and AI tools webpage for more information. If you rely on GAI output, you should cite it. Good citation style recommendations have been suggested by the American Psychological Association (APA) and the Chicago Manual of Style.

Generative AI Guidance for UTA’s Business and Operational Teams

Image generated using OpenAI’s DALL·E 3 by Michael Schmid on 02/05/2024

Many of us are old enough to remember when PCs, cellphones and the internet were new, and to some they seemed like fads whose future impact on our lives was not readily apparent. It’s easy to dismiss today’s Generative AI tools like ChatGPT as another overhyped fad. Unlike those earlier technologies that evolved over decades, the pace at which Generative AI (GenAI) will advance is going to be much faster and just as revolutionary.

So, should UTA’s staff use it, and if so, how should we use it? The answer to the first part of that questions is a resounding, “Yes.” But… and you knew there was a “but” coming, there are a couple of caveats and some guidance I’d like to recommend. First of all, please visit ai.uta.edu often for new posts from your peers at UTA about AI use in different parts of our institution. Here are a few more suggestions on making the most of GenAI in your business or operational area:

  • Start Using it Now – Many of your coworker across UTA are already leveraging GenAI in their work. As you get started, please read and keep in mind our guidance on how to use GenAI Tools safely. An easy way to give GenAI a try is to open the Edge browser on your PC and navigate to Microsoft Copilot. It’s free to use the most advanced version of ChatGPT or generate an image using DALL-E 3 via that browser interface.
  • Fact Check Everything – GenAI is improving daily, but it can still get things wrong and will assert its errors with great confidence. Review and check all AI created content for accuracy. If the content is beyond your level or area of expertise, seek a human being with that expertise to proof the content. Do not assume because it sounds true that it is.
  • Transparency – If you use GenAI in your work, whether that’s language, graphics, video or coding, you should consider transparently saying so, e.g. “This graphic was generated using Adobe Firefly”, or “FYI, I proofed and revised this document with Google Gemini”. Your transparency may also encourage others to give AI tools a try.
  • Do Not Conceal AI Use from colleagues, students, or others. As an example, using AI tools to transcribe meeting notes and summarize action items will be a huge productivity boost for many of us, but in the same way that we notify others that we are recording a Teams or Zoom meeting, we should be transparent if we have an AI taking notes. And be aware, it’s not the same as a recording. The AI can make mistakes or misunderstand different accents.
  • Know if Your Tools use GenAI – If you manage the selection of or use of tools in your area, it’s important you understand if or how your vendors are using GenAI. If it’s used, how is UTA’s data being secured and how they are avoiding errors and bias in its output. Use UTA’s Technology Acquisition Helper when selecting new solutions for your area.

UTA Advice on Generative AI

Firefly safe computing for faculty and researchers

The use of generative artificial intelligence tools, including ChatGPT, Bard, and DALL-E (collectively “GenAI Tools”), has increased significantly in recent years and these GenAI Tools are being utilized by many different educational institutions to help fulfill business, research, or academic functions.  While UTA is exploring all the ways in which GenAI can help us achieve our vision and mission, we must consider the inherent risks of using this technology.

This advisory, jointly produced by the Information Security Office, the Office of Legal Affairs and the AI Steering Committee gives the following guidance for UTA employees on how to use GenAI Tools safely, without putting personal, proprietary or university data at risk.

Allowable Use:

  • GenAI Tools may be used with data that is publicly available or defined as Public by UTA’s Data Classification Standard (Login required).
  • In all cases, GenAI Tools use must be consistent with UTA’s and other relevant policies.
  • Consistent with this guidance, faculty should state in the course syllabus any allowed or limited use of GenAI Tools within the course within the context of this guidance.
  • Any use of GenAI tools installed on a computer controlled by UTA or under a vendor contract that specifically protects university data and its use in the AI model may be designated as a permittable tool for use with non-public data classifications in accordance with relevant policies.

Prohibited Use:

  • In general, student records subject to FERPA, health information, proprietary information, and any other information classified as Confidential or Controlled under UTA’s Data Classification Standards must not be used within public AI models.
  • Gen AI Tools of any sort cannot be used for any activity that would be illegal, unethical, fraudulent or violate any state or federal law or UTA or UT System policies.

Liability:

  • It is imperative that UTA employees understand that all content entered into, or generated by, a public GenAI tool are available to the public. There is currently no inherent data security and privacy protections when using a public GenAI tool. Consequently, the use of public AI Tools at this time could expose individual users and UTA to the potential loss and/or abuse of sensitive data and information.
  • Many publicly available GenAI tools require click-through agreements. Click-through agreements are contracts. Individuals who accept click through agreements without authority may face personal consequences, including responsibility for compliance with terms and conditions. It is recommended to process all GenAI Tools through UTA’s TAP process to ensure appropriate use.

Additional Guidance:

  • For further guidance on the use of GenAI Tools, please visit this AI website at ai.uta.edu.
  • For security concerns, please contact the Information Security Office at security@uta.edu.

Sincerely,

Cheryl Nifong

Chief Information Security Officer
Information Security Office
security@uta.edu