
Guidelines
Many of us are old enough to remember when PCs, cellphones, and the internet were new, and to some they seemed like fads whose future impact on our lives was not readily apparent. It’s easy to dismiss today’s Generative AI tools, such as ChatGPT, as another overhyped fad. But unlike those earlier technologies, which evolved over decades, Generative AI (GenAI) is advancing much faster and is proving just as revolutionary.
So, should UTA’s staff use it, and if so, how should we use it? The answer to the first part of that question is a resounding “Yes.” However, it is important to follow the guidance and familiarize yourself with UTA’s policies and resources before diving in headfirst.
UTA Advice on Generative AI – Artificial Intelligence at UTA
The use of generative artificial intelligence tools, including ChatGPT, Bard, and DALL-E (collectively, “GenAI Tools”), has increased significantly in recent years, and many educational institutions now use these tools to support business, research, and academic functions. While UTA is exploring all the ways GenAI can help us achieve our vision and mission, we must also consider the inherent risks of using this technology.
This advisory, jointly produced by the Information Security Office, the Office of Legal Affairs, and the AI Steering Committee, gives UTA employees the following guidance on how to use GenAI Tools safely, without putting personal, proprietary, or university data at risk.
ALLOWABLE USE:
- GenAI Tools may be used with data that is publicly available or defined as Public by UTA’s Data Classification Standard (Login required).
- In all cases, use of GenAI Tools must be consistent with UTA’s policies and other relevant policies.
- Consistent with this guidance, faculty should state in the course syllabus any allowed or limited use of GenAI Tools within the course.
- GenAI tools installed on a computer controlled by UTA, or covered by a vendor contract that specifically protects university data and its use in the AI model, may be designated as permitted for use with non-public data classifications in accordance with relevant policies.
PROHIBITED USE:
- In general, student records subject to FERPA, health information, proprietary information, and any other information classified as Confidential or Controlled under UTA’s Data Classification Standards must not be used within public AI models.
- GenAI Tools of any sort must not be used for any activity that is illegal, unethical, or fraudulent, or that violates any state or federal law or any UTA or UT System policy.
LIABILITY:
- It is imperative that UTA employees understand that all content entered into, or generated by, a public GenAI tool is available to the public. There are currently no inherent data security or privacy protections when using a public GenAI tool. Consequently, using public GenAI Tools at this time could expose individual users and UTA to the potential loss and/or abuse of sensitive data and information.
- Many publicly available GenAI tools require click-through agreements. Click-through agreements are contracts, and individuals who accept them without authority may face personal consequences, including responsibility for compliance with their terms and conditions. It is recommended that all GenAI Tools be processed through UTA’s TAP process to ensure appropriate use.
ADDITIONAL GUIDANCE:
- For further guidance on the use of GenAI Tools, please visit UTA’s AI website at ai.uta.edu.
- For security concerns, please contact the Information Security Office at security@uta.edu.
ADDITIONAL SUGGESTIONS:
Here are a few additional suggestions from our AI Council on making the most of GenAI in your business or operational area:
- Start Using It Now – Many of your coworkers across UTA are already leveraging GenAI in their work. As you get started, please keep in mind the guidance listed on this page. An easy way to try GenAI is to open your internet browser and navigate to Microsoft Copilot.
- Fact Check Everything – GenAI is improving daily, but it can still get things wrong and will assert its errors with great confidence. Review and check all AI-created content for accuracy. If the content is beyond your level or area of expertise, seek a human being with that expertise to review the content. Do not assume that it is true because it sounds true.
- Transparency – If you use GenAI in your work, whether for language, graphics, video, or coding, consider saying so transparently, e.g., “This graphic was generated using Adobe Firefly” or “FYI, I proofed and revised this document with Google Gemini.” Your transparency may also encourage others to give AI tools a try.
- Do Not Conceal AI Use from colleagues, students or others. As an example, using AI tools to transcribe meeting notes and summarize action items will be a huge productivity boost for many of us, but in the same way that we notify others that we are recording a Teams or Zoom meeting, we should be transparent if we have an AI taking notes. And be aware, it’s not the same as a recording. The AI can make mistakes or misunderstand different accents. It is important to review these notes for accuracy.
- Know if Your Tools Use GenAI – If you manage the selection or use of tools in your area, it’s important to understand whether and how your vendors are using GenAI. If it is used, how is UTA’s data being secured, and how is the vendor avoiding errors and bias in the output? Use UTA’s Technology Acquisition Helper when selecting new solutions for your area.
POLICIES:
- AI Policies – AI Feature Certification Policy (WIP)