With a growing focus on AI in society and across UTA, many are exploring different learning paths and certification opportunities.
To access certification opportunities, log in to your LinkedIn Learning Account. On the left navigation menu, click on Certifications. The current offerings are from Microsoft and included with your UTA employment. There are also AI courses in the Certification Prep category, but exams and certifications are not included.
There are also several Learning Paths available on LinkedIn Learning that are focused on AI. Below is a list of current offerings.
Faculty across several disciplines recently collaborated on a collection of practical, step-by-step AI-infused teaching activities. AI-Powered Education: Innovative Teaching Strategies to Elevate Student Learning serves as a hands-on resource for educators aiming to bridge the gap between AI’s theoretical potential and practical classroom implementation.
The project was supported by the UTA Libraries Open Initiatives Grant program. The book editors recruited chapter authors from across UTA, ensuring representation from a diverse range of disciplines. Each chapter was also peer reviewed by each of the editors.
In his most recent blog post, Bill Gates describes his experience with Khanmigo, an AI-powered tutor and teacher support tool built by Khan Academy in collaboration with OpenAI. He writes:
In May, I had the chance to visit the First Avenue Elementary School, where they’re pioneering the use of AI education in the classroom. The Newark School District is piloting Khanmigo, an AI-powered tutor and teacher support tool, and I couldn’t wait to see it for myself.
The 1973 video “Television Delivers People,” by artists Richard Serra and Carlota Fay Schoolman, consists of an elevator-music soundtrack playing while sentences in white text on a blue background slowly scroll upward. Its message is that the real product of commercial television is the mass audience, delivered to advertisers. A recurring phrase runs through the video:
It is the consumer who is consumed. You are the product of t.v. You are delivered to the advertiser who is the customer. He consumes you.2
When I wrote a business case for Grammarly, which is widely used at UTA, I started thinking about us being the product in this new age of AI. Developing business cases for potential UTA use is one of the services provided by OIT’s Innovation Architecture team, of which I’m a member.
Currently, UTA has around 529 Premium (paid) and 11,000 Free individual Grammarly licenses. I learned from discussions with Grammarly that the company uses client data differently depending on the license type. Grammarly categorizes Free and Premium accounts as self-serve because you can sign up directly from their website.
If you use a Free or Premium license, some of your data is used to train Grammarly’s large language model (LLM)3: customer data training controls (CDTC) are automatically on, which means a small amount of anonymized, disassociated information is used to train the LLM. Individuals can contact Grammarly to ask to have CDTC turned off, after which no client data will be used to train the LLM.
Grammarly for Education (G4e) is an enterprise license where CDTC is automatically turned off, and client data is not used to train the LLM.4 Additional features in G4e licenses support governance and heightened data security. These features give organizations insight into how their users use Grammarly, such as an analytics dashboard that shows the types of suggestions typically accepted and the number of prompts used.
Using self-serve licenses of generative AI tools such as Grammarly raises concerns that unpublished UTA research data may leak into LLMs. Further research is needed to determine whether other AI vendors use the same business model as Grammarly.
When working with AI vendors, be aware of the free and paid license models described above. Vendors such as Box have made their AI Principles clear.
Grammarly’s security rating on BitSight is 790, at the top of the technology industry’s range. Grammarly has been in business for 15 years, scaling responsibly and securely, and its product works with 500,000 apps, including the full Microsoft suite.
As we work with AI vendors, we will ask about their security rating and make that information available to the UTA Community.
We find ourselves in a back-and-forth arms race regarding student and vendor use of AI. Awareness of how data is used to train LLMs under free and paid licenses is a first step toward finding a sustainable way to live with AI.
There is consensus that AI detection tools are unreliable, and we must figure out how to live with generative AI. One path is to work with vendors and establish low-risk pilots. UT Austin is launching a Grammarly for Education Generative AI Pilot.
By the way, Richard Serra produces monumental Corten steel sculptures worth experiencing…by walking through them.
A trial account of Grammarly for Education generated all article versions below.
Prompt: Improve the article:
By Dana Miller, Business Architect at OIT Innovation Architecture
“AI Delivers People?” by Dana Miller, a Business Architect at OIT Innovation Architecture, presents a thought-provoking exploration of the evolving relationship between technology users and the platforms they engage with, framed through the lens of a 1973 short video by artists Richard Serra and Carlota Fay Schoolman titled “Television Delivers People.” With its minimalist presentation of text scrolling over a soothing elevator music backdrop, this video delivers a potent critique of commercial television’s role in commodifying its audience for advertisers, encapsulated in the chilling assertion, “It is the consumer who is consumed.”
Drawing parallels to the current era of artificial intelligence, Miller reflects on the implications of being “the product” in the context of the widespread use of Grammarly at the University of Texas at Arlington (UTA). With UTA’s adoption of approximately 529 Premium and 11,000 Free Grammarly licenses, the conversation shifts to how these tools, emblematic of generative AI, utilize user data. Notably, the distinction between the data handling of Free and Premium individual licenses versus Grammarly for Education (G4e) licenses highlights a critical discourse on privacy, data security, and the ethical use of AI.
While Free and Premium users contribute, albeit minimally and with safeguards, to the training of Grammarly’s Large Language Models (LLMs) through Customer Data Training Controls (CDCT), G4e licenses stand apart by excluding client data from LLM training entirely. This bifurcation raises questions about the potential for unintended data leakage from non-published UTA research through self-serve licenses, prompting a call for further investigation into AI vendors’ data practices.
Miller’s narrative extends beyond data privacy concerns to encompass the broader security and ethical considerations of engaging with AI technologies, underscored by Grammarly’s impressive security rating and responsible scaling. The dialogue culminates in reflecting on the “arms race” between AI utilization and detection, advocating for informed, cautious integration of AI tools in educational settings through pilot programs like the one initiated by UT Austin.
In closing, Miller’s essay not only revisits the historical critique of media consumption but also positions the discourse within the ongoing challenge of navigating the complex landscape of AI ethics, security, and governance. Through this lens, “AI Delivers People?” resonates as a timely meditation on our role and agency in the age of AI, inviting readers to contemplate the delicate balance between technological advancement and the preservation of human values.
Prompt: Make the article sound more on-brand:
**AI and the New Paradigm of User Data**
By Dana Miller, Business Architect at OIT Innovation Architecture
In 1973, artists Richard Serra and Carlota Fay Schoolman utilized television as a medium to broadcast a thought-provoking video titled “Television Delivers People,” highlighting the consumer’s role as the actual product in commercial television delivered to advertisers. Fast forward to today, and we find ourselves in a similar discussion with AI at the forefront.
At UTA, our engagement with tools like Grammarly, widely adopted across the university, is a prime example. Our Innovation Architecture team, which I’m part of, delves into the intricacies of these technologies, evaluating their fit within our ecosystem. With Grammarly, we’ve come to understand the nuances between their Free and Premium licenses, particularly in how user data contributes to training their sophisticated algorithms. Interestingly, Grammarly differentiates its licenses by the level of data utilization, with an opt-out feature for those concerned about privacy.
Moreover, the introduction of Grammarly for Education (G4e) licenses marks a significant shift towards enhanced data protection, automatically excluding user data from algorithm training. This model, prioritizing data security and governance, offers an analytical lens into user engagement without compromising privacy.
This exploration raises important questions about the transparency and ethics of AI tools, particularly those that leverage user data to refine their models. As we navigate this terrain, engaging with AI vendors that uphold clear data usage and security principles is crucial. Grammarly’s commendable security practices and robust integration across various platforms set a benchmark in the industry.
As we forge ahead, the dynamic between student use and vendor offerings of AI tools demands our vigilant attention. Understanding the intricacies of data usage in AI and distinguishing between free and paid licenses is a critical step toward a harmonious coexistence with these technologies. Our initiative at UT Austin to pilot the Grammarly for Education Generative AI Pilot exemplifies our commitment to navigating this landscape responsibly.
In closing, the conversation around AI and user data is reminiscent of the message conveyed by Serra and Schoolman decades ago, underscoring the enduring relevance of critically examining our role within the digital ecosystem. As we continue to explore the potentials and pitfalls of AI, let us remain mindful of our position, not just as users but as stewards of our data.
Prompt: Shorten the article:
**AI: A New Era of User Data Utilization**
By Dana Miller, Business Architect at OIT Innovation Architecture
In 1973, Richard Serra and Carlota Fay Schoolman’s broadcast, “Television Delivers People,” provocatively highlighted how commercial TV turned viewers into products for advertisers, encapsulating the message “It is the consumer who is consumed.”
This notion has resurfaced with the advent of AI, prompting reflections on our role as ‘products’ in this digital age. At UTA, our exploration into Grammarly’s use unveils a nuanced approach to user data. Grammarly employs data from its Free and Premium users to refine its algorithms, albeit with measures for anonymization and an opt-out option. However, its Grammarly for Education (G4e) licenses take a stricter stance, not utilizing user data for algorithm training, offering enhanced data security and governance.
The distinction between self-serve and enterprise licenses raises essential considerations regarding data privacy and the potential exposure of sensitive information. Understanding the specifics of data usage becomes crucial as we navigate the complex landscape of AI tools. Grammarly’s high-security standards and clear differentiation in data handling practices exemplify vendor transparency.
The evolving interaction between AI tools and users underscores the need for vigilant research and responsible partnerships. UT Austin’s initiative to pilot a Grammarly for Education Generative AI Pilot represents a step toward balancing innovation with privacy. Our awareness and actions will shape a sustainable coexistence with AI technologies as we continue this journey.
Grammarly note: With G4e licenses, LLM training is always off. When off, Grammarly will not retain anonymized, de-identified, or pseudonymized user or enterprise content for product improvement. Training is on as a default for all self-service customers; this means Grammarly may retain random samples of text to help us improve the algorithms underlying our software and services. All random samples of text are de-identified, anonymized, and sanitized. Access is extremely limited and monitored to ensure data never leaves secure confines.
Grammarly note: We have consolidated our Enterprise Data Control behaviors into a new control for educational institutions (and enterprise business customers). CDTC ensures we will not improve our product using customer data nor train on data from education-licensed customers.
Generative Artificial Intelligence (GAI) can be a powerful tool for helping members of the UTA research community be more effective, productive, and innovative in their work. At the same time, GAI can be used in ways that may result in unintended negative consequences or that are inappropriate to current academic norms. Uses of GAI in research may involve proposal preparation, progress reports, data/statistical analysis, graphic generation, etc. Many standards, regulations, and policies are being contemplated or actively developed at the federal, state, and institutional levels as the use and impact of GAI evolve. This notice shares some recent federal actions involving GAI in research, along with general principles to consider in its use.
NSF recently announced in its “Notice to Research Community: Use of Generative Artificial Intelligence Technology in the NSF Merit Review Process” that NSF reviewers are prohibited from uploading any proposal content or review records to non-approved GAI tools (they must be behind NSF’s firewall) out of concern for potential violations of confidentiality and integrity principles of the merit review process. Use of GAI in NSF proposals should be indicated in the project description. Specifically, it states: “Proposers are responsible for the accuracy and authenticity of their proposal submission in consideration for merit review, including content developed with the assistance of generative AI tools. NSF’s Proposal and Award Policies and Procedures Guide (PAPPG) addresses research misconduct, which includes fabrication, falsification, or plagiarism in proposing or performing NSF-funded research, or in reporting results funded by NSF. Generative AI tools may create these risks, and proposers and awardees are responsible for ensuring the integrity of their proposal and reporting of research results.”
NIH has issued a Notice: The Use of Generative Artificial Intelligence Technologies is Prohibited for NIH Peer Review Process, along with a set of FAQs for the Use of Generative AI in Peer Review. Although NIH specifically prohibits GAI in the peer review process, it does not prohibit the use of GAI in grant proposals. NIH states that an author assumes the risk of using an AI tool to help write an application, noting “[…] when we receive a grant application, it is our understanding that it is the original idea proposed by the institution and their affiliated research team.” If AI-generated text includes plagiarism, fabricated citations, or falsified information, the NIH “will take appropriate actions to address the non-compliance.”
Referencing the use of GAI: GAI should not be listed as a co-author, but the use of Generative AI should be disclosed in papers, along with a description of the places and manners of use. Typically, such disclosures will be in a “Methods” section of the paper. See the Committee on Publication Ethics’ Authorship and AI tools webpage for more information. If you rely on GAI output, you should cite it. Good citation style recommendations have been suggested by the American Psychological Association (APA) and the Chicago Manual of Style.
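As one illustration, the APA’s suggested format for citing generative AI output looks roughly like the following (the tool, version date, and URL below are illustrative examples, not a prescription for every case):

OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat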
General Principles to consider:
Use and develop AI tools in a manner that is ethical, transparent, and mitigates potential biases.
Use and develop AI tools in a manner that promotes institutional and research integrity, including scientific rigor and reproducibility.
Do not rely on AI tools in place of your own critical thinking and sound judgment.
Users of AI are responsible and accountable for any actions or outcomes that result from their use and development of AI tools.
Be alert to the potential for research misconduct (i.e., data falsification, data fabrication or plagiarism) when using and developing AI tools.
Disclose use of AI tools when appropriate or required (e.g., a journal that will accept a manuscript developed using AI, provided such use is disclosed).
Do not use AI tools when prohibited (e.g., a sponsor that does not allow use of AI for peer review).
Ensure any experimental data used in connection with an AI tool are accurate, relevant, legally obtained and, when applicable, have the consent of the individuals from whom the data were obtained.
If applicable, take care to identify and protect the privacy and security of individuals when using and developing AI tools.
Do not provide or share intellectual property or confidential/sensitive data with AI tools that incorporate users’ content into their publicly accessible models.
Report any potential data breaches or confidentiality lapses involving AI tools to the appropriate UTA authority.
Make sure you can clearly explain how any AI tools you create were developed (e.g., describe the data and machine learning models or deep learning algorithms used to train a Large Language Model AI tool).
Be mindful of how sampling bias in training data and difficulties in interpreting output can be significant roadblocks for the ethical and transparent usage of AI.
Make sure any AI tools you use or develop are subject to human oversight (e.g., humans are involved in the design, development, and testing of the tool).
Subject any AI tools you develop to rigorous quality control measures (e.g., test for accuracy and reliability).
Exercise caution regarding vendor claims about AI-enabled products, as definitions of AI and how it is implemented may vary. AI-enhanced products may not always outperform non-AI alternatives.
Thank you for your consideration of, and diligence regarding, the information above. UTA is actively developing resources and guidance around AI that will be issued over the Spring semester.
Regards,
Jeremy Forsberg
Associate Vice President for Research
UT Austin has released some new information about the acceptable use of generative AI on their campus.
Acceptable Use of ChatGPT and Similar AI Tools
With the emergence of ChatGPT, Bard, and other large language model generative artificial intelligence tools, hereinafter collectively referred to as “AI Tools”, many members of our community are eager to explore their use in the university context. This advisory, which is jointly produced by the Office of Legal Affairs, University Compliance Services, Information Security Office, and the Business Contracts Office, provides guidance on how to use these AI Tools safely, without putting institutional, personal, or proprietary information at risk. Additional guidance may be forthcoming as circumstances evolve.
ChatGPT is one of many generative AI tools now being used in educational contexts. We expect that new tools designed for specific purposes and applicable to different disciplinary contexts will be rapidly emerging in the near future. To that end, this resource, focused on ChatGPT, is designed to be adapted to fit different tools and pedagogical approaches. The CTL will continue to update our site with links to UT resources, News articles focusing on generative AI tools, and outlines of suggested approaches to adopting these tools.
Generative AI is used to create material through a growing number of platforms, such as ChatGPT and Bard for textual output and DALL-E and Midjourney for images. These tools are trained on massive collections of materials, both public domain and copyrighted. As of fall 2023, several lawsuits are in process related to the use of copyrighted works in training AI. The plaintiffs in these lawsuits claim that the use of copyrighted works without permission is an infringement of copyright. In opposition, some legal scholars have pointed out that non-consumptive uses of copyrighted content (Google Books, HathiTrust) have been considered fair use in previous court cases. Until these cases are resolved, we won’t have definitive answers about whether the use of copyrighted works in AI training data is legal.
UTA currently licenses Adobe Creative Cloud for all employees. To get started, go to adobe.com, click “Sign In” in the upper right-hand corner, enter your employee email address, and click “Continue.” You should then be prompted for a password on the standard UTA login page. Once you are logged in, you will have access to all the tools in the Creative Cloud suite. Just added to that suite is Adobe Express.
This is Adobe’s new collection of tools for everyone. It features handy tools for creating images, video, and documents, and it also provides access to Adobe AI tools like Firefly. Because these tools are licensed by UTA, any images created can be shared and used in other works per the Terms of Use. However, be careful not to upload any private or restricted data, as it is a public tool.
If, after playing with Express and Firefly, you need more, check out Photoshop, Illustrator, and Premiere, Adobe’s professional tools, which have all added AI features this year.
If you still need more or need a private sandbox to work with your own data, please contact OIT for a consultation (login required) to create your own private AI deployment for research needs.
Student licenses for Creative Cloud are available as a lab fee attached to specific courses and for purchase in the UTA store. Faculty interested in using Adobe Express or Adobe Firefly should contact academiccomputing@uta.edu to work out licensing for upcoming semesters.
What are mathematical and statistical models, and what do they mean for all of us as educators and informed citizens? Is there a gap between “Model Land” and the real world, should we worry about that mismatch, and how should we react?
In the latest event from the Pondering AI at UTA curated podcast speaker series, national podcaster and analytics leader Kimberly Nevala guided UTA faculty and staff in an in-depth discussion of these questions, together with Dr. Erica Thompson of the London School of Economics’ Data Science Institute, author of “Escape from Model Land.” Dr. Thompson shared lessons from a career spent building and interpreting models in areas such as climate change and the COVID-19 pandemic. In a thought-provoking, humorous, and human way, the two speakers helped UTA faculty and staff question their assumptions and transform their thinking about what models are, how they work, and what happens when they go wrong.
This unique series has also introduced UTA listeners this semester to Henrik Skaug Sætra, the author of “Technology and Sustainable Development,” and Kate O’Neill, who presented insights from her book, “A Future So Bright.”
UTA listeners may also wish to listen to past and future episodes of Kimberly Nevala’s podcast, “Pondering AI,” available here (or wherever you get your podcasts).
UTA-registered participants in this series were also invited to receive a copy of the presenters’ books and can contact the series organizers for shipping details.
Mike Bechtel
Chief Futurist – Deloitte Consulting | Adjunct Professor – Notre Dame
Mike Bechtel gave a talk last week in Austin to state information technology professionals called “The Future of AI.” His talk was an enlightening contextualization of Generative AI (GenAI). He said that AI is what we call computing when we don’t understand how it works. When IBM’s Deep Blue beat Kasparov, AI went from something magical and futuristic to something understandable and attainable; the public psyche reframed that win as just a computational inevitability. As Mr. Bechtel put it, “we moved the goalposts.” This pattern stretches back to the earliest Babbage machine.
With GenAI, as we move through the hype cycle, people will recontextualize GenAI as once again understandable, attainable, and inevitable. It is in this trough of the hype cycle that we will be able to see ways to harness GenAI as yet another technological tool.
He also broke computing down into three lanes: Interaction (UI), Information (data), and Computation (Moore’s Law). As advances happen along these three parallel tracks, we see their impact on society.