Pondering Models at UTA

[Pondering AI banner]

What are mathematical and statistical models, and what do they mean for all of us as educators and informed citizens?  Is there a gap between “Model Land” and the real world, should we worry about that mismatch, and how should we react?  

[Cover from book with link to store]

In the latest event from the curated podcast speaker series Pondering AI at UTA, national podcaster and analytics leader Kimberly Nevala guided UTA faculty and staff through an in-depth discussion of these questions with Dr. Erica Thompson of the London School of Economics’ Data Science Institute, author of “Escape from Model Land.”  Dr. Thompson shared lessons from a career spent building and interpreting models in areas such as climate change and the COVID-19 pandemic.  In a thought-provoking, humorous, and human conversation, the two speakers helped UTA faculty and staff question their assumptions and transform their thinking about what models are, how they work, and what happens when they go wrong.  

This unique series has also introduced UTA listeners this semester to Henrik Skaug Sætra, the author of “Technology and Sustainable Development,” and Kate O’Neill, who presented insights from her book, “A Future So Bright.” 

UTA listeners may also wish to explore past and upcoming episodes of Kimberly Nevala’s podcast, “Pondering AI,” available here (or wherever you get your podcasts).  

Registered UTA participants in this series were also invited to receive copies of the presenters’ books, and can contact the series organizers for shipping details.

The Pondering AI at UTA series is organized by the Office of University Analytics at UTA, with the generous assistance of the Center for Research on Teaching and Learning Excellence (CRTLE).

[Pondering AI podcast link]

Pondering AI at UTA 2023, Session 3

[Pondering AI banner]

A discussion series presented in three monthly sessions

Presented by UTA University Analytics and CRTLE

Hosted by: Kimberly Nevala and author Dr. Erica Thompson

Thursday, December 7, 2023
Noon – 12:50 p.m.
Trinity Hall, Room 205 or via Microsoft Teams (Registration Required – please use this form)

Kimberly Nevala is a renowned strategic advisor, thought leader, and host of the popular ‘Pondering AI’ podcast. She will be joined by Dr. Erica Thompson, Associate Professor of Modeling for Decision Making at the UCL Department of Science, Technology, Engineering and Public Policy (STEaPP). 

Dr. Thompson clarifies the complexity of AI using a wide range of real-world examples from diverse fields and discusses the hard work required to make good decisions in the age of AI. A light lunch will be served, and attendees will also receive a copy of Erica’s book, Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It.

Pondering AI at UTA Learning Series

[Firefly: 1950s sci-fi robot in deep contemplative meditation]

Tamara, I wanted to update you on an idea that has been “cooking” at UA for several months now: an AI-themed study circle for UTA faculty and staff this Fall, a change of pace from the usual classroom training-session format.

The study circle program we envision is built around a national AI podcast whose reflective approach to the topic I think will “speak” especially to UTA faculty.

“Pondering AI at UTA” will be a faculty-staff study circle featuring live discussions and virtual and in-person interactions with thought leaders and leading practitioners in the AI space, built on the thoughtful podcast of the same name: “Pondering AI,” led by SAS’s Kimberly Nevala (https://podcasts.apple.com/us/podcast/pondering-ai/id1562010816).  

We approached Ms. Nevala as the host of this series nationally, and Kimberly has agreed to select three of her most thought-provoking guests for focused UTA sessions this Fall.  She will lead these three mini-interviews and discussions with UTA faculty, interacting live with her guest and the UTA participants, who will prepare beforehand by listening to the original podcast interview and pre-reading the guest’s primary new book or work (books to be supplied and mailed to participants by University Analytics).

We are about to calendar the first session in this series, and we thought that, for effect, the kick-off should have an in-person component, with Ms. Nevala and her first chosen podcast guest coming to UTA. We are hoping to work with Julie to schedule this event in early October so that you may join us, and we hope you might assist us in officially kicking off this exciting approach. Each session thereafter will likely have some UTA participants in attendance on campus, while online attendance will also be available (essentially a hybrid program). After the kick-off, Kimberly and her guests will likely deliver their parts of the live sessions online as well.

Faculty and staff from the UTA campus will enroll in the study circle during the late Summer/early Fall, allowing time for them to begin listening to the “Pondering AI” podcast and for UA to order and ship books to circle participants ahead of the scheduled sessions.

If the Fall sessions prove popular and valuable, these thoughtful and intentional discussions on the mission-critical theme of AI will continue into the Spring 2024 semester.

Primary program operations will be overseen at UTA/UA by Director Michael Schmid.

UA will cover the costs of travel, materials, honoraria for guests, and other items.  I updated Ann Cavalo on this plan several months ago and will cc her here.  She has always been so collaborative and supportive of UA sessions on campus!

I am also providing below our chosen list of the first three podcast guests for the Fall meetings.

Pete

Dr. Pete Smith
Chief Analytics and Data Officer
Professor, Modern Languages
Academic Partnerships Chair in Online Learning and Innovation
Distinguished Teaching Professor
University of Texas Arlington
University Analytics
http://www.uta.edu/analytics/
Modern Languages

GO BOLD | GO GLOBAL
www.uta.edu/modl
Accredited Practitioner in Organizational Culture (Hofstede Insights) and Intercultural Management (Hofstede)


Session 1 [In-person kickoff]

Kate O’Neill: Techno-Optimism Done Right (A Future So Bright)

Kate champions strategic optimism, rejects false dichotomies (e.g., in politics or science), calls for mental clarity and agility, and anchors innovation in “meaning,” by which she largely seems to mean ‘human’ meaning; in our space that may be a student or employee focus. She made a big point about being prepared if things go well, what I’ll call a “careful what you wish for” approach: Kate encourages thinking critically about the implications if things go wildly right at scale, not just about what might go wrong. Given her background (she’s a linguist by training), Kate advises habituating to constant change by anchoring in what matters: human values and relationships. She argues that focusing on “meaning” helps individuals and organizations power greater innovation aligned to purpose.

  1. HER BOOK: A Future So Bright: Techno-Optimism Done Right by Kate O’Neill (2021) This book makes the case for optimism about the future of technology. O’Neill argues that technology has the potential to help solve some of the world’s most pressing problems, such as poverty, disease, and climate change, and that it can help us create a more just and equitable world.

Session 2 [likely virtual]

Mark Coeckelbergh: AI Ethics and Philosophy (The Political Philosophy of AI or AI Ethics)

Dr. Mark Coeckelbergh contemplates the messy reality and political nature of AI, the interplay of technology with society, and the impact of AI on democracy. He is Professor and Vice Dean of the Faculty of Philosophy and Education at the University of Vienna. Like the first person Kimberly recommended, he takes a rather philosophical tack, but with a much more political leaning. In a nutshell, he views AI as inherently political because it changes relationships in society, often in unintended ways. Political philosophy offers concepts for more nuanced discussions about technology’s societal influence. How we imagine and talk about technology shapes what it becomes, so we need responsible development of imagination. In his view, we need permanent political institutions for long-term guidance on technologies like AI. Democracy is fragile, so we must make it resilient against anti-democratic tendencies in technology.

He has two relevant books, but I recommend we use the 2020 book, AI Ethics (Book 2 below):

  1. BOOK 1 – The Political Philosophy of AI by Mark Coeckelbergh (2022) This book explores the ethical and political implications of artificial intelligence. Coeckelbergh argues that AI raises a number of important questions: who owns and controls AI, how AI should be used, and how AI will affect our society. He also argues that AI has the potential to challenge our existing political and ethical frameworks.
  2. BOOK 2 – AI Ethics by Mark Coeckelbergh (2020) This book provides an overview of the ethical issues raised by artificial intelligence. Coeckelbergh discusses a range of issues, such as the use of AI in warfare and healthcare and the impact of AI on employment. He also offers a number of recommendations for ensuring that AI is used ethically.

Session 3 [virtual]

Erica Thompson: Understanding AI/Analytic Models (Escape From Model Land)

[Link to the 12/7/2023 Event]

Dr. Erica Thompson exposes the seductive allure of model land: a place where life is simply predictable and all your assumptions are true. She is a Senior Policy Fellow in Ethics of Modelling and Simulation at the LSE Data Science Institute. From the podcast, here is a rough summary of Erica’s main points: Models simplify reality and make assumptions, so there is always a gap between model results and the real world. We should openly expose model uncertainty rather than hide it, to avoid overconfidence in model outputs. Models inform values and policy but don’t make value judgments; those require human interpretation. Modelers should communicate relevance to the real world, not just results in “model land,” and should be accountable for relating model outputs to real-world judgments and decisions.

  1. HER BOOK: Escape From Model Land by Erica Thompson (2022) This book provides a critical analysis of the use of mathematical models in society. Thompson argues that models are often used in ways that are opaque and discriminatory, and that they can have unintended consequences, such as the amplification of bias and the erosion of privacy. She offers a number of recommendations for using models more responsibly, including:
    • Be transparent about the use of models.
    • Test models for bias.
    • Use models in a way that respects human rights.