AISW #009: Thalia Barry, USA-based AI Advisory UX researcher 🗣️ (AI, Software, & Wetware interview)
An interview with USA-based AI Advisory UX (User Experience) Researcher Thalia Barry on her stories of using AI and how she feels about how AI is using people's data and content (audio; 19:32)
Introduction
This post is part of our 6P interview series on “AI, Software, & Wetware”. Our guests share with us their experiences with using AI, and how they feel about AI using their data and content.
This interview is available in text and as an audio recording (embedded here in the post, and in our 6P external podcasts). Use these links to listen: Apple Podcasts, Spotify, Pocket Casts, Overcast.fm, YouTube Podcast, or YouTube Music.
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary for reference.
Interview - Thalia Barry
I’m delighted to welcome Thalia Barry as our next guest in this 6P interview series on “AI, Software, and Wetware”. Thalia, thank you so much for joining me today! Please tell us about yourself, who you are, and what you do.
Yes, first, thank you so much for having me. I think these are really important conversations to have. As a quick intro, I’m an advisory UX researcher for trustworthy AI at IBM by day, and an independent artist by night.
Can you tell us a little about what it means to be a UX researcher?
Absolutely! This job’s really about listening more than speaking. It’s about looking at human behaviors, especially in the context of AI that I work in. It’s more looking at, how do people think about AI systems? What are their pain points, needs, delight points as well? And how can we learn from their experiences to build more user-friendly AI products and services?
You mentioned being an independent artist at night. Can you talk a little bit about that?
Yes, this is something that actually started my creative career, before I became a researcher in the AI space. Painting and drawing were basically everything I would do, 24/7, mostly for fun. Then I got my Associate’s degree in visual communications, also known as graphic design. And later I got my Bachelor’s degree in user experience design and research. Painting and drawing by night is my own way of experimenting with different mediums, typically mixed media. So think charcoal, colored pencils, paint, and just exploring my creativity for fun.
That sounds like a great combination of skills and interests!
Thank you! It definitely keeps me on my toes.
What is your experience with AI, machine learning, and analytics? Have you used it professionally or personally, or have you studied the technology?
Yeah, for the past 3 years I’ve been part of a cross-functional team at IBM, made up of product managers, developers, UX designers, content designers, sales and marketing - the full gamut of what’s required for developing these types of solutions. And together we’ve been building out the user experience of one of IBM’s flagship AI products, known as “Machine Learning for IBM z/OS”. We call it MLZ for short. And it’s a business-to-business type of AI.
My experience is really around, again, listening to folks like data scientists or other types of AI experts who are creating these AI models, to understand, what do they need? What are the pain points that they’re having? And basically create these new kinds of AI experiences for them.
That sounds like a really interesting, and fun, and practical project.
Yeah, it’s definitely a challenging space. A big part of AI is understanding the technical concepts, as well as how those concepts impact real people. It’s great to be part of this awesome team that helps me navigate those conversations, and definitely learning on the job as well.
That sounds great. Can you share a specific story on how you’ve used AI or machine learning? What are your thoughts on how well the AI features of those tools worked for you, or didn’t? And what went well or what didn’t go so well?
Most of my uses of AI have been on the job. So, for example, running usability tests or other types of user-facing sessions to understand what we can improve about the user flows and things like that.
But in my personal life, there’s certainly been AI encounters that I’ve had, for example, with Google’s Gemini. It seems like more and more folks these days have access to AI at their fingertips. And yet there’s still this gap I see with understanding what AI actually is, how it works, the different types of AI.
To get more specific here in my story: in these AI encounters, I sometimes have to take a step back and realize, okay, I am engaging with an AI. Maybe it’s a simple search-engine-type query where I have to realize, okay, this is an AI that is giving me a response back, as opposed to how typical search engines worked before AI was implemented into those processes. So it’s something that I still have to really think about: where is this AI sourcing its information? Can I trust that this AI is giving me factual, relevant information about whatever question I have at the time? It’s definitely an evolving space.
If you have avoided using AI-based tools for some things (or for anything), can you share an example of when, and why you didn’t use it?
Sure! Being an artist as well as a researcher in the AI space, it can be a challenging spot to be in, because I can see both sides of the coin in this way. I think that artists definitely have a right to protect their data, and that’s one of the reasons why nowadays there is some hesitancy I personally face with putting my art out there on the internet. And I really want to understand, before I make some sort of post, where is that data being fed into? Have I given consent for how that data is being used? You know - a lot of the questions that I think a lot more people are starting to ask nowadays. Especially artists, and I’m not just talking about painters in this case - all types of artists, right? Musicians, writers.
That hesitancy is part of why I typically don’t use AI in my art practice. One of the biggest reasons is that the joy of the process is WHY I’m an artist: creating, getting my hands dirty, feeling the charcoal between my fingertips and smearing it on the paper, not knowing exactly how it’s going to turn out or even what I’m making. I’m in this flow state of creativity.
And also, it serves as a different type of creative specialty. For example, as a researcher, you’re focused on the user and being the voice of that user, trying to step into their world, understand how they think and what problems and goals that they may be facing. But as an artist, outside of work, you have that creative freedom to just express yourself. Maybe you are your own user in that case. I love it when artists make art just for themselves, not necessarily to sell per se, but just for the fun of it. Just to experiment with their craft.
That’s one of the reasons why I don’t think using AI for my personal art practice is helpful. But that’s definitely an area that I’m curious about - how it can help maybe in the ideation process, or other types of uses.
Yeah, I hear that a lot from creative people. And I like your description of how it’s a tactile experience and not just a computer experience, using the charcoal and such.
Absolutely - especially in such a technical world that we now live in. I think we should do more to go outside, get our hands dirty, experience the world around us, and involve all of our senses as much as possible. Put the screens down from time to time and just enjoy being. I think that’s a huge part of making sure that our relationship to all sorts of technology (not just AI) still has that balance.
Makes a lot of sense. And coming back to the online world, one of the common and growing concerns - and you alluded to this earlier - is where the AI systems get the data and the content that they train on. Often they’re using data that users have put into an online system, or a portfolio where they publish online. And a lot of companies are not always transparent about how they intend to use our data when we sign up.
How do YOU feel about companies that use data and content for training their AI systems and tools? For instance, should ethical AI tool companies be required to get consent from (and compensate) the people whose data they want to use for training?
Yes, I think consent is definitely an important piece, but more so, informed consent. For example, if the company intends to use consumer data in a different way, are the consumers aware of that?
To add on to this point, I think informed consent - things like data privacy, the copyright laws that already exist - these really shouldn’t be an afterthought in how AI experiences are being built. I think this should be at the forefront and considered at every single phase of how an AI system or an AI model is developed. So looking at the end-to-end AI model lifecycle is super important for these types of topics.
Great point! And you have some experience both as someone who is involved in developing an AI-based system and someone who has used AI-based tools.
As someone who has used AI-based tools (like Gemini or Bing or ChatGPT), do you have an awareness as a user of where the data that’s used for that tool came from, and whether the original creators consented to its use? (It’s not always transparent)
Yeah, I can’t speak to those specific platforms. But I think that what a lot of artists and even non-artists are experiencing nowadays is this feeling of “AI anxiety”. And I think that’s compounded by, to your point, the lack of transparency that can happen in these types of settings, as well as the lack of foundational understanding, of what I like to think of as “AI literacy”. What are the different types of AI? How do they function? Not to get super technical with it, but it is important to understand, for example, that training data is a key piece of AI systems.
The way I like to think about it is: just like humans need oxygen, AI needs data. One of the things to consider there is, of course, how is that data being sourced? Is it ethically sourced? Are the people that the data comes from aware that it’s being collected, and aware of their rights regarding how it’s collected?
There are so many organizations out there advocating for these types of questions and these types of conversations to happen. So I think that the more the average person (especially creatives, like artists) knows about these “AI governance principles”, the more they can be part of this conversation too.
Yep, that sounds good. I want to go back for a minute to your work, to the extent that you can comfortably talk about it.
For the AI-based tools or systems that you’re building, can you share anything about how it’s being built, and where the data comes from, and how the training happens, and how a consumer would see or be impacted by what you’re building?
Yes. So for the listeners and readers out there, it’s important to highlight that the AI product I work on is not your average consumer-facing type of product. Meaning that the average person isn’t directly using this type of AI system. Instead, it enables AI experts such as data scientists (who are the ones building the AI models) to import and deploy those models, and to monitor that the AI is performing as intended for business transaction workloads.
So this type of AI system, it gets connected to a business's data sources. And those can come from a variety of places. As a hypothetical example here, let’s think of the different types of transactions that people go through. Such as, Karen, when you’re going to buy groceries, and you typically use a credit card, that is one type of transaction, one piece of data that could be collected and monitored with AI.
This is more about machine learning, which is a traditional type of AI. This type of AI has been around for a while now, I think dating back to perhaps the 1950s. So this is different from the conversation around generative AI. And I think that’s an important thing to distinguish as well.
You’re right! I hear a lot of people talk about “AI” as if all it means is generative AI. And there’s so much more to it than that, including the machine learning subset that’s been around for many decades, as you pointed out. And I think it’s important to distinguish those - machine learning has a lot of benefits. You mentioned credit cards. One of the advantages of having machine learning look at credit card transactions is the ability to identify fraud, for instance.
Yeah, there are definitely so many possibilities for different AI use cases out there. And for the most part, my work has revolved around the business use cases — the type of AI that your day-to-day consumer might not be fully aware of, but that has been around for a while now. We definitely live in the age of AI. It’s all around us. And I think that’s what makes it all the more important to be aware of at least the basics of how these systems work, and how they can affect everybody around you.
As members of the public, there are certainly cases where people’s data and content have been used by an AI or machine learning based tool or system. Do you know of any cases that you could share?
Yeah. I remember recently I had to take a trip. I was just visiting some family members out of state, and I’m trying to remember the specific airline here. But it was one of those cases where you’re going through TSA. And instead of them looking at only your passport or your driver’s license, for example - some form of ID - now they have to take a picture of you. And I remember seeing a description when I was going through the process, saying something like, they’re going to delete this piece of data after my ID has been verified. I don’t know if that’s true. It was also at a time when we had one of the largest IT infrastructure outages. So at that point, I just wanted to get home, and make sure that I could go through TSA as smoothly as possible. And during that time I was thinking, man, I wish I could have asked a question there to understand: do I really know if this data is going to be deleted? Or how else could this information be used that maybe I’m not aware of?
So it’s definitely questions like that, that keep me up at night. But personally, other than that, I don’t really have other examples to share. But as I do continue my research (again, more on the business-to-business type of AI and traditional type of AI), it does make me consider what other aspects of trustworthy AI are out there.
And that’s a big question in and of itself, right? What makes an AI trustworthy? There’s so many different things to consider for that question alone.
But thankfully, I’m learning that there have been resources, such as the Algorithmic Justice League, which enables people to report AI harms that they may have experienced. So I think it’s really important that more people are aware of this resource.
Yeah, I’m familiar with them, and they do great work! I’m going to add a link to the Algorithmic Justice League in the article that this interview will be published in. That’s a great resource, so thank you for bringing that up!
Of course, yeah!
You mentioned earlier about trust and what it means for companies to be trustworthy. We are seeing that public distrust of AI and tech companies has been growing. What do you think is THE most important thing that AI companies need to do to earn and keep your trust? And do you have specific ideas on how they can do that?
From my point of view, one of the most important things that an AI company can do is take a closer look at who are the people in the room when an AI model or an AI product is being built.
Are there different disciplines, such as a mixture of user experience specialists and data scientists?
Are there people with different lived experiences and backgrounds that are working in an environment where they feel like they can ask these hard questions?
I think it would also help to create more pathways, and more open doors for people to enter this AI field, especially from a user experience perspective.
This is the main reason why, on my LinkedIn nowadays, I’m experimenting with creating more beginner-friendly, entry-level content. It’s about what exactly AI is, what the different types are, and how people can be more aware of their rights when thinking about how they are affected by AI.
And also, opening the door for user experience designers and user experience researchers who want to specialize in the field of AI. Again, a big part of this job is trying to see things from another person’s point of view, taking an empathy-led approach to understanding user behavior and understanding their pain points, goals, and problems. And oftentimes, being the person in the room to ask those questions of: are we thinking about different types of users, and about what the industry often calls “edge cases”, for example?
And I think UX researchers especially are very qualified to be that person in the room, ask those hard questions, and again, hopefully uplift other people to enter this field as well.
Those are all great points, and especially when you think about bias in AI and having people with a diversity of backgrounds who are involved and can ask those questions. That’s super important. So thank you so much for sharing that.
Is there anything else that you’d like to share with our audience?
Yes, if they’d like to follow me on LinkedIn - again, this is where I post more easy-to-understand AI education in layman’s terms. And you can see some of my upcoming illustrations on my LinkedIn as well, to again, help to simplify this age of AI that we now live in.
Sounds great, Thalia! Thank you so much! I appreciate you joining our interview series. It’s been great learning about what you’re doing with AI and machine learning, and how you’re deciding when to use your human intelligence for some things. Best of luck in your quest to make AI easier for people to understand!
Thank you so much for having me!
You’re welcome 🙂
Final Thoughts
Here are links to Thalia Barry’s online profiles, where you can follow her and the content she’s creating to make AI easier for people to understand and ask good questions about, and to the Algorithmic Justice League, which Thalia mentioned in the interview.
➡️ Thalia Barry on LinkedIn
➡️ Thalia Barry’s portfolio
➡️ Algorithmic Justice League: Unmasking Harms and Biases (Report harms OR triumphs here)
About this interview series and newsletter
This post is part of our 2024 interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or being affected by AI.
We want to hear from a diverse pool of people worldwide in a variety of roles. If you’re interested in being an interview guest (anonymous or with credit), please get in touch!
6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read (and listen to). To automatically receive new 6P posts and support our work, consider becoming a subscriber (free)! (Want to subscribe to only the People section for these interviews? Here’s how to manage sections.)
Enjoyed this interview? Great! Voluntary donations via paid subscriptions are cool, one-time tips are appreciated, and shares/hearts/comments/restacks are awesome 😊
Credits
Audio Sound Effect from Pixabay