AISW #020: Anonymous4, USA-based psychotherapist and mother 📜(AI, Software, & Wetware interview)
An interview with an anonymous USA-based psychotherapist and single mother, sharing her stories of using AI and how she feels about AI using people's data and content.
Introduction - Anonymous4 interview
This post is part of our 6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
📜 Interview - Anonymous4
My next guest for “AI, Software, and Wetware” has chosen to be anonymous. I'm delighted to welcome you to our interview series! Thank you so much for joining me today. Please tell us about yourself, who you are, and what you do.
Hi. I live in Philadelphia, which is also the stolen land of the Lenni Lenape people. I have a daughter who is almost a teenager, and so a big part of my identity is a mother. I am raising her on my own.
And I grew up in the South, in a white upper middle class family, and with all the complexities that come with that ancestry. I also have family from the Midwest who were Mennonite, so I think that probably has impacted my family system too. I'm white and queer, pretty neurotypical, pretty able-bodied. I'm a US citizen. English is my first language.
So I come with a lot of privilege, and I try to be accountable to that in the world, and how I raise my child, and where I live, and how I build community. I do a lot of anti-oppression, anti-racism work as a white person. And I'm also a psychotherapist, and I feel very lucky to be able to do that work every day. Yeah. That's a little about me.
I think it's so interesting that you started with everything about yourself as a person, and then at the end, you were like, oh, by the way, I'm a psychotherapist.
Yeah, I’m decolonizing that. So we're not just organized by our jobs. I'm a baker. I'm a piano player. Stuff like that.
Yes. Very good.
So the purpose of this interview is to help share more voices of all people, not just folks in software development and data science and machine learning, but anyone who is being affected every day by those systems that the software and AI people are building. And AI and machine learning are really hard to avoid now. These are built into so many systems and tools that we use daily, whether we realize it or not.
So what I'd like to hear about is: what is your experience with AI and machine learning and analytics? And if you've used it professionally or personally, or if you've ever studied the technology?
Yeah. So I don't even know what machine learning or analytics means. So there's that. I've used ChatGPT a couple times to help me write a few things. I tried to look myself up, but it didn't come up with anything. I had a friend show it to me.
I've heard a lot of stories on NPR about how AI can just take 13 seconds of your voice and then make it sound like it's you, and a lot of the scams that have happened because of that.
I've thought a lot about it in terms of my child's education. And I've started listening more around how teachers are navigating high school and college students using it, and how they work with that, because they can't prevent it.
But I'm very, very ignorant about it, and I think I'm quite afraid of it, just because, like Geoffrey Hinton, one of the guys who helped design it, ended up quitting because he was too afraid of it or didn't see how it was going to be used for good. So I know that there is a lot of amazing capacity, but, like anything, it also kinda terrifies me for humanity. The fakes, the inaccuracy of it.
And I've just learned a lot about that through listening to NPR, the stories that they've told about how they've tested it and how it's come up with so much false news. But I also am just aware that people probably could have said this about the Internet 20 or 25 years ago, or whenever. And it's here, so I’d better buckle up. And my daughter will probably teach me a lot about it.
I have noticed on Google, any time I search, now the first thing is AI's response, which feels pretty new to me.
Yeah. There's actually a way you can turn off those AI overviews. A lot of people are concerned about the inaccuracies you mentioned, and there are also just the biases that come with the way a lot of the AI systems are trained.
Right!
So if you want to, there's a way to force the Google browser to not show you the AI overviews. The search ads are paid for, so there's still some bias there. But at least you’re not at risk of a hallucination where it gives you a reference that maybe doesn't even exist, or quotes someone saying something they never said.
How do you turn it off?
I'll get you the link.1
Okay. I want to do that!
In Microsoft’s browser, they separate the Search from what they call Copilot. Copilot is the feature that does the AI overview.
So, yeah, like you said, AI is very powerful, and it can do a lot of things. But more powerful tools aren't always needed, and they can do a lot more damage.
Exactly. Exactly. Yeah.
So is AI affecting your psychotherapy practice, or your patients? I've been seeing a lot of things about these new chatbots, like Replika, and “AI friends”. And then there are chatbots that are meant to be for mental health and for interacting. And it seems like that could be good. It could be bad. So there's a lot emerging this year.
Yeah. There's so much new since the pandemic, like BetterHelp and GoodTherapy. You know, there's a lot of different models for even just talking to someone live that I don't necessarily think are as effective. And I've read that a lot of therapists have gone in, kind of as clients, and had their own experiences. And it's more anecdotal than scientific. But anecdotally, they were just pretty poor services that kind of scratched the surface.
And the design is more like a gym membership. You pay monthly. Some people use it a lot, and some people don't use it at all. But you get kind of this guaranteed income, which is pretty amazing as a clinician. Because usually, it evens out, but it can really come in waves. And your nervous system has to get very used to those peaks and valleys financially.
But the quality of the care is just not great. And, really, our biggest tool is our relationship. Relationships heal. And there's a reason that sitting on your couch alone does not shift you, versus sitting on my couch with me. You know?
I think there's so many different things. You know, if talking to some AI product helps someone feel less alone, I'm not opposed to it necessarily.
And some people are very neurodivergent and cannot handle being out in the world in many ways. And so being online, gaming, probably some other AI stuff, is where a lot of their inner life is. And you could argue that both ways.
I mean, I've had clients that are so isolated. Like, when I first started doing therapy, Second Life was the big thing. And there were some clients that were deluded. Like, they did not have a real life. And that's where VR really scares me. You know?
My daughter has done a little bit of virtual reality, and I've done it a little bit with her, for rides and stuff at Walt Disney. I mean, it's incredible. It's incredible. I mean, you could just live your whole life on a dragon. You know?
But that's not why we're on this planet. That's not being with the real world, to me. I mean, that's my opinion. And I think when people come to me, they are looking to be in a real relationship.
It has some good aspects. I also just think it's very deceiving.
Mm hmm.
You know, you have to get uncomfortable at some points in order to heal. And if you’re talking to a robot or texting with a bot, I don't think it's going to help you really get to the kind of healing that you're seeking, or help you ultimately feel less isolated in the world. But that's just my opinion.
Well, it's your professional and well educated opinion! So yeah.
I guess I hesitate to just blanketly write it off, because I work with a lot of neurodivergent people. And I work with a lot of neurodivergent trans people that just don't feel safe in the world. And the place that they find community is in chat rooms that are, I think, with real people, and in the gaming spaces. You know?
Yeah. That's interesting. So are you seeing in your communities or any other areas where any of these AI tools are having an impact?
My friend in Raleigh who does a lot of research, she was trained as a journalist. So she's a really good writer, very smart, very analytical, and she works at RTI now. She does medical journal editing. And she got really pulled into all the equity issues around this stuff and the language that they use. And she actually was the one that showed me how to get on ChatGPT. And so she says that they are starting to use it a lot there. And I actually don't remember how, but she was kind of excited and curious about it, which I found fascinating, as a journalist. You know? She was very drawn in by it.
I was too spooked to actually even get on it, and she's like, “Let me - do you want to get on it on your phone?” No. She goes, “Alright, I'll get it on my phone, and I'll show you.” So she plugged in her name, and all this stuff came up about her. And she wasn't fazed by it. I was like, this is creepy.
And you know part of it, Karen, is that it happens in, like, a nanosecond. I think that is the most mind-blowing part of it for me, how much it can do in an instant. I can barely take a breath, and all of a sudden, it has written all this stuff. That is mind-blowing to me, that it has that capacity.
Yeah. Have you been on any websites that use a chatbot for support? Like, you go there to the customer service and you ask a bot, ask a question?
Yes. All the time. Like, when I was trying to buy a car, yeah. I can't remember specifically, but I have no problem using them.
It's creepy because they sound pretty real. They've built in all this soft skill stuff. Like, how's your day going? And I'm like, What? You're not alive - why are you asking how my day is going?
Yeah. They do it to try to make us more comfortable, I think. But it's funny because, you know, I've used some of them. The one on Substack is actually pretty good about answering, you know, why isn't this feature working with my newsletter? Well, it's because of this, and have you tried this, this, and this. It's actually helpful.
Right.
Some other bots, they’re just giving answers out of the FAQ. If I could’ve found that answer in the FAQ already, I wouldn't have bothered with the chatbot. How do I get to a human?
Exactly. Yes. Some of it is frustrating. Like, my printer wasn't working, and I tried to go on to, like, I don't know, this HP or Epson site or something. And they were like, let me help you brainstorm this. And they didn't help me. It ended up being something else.
So you mentioned playing around with ChatGPT. Can you share specifics, like, what were you trying to accomplish, and how well did it work for you? Or what worked out and what didn't work so well?
Yeah. I was working on shortening my bios in a couple different platforms - Facebook and LinkedIn, I think. And it was kinda helpful. I mean, there were times when it was using this DEI language that was way too lofty or cheesy. So I would write “make this less cheesy”, and then it would, like, try to do it differently. It was kind of fun to see what it came up with.
I've heard that you can pull up ChatGPT and then somehow pull Canva in there and then ask it to create a slide deck for you on, let's say, unconscious bias or something, and then it will do that. I've never been able to do that, but I'm fascinated by that.
How else have I used it? I have used it a couple other times.
So has it ever come up with your daughter when she's doing her homework?
No. I mean, she will Google stuff, but she has not, like, stuck anything in ChatGPT as far as I know, yet. I am waiting for it to come.
Yeah. You know, there are so many different ways that AI shows up in our lives. AI and machine learning, or ML, really are everywhere, and we don't always realize that that's what it is. So I wrote a post that listed 8 areas of life where machine learning shows up without us realizing it. I’ll share a link to it (in the End Notes). But those lists are not exhaustive. It’s just everywhere.
For example, it’s on our smartphones. And it looks to see if an incoming call is a spam call. Or we take pictures, and it automatically groups our pictures by looking at what's in the picture. And on texts, we have autocorrect and spelling and grammar checks. And on email, it’s suggesting other people to add. Even our mapping software, the way that it routes us around traffic. Machine learning is all embedded in there, and many folks just don't even realize it’s there.
Oh wow. I don't think I realized all that was AI!
Yeah. AI is a big area, and ML is one major part of it. And people use ML to look at a bunch of data, sometimes huge amounts of data. And then they analyze it and come up with models - like, a model will classify our email as spam or not spam. That's one example of machine learning.
Wow.
So when you think about using AI - well, you mentioned using Facebook and LinkedIn. You know, LinkedIn now has this box, “try writing with AI”, right there where they prompt you to start a new post.
Right.
It's everywhere! And there's been a lot of fuss about Meta using our pictures and content, without our consent, for training their AI tools. So it's really prevalent.
Yeah. Right.
And there are some big questions that come up with AI. What data was it trained on? Does that data have biases? Do we even know that the AI is in there? And what does the system do with the information that is being processed by this AI? Like, when it classifies our emails as spam, is it using our new emails for further training without our consent?
So one scenario is when we're trying to use what we know to be an AI tool, like ChatGPT. And there are many other scenarios where we're just using systems that have AI built inside. In some cases, we choose to use those systems. In other cases, we don't have a choice about using those systems.
Yep. And we often don't really get to consent. We just don't think about how it's working, and that's what's happening.
Yep. Exactly. I'm one of those few people that read Terms and Conditions. I try to go through the 10, 20 pages. I don't do it for all of them, or all new versions, but I'll slog through a lot of them. And at most, it'll usually say something like, “We may use your data for improving the product.” What does that mean exactly? Is it AI? Is it generative AI? Who knows?
There's a podcast by a guy, Mark Miller, who paired up with an attorney, and Mark had this attorney go through these 10 EULAs, the End User Licensing Agreements. And the lawyer dug in and said, okay, this is what this clause means for you. It was really eye-opening. I’ll share the link. (https://eula.thesourcednetwork.com/)
There's another case where there was a generative AI tool that was being created to help lawyers do their work. This group of lawyers was evaluating this tool, and they read the terms and conditions for that tool. And it turned out that even though they didn't realize it, those terms and conditions would have exposed their clients' confidential information to the AI company.
Oh my God.
Which is a huge, HUGE no-no, right? But even the lawyers who read that set of terms and conditions didn't catch that.
So if T&Cs don't make sense for them, I don't know how much hope there is for us to really understand what we're opting into or not.
Exactly.
It really has to be on the companies to be more transparent. And I've heard people say they think it's obfuscation by design - they don't really want us to know what we're opting into. They want us to just click the box and go.
Right. Yep. Totally. Yeah.
But AI and machine learning are really everywhere. And I think a lot of the confusion, too, comes down to - when people talk about AI, one of the thoughts is that it's futuristic and scary - what we call AGI, Artificial General Intelligence, where it's like the autonomous robot from the movies that can make decisions on its own, right?
Right.
Like HAL, Ava, Samantha, and all these other characters from the movies. But that's AGI, and AGI is a very special subset of AI that isn’t really here yet.
And a lot of what people hear about “AI” is a small subset of AI that we call “generative AI”. It's like ChatGPT, where it's trained on terabytes of data, and it generates new stuff. Instead of just analyzing the old stuff, it generates new stuff. But that's really just one very small subset of AI.
Most of AI today is machine learning in various forms, or just data science or just analyzing data. It all uses data, and the concerns are on how they get the data and how data gets used.
Mm hmm. Yeah.
So it's a bit of an iceberg, I think. The genAI tools are most of the tip of the iceberg that people see and talk about. But AI is much, much bigger underneath the surface of the water.
Right. Right. Right. Yeah. So that's so wild.
Yeah. So it sounds like you haven't used AI too much yet, as far as deliberately seeking it out or using it. Are there times when you would avoid using AI, or have you not really run into that situation yet?
I mean, I would hope that my daughter would not use it to write a paper, but I think that that is going to be inevitable at some point. You know?
When would I not use it? I feel a little like it's too late, in some ways. Pictures of my daughter are on the web. Pictures of me are on the web. You know?
When I bought my house, and someone googled me, my house price was on the web, and I could not get it off. And I just didn't want clients to see how much my house cost. And that didn't happen for everyone. I don't understand why it happened.
Oh, that’s weird.
You know? So I already feel like I've lost control over a lot of my personal information. I have this [my phone], which - I feel like it tracks me constantly. Like, I was bringing up finding a financial adviser. And all of a sudden, JPMorgan was sending me advertisements for a financial adviser assessment. And I'm like, they're totally listening. I mean, I'd never gotten that before, and I'm wondering, why now? This is really creepy.
Yeah. A lot of people have these kinds of anecdotes about data leakage. Someone I just interviewed, a product manager in Canada, had this example when he was talking to a friend about Star Wars lightsaber chopsticks. A totally offline conversation. And the next thing you know, Amazon shows it to him as a recommendation. You have to wonder, how does that happen?
Right? So creepy. Stuff like that is creepy.
But I don't think it prevents me from using AI because I feel like I've crossed over, and it's just too late.
Yeah. So I’ll share examples of AI-based tools that I deliberately avoid, and why. I actually have not spent time lately using ChatGPT, Copilot, Gemini, and some of the other tools. You know I love music, but I avoid using most of the tools that generate AI-based music.
I have two main reasons why I’ve avoided most AI tools this year.
One is I don't like the idea that they put out “slop” - lots of AI-generated garbage.
Right.
The other part of it is that almost all of those tools are built on basically stolen content. Like, they ‘scraped’ all the books and articles and music and videos that were posted online, even with licensing restrictions and copyright. (Scraping means they just go out and take whatever they can get on the Internet. If it's readable on the Internet, ignore copyright, just take it.)
And they blatantly stole the content, and built the models and tools. Now they're trying to sell products on top of those models and make money. And this is a matter of principle for me. I refuse to participate in that.
I didn't even know they do that. Like, I didn't realize what they're doing, you know?
They don’t exactly advertise it!
What I look for are “ethically trained” AI tools, and there aren’t many of them yet. (Reference link: Fairly Trained has certified some - https://www.fairlytrained.org/certified-models)
You know, people have shared reasons why they use ChatGPT. One example: I've heard from people who aren’t native English speakers that use it to help them to improve their English grammar. I don't think I should be judging anyone for doing that. But I do judge the company that makes the tool.
Yes.
“AI does not choose whether to disobey copyright laws, companies and their CEOs do.” - John Phelan, Director General of ICMP, in Music Business Worldwide, 2024-07-12 (‘There is no legal or indeed moral excuse for the commercial use of music by AI companies without the prior permission of songwriters and rightholders.’)
Because they have a responsibility and choice about doing it ethically and licensing the content, and only using data that has the right Creative Commons license. And they didn't do that. They just decided, nope, we're gonna scrape it. We're gonna steal it, and by the time we get caught, it's gonna be too late.
And from an ethics perspective, that completely turned me off. So I refuse to use them for that reason. They have “colonized” the creative arts and that’s just not acceptable to me.
And I've had people say, too - “You know, if I'm going to write something, an email or a message or a LinkedIn post, I want it to sound like me. I don't want it to sound like the average of all the stuff that they scraped to write it.”
Right.
And I had another friend say she uses it, but only because she knows that it kinda gets wild and hallucinates and comes up with strange things. She uses it as a way to help with her brainstorming and creativity - that she'll say “no, no, no, no, no, maybe” to what it gives her. And then she goes off in her own direction. So she'll use it, but not for actually creating anything.
I have heard from a lot of people that have different reasons for why they do and don't use it. So I'm always curious to hear about, if people have avoided it, why they avoided it.
Right. Yeah. That's interesting. Yeah. These are things I just haven't even considered.
And the companies that do the scraping, at first, they just denied it. “Oh, no. We licensed it. We got permission.” “That's not really Scarlett Johansson's voice.” They denied it. Or they hedged and said they used data that was “publicly available” - which does not mean it was public domain and free to use. There are over 30 active major lawsuits on AI in the US right now.
Wow.
And finally, in some of the lawsuits, it’s coming out. Now they’ve stopped claiming they didn’t scrape, and they're saying, Well, of course, we scraped the data. We couldn't do this any other way. It wouldn't work. We wouldn’t have enough data. It wouldn't make sense financially. (Reference link: https://www.linkedin.com/posts/lmiller-ethicist_sam-altman-indicated-its-impossible-to-create-activity-7236896767932309504-KbuP) Okay, then maybe you shouldn't be doing it!
Exactly. Exactly.
But that's where I think the big legal battles are coming in now. They're saying, oh, it's fair use. Well, that's not fair use.
Yeah. Yeah.
Anyway, I've been focusing hard this year on understanding the ethics around AI. I'm learning a lot by looking into this. And part of it is, I don't want AI to harm anyone who's affected by it, especially when we start thinking and talking about bias.
Did you know the TSA is now using AI in our airports?
No!
Instead of a person looking at your face and comparing it to your ID, they take your picture, and they're using machine learning. They're using AI to compare the photo they take of you to your ID. There’s very tiny print on the bottom of the TSA signs that says you can opt out. It takes a lot of courage and time to opt out and put yourself through that, though, right?
Totally, totally - yes.
But then think about it. AI models are going to work better on some appearances than others. If someone has an androgynous appearance, darker skin, they cut their hair, it doesn't match exactly …
Right. Or they're nonbinary or trans or - yeah.
Exactly! So AI bias comes up in a lot of different places. I feel like I've learned a lot about that, but it's probably like an iceberg too. There's probably still a lot more under the waterline that I haven't learned yet.
It's so easy to be enamored by AI. “Oh, I saw this really cool demo. Wow. That's really neat stuff.” And it is neat, and it is powerful, but there's a downside to it too. And I think that we need to really understand how AI is affecting people. That's one of the reasons I wanted to do these interviews, and to get their views out, because there are perspectives that we’re missing.
Totally. Whose voices are we not hearing from, too? You know? Yeah.
Yeah! Exactly. And we need to raise our expectations of the companies using AI that everyone be treated fairly and with respect. We’ve put information into systems like LinkedIn, and we're expecting them to share that data only with the people we connect to, right?
Mm hmm.
The big fuss this week [Sept. 17-18] - did you see it on LinkedIn yet? They've automatically opted us in to allowing them to use our data and our content, whatever we post - and apparently this includes our DMs, which is pretty freaky.
What?!
They can use it all for their AI, unless you go into your settings and opt out. We've been opted in by default, which is completely unethical. I just discovered today [Sept. 19] that there are 2 places you have to go to opt out of them using our data for training their AI systems. I’ll share those links. (opt-out1, opt-out2, info from Ravit Dotan)
15-20 years ago, we started putting our information into these systems, like LinkedIn and Meta, expecting them to handle it ethically and only in limited ways.
Right.
And now it's just being exploited. And they're saying, ok you can opt out now, but that won't take our data out of any training they've already done with it. How long have they been using it without our consent?
I've read a lot of surveys and studies, and it seems like there's pretty broad consensus around the world that ethical companies need to get Consent, and give Credit and Compensation, on data for AI. They talk about the “3 C's”. And some people throw in a 4th C for Control, which kind of goes with Consent.2
Yeah.
This should be required, right? They shouldn't be able to use your Facebook data or my LinkedIn data without getting our consent, or without giving us credit. If they quote something that you wrote, you should get credit. Right? You have to do that when you write a book.
Totally.
So that's the idea, that ethical AI tool companies - this is not sufficient, but it's a minimum - should get the Consent from people, people should have Control, and they should get Credit & Compensation3, for the data that companies use for training their models. And then, if you're using a tool like Grammarly to check your grammar and spelling and help you, the company needs to treat the content that they're checking for you with the same respect. Respect is maybe a very fundamental way to put it, but that's what a lot of the conversation around the 3 C's or 4 C's is about.
Mm hmm. Yeah.
One of the questions that I normally ask people who've used AI-based tools is whether the tool providers shared where the data came from that they used to train their models. And ChatGPT, the one example you gave - they were not transparent about it at all.
Yep. Yep.
Very few people I’ve talked to have felt like the tool providers informed them about the data behind the tools they’re trying to use. Tools like Udio to generate music, or ChatGPT, or Midjourney or some other tool for creating images. Those providers haven't been up front about where they got their data from. And it’s been a surprise, not a pleasant one, to find out where they got it. So yeah.
Yeah. It's like the ultimate plagiarism platform. You know?
Yes. Exactly! And Amazon self-publishing is being flooded by ChatGPT-written books. People writing real books are having their books plagiarized and are having a harder time breaking through with all of the slop. And more and more AI music slop is being streamed.
The other thing I hear that's interesting is that, as people publish more of these slop books, the next generation of AI is going to read that slop and train on it. So it's just going to feed on itself and get worse. We have to figure out how to keep it from devolving into total garbage, before AI loses the power that it has (and should have) to help do useful things. Even that could start to disintegrate.
Yeah.
And the other thing is, you know, when we use online test-taking tools, or go through TSA, there are biometrics and photos. There are social media sign-ups. We sign up for websites, and they ask for our exact birth dates. Well, I'm sorry. They don't need to know our exact birth dates.
Oh, and our phone numbers? I'm like, they lose me at that stuff. Yeah.
Yeah. And so I know some people that make up fake birth dates. Some people just refuse to use the sites. Like, nope, sorry, I'm not doing this.
Yep.
Some video streaming services ask for birthdays. Well, all they really need to know is: is this person old enough to watch adult content?
Exactly. You know?
So they really overstep, and then sell or use the data for other purposes, and that creates a lot of risk for people.
You mentioned Facebook. How did you feel about that episode over the summer, with them saying that they were going to start using people's data (I'm sure they were using it before) if we didn’t opt out by a certain deadline?
I think, what I'd said prior - like, I just feel a little powerless. Like, it's too late somehow. You know, it's like how I have my credit card in 1,000 different places, and I used to be very hesitant to do that. You know, you can't sign up for certain things without putting your credit card in, even if you're just doing the 1st month free, or you're doing the trial, or - you know?
Yeah. One of my banks offers virtual credit card numbers and I’ve been using those, or if I get a Visa gift card, I save it for that stuff. It’s definitely a pain though. And phone numbers are harder - some services won’t validate a VoIP number even if we have one.
So, I mean, I don't use Facebook as much. But it doesn't mean that they can't use all the content that's already there, right?
Unfortunately, yeah. I ended up deleting my photo albums and most of my posts before that deadline, because, if I can't stop them from using what’s there, deleting is the only way I have to keep it away from them.
Right.
And I don't know that they actually deleted it, just because they told me they did.
Exactly.
But at least I did what I felt like I could do. But, yeah, we've been pretty unregulated. The people in the EU are covered by the GDPR. So Meta had to respect their opt-outs, and this new LinkedIn opt-in wasn't applied to them.
What?
Because in the EU they're protected by GDPR. But we're not, and people in other non-EU countries are not protected.
They’re lucky.
Yeah. So this is where people's trust of these companies is really dropping. You can see it in a lot of studies, and this is probably why: as people become more aware, trust diminishes. And now, what do these companies need to do to earn back our trust?
Right. Right. Exactly.
Have they lost a lot of people? Do you know? Did Facebook lose a lot of people?
That's a good question! Hmm. A lot of the people that I follow on FB, they're on Substack now. I saw one just yesterday that said, “Hey, you know, I'm getting censored a lot here on Facebook. So if you wanna make sure you get my stuff, come find me on Substack and subscribe to me there.”
I don't think FB has shared any data yet about whether they're losing people or whether fewer people are joining. But I've seen people that said, you know what? That's it. That's my last straw. I'm done with Meta. Or they just cut way back.
And with Twitter, a lot of people have moved to Bluesky. I just went there myself not too long ago. Twitter/X has just turned into this cesspool.
I don’t even know what Bluesky is. What's Bluesky?
Bluesky is a decentralized alternative to Twitter. I won't go into it here, but I'll send you some information on it. It's interesting. There was this whole flap about Brazil.
Oh, I know. I saw that. Yes.
So there's been a huge influx of Brazilian people leaving Twitter for Bluesky!
Right. Right. Right. Yeah.
And that's pretty cool.
I'm really glad that we got to talk because, you know, so many of the people that I know are software people. We come into these conversations with blinders on, because we already know all this stuff about how AI works under the hood.
And no one should have to know how AI works under the hood to use it well or safely. Just like you shouldn't have to be a mechanic to drive a car, right?
Exactly. Exactly.
So it's really great to get this perspective from you, where it’s more “Hey, I’m just out here trying to go through my daily life.”
I'm so naive to this.
Well, when companies aren’t transparent about what they’re doing with AI, no one should feel bad about being naive on how it’s being used and how our data is being used. Lots of people are now trying to raise wider awareness about AI ethics, biases, and risks, and do something about it all.
For example, one of the people I interviewed earlier, Angeline Corvaglia, has been living in Italy for 30 years. She has a daughter around 9 years old, and she started this initiative called Data Girl and Friends to help her daughter become more AI literate. She does these videos on AI to help teach kids and their adults what to be aware of. Even the weather forecasts use AI, you know? It's everywhere.
Right.
And I can share some of those links as well.
That's cool. That's great.
Sure. So this has probably been more exploratory than my usual interviews so far, which is good. How do you feel about AI now that we’ve talked? Are there any thoughts you would like to share about how AI is affecting your life? And you mentioned your daughter, and thinking about how AI is going to affect her?
I think where I'm probably going to have to think about it the most is her privacy. I've had to go kinda deep into Snapchat, because that has pictures, and they claim you can't save them, but you can.
And then with school, I'm very curious. You mentioned this too, earlier, but I'm very curious about the programs that they're going to have her use, because of her dyslexia. And what accommodations they're going to give her. Like, will they actually allow certain kids to use more ChatGPT-esque tools?
I heard on NPR about this one professor who knows that young adults are going to use AI. So they turn in the assignment, and then she red-pens it, and then they have to go back and edit it and personalize it. She asks a lot of questions, and it's just too hard - there's no way for them to take that and put it back into AI. She says that's how they're going to learn how to write, or build their own critical thinking skills, or create a hypothesis, or back up a thesis, or whatever they're learning, you know - make an argument about something.
And so I'm curious what that'll look like, up close, for my daughter’s learning experience.
Did we cover everything you wanted to talk about?
I think we’ve covered everything pretty well. Thank you so much for all of your time and sharing your views!
Sure. This has been great!
Interview Reference Links
Snapchat using AI-generated images of customers’ faces in ads: https://www.404media.co/snapchat-reserves-the-right-to-use-ai-generated-images-of-your-face-in-ads/ (paywalled), 2024-09-17
“Colonizing Art”, Payal Dhar / Open Mind Magazine, 2023-06-30
“It’s time for streaming services to act on AI music”, Ed Newton-Rex / Music Business Worldwide, 2024-08-29
“As Suno and Udio admit training AI with unlicensed music, record industry says: ‘There’s nothing fair about stealing an artist’s life’s work.’”, Daniel Tencer / Music Business Worldwide, 2024-08-05
“My novel was stolen by a robot – and used to train AI without my consent”, Damian Barr / The Independent, 2023-09-27
“Authors shocked to find AI ripoffs of their books being sold on Amazon”, The Guardian, 2023-09-30
Google breach of previous settlement on Bard AI (aka Gemini) copyright infringement: “French regulator fines Google $271M over generative AI copyright issue”, John Gold / CIO.com, 2024-03-20
Example of AI bias: ChatGPT writing a Python method to predict if someone would be a good scientist - the code checks if they are white and male (link1, link2)
AT&T data breach affecting 73 million people (link1, link2)
“Google hit with class action lawsuit over AI data scraping”, Reuters, 2023-07-11
On the “3 C’s” (consent, credit, and compensation): “Giving credit for 3Cs / 4 Cs where it’s due” (possible origin: https://www.culturalintellectualproperty.com/the-3cs)
Bluesky decentralized microblog platform
Data Girl and Friends (see AISW interview with Angeline Corvaglia for more context)
Article on TSA photo AI: https://www.vox.com/future-perfect/360952/summer-travel-airport-facial-recognition-scan
Video on TSA photo AI: “Freedom Flyers Summit: Resisting Airport Face Scans”, via the Algorithmic Justice League
About this interview series and newsletter
This post is part of our 2024 interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools or being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see post “But I don’t use AI”!
We want to hear from a diverse pool of people worldwide in a variety of roles. If you’re interested in being a featured interview guest (anonymous or with credit), please get in touch!
6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read (and listen to). To automatically receive new 6P posts and support our work, consider becoming a subscriber (free)! (Want to subscribe to only the People section for these interviews? Here’s how to manage sections.)
Enjoyed this interview? Great! Voluntary donations via paid subscriptions are cool, one-time tips are appreciated, and shares/hearts/comments/restacks are awesome 😊
Credits and References
Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)
Here's one page on how to turn off the Google AI overviews - https://www.tomshardware.com/software/google-chrome/bye-bye-ai-how-to-block-googles-annoying-ai-overviews-and-just-get-search-results
3Cs and 4Cs:
Credit for the 4Cs (consent, control, credit, compensation) phrasing goes to the Algorithmic Justice League (led by Dr. Joy Buolamwini).
Credit for the original 3Cs (consent, credit, and compensation) belongs to CIPRI (Cultural Intellectual Property Rights Initiative) for their “3Cs' Rule: Consent. Credit. Compensation©.”
See Pascal’s post about asking ChatGPT to analyze this interview: https://open.substack.com/pub/p4sc4l/p/gpt-4o-ais-role-in-mental-health?r=3ht54r
I'd like to hear more of what you have to say on a podcast! What is a good email for you?