6 'P's in AI Pods (AI6P)

🗣️ AISW #076: Celeste Garcia, USA-based writer

Audio interview with Seattle, USA-based writer Celeste Garcia on her stories of using AI and how she feels about AI using people's data and content (audio; 48:14)

Introduction

This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.

This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript. (If it doesn’t fit in your email client, click here to read the whole post online.)

Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.


Interview

Karen: I am delighted to welcome Celeste Garcia from the USA as my guest today on “AI, Software, and Wetware”. Celeste, thank you so much for joining me on this interview! Please tell us about yourself, who you are, and what you do.

Celeste: Well, thank you for having me. I live in Seattle, Washington. I grew up outside of Portland and came to the University of Washington and stayed. Between the opportunities and the natural beauty of Seattle, water and mountains everywhere you look, I was hooked. And I was fortunate enough to get employment at Microsoft.

Since then I started a family. I really admire women who manage career and family. I didn't think I was going to be able to do either one too successfully. So I left the workforce to raise my kids. And my kids were definitely high maintenance, so probably it was a wise decision.

My daughter's 22 and my son's 20, and they're very supportive of my work, which at this point is Substack 24/7. My Substack is called Getting Real about AI. And I tend to get pretty obsessive, but what's so interesting to me about my kids is that they grew up digital, and they're so adaptable. They don't seem worried about AI in the same way that I might be. Hopefully that can give a lot of us comfort. And sometimes I just feel like I might as well give in, and look forward to meeting my fully-embodied AI daughter- or son-in-law someday.

Karen: Thanks for sharing that introduction, Celeste! What got you started on your AI journey?

Celeste: I started writing after I left the workforce. I started a blog. I think it was back in 2014 and it was really just a lighthearted blog about family and life and raising kids. And at some point, I started writing a novel and that kind of meandered for about 10 years.

Then in early 2023, I was reading an article in the Wall Street Journal about designer babies getting the perfect sperm from the MIT professor who's six feet tall and reasonably good-looking, and matching it with the eggs of a supermodel. And I said out loud over breakfast, "Well, why don't they just make AI babies?" And that started my journey in writing my second novel. I talked to my editor and she said, "Drop everything, get that written."

So that started my journey with AI. It's a speculative fiction novel, but it took a lot of research. I started to really become alarmed about AI and how quickly it was emerging. I talk about non-consensual AI, where nobody really has a choice in the matter. I also get really concerned that it really is a handful of tech bros that are completely dictating the future of humanity. And if you look at these bros, none of them are really people that I would say are morally upstanding. So that's what started this whole journey. I did manage to land an agent for my novel in February of 2024. And there's been a lot of rewriting. Not sure where that's going to lead, but I realized I had to keep researching and writing about AI.

My editor suggested I start a Substack, and I did so in November of 2024. And it was such a great decision. It's been really validating. I think when you're a writer, particularly if you're writing any kind of novel or nonfiction, you're off in your own little world and you don't get a lot of feedback other than constructive criticism from your editor and your agent. And so, to really get feedback from real human beings on Substack and validation and seeing that other people have the same concerns I have, it's been really great.

Karen: Yeah, I've been really happy to find so many people who are interested in AI ethics and care about it. I was really happy to meet you! So that was an interesting overview on how you got into AI and how you feel about it. Can you talk a little bit about your level of experience with AI and machine learning and analytics? Have you used it professionally or personally, or have you ever studied the technologies?

Celeste: At Microsoft, I was in marketing. I learned some basics about system design, but I am not a coder. Once I started delving into AI, I was able to learn very quickly with ChatGPT and Microsoft Copilot. I hope people don't think I'm against AI, because I think the technology is incredible. And the people that have developed it are incredibly smart. There's so much good that can come from this. But there are obviously things that alarm me, which we can talk about a little bit later.

I've been able to develop a pretty good understanding of AI from a systems perspective. My whole goal on Substack is – I know that people are feeling anxiety about AI, and are definitely curious. There's such a barrage of headlines constantly about AI. And so what I'm hoping to do is sort of bridge the gap. I really like thinking about what AI means for all of us from a day-to-day perspective, and how much it's been integrated already into our lives and what it means for the future.

Coming from a less technical background, I think that actually gives me certain advantages for being able to bridge the gap. ChatGPT and Copilot are incredible teachers. I can ask any sort of question and do a deep dive and keep digging and digging until I really have an understanding, and it's such a unique experience.

At this juncture in my life, I'm not in college or even in the workforce. I don't have access to that type of learning. And it's so unique, the experience, because even if I had the funds, I don't think I could ever pay a tutor to be there for me in the ways that generative AI models are.

Karen: You mentioned that you weren't a coder. Have you ever tried writing any code with ChatGPT or Copilot?

Celeste: I've played around a little bit with Replit. For somebody who is a good coder, I think AI is just going to make them so much more efficient and effective. But so far I'm not really seeing evidence that somebody who has no coding experience can just build. Some of the promises, I think, are a little bit overblown: overpromising and under-delivering, which is not surprising in the tech world. I can continue to play around with the technology. At some point, when I'm actually able to really build an entire app and deploy it, I think that's when AI will really have started to do its job.

Karen: That sounds good. Can you share a specific story on how you've used a tool that included AI or machine learning features? I'd like to hear your thoughts on how the AI features of those tools worked for you or didn't. Basically, what went well and what didn't go so well?

Celeste: I'm glad you're asking this question because it is with the caveat that I verify absolutely everything. And ChatGPT or Copilot, or whatever large language model I might be playing around with, I never take its word as gold. Because we all know that there's hallucination.

Recently I was doing a pretty deep dive into why Anthropic has now surpassed OpenAI in the enterprise. It's a pretty significant flip. And it just made me think, because when I was at Microsoft, I focused on enterprise software. At that time the rule was, if you get behind in that marketplace, it's very hard to reverse that trend. It's expensive for any company to deploy whatever type of enterprise software they're using. So I was asking GPT why Anthropic had been able to make such strides. And it was talking about how Anthropic's models are really code-friendly and just seem to be a better tool for the enterprise. But one thing it didn't mention at all was security. I just said, "Hey, what about security? Are you kidding me? You're talking about why Anthropic has surpassed OpenAI in the enterprise, and you didn't even mention security." Which I hope to God is one of the most important things that anyone in the enterprise is looking at when they're deploying any kind of software, and particularly AI-enabled software, because there are such obvious risks with AI regarding security.

I think the power of these generative AI tools, large language models, whatever you want to call them, is that it's based on natural language processing. So you can just have a discussion and it feels so natural. And by virtue of that, the learning is so much more powerful. But you can also always question, just like you would if you're talking to a human being. Even somebody who I know is way smarter than me, I still question them.

And so I think that's the power of it. I also found really early on that the source material that a large language model is pointing me to is so much better than if I just do a Google search. It points me to a lot of academic research and papers. And I read the whole darn paper. This is partly why everything I do on Substack takes me so long. I had this idea I was going to publish every week, once a week, do a post. And then, “Okay, maybe two weeks”. And now I'm publishing about once a month, depending, and I just haven't been able to hold myself to some specific publishing schedule. And I know everybody says, "Oh, that's just one of the big taboos of Substack." But I want what I'm writing to be accurate. I think it's so important with what I'm doing. If I'm going to be calling out the tech bros for egregious behaviors, I better make sure that it's right. And then I have the research and the data to back it up.

Karen: Yeah, I think it's not so important what the cadence is. If your cadence is once a month and you feel like that's a comfortable cadence, then your subscribers can expect that and be okay with it. I think it's totally fine. Plus, you don't have a paywall right now, do you?

Celeste: Well, thank you for validating my Substack habit! What's funny is I really feel like this has become a 24/7 operation. My whole goal is reach. My advice to anybody who's writing, I say, "Get yourself up on Substack because that's where the writers are. It's where people who actually want to read are." You're not going to find that on TikTok and Instagram. So it's a super important platform for writers.

Of course, the platform wants you to have a paywall because that's their business model and how they make money. But I feel right now that, because I believe so much in what I'm writing about, I just want to reach as many people as possible. My overriding goal is reach. And at some point I might paywall.

As a reader, I hate it when I'm reading and then have to pay to get the rest of the content. I do have some subscriptions I pay for. But anybody who's active on Substack, you start to go broke in a hurry if you're paying for all of the content!

Karen: Yeah, exactly. Yeah. It's a big challenge. So I haven't put up a paywall on any of my posts. I have it set up to take money, just to help fund Substack and to maybe have some bonus money to do a few extra things with. But I don't want to block anybody in other countries, where maybe they don't even take payments right now, from being able to access anything that I write. So I've been trying to keep it open as well.

Celeste: Yeah, I agree. I'm definitely not a profit center, that's for sure. I don't think most people go into writing to make a lot of money.

Karen: Yeah, I think most of us who are there, right, it's because of passion, because we care about the subject. And you obviously care quite a bit about AI and ethics, and I'm looking forward to talking with you some more about that.

So you've talked about the tools that you use. You mentioned ChatGPT and OpenAI. You mentioned Anthropic. I'm curious if you've tried Claude, because it's supposed to be doing a better job of providing citations now.

Celeste: I feel like I should have every single one of the models. But I can only afford so much. I know some people have a few cheats so that they can work with the free aspects of it. But yeah, that has been a challenge for me.

Karen: Yeah, I think it's a challenge for everybody, especially with some of the recent price increases that we've seen. I'll have to look for the name of the tool, but there was one tool that said "You pay us your monthly fee and then you can use any of these different major platforms or foundation models that you like". And it ends up costing you about the same, but now you get your choice. I thought that was an interesting business model to be going after.

Celeste: Yeah, and a tool like that is so huge when you think about what it means for writers sharing ideas. I do respect that these companies are obviously businesses and they do need to make money. We know they're all spending billions of dollars. And I really am grateful that they're not advertising yet. Anything that the large language models are generating, can you imagine if it's suddenly cluttered with advertising? So it's a trade-off, but I really hope they don't at some point go to that model. I think it's probably inevitable that one of the companies, or maybe all of them, will have versions where you can get it for free if you put up with advertising. I really wouldn't be shocked if that's just around the corner.

Karen: I'm so grateful that Substack doesn't have ads. It's just such a nicer reading experience. But I think the thing that I would be concerned about with the models is not so much that they put blatant ads in front of my face, but that they would take money from sponsors to promote their content in what it generates. And that would be pretty insidious because we wouldn't even necessarily know that what we got was based on what a sponsor wanted to be said.

Celeste: Yeah, I agree a thousand percent. And that actually is a good segue into bias in general. And I really thank you. I think that you, of anyone, have gotten me a lot more focused on the dearth of women in AI in all positions. You maybe saw a note I posted, and the numbers are just astonishing. The ratios of men to women in AI are 70:30 for AI professionals, 80:20 for AI researchers, and 90:10 for CEOs at top AI / tech firms. Just yesterday, I searched “women in AI”. And what was really crazy was how little about women in AI popped up on YouTube. I did manage to find that the World Economic Forum hosted a women in AI breakout session. And in the audience, I will tell you, there were very few men, which freaked me out completely. But the mic drop moment is that the World Economic Forum recently did a study. At the current rate of change, it will take 123 years to reach gender parity in AI.

Karen: I guess I'm not surprised to hear it, but it's disappointing to realize that it’s that slow. I think some countries are more equitable than others. I remember when I was working at a large multinational company and we would get visitors, executives from Sweden, and it was, "Wow, there are women". Whereas in the US we definitely didn't have any in our group ever, the whole time that I was there. So the differences between different countries and cultures were quite stark.

Celeste: You know what's really crazy is, at Microsoft, we were always talking about getting more women in tech. And it's sad to see that it hasn't changed that much. But I think your point is so well taken, because why would we not look at countries that have higher percentages of women involved in tech and find out what they're doing? And one of the most obvious things, when you say Sweden, is that they have childcare that is absolutely outstanding, right? So I think that you can see that frees up a lot of women.

Karen: Yeah, absolutely. The question is, why don't companies do it? And I've come back to this a couple of other times in the past, but basically, most companies will do what they are rewarded for doing, rewarded financially or in indirect ways like reputation. But basically that's where the eight-figure tech bros are coming in. They are driving the companies in the direction that helps their wealth.

Celeste: I completely forgot – you just sparked this tiny little place in my brain, that when I was at Microsoft, they were talking about onsite childcare. We can't be naive that profit and obviously market cap is the driver of everything. What kills me is that these giant technology companies are worth trillions of dollars. And right now, if you look at what Mark Zuckerberg is doing – and I've been very critical of Zuck – he's paying unbelievable amounts of money. He's buying talent. And we're talking a hundred million to billions of dollars for the talent. And you just think, gosh, a small percentage of that–

Karen: One thing that I was thinking was about how, in the long term, or even the medium term, companies will do better by integrating more women. It pays. And, you know, a lot of the features that women want, like childcare, there are men that want that too. And so it benefits everybody. They're just so shortsighted that they don't see that and act accordingly.

Celeste: Yeah, and it's just so frustrating because it's all short term. I think that tech has always been, you know, "Move fast and break things". Or, I like to say "Move fast and bake things". But it's gotten so much worse because of all the hype about AI. And then obviously this full-blown AI arms race and all the rhetoric. There's a certain amount that I agree with. I do think he who gets to ASI first will rule the world. And maybe I've just bought into the hype, but I do think that there's something to that.

And when I pull up the top 10 most influential women, Dr. Fei-Fei Li is obviously a household name to anybody who pays attention to AI. And then I think Mira Murati, because she was high up at OpenAI and then was interim CEO during all the craziness. But other than that, I don't think that people can really name women in AI. And it's kind of shocking.

So I was looking into some of these women that have incredibly prominent roles. The CEO of AMD, Lisa Su, has been there since 2014 and turned that company around. I think she was Time's Woman of the Year. But again, you just don't hear about her. And if you go into ChatGPT and just do a straight-up search for the top 10 influencers in AI, they're all men. So it is alarming.

I looked at who the top women influencers are. There have been so many, and a lot of these women have been kind of erased. I was looking at some different sources, and nine of the 10 women discussed safety and diversity and bringing in more voices, front and center. And I just thought, “Wow, aside from Dario Amodei and Demis Hassabis, you're not hearing any of the top tech bros talk about any of that.” From the obvious perspective of the systems designers: if they're all men, they're going to influence what's important and what they see, and it's not like they're necessarily even doing it on purpose. So just from this perspective, getting more women and people of color and people from diverse backgrounds is going to make such a difference when we think about how much these large language models and generative AI are going to influence how everyone is looking at the world.

And it's so important from that perspective, but also just from the perspective that for the women who are influential, safety and ethics are really top of mind. It's such a stark difference. And I don't mean that men don't think about that. Of course they do. But I do think that among the men that are the most influential, aside from Amodei and Demis Hassabis, some of them are giving it lip service. I'm 1 million percent sure that Mark Zuckerberg and Sam Altman aren't staying up at night thinking about how they can put ethics, guardrails, responsible AI, safety, and diverse voices into it.

Karen: Yeah, and there are so many ways the low numbers of women play in. One is bias. You mentioned biases. Most biases are unconscious. You know, we grow up with them. It's like the water we swim in and we don't necessarily see them until something pulls us out of the water and then we go, wow. And once you've seen it, it's hard to unsee.

And that's the good thing. I think people gradually become more aware of it. But there are so many ways in which the absence of women and people of color, and of representation for the Global South, plays in. And women appear to be more aware of it, or more proactive about it.

Like Dr. Joy Buolamwini. I don't know if you've heard of her? Her name should be a household name. And it probably isn't yet. Hopefully it will be someday soon. When you get your top 10 list of overlooked women in AI, I would love to read that article and promote it.

Celeste: Yeah. Oh my gosh, there's just so much I want to do 'cause I love this topic. I think right now anybody following me knows that I have gone really, really far down the rat hole of the alarming labor exploitation in AI by big tech. And it circles back to a man named Alexandr Wang. He's become kind of the 'it boy' of tech right now. Basically, Mark Zuckerberg bought him for $14.3 billion, a 49% acquisition of his company Scale AI. But it was as much about buying Alexandr himself as it was an acquisition of the company. I wrote my first installment. I'm not an investigative reporter, but I'm pretty much living the life of one because there's so, so much there. He's obviously very brilliant; he's the son of two nuclear physicists. He's got the cachet of ‘MIT dropout’. I joke that that's the best resume builder: being so smart that you can drop out of the most elite colleges.

But people are just falling all over themselves about Alexandr Wang. And I've not heard one person in any of the interviews press him on the labor exploitation. He didn't invent it, but he's built an ecosystem around it. And I believe that partly Mark Zuckerberg was interested in this ecosystem.

And so if people aren't familiar with any of the large language models, it's all about data, and they've already scraped all of the data that's on the internet. And they continue to. And they've already raided a lot of copyrighted materials, and that's a whole 'nother topic. But when you hear him talk about his origin story, it's just all about him. He never mentions that he started the company with a woman named Lucy Guo, and she's basically been erased. They started the company, I think in 2016, and she left two years later over differences of opinion. I haven't really found any information about exactly what those were. But again, it's so interesting, because they both went through Y Combinator, and she was handpicked by Peter Thiel. And obviously, like him or not, he does know talent. So again, it's just interesting to me how a lot of times these women just sort of disappear as far as their influence.

But back to the other alarming thing about Scale AI: Alexandr and Lucy's vision, and their genius, was that they understood that what's needed is all of this data for training. And this data has to be cleaned, it has to be tagged, and it has to be made accurate. And so far the only way to do that is through human labor.

There was initially a story in the Washington Post about it, I think in 2023, that they were paying people in developing countries pennies on the dollar to look at this data and to screen it. And some of this data is completely disturbing. I mean, just imagine. Certain things from the internet, obviously pornography, they have to code it so that it doesn't come up when prompting. There were so many complaints that they would be paying people 10 bucks for eight hours of work, and it was tedious and emotionally disturbing. And then there are all these layers that Alexandr Wang built to protect not only his company's name, but all of the big tech companies that were his customers and were relying on this data. Oftentimes, if the work that people had done wasn't approved, they wouldn't get paid at all.

And then, even more astonishing, there was a Department of Labor investigation into the practices of Scale AI. That was first reported in March of 2025. And then the investigation was abruptly dropped in May. And then lo and behold, Mark Zuckerberg bought 49% of Scale AI for $14.3 billion. So I don't think that was by mistake. Alexandr Wang has built a model that exploits people at scale, and he gets away with it. There's probably some value in that, if not a whole lot of value in that, for Mark Zuckerberg.

Karen: Yeah, I'd be really curious, if you keep wearing your investigative reporter hat, if you ever find out why Lucy Guo left the company, and if it was anything to do with the ethics of the way that they were doing it. I'd be super curious to hear if that ever was a factor.

Celeste: Yes. And Lucy, if you're listening, please get in contact with me.

Karen: All right, so this is all a really great discussion. I'm happy that you shared that passion for women in AI, and for bias and treating people fairly and not exploiting workers: your whole approach to ethics.

You've used a lot of AI tools. I'm wondering if there are any that you avoid using, or if there are certain tasks or times when you avoid using it? And if you can share an example of when and why you chose not to use AI for that?

Celeste: I think this is an interesting discussion about the cognitive dissonance I feel every day in using these tools. I turn to them so many times every day. The other thing that I've written about extensively is the egregious amount of energy it takes to power these large language models, AI of all kinds, and the cloud in general. What's really upsetting about the energy aspect of it is, there are these trillions of dollars again going into these data centers. And not only is it blighting the countryside, but some of these small towns initially were courting these big tech data centers without realizing that after the two years of construction, all the jobs go away. And then it really doesn't take a lot of humans to run those data centers. It's very specialized people in the tech industry, so they're brought in to do those jobs.

And then what happens, the gift that keeps on giving, is that these data centers completely tax the power grid. And you've seen these places where coal-burning plants were scheduled to be phased out. And then once these data centers come in, it's like, "Oh, wait, we need this coal-burning plant". I have heard the tech bros state very confidently that the data centers were using all renewable energy. And that is just so far from the truth, it's astounding. A lot of these data centers have been built in Indonesia, and the majority of the power generated in Indonesia is from coal.

So again, I love the technology and I don't want people to think I don't. I just think the people creating the tools and the technologies and deploying them could just maybe take some of this into consideration and not always focus on profit. Maybe that's just so completely naive.

Karen: So it sounds like the environmental impact is one of the concerns that you have, and you mentioned it creating cognitive dissonance for you. But other than maybe trying to use it a little less to avoid that, are there any other things that you don't use AI based tools for? Obviously, you're a writer. Do you use it for your writing?

Celeste: I have noticed on Substack that there's sort of this movement – I think humans really see things in black and white. I think it's just human nature. Something's good or something's bad, something's right or wrong. And I think anybody who's writing about AI, we have to use the tools. Sarah Fay has had a good suggestion: state your AI policy on your Substack if you feel inclined.

I use it for my research exhaustively, but like I said, I never take its word for it, and I read the source material. And also, when I'm writing anything that is technical, after I've done all my research and I've amalgamated all kinds of sources, and I've worked through a lot of the concepts, I will run it by AI, ChatGPT as a final check and just say, "Hey, is this accurate?" Because I think anybody that's writing about AI knows it's infinitely complicated, and changing every second of every day. I really want to be accurate in what I'm writing.

But also, if people have read my Substack, I'm really sort of a humorist or a satirist at heart. So what's really nice, I think, for my particular style of writing is that AI is very literal. And if you've ever played around with whatever your model of choice is and asked it to write something funny, it's so bad. It's so stupid and ridiculous. So I think anybody who's reading my writing would know that it couldn't possibly be written by ChatGPT. And I'm just not even tempted to use it that way.

But with that said, people who are autistic have reached out to me. And I think it's been a great tool for people who maybe are more about facts and figures. I think it can really help them.

So for me, there's certain things that I won't do. But I also think it has to be about personal choice. I always cite ChatGPT and Copilot as my research assistants. I put in the links to all of my research. I give a shout out to ChatGPT and Copilot because I think it's important to do so.

I've played around with Midjourney, and I did it as research into Midjourney’s egregious, complete ripoff of artistic IP and copyright. You can put in any living artist on Midjourney. You could ask for something “in the style of X artist” and it will spit out anything. I would never, ever, ever use any of the generative AI in that way.

Karen: You mentioned writing down an AI policy. That's one thing that I definitely encourage my followers and subscribers to do, and I talk about it fairly often. And I made my own policy last year. I know not a lot of people write that down. Policy sounds very scary, but really it's just – the analogy I use in the book is it's like, we don't call what we do when we get up in the morning a 'process', but that's what it is. We get up, we brush our teeth, we go start coffee or whatever. We have routines, and that's what a process is. And so it's really not a scary thing. And the same thing with a policy. Whether we've written it down or not, we all have a policy. About what we do, and what decisions we've made, and which tools we've chosen to use and which ones we haven't, and when we use them and why, and for what kinds of tasks.

So I think just writing that down, being transparent – that's one thing I've heard from a lot of people is they just want to know, so that it can help them make a more informed decision. Maybe get some insight on what tools they could be using or not using for different things. So I think that's definitely something that I would encourage is people writing down their policies and being transparent about what they use and why. I love when I see people actually disclosing that.

Celeste: Yeah. I might have to use yours as my template, if you don't mind? Sometimes I've been a little bit like, I wish people wouldn't get so worked up about it. And the last thing I ever want Substack to become is this witch hunt of people saying, "Oh, that's AI" and "They used the em dash", which I think is the dumbest thing ever. Because there are a lot of defenders of the em dash, and it's way cooler than parentheses. In the end, maybe there was a rebellious part of me that was like, "I'm not going to even post that, and I'm just going to give some cred to my AI models of choice." But I think that it's probably nice for any reader on Substack, because I know it is a concern. And these models are getting so much more sophisticated, right? Where it's not as clear whether you're looking at something that was purely generated by AI.

The good news is that I think people still really care. In the end, I do think that one of the biggest things that we can all do, when it feels helpless and hopeless because the tech bros have all the money and the power, is just elevating human-made anything. Elevate art. Elevate the written word, the written language that's created by humans. I think sometimes it gets really heavy and really dark. And I think that AI is an incredible tool. I liken it to Star Wars. I'll never forget seeing Star Wars, the original Star Wars, in the movie theater when I was a kid. Kind of life-changing. And AI really is The Force, right? You can use it for tremendous good, or you can use it for tremendous evil.

I have no doubt in my mind about AI and some of the stuff that Demis Hassabis is doing at DeepMind. As far as being able to really advance all kinds of research in biotech, I really do believe it will cure cancer and all kinds of insidious diseases. It can solve food insecurity and poverty and climate change. Obviously right now the tech bros act very carelessly. But I do think that once they get involved in solving a problem, that is a really good thing. Take their insatiable thirst for energy right now (hopefully they don't destroy the environment in the process): getting those minds involved has brought a whole new perspective on nuclear power. And if you see any of the new nuclear technologies, some of these nuclear reactors are basically the size of a Sub-Zero refrigerator, and they've figured out such amazing efficiencies. I think that there's so much potential.

I get that they can't slow down, right? But all they have to do is put in more resources. They have right now an essentially infinite amount of resources, and it's very obvious when you look at their priorities. Alexandr Wang needs $14.3 billion, and he is the king of exploitation. You look at their priorities right now. If they would just siphon off a little bit of that money to being more responsible about how they're treating their labor. All I want from Alexandr is to just pay restitution. There are 240,000 people, and that number's growing every day, that just his company alone, Scale AI, had working on the platform to clean the data and get it deployment ready, or at least that's basically the first step on the route to deployment ready. Then after humans are involved, there is a lot that's automated through AI. But it would just be nothing. It's a rounding error in his fortune. It's a rounding error for these technology companies that are worth trillions of dollars to just pay fair wages, and to pay for the blatant copyright infringement, which we haven't really gotten into very much.

I'm old enough to remember when there was Napster and this whole digital rights question: once digital music came online, how were we going to figure out how to compensate artists? They figured that out. Use the technology, use AI, to figure out how to actually compensate artists.

And you've been really instrumental in this, talking about the three Cs: consent, credit, and compensation. There's just no incentive for them to do the right thing. And that's what's so frustrating to me. I think if we could get more women involved, it could make a difference. But I think we just need to hold them accountable. I'm here with this crazy 24/7 Substack operation, and I'm seeing a lot of other people on Substack really talking about this. And I think that's a start, right?

Karen: Yeah. So we've kind of drifted into question number five, which is the one about the three Cs. You know, the technology itself isn't the problem. They've been claiming that, "Well, we can't do this unless we scrape, unless we steal." And there was a new LLM that just came out where they trained it only on fully ethically-obtained data, and they got good results. So they've blown that lie away. You can do it. It is technically possible. Even DeepSeek came up with more efficient algorithms because they weren't able to get as many of the fast chips as they needed. It can be done. This is hard, but AI is hard. We can do both. We just need to find ways to pressure them to give us all better choices, more ethical choices, so we don't have that cognitive dissonance to live with.

Celeste: I fully agree. It is such BS. It's so disingenuous. Nobody really believes that. It's silly. You absolutely can do it ethically. They just choose not to.

Karen: What do you think is the most important thing that companies would need to do to earn and to keep your trust? If you could name one thing.

Celeste: I've expressed a lot of issues. I just think that they need to clean up their acts. It comes down to ethics and safety and responsibility. Meta is a huge example of this tendency to just act and do and apologize later, with just empty apologies. With both labor exploitation and copyright infringement, big tech knows that they have the most power and the deepest pockets. They're choosing to fight it out in the courts. And right now they're winning. The way that they could earn our trust is to do what you just said: find ethical and responsible ways to get their data, compensate people, pay people what they are worth and deserve. It's actually so basic. It's not that hard. And they absolutely have the technologies to do it.

Karen: Yeah, absolutely. I think we need to find ways as consumers to pressure them more to do the right thing. And when we find these ethical tools, we need to support them and use them and give them some of the oxygen.

So I think that, yeah, we could go on for many hours about this, I'm sure, both of us! But thank you so much for sharing all of that. So where can people find you if they want to learn more from you about these things?

Celeste: My Substack is called Getting Real about AI. My URL doesn't have my name in it, but you could just search on "Celeste Garcia Getting Real About AI". So that's where to find me and I would encourage anyone to DM me. Right now I'm looking for people who have worked for Scale AI as contractors or employees. I'd love to hear from you!

Karen: Celeste, thank you so much for joining me on this interview!

Interview References and Links

Celeste Garcia Ramberg on LinkedIn

Getting Real about AI on Substack

About this interview series and newsletter

This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.

And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I Don’t Use AI”.

We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!

6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!


Series Credits and References

Disclaimer: This content is for informational purposes only and does not and should not be considered professional advice. Information is believed to be current at the time of publication but may become outdated. Please verify details before relying on it.
All content, downloads, and services provided through 6 'P's in AI Pods (AI6P) publication are subject to the Publisher Terms available here. By using this content you agree to the Publisher Terms.

Audio Sound Effect from Pixabay

Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)

Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”

Credit to the creator of the “Created With Human Intelligence” badge we use to reflect our commitment that content in these interviews will be human-created.

If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)
