AISW #016: Dr. Mary Marcel, USA-based Associate Professor 📜 (AI, Software, & Wetware interview)
An interview with Dr. Mary Marcel, Associate Professor of Information Design and Corporate Communication, on her stories of using AI and how she feels about AI using people's data and content
Introduction - Interview with Dr. Mary Marcel
This post is part of our 6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
Interview - Dr. Mary Marcel
I’m delighted to welcome Dr. Mary Marcel as our next guest for “AI, Software, and Wetware”. Mary, thank you so much for joining me today! Please tell us about yourself, who you are, and what you do.
Well, Karen, it is totally my pleasure, and thank you so much for inviting me to be part of this really fascinating series that you're doing. I am an associate professor of information design and corporate communication at Bentley University in Waltham, Massachusetts. Bentley is a freestanding business university, so all of my work is with business students and executives. So I teach managerial communication and business ethics. I am a white woman, pronouns she/her/hers.
And I will say that Karen and I did go to high school together, many years ago. And, subsequently, I completed a bachelor's in Slavic languages, as well as a master's in rhetoric and communication studies, at the University of Virginia. I'm telling you this because I'm a professor and it's our line of work. And my doctorate is in rhetoric from UC Berkeley.
Thank you, Mary. It's so cool to reconnect with you all these years later about our shared interest in words and AI. 😊
Absolutely.
What is your experience with AI, ML, and analytics? Have you used it professionally or personally, or studied the technology?
Okay. So it's a great question.
So professionally, as I was coming off of sabbatical, ChatGPT became public, and so suddenly we were kind of thrown into the deep end. So I teach writing and communication, and so I have had to learn about, and learn how to use AI tools, especially like ChatGPT, in order to understand their capabilities and limitations. And right now, my students in fields like marketing are using AI tools a lot to craft customer messaging, for example. There's also some use in quantitative fields like accounting and finance and business analytics, which are being taught at my school.
However, for my purposes, I tend to use AI tools sparingly. And my rationale is simply that students don't use calculators until they know how to do math. And, obviously, by the way, we know never to use ChatGPT for math! But the same approach also holds for writing. And I will tell you, I've been teaching for over 40 years, and I've been teaching at Bentley for 33 years. And I have never, in that time, had any employer or any person from a company say, gosh, those brand new graduates that you send us are such good writers. They always say, “What were you doing all that time? They never learned how to write.”
So my feeling about it is: while there's a lot of use of generative AI in writing classrooms around the country, I think it's problematic, because our students need to know the fundamentals of how to write well before they use those tools. And, obviously, the market just wants to monetize these tools and get as many people using them as possible.
The other thing is that I also teach critical thinking and problem solving. And AI at this stage is certainly not well equipped to help students critically evaluate source materials or information.
I've been talking with some educators. I'm hearing that from a lot of them - that on the one hand, they want to help their students learn how to use AI effectively, but they're also seeing the other side where students are using AI tools. One gave an initial assignment for writing that she thought would be very easy, which was just: “Tell me about yourself and what you want out of this class.” It was for a philosophy class. And half the students used AI to write it. But why?!
Yeah. There's no irony at all. It's sad to me because I think, as with every new technology, we're definitely in a hype phase. Right? So it's being oversold at the moment. And I'm old enough to remember when we first started having presentation software - the main precursor to PowerPoint was Freelance, I think. So I've done dinosaur-age presentation software. And the fact of the matter is I make good money every year teaching people how to make slides and not make the mistakes that PowerPoint feeds you. But the other thing is that PowerPoint was also sold as: everybody hates to do presentations, so here's a tool. You can make pretty slides. Everything will be perfect.
And one of the biggest complaints in day-to-day business communication - and I know this because I research it - is 'death by PowerPoint'. Do you look forward to the next presentation, Karen, where you're going to have a PowerPoint presentation by someone? Because I know I don't. Right? And so we kinda get sold these solutions for things that really need good foundational education and lots of other things.
Plus, PowerPoint never took away anybody's stage fright. It just didn't. The message was, turn out the lights. Nobody will see you. Just show them slides, and you'll be okay. And, again, that is not a viable communication strategy. So that's my professional take on it.
At a personal level, I use Duolingo every day. I have had some conferences overseas the last couple of years - in Greece, in Spain, and next year in Italy. So I'm like, okay, sure, I'll use this. And I understand that they had an AI system in place before ChatGPT, called Birdbrain, and they had human translators build exercises.
They signed an agreement with OpenAI last year so that they can offer, for a premium, sort of interactive one-on-one conversational practice. Right? “I'm in a French cafe. I would like to order.” And the AI chatbot does that conversation with you.
So, yeah, that's my dirty secret. I use Duolingo, which uses AI.
Yep. But the fact that it uses AI isn't necessarily a problem. The question that always comes up, at least for me and for a lot of other people, is: was it fairly trained? Was the data that they used ethically sourced? Did they hire real human interpreters, or did they scrape it from the Internet? And I would have thought that Duolingo couldn't have done that, just using scraped data - because they really need people who pronounce the words correctly and use the grammar correctly. So I would have hoped that they would have hired those people. In that case, at least their own models would be fairly trained, even if they're relying somewhat on ChatGPT, which was not fairly trained.
I'm with you on that. And, personally, partly because I'm lucky enough to live in a very multilingual area and city, and I teach at a school where I can get one-on-one tutoring from a live student, I'm not going to use that feature.
I do have the sense that their initial training was done from the 700 translators that they have. So in that sense, the start-up end of it at least was good. We'll talk about other concerns about AI later in the interview. But yeah. I mean, I'm the first to admit I'm a language jockey. I love it. It feels good for my brain. I feel more confident when I'm by myself traveling in another country. I always feel like I should go in with some language knowledge. So yeah.
Yeah. I remember that you had taken German and French in high school, and I took German and Spanish. And now you're studying, I think you might have mentioned, 3 more languages?
Well, I also studied Russian and Polish in college because I was a Slavic languages major.
Oh, wow. That's very cool.
It's good stuff.
Yeah. I’ve enjoyed it too. I've had some chances to speak Spanish with colleagues who are natives of Spanish-speaking countries, and I actually read a little bit of German in my career while I was working with a company based in Switzerland. And it was always fun to see just how much I remembered and brush up on it.
One interesting thing happened, though, when I tried learning Chinese about 18 years ago. I had to train my ear and mouth for the tones, and that was one of my biggest challenges. But, also, I would find that when my brain was trying to find a Chinese word, the equivalent Spanish or German word would come out of my mouth instead! And I've heard that this is not uncommon with people who have learned more than one non-native language after childhood. And you've now learned, what, 7?
Yeah. I mean, you're absolutely right. It does happen. And in fact, at the moment, I'm doing something that is probably one of the dumbest things you can do, which is taking Italian and Spanish at the same time. And there the languages are even closer. Like, is it 'cosa' or 'qué'? Is it ... you know? Absolutely. Absolutely.
And Russian, when I was first studying it, was in my experience the most remote new language to learn. It's like - that's really not a Russian verb there. That's a German verb. So yeah. The thing I love about language learning is it's humbling. And it reminds you of how much it takes to become truly fluent. And, also, I think it gives us more empathy and compassion for everybody in this country for whom English is a second or other language, however good a job they're doing.
I often have said to my students, listen. Your English is far better than my Chinese, because my Chinese, unlike your English, is nonexistent. And, again, I can see this is one of the reasons why an AI application for foreign languages is so attractive. And I read that Duolingo was founded by 2 immigrants. So they definitely get the desire there.
Yep. So let's talk about any specific stories that you might have on how you've used AI, and what your thoughts are about how well the AI features of those tools worked for you or didn't, and what went well and what didn't go so well?
Yeah. So apart from Duolingo, which I'm inadvertently sounding like a fangirl of, I have used ChatGPT. So my students do a couple of presentations, and they do a lot of research in my class. And one of the places it occurred to me that they might be helped by ChatGPT is this: we do a persuasive presentation that involves problems, causes, and solutions. And what's interesting about business students is they learn a lot of technical skills. Like, they learn accounting, and they learn finance, and this and that. But they don't always have a very well-developed sense of business context, ok?
And I will say, one of the things I do least well as a professor stems from having been doing similar things for such a long time. You always have to think of your students with a beginner's mind, right? So I said to myself, okay. I clearly have a very thick understanding of a lot of the issues that we're going to talk about in the class. Students don't.
So what if we do some prompts, so that when they choose their persuasive topic - which business problem to tackle - they think about it in problem-cause-solution form? Let's have them run some queries to get a chat output on, like, what are the most prevalent causes that people have found for this particular challenge, right? And what are the most prominent solutions that people are pursuing?
And so we did that in the class, and we've done it several times where they kinda do it as an exercise. And I have the students save the chat, so that they have to submit it with their bibliographies and their other deliverables. What happens is that, because there are no links back to the source material that the AI is reporting on, we run into problems, ok, from my context or my perspective in terms of what I want the students to learn.
So first of all, they cannot know how accurately the AI is construing whatever source material is there. The AI might read some things as trends or whatever that are more limited in their impact. You know, it's the black box of AI - I have never been happy with it, and I will never be happy with it.
The second thing is that in business, as you know, information is time-sensitive. So I'm aware that, for example, ChatGPT did the training of its LLM in 2021. So they scraped - they illegally vacuumed up all kinds of stuff - and they did that massive training. Okay. Then they paid some smart Kenyan workers poverty wages to correct the flaws in their findings, but we all know that [1]. But the problem is I don't know how much information post-2021 is even in the system. So everything that is more up to date, everything that has been discovered, everything that's been reported and published that ChatGPT would, albeit illegally, include in its output - it's probably or possibly not there.
So my students are not automatically going to access, let's say, newer solutions or more up-to-date information. And so, in that sense, I can't recommend it, because a lot of times they would do a better job - get more up-to-date information - just doing a Google search.
And then, as I have alluded to, I teach business ethics. And I know that you and I share, I think with a lot of other people, very grave concerns about how copyrighted material - lots of things - was thrown into the kettle. And not only that, but all the user data that was collected without consent, without any information whatsoever. As someone who teaches business ethics, I can't really defend that practice, because of this whole idea of “move fast and break things”: it's cute if you're the one breaking things. It's not cute if you're the thing that got broken. And so I teach my students to always pay attention to the negative externalities of business processes. And sort of flagrantly illegal activity, in my head, kinda falls under that.
So essentially, as I've said, what I found is that my students tend to lose interest in using AI in my class because they start to see those limitations. And the need for critical thinking is never going to go away. And in reality, I think that it's always going to be a necessary skill for students to have to work with any technology.
And I think the origins of this podcast - the origins of your series - are rooted in that notion. Right? Somebody creates a technology, but it's never a panacea. It's never a free lunch. It never does everything perfectly, and we have to be very careful about what gets broken when they're moving fast.
Yeah. The whole area of AI ethics is one that I've really latched onto heavily this year. I've been looking at it for AI in music, because of my love for music. But it's the same companies, and they're following the same practices. And one of the challenges that I see is that people say, well, if it's legal, it's ethical. No. No.
No. No. Slavery was legal too. I don't think that any enslaved person ever thought that that was ethical.
Right. So it's really challenging to see the different attitudes. And we hear so much from the big companies, especially US-based companies, and the tech bros and the loud voices. And the whole goal of this series is: I want to hear other voices, from other places, and from other perspectives that tend to get overlooked, or where the AI content is biased against them, and the system's perpetuating the biases. That's really where I'm trying to go with this series: to get a wide range of voices and get them heard and out there.
Well, and you've just touched on one other massive consideration, and I thank you for mentioning this. I think the most idealistic among us, when we think about good AI, we think of something like the computer on Star Trek, right? So all the information that's there has been vetted, it's accurate - you know it's correct.
And Star Trek World is about inclusive excellence, right? Everybody has enough to eat. Everybody has a place. Everybody's taken care of at a good level. Earth has become - we have solved the core problems of humanity, and so we're not treating each other like crap.
And then we have this highly trustworthy, non-black-box AI, if you will, that is the computer voice on the Enterprise. And I think the fact of the matter is, right now, obviously, we don't live in that world. And every company that I visit, ok - every company representative that I see who is a Black woman with a high-level title, her hair is pressed. She can't even walk into her company and expect to be treated fairly unless she uses toxic chemicals and processes to make her hair straight like mine, which is going to put her at a 50% increased risk of getting cancer in her life. All of that. And that's one nanoparticle in the life of a woman of color, in a mirror, okay?
All of that is baked into every large language model (with the possible exception of Duolingo, where there are a lot of queer relationships and this, that, and the other). But it's baked in because it's on the Internet. And it wasn't so long ago that The Wall Street Journal reported, interestingly, that 80% of the traffic on the Internet was porn.
So that's what we're training from, and you're absolutely right. I mean, we have so far to go in real life toward treating people with the respect and the basic humanity that they deserve just by coming into the world, right? And the tech bros, I mean, I'll just mention this, but it shouldn't surprise anybody that these guys are white guys. Elon Musk grew up in South Africa during apartheid, and apparently learned nothing about racial justice, looking at the issues with racial discrimination in his factory in California.
I have a woman friend from Boston. She has gotten many, many patents. She's a super brilliant woman in tech. And she has had to sue for patent infringement so many times. And basically, if she doesn't use gel, her hair looks like Einstein's in the courtroom. But nobody's going to look at her and say, oh, she's a female Einstein. They're going to look at her and say, what a weirdo, right? And there's no self-correcting among those guys that are doing this stuff, I mean … Sorry. Getting venture capital to men and women of color and to white women is one of the things I'm passionate about.
And that’s an awesome passion to have!
So, anyway, just to say, everything that's baked in is baked in, and it's like the amount of plastic that's entering our brains. It's not good. Alright. I'll stop ranting. Go back to the point.
It's all good! We'll go back to the questions. 🙂
If you have avoided using AI-based tools for some things (or anything), can you share an example of when, and why you chose not to use AI?
Yeah. So first thing is on the writing front, I have a doctorate in rhetoric from UC Berkeley. So if I can't write good stuff, I should change careers, because this is my profession. I find there's no advantage for me to use this in writing, because I have my own voice, and it's what I do.
But one of the other reasons that I avoid using AI as much as I can is because there are truly terrifying climate implications. Right now, Bitcoin uses 1% of the world's electricity, which is equivalent to the annual use of a country like Austria or the Netherlands, ok? AI is going to cause a jump in world electricity demand. And what's very disheartening is that Google, as a result of that, has walked back its climate commitments. Lots of these companies just suddenly have amnesia.
And when they are talking about it, Bill Gates said, oh, no big deal. Right? Other companies are saying, oh, well, we'll just build small nuclear reactors. And it's like, right, because having more nuclear waste that's going to be around poisoning things for the next 30,000 years - excellent, great idea, so that you can make good Q2 earnings in 2025. I've read, I've looked at what folks are talking about, and I think you're aware too, Karen: there are definitely ways that we can train models more leanly, and ways that we can handle data and things like that, that aren't so carbon-intensive.
But, unfortunately, what I also hear and read from these researchers is that this stuff isn't being done. It's not being scaled. And, honestly, building all the servers in Iceland is just rude to the people of Iceland. I mean, I love Iceland. I don't want more stuff to melt in Iceland. You know, if you're smart enough to think about how to put something on Mars, presumably, you would be smart enough to figure out how to do this. Right?
And so, at the moment, the marginal advantage for me to use any of these tools (with the exception of Duolingo, which I confess to, though I don't use the ChatGPT tier) is nil - it's negative.
Listen. My husband and I have healthy retirement savings. We've decided that we are going to build a net zero solar-powered house. We're going to start building next year, and that's our commitment to helping to decarbonize the future. So why in anybody's name would I suck away the benefits I'm trying to give to the planet by doing that, by using AI for stuff that, frankly, with my years and my experience, I can definitely do better than AI? So that's the other big thing.
Yeah. You had made a good point about not using AI to summarize research articles by other people because of the concern about copyright. And I think a lot of people don't even think about that, because the tools don't exactly go out of their way to say, hey, make sure that what you're about to upload here is okay.
You're absolutely right. I teach a summary assignment in managerial communication, because people in business want students to know how to do that. But the fact of the matter is: it is illegal to upload copyrighted material, and my school does make a point of educating us about that. And for the same reason, since my research draws on material that was potentially behind a paywall when the data for these large language models was scraped, I cannot legally use AI to work on that material.
But, nevertheless, if I did upload something, it's the same as Napster [2]. If you remember the kid at Northeastern University, it's like, “oh, yeah, let's just download all this music for free”. And I tell my students the story. And you know what happened? People got prosecuted. That's what happened. And it's just unethical.
I'm also an author, and you raised a really good point about publishers now talking about making agreements so that their content can be available to these large language models or platforms - assuming, I guess, they think it's going to be profitable. The question is, are they going to pay royalties to their authors for that? I checked to see what my publisher does, and my publisher has actually come out very publicly saying AI companies should pay for this content. So go Rowman and Littlefield. Thank you.
Good. I always like to call out people that are actually doing the right thing, who are trying to avoid the questionable data practices.
Anyway, as we were kind of chatting, I appreciated you calling that to my attention because, like, oh, crap - I should have realized this is happening. But, of course, anywhere somebody can make an easy buck, they're going to do it.
Yeah. And that's what we're seeing with the big newspapers. You know, on the one hand, they're suing, but on the other hand, they're also negotiating. It's a fluid environment, really. As the Chinese saying goes, we are living in interesting times - and that's not necessarily a good thing.
Well, in ethics, it's funny how everybody else always wants you to behave ethically while they don't. You know, when I quote directly from somebody else's work in a book past a certain point, I have to pay for permissions. But, presumably, if my books get scraped up, I'm sure if I ask for a check, they would just politely tell me to bugger off.
And, so that's a weakness. Like, we're in this sort of pre-conventional stage where we haven't agreed. We haven't made new agreements about what's fair, what should be happening. And at the moment it's the classic - these companies are privatizing the gains and socializing the costs.
That's a great way to put it. I like that summary. One of my other guests pointed out that it's possible to build data centers that are closed loop and don't consume water, but they cost more. It's like 25 to 30% more cost to build a data center that way. So they don't do it.
Again this is the thing that just absolutely sets my hair on fire, right? Because Elon Musk, Sergey Brin, I mean, these guys have more money than they could spend in several lifetimes. We are on the brink of existential, catastrophic climate collapse. So, really, Jeff Bezos, you don't think that you could just invest a couple of billion here or there? Because it's going to make you money anyway. It's only a question of how much money it's going to make. Like, how much more money do you possibly need? I'm sure we're on the same page on that.
Yeah.
Aristotle, even in his time, talked about how the problem with money is that it's artificial. It's unlike hunger and thirst and fatigue and desire for sex - those are all bodily things, and we know when we're satisfied. The problem with money is there's no automatic point in your brain that tells you "I have enough".
And I think that we have grossly under-trained people that are using and developing these really powerful technologies to have the critical capacity to say: my ego is sufficiently satisfied that I can accept making $1 billion this year as opposed to $1.1 billion, or whatever. But that's what I'm working on. That's my space.
I saw an analogy once that said if we were studying a troop of chimpanzees and we saw that one chimpanzee was hoarding all the bananas, and they were sitting in a pile and the other chimpanzees were starving, we would say, what's wrong with that one chimpanzee?
Well, I actually have that posted. That was on my last Facebook post.
Oh, funny!
I think the next thing we were going to talk about is where machine learning systems get their data.
I have drastically limited my social media presence, which has not helped me as an author, precisely because of the ways that our data has been collected and used without our permission, without our knowing what it's going to be used for, and the ways that it has profited people while not profiting us.
I mean, I'm a firm believer in GDPR. Like, that's sort of the minimum standard of data protection. And I always try to remind people that one reason why the EU developed GDPR is that there are people who lived through fascism and communism, where people's data was used against them. IBM punch cards were used to identify Jews in Europe and their property. And in places in Europe where IBM had little market share, more Jews survived. There's an excellent book about this, called "IBM and the Holocaust" by a guy named Edwin Black. That's one of the drivers for GDPR. The Europeans understand how information can be used against people. And people have experienced that. So they're like, I have a right to disappear. I have a right to know what you're collecting, and to not give my consent, right?
Wow, I did not know that about IBM. I'm going to have to read up on that.
Yeah. It's a great book. It's "IBM and the Holocaust". I'm sure you can get a good reasonably-priced used copy. It might even be in your public library. Very well researched.
So data privacy is a whole other area. It comes up a lot with the people I follow in ethics - ethics and privacy are so closely related. I've been trying to really follow developments worldwide, not just the GDPR and the other privacy standards - there's also the EU AI Act, which just got approved. The US is behind, and we need to catch up fast, or we need to slow down on some of the things that we're doing. Because, again, what's legal is not the same as what's ethical. And what's unethical shouldn't have to be illegal for people to do the right thing. But our laws do need to catch up.
You're absolutely right. And it's, I think, sort of a truism, right, that for a long time now, the technology has been well ahead of Congress and the regulators. I'd say, partly because people in Congress aren't necessarily technologists, but also because companies don't disclose what they're doing. And so it's hard to find out what they're doing. And by the time we find out, the damage is done, and so on, and so on. And then whatever level of cooperation and regulatory capture is happening in Congress and in agencies and things like that. From an ethical standpoint, there's a really simple rule that I apply to things, right, which is:
Does the organization or the company that I'm interacting with genuinely have my best interest at heart? And are they willing to moderate their power relative to mine so that I don't suffer?
And there's almost no company that meets that standard. But I think it's a good standard because, otherwise, what you're saying is: yeah, we're going to break it, and we're going to walk away from it, and tough teacakes, right?
Or they say, “Well, but the thing is, Mary, most people are going to benefit. A few people are going to suffer, but most people are going to benefit.” And you know what? For one of the few people who are suffering, that's not a good deal. So I don't want to participate in somebody having to take on a disproportionate downside just so I don't have to write my English paper for class. Right?
And so as an ethicist, that's my position. I don't know how much business ethics they teach at Stanford Business School. I don't know. It's an interesting question.
Good question.
Yes. Anyway, that's the tip of the spear where I'm trying to be. I have to tell you, Karen, never in a million years would I have said, yes, Karen, I anticipate having a 30-some year career at a business school. Never. Never. But what has been so intriguing about being in a business school is seeing what's behind the curtain and interacting not just with the consumer end of things, but also with the knowledge of what's behind the curtain. And so in that sense, it has been kind of an extraordinary unexpected opportunity to learn, to see the world through lenses that I never anticipated, and then getting to kind of instigate conversations with students, and with faculty, and with people that we consult with, to get them to take these questions on board.
And I'd say it's an incremental process at the moment. But like many things, as my best friend says, the thing about change is it happens slow, slow, slow, slow, slow, and all of a sudden, fast.
And I think what's going to happen is - I'll put it this way. You and I, this is not our first tech rodeo, right? I remember in the early 2000s, my undergraduate students loved social media, loved Facebook, everything. They are more cynical about social media now. And it's not to say that they're not on TikTok, but the lens is so different. And the way that they talk about it, the recognition of how they're being exploited, has totally changed.
What did Abe Lincoln say? "You can fool some of the people all the time ..." Yeah. I think what's going to potentially happen is that enough people will get to that point and say, “You know what? I'm not really benefiting from the tech revolution, and I don't want this to be happening this way. So let's get some regulation. Let's change how we do this.”
Like I say, a lot of ethical change is incremental. Sometimes it's the big paradigm shifts. My philosophy is I'm all in for both speeds in all directions, because I love this planet. I love the planet we're on. I don't want to destroy it, and I don't want to let the tech bros destroy it either.
So I think we have a lot at stake. And I know this is probably not the direction you thought a conversation about AI would go in, but I do think about the environment. And I think one of the problems with tech tools is it's so easy to get caught up in the power and the magic, and the feeling of being so cool, and all the dopamine that comes from using these tech tools - and never think about where that magic comes from.
I said this in my class the other day. It's electricity. Algorithms use electricity. Electricity is something that has a cost.
What I was just thinking about was the whole idea of the cost to the environment. It's not a cost that somebody has to pay directly, and it doesn't show up on their balance sheets. And when they're trying to decide whether to spend the 25% more on the data centers, it just doesn't even figure in. It's like, why would we do that? Well, because you're not thinking about what you're actually costing the environment.
And when people can use AI tools for free and they create 500 images of slop with too many fingers, we wasted all those resources, but it didn't cost that person anything. So there's no connection between the cost and what little benefit there was in that case.
That's exactly it. I mean, that's also something that's different in Europe, where people pay for carbon. And I think it's one of the terrible things here, which is why I'm trying to educate my students to think about this. Because you will pay a cost, despite the fact that you didn't get a credit card bill today for all of those stupid cat things that you did on ChatGPT.
Even at my age, I still have some decades that I'm looking forward to. If you have children, shame on you. I don't even understand how - you know? And who has more children than Elon Musk? He's just, like, pumping them out. Like, what do you even think your business is going to be able to do when sea levels rise, storms bust everything up, and everything is broken?
Like, really? Really? You think you're still going to BE in business? Because I don't think you're even going to be in business. So what is your future horizon here? It's like I was hearing today: we've had so many battles about offshore wind in Massachusetts. But a lot of the coastal towns are like, oh, it's just going to mess up our tourism. And you just want to say, when the sea levels rise 3 feet, nobody has any money, and we're eating sand because we can't grow anything, do you really think anybody's going to be going on vacation? Why isn't anybody putting that perspective out there? I guess I need to get busy.
Yeah. And part of it, when you talk about cost - OpenAI, at first, they were like, "oh, we're not scraping". And now the latest thing that's coming out from these lawsuits is, "Well, of course, we had to scrape. It's not economical otherwise. We could never do it." That's the point! It's a cost that you kick down the road. If you can't afford to do it, then maybe it shouldn't be done.
No. Exactly. And here's another reason that I avoid AI. Because I don't want there to be tons of AI-generated content out there that gets refolded back into the next generation of training. Because there's going to be more faux information. And, in my world, you need reliable information to make decisions. I'm in the business of generating knowledge. Right?
So I need information that is grounded in something other than an algorithm. And people are talking now about, like, soon, 40% of the information inside some of these large language models is going to be AI-generated. And that - I mean, talk about filler, less nutritionally valuable than styrofoam, and that's what they're going to be selling us. So that is the deeply disappointing thing about business.
And I never expected this. I never expected this because, as you know, I grew up on a farm, and we were definitely not at the happy end of the economy. And we went to church, and we learned basic ethics about how to treat other people. And I feel like we've gotten to the place where business acts as if the rules only apply to them when they get caught. And that's another whole problem.
But the larger issue that we're talking about is - Balzac said, "Behind every great fortune is a great crime." I think about the tech fortunes and all of this data that has been swallowed up without consent, without compensation. It is the 21st-century land grab. It's like the fortunes that generated capitalism, which came out of stolen land and stolen labor. This century, they're coming out of stolen data. And it's arguably the same problem.
Yeah. Absolutely. I've had some people ask me, well, why are you looking at AI ethics in music? Well, aside from the fact that I love music, one reason is that I think the legal system is actually a little better defined in the world of music, as far as who owns what and what you have to pay for a license for.
And these lawsuits are actively in progress right now - the music labels have sued Suno and Udio and OpenAI. I think there are more than 30 major lawsuits; I follow one of the people that tracks that. And it could set a good precedent, is what I'm thinking. Like, if courts say that OpenAI or Google or other companies aren't allowed to just scrape YouTube videos and train on them for music, and that gets addressed, why would it be limited to music?
So that's one of the reasons that it seems like there may be a better shot at coming to a resolution more quickly on that. My main concern is that, okay, they'll settle the lawsuits, but then how much of that money will actually get back to the musicians? The creators.
Exactly. You know, creators are kind of like farmers. And, again, it's the huge disrespect. It's the huge disrespect, the contempt even, from a lot of people who are tech business people, toward those who create the content that gives value to anything that they do. And it doesn't have to be that way. It could be a partnership. It can be something that's mutually beneficial. But like you say, there's one big pile of bananas over there with 1 monkey, and everybody else is sitting over here with the skin.
It doesn't have to be like that. But we have to make that happen.
Yes. Consent, compensation, and credit are really mandatory.
Absolutely. You know, it's the law in terms of copyright, and it should be the law for data. And fair is fair. Like I say, my friend is having to sue giant tech companies just to get compensation. And they're denying that she even wrote these patents, which is ridiculous. I mean, the US Patent Office granted her these patents. And they have the temerity to infringe because they're just way better funded than she is. It's like, really? Really? Because the last time I checked, you can't eat money. So what are you planning on doing?
Or breathe it or drink it. But the funny thing about patents is, I've got some, and I remember when we were being trained on IP that if you have a patent, and you learn about infringement and don't enforce it, then you are at risk of losing it. So she has to enforce her rights, as much as it sucks to go up against companies that just keep throwing more lawyers at it, hoping to wear her down. It's similar with copyright: you have to stake your claim, even though you already own the copyright, because if you don't make the claim, it's a lot harder to defend later. So a lot of aspects of intellectual property are challenging.
Yeah. In short, the system was not created for us or by us. But that's another podcast series!
Yep! So let's talk about personal data. You mentioned that you limit your social media presence, partly out of concern about how your personal data and content might be used. Do you know of any specific cases where your personal data wasn't really within your control and your information got used?
Yeah. I recently was going to a conference, and the TSA checkpoint was taking pictures. And I will just say, even as an economically comfortable, very white-presenting woman, it's a little scary to take on the TSA, because you don't know how they can misconstrue something you said. So I'd say that's kind of a problem, because the consent - the ability to opt out - feels so constrained. Now, by the same token, I don't know what the TSA does, how long they keep your images, or things like that.
And I can say, I'm not aware of any situations where my personal information has been used in harmful ways. But I'm absolutely sure that my information has been taken and used in all kinds of places that I didn't consent to. The one place that I do have a presence is on LinkedIn. I have a microscopic presence on Facebook, with a very restricted number of people who can see my stuff. And LinkedIn is just a professional concession. I teach on the business side of the business school, and so it's the one thing that I'm willing to do.
But I confess, I'm in a 'limit the damage' kind of mentality 99% of the time. And, again, I know that it is counterproductive to me at times, especially now when I'm trying to get traction for my writing. I spent the month of June basically in turmoil because I finally decided, okay, I will have somebody build a website for me. Here's what I'm going to do.
But for somebody like me who thinks very deeply about words, about ethics, about politics, about communication, it feels like we've been thrown into a time much like the development of democracy in Athens, which coincided with writing really becoming widespread in Greek society. And there was a similar kind of crisis about: how can you tell what's true from what's false? What kind of ethical principles should guide people in their decisions?
I mean, the foundations of western culture were built in that time around similar challenges - because writing is a technology. You don't have to have electricity for writing to be a technology, or for it to become a huge challenge to governance in lots of different places. And I think we're in that same kind of time. And so, for me - in my field, in my formation - these kinds of upheavals are what we study, and we cannot take them lightly.
And so when it comes along as a phenomenon, it isn't just like, oh, Facebook. Oh, Instagram. Let's do this. Oh, no. This is the cluster (blank) that our generation is going to have to deal with. Many people think we're at an inflection point with this technology. The way I think about it is: there was a time when what business did had impacts, had a footprint, but didn't have the ability to jeopardize humanity in short amounts of time. Right?
And if we don't get our social contract wrapped around that properly? Hey, I'm going to be in a net zero solar-powered house living inland in the midst of a lot of farms at 1000 feet of altitude. So I'm not going to get flooded out. I'll probably be able to eat. And even if the grid goes down, I have my solar. Right? I'm taken care of. But if everybody else around me is screwed, we're all basically screwed. And if we can't figure out that that's what's going to happen unless we change things, nothing else really matters.
And AI to me is just one of the biggest unnecessary thorns right now. You know, we were making progress on climate. Imperfectly, but we were making progress.
And for somebody like Elon Musk, who started his career telling us we all need to get electric cars - which those of us who are green progressives were all excited about - for him and his fellow investors to now be pushing this other vision, it's ... deeply disappointing. That's the nicest way I can put it. That's why we're talking about it, so that we can do what we need to do.
Yeah. AI has been, I think, a real catalyst, speeding up a reaction that we were trying to slow down. And if it's being used for truly valuable purposes, that's one thing. I wrote a story the other day about how a company in Australia has found a way to detect lung cancer tumors earlier and with more accuracy and get people into treatment. That's the kind of stuff I want to see machine learning and AI used for, not generating a bunch of garbage pictures or elevator music.
And to that end, I've seen so many articles. Karen, I can hardly tell you, because I do these searches regularly. You know, if you do a search - and when I say a search, like, in a journal database, okay - you will easily find so many people, so many researchers asking, "How can AI be used to help with climate change by doing this or that?" So if we're going to use AI, then to your point, let's use it for strategic, high-impact things. Right? Not just to generate dollars from stupid stuff.
And, again, why don't we just invest massively in renewable energy? Why don't we do something important about plastic? Because if we got plastic under control, that would be a huge win for climate. And fish would love it too. Our brains would love it because I don't really want to think about how much plastic is floating in my personal biome.
Alright. Let's go to the final question - public distrust of AI and tech companies, and how it has been growing. Partly, it's growing because of increased awareness of just how much we're being exploited, and that awareness is a good thing. So I see that distrust as kind of a good thing.
Yeah.
What's the one most important thing that AI or tech companies could do to actually earn and then keep your trust? And do you have specific ideas about how they could do that?
Yeah. I mean, in a certain sense, Karen, I think it's pretty simple and pretty straightforward. We have to change the contract about data so that data is no longer treated like the way the cops can treat your garbage, which is when you put it on the curb, anybody can go into it, you know?
Data has value. So companies must be required to get our consent. They must disclose the uses that they're going to make of our data. And they should compensate us. Because we're all working for the tech companies now, whether we're on their payroll or not, and that is the only way to economically reset the scales. Because, to your point, the things that companies pay for matter to them. If they're not paying for it, it's a free-for-all. And I don't mean that in a good way.
So to me, that's the number one thing. We have to change the contract. As an ethicist, one of the things that I work on is making visible the invisibilized stakeholders. Making sure that, in whatever equation we're talking about, every variable that should be in the equation is represented in the equation. And I think, to your point, as more people become more aware, they're all realizing that in reality, they are stakeholders, not just as users, but as contributors.
And so, the more we can make visible that presence and our part - our unwitting and unwilling partnership - and the more we can make it legitimate, make it ethical, make it fair and equitable, that, to me, is the win. Anything short of that, I'm not interested. And I appreciate that it's going to happen incrementally, probably. I mean, we'll see.
Yeah. I think your description of it as an inflection point earlier was good, because it can be kinda slow and then it's going to hockey-stick one direction or the other.
Exactly. Exactly. It makes a big difference too because, I mean, you know and I know that there are a lot of places, including in this country, where the data that is taken from us can be weaponized against us, and is being weaponized against people. And that's also part of the hockey stick: are we going to truly democratize what's going on? Or are we going to allow the future to just be the increasing concentration of power on one side, while we're all just serfs working for technology companies without getting paid?
Right.
And the other point that you made earlier, as far as things that the tech companies need to do ... The transparency and the equitable approach to data and privacy is one thing. The other you had mentioned was the carbon and environmental impact. Those are 2 big aspects. Super important.
Well, listen. Yeah. They're big aspects, but come on. Again, it's not like these companies are starving and struggling, sitting in Silicon Valley with whale oil lamps in the cold, trying to figure out what they're going to do next. If anything, they are the most well-resourced companies on Earth. I mean, California alone has, like, the 8th largest economy in the world. The companies themselves have the resources to do this. And so that's the other reason why they should be doing this.
Yeah. I think it's interesting to see philanthropists like MacKenzie Scott (formerly Bezos). She's giving away her money, and she's putting it into charitable foundations. And on the other hand, Jeff goes and buys another yacht.
Yeah. No. Exactly. I read once that Steve Jobs, in his lifetime, gave away something like 10 million dollars. Somebody asked him why he didn't do more philanthropy, and he said, “Well, my products are my gift to humanity”. And my response to that is: when I have to pay $2,000 for a Mac, it doesn't really feel like a gift. Not so much a gift.
And it's for the rest of us to demand that these business guys act like they're part of society. You know? They're not above the law. They're not below the law. But they're here. And they're with us, and we are with them, and respect is demanded.
That's very good. Well, I think we've covered all the questions. Is there anything else that you'd like to share with our audience? Maybe talk a little bit more about your book?
Well, I just want to say, Karen, I absolutely love that you're doing this series. And I have such respect for the ways that you have reached out to lots of different people doing different things in different places to really validate the concept of this series. So it feels like you're really walking your talk here, in terms of who you're bringing into the conversation.
And that has a lot to do, actually, with my latest book called "The Architecture of Blame: The End of Victimage, and the Beginning of Justice". And that book has to do with, in a lot of ways, what we've just been talking about, which is addressing that big imbalance that has been operating for a long time: in society, in religion, in economics, in markets, in politics where, rather than taking responsibility for the messes we make, we identify the most vulnerable folks that we can sort of parlay our guilt onto and punish them, destroy them, ruin their life chances, so that we can walk away and continue to do the stuff that we're doing.
And it's a sacrificial paradigm. I worked with the chief architect of sacrifice theory, a guy named René Girard, at Stanford when I was at Berkeley. And I have to say I stepped away because, even as a relatively young person, I could see there were a lot of directions that his thinking was going that I found very problematic. And it turns out Peter Thiel, one of the founders of PayPal, was very influenced by René Girard. Girard was very much in favor of finding those vulnerable scapegoats and dumping on them. Rather than powerful people fighting each other, holding each other to account, when we do something bad, we'll just blame somebody who doesn't have the wherewithal to fight back. And what I talk about in the book is this idea, which is very interwoven, I will say, into western society - and you could argue beyond that, but I wanted to keep some focus in the work.
One of the things that has happened in the last couple of hundred years is that the people who have always been identified as expendable - enslaved people, women, children, prisoners, foreigners - have asserted: my existence is not simply to be cannon fodder. My existence is not simply to make somebody else rich, or what have you. And so, starting in graduate school, I've been a student of social movements, and that's one of the things that social movements do. But there are lots of other ways that people have always resisted being put in those categories.
And I think the inflection point that we're at now with that dynamic is the realization that a lot of people these days seem to be just clinging to a version of society that is pretty oppressive for the traditional scapegoats. Right? Women, men, and children of color, poor people, differently-abled people, immigrants - fill it in. The people who have benefited, or think they've benefited, are very afraid of what would happen if that underlying privilege went away.
What I talk about in the book is that we often forget that those folks have also been victimized by the system. And it's usually in ways where, inside their families or inside their communities, they get victimized, especially when they're young or in vulnerable situations. And there's kind of a contract, right? If you ever want to hold power, you can't ever talk about that. But take that anger that you feel toward your own familiars, and unload it on a Trayvon Martin, unload on a George Floyd, unload on a gay kid like that beautiful, young, nonbinary child who was beaten to death in a bathroom in Oklahoma for being nonbinary (Nex Benedict).
And I, again, think that we're facing a kind of developmental stage in our humanity where we can understand that that model doesn't work - that it never benefited everybody, and that even the people who thought they were benefiting were always also caught up in a very toxic kind of dynamic.
So, like I say, I really don't write lighthearted romps. There's a lot in the book, but everybody who has read it and talked to me so far tells me I connected some dots that make a pattern visible.
There's a lot of folks out there that will disagree with me; that's fine. But what I hope is that there's going to be a lot of people that find themselves in the stories that I tell and the history that I look at, the religious discourses that I look at, and recognize themselves and feel affirmed, and see a way forward. People who have been on the receiving end of being victimized. And maybe also, people who turned around and did unto others, who can begin to see where their own pain came from. My target reader is someone who has been thinking and reading about how we got to this hyper-polarized and hateful place, and is interested in how all those pieces fit together.
So, I’m not going to lie, it's a big book. Listen. I was a Russian major, so I've read War and Peace, and I've read Anna Karenina. Like, we do long and serious. But I think it's a rewarding book, and I hope people will find their way to it.
Well, we will definitely include a link, whatever link you think is the best one to include.
I appreciate that, Karen. Thank you very much.
You're welcome. I appreciate you giving me 90 minutes of your time for a really, really fun and really, really interesting discussion. So glad we got to do this!
Yeah. Me too! Listen. You're amazing, and like I say, I'm so excited about the direction you decided to take, and to position yourself so that you can explore these questions, and just get the word out there, get people talking, because that's what we really need to do. Find each other, have the conversations, and then get ready and make the change.
That's the goal!
So thank you for the invitation. Fantastic.
Thank you, Mary. It's been so great talking with you!
Yeah. And we'll do it again before another 30 years go by.
Yes! Definitely. 😊
Interview references
Dr. Mary Marcel on Substack
Book info: The Architecture of Blame: The End of Victimage and the Beginning of Justice (Rowman and Littlefield, 2024) is available:
at Barnes and Noble: https://www.barnesandnoble.com/w/the-architecture-of-blame-mary-marcel/1144916080?ean=9781666944723
[1] “The Exploited Labor Behind Artificial Intelligence” (on the exploitation of labor for generative AI), Noema Magazine, 2022-10-13
[2] “A battle royal is brewing over copyright and AI: Beware the Napster precedent”, The Economist, 2023-03-15
About this interview series and newsletter
This post is part of our 2024 interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools or being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see post “But I don’t use AI”!
We want to hear from a diverse pool of people worldwide in a variety of roles. If you’re interested in being featured as an interview guest (anonymous or with credit), please get in touch!
6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read (and listen to). To automatically receive new 6P posts and support our work, consider becoming a subscriber (free)! (Want to subscribe to only the People section for these interviews? Here’s how to manage sections.)
Enjoyed this interview? Great! Voluntary donations via paid subscriptions are cool, one-time tips are appreciated, and shares/hearts/comments/restacks are awesome 😊