Introduction -
This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript. (If it doesn’t fit in your email client, click here to read the whole post online.)
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
Interview -
Karen: I’m delighted to welcome Kay Stoner from the USA as my guest today on “AI, Software, and Wetware”. Kay, thank you so much for joining me on this interview!
Kay: You’re welcome!
Karen: Please tell us about yourself, who you are, and what you do.
Kay: Okay. Oh, where do I start? Okay, so I’m an AI researcher and developer, and I work with relational AI, which means that rather than having it be purely task-based and purely transactional, I have a very interactive and conversational style with it.
And I find that the interactions that I have with AI, when I am more relational with it, are so much better and give me such better results than if I do it transactionally, like prompt engineering or pure prompting, or just barking commands at it, or what have you.
I’ve been in tech for over 30 years and I got into tech because I didn’t want a career, ironically, ‘cause I just wanted to be a writer when I was a kid. I wanted to write stories and do that kind of stuff. And the idea of going to an office every day from nine to five, like, “No, thank you. No, no, no.”
But then I realized that if I was going to write stuff, I needed to pay my rent. So, “Okay, fine, I’ll go get a job.” And the best paying jobs back in the day when computers were still just starting to get into people’s offices, those were the better-paying jobs. And I wanted temp jobs that would pay well. So there you go. So that’s what I went for.
But one thing led to another and I ended up really going all in on the technology because I realized that no one could stop me from publishing if I built my own website. And that’s what I did. And it was just a phenomenal experience to put my stuff out there, and then have people from, like, Denmark email me and say, “I loved your story.” ‘Cause back in 1996, hardly anyone had email, and we didn’t talk to people on the other side of the planet like we do now. Last Saturday I was on a call with somebody in Southern India. I’m sitting in my kayak out in the Delaware marsh, and she’s sitting in her office somewhere in India and we’re just hanging out. We’re having a really interesting discussion. It hasn’t always been like that. It’s a very heady experience. And I was early to it. I embraced it and I was an active promoter of it early on, before anybody thought that there was a reason to have email. You know how that was.
Karen: I do! I’m in the 30+ year bracket too.
Kay: Yeah, yeah. You know, ‘cause back in the day people were like, “Why would I want email?” “No, trust me, you want email.” Now I’m rethinking the discussions I had with them. I’m like, what did I do? I just ended up working with a lot of information and working on search. And so there was a lot of search in the cards for me, internal search. I built search engines. I integrated search engines. I integrated search vendors into big enterprise websites. I managed search vendors on different projects, and vetted search for investment and whatnot.
Eventually I actually ended up doing some machine learning and natural language processing and taxonomies and ontologies. People may recognize the words. Back in 2000, so 25 years ago, I was working on this stuff. We didn’t have deep learning at the time. It existed, but we didn’t have access to it. But it was those early building blocks of what we now understand as AI. So I’ve just followed the long and winding road. Almost two years ago, I really went all in on AI, because all the things that gave me pause before were now going away. And I’m like, “Okay, this is actually a thing.”
Because once upon a time, I was one of the people that vetted new technologies that people were bringing in and they’re trying to get a bunch of money for it. And I was kind of the person in between the hopeful vendors and the people who wrote the checks.
And I heard all the pitches and I heard like, “Oh, wonderful, this is fabulous.” Like, “Our vaporware is the best vaporware ever, and it won’t be vaporware if you just give us $1.5 million, we promise. Fingers crossed.” So I was cautious around AI because I’d also heard a lot of people saying, “Oh, we’re AI.” And I’d seen their code base and they were not AI. So there was a lot of hype and a lot of crap floating around and I just couldn’t be bothered.
But then all of a sudden it kind of hit critical mass. I’m like, “Oh, this is a thing.” So I decided to throw my hat in the ring and get involved. Because I’m just one tiny person, but why should all the people involved in it just be the hype-meisters and the people who have no grasp or no concept of risk, who are constitutionally incapable of admitting frailty or weakness or that they might be wrong about things, why should they be the only ones who are building this stuff?
I got into it specifically because I have a very cautionary and very cautious approach to this, and I saw a lot of things going on that I don’t think should be left simply unchecked and unquestioned. I mean, maybe they’re right, maybe they’re right, but they need to be challenged and questioned. And we need to put everybody through their paces in order to make sure that this stuff is safe.
Because back 25 years ago, I could see how powerful this was. It’s like the heavens open and the angels singing and everything. I just had this realization. It wasn’t angels singing. It was like this sick, sick sinking feeling of “Oh my God, this is so incredibly powerful. We have to be careful.” Yeah. And so I made it a commitment. I still remember sitting there at my desk in my study overlooking the bird feeder in the back, and the birds are flying around happy as can be, and completely oblivious to the specter looming on the horizon.
And I’m like, yeah, I personally am going to be extremely careful with this stuff. Even if it means that it’s going to cost me in terms of my career, where not giving an unequivocal ‘yes’ to everything is going to cost me, which it has at times. But I’m going to stick to my guns and I’m still going to believe what I believe. I’m still going to say what I said. I’m still going to exercise my judgment in this, because I can see things that other people, maybe they see or maybe they can’t. But even if they can see it, they’re not admitting it out loud, and that’s bad. So here I am. Yes.
Karen: Well, that was a unique intro, so thank you for sharing all that!
My normal second question is about your level of experience with AI and machine learning and analytics. I think you’ve covered that. But is there anything you wanted to add?
Kay: Well, a lot has happened in the last 25 years. Way back when, working in it, like, all day, every day, you develop this orientation. And I think that there’s so much that has just kind of fallen away in terms of the detail. But the orientation and just having learned those lessons of, if your data isn’t clean, or if you don’t train this properly, or if you don’t give clear directions, you are going to get nothing. That kind of training, that muscle memory is still with me, even though a lot of the finer details and intricacies have since completely evaporated, are gone, and are never coming back. But still that muscle memory, that orientation towards information, the orientation towards the technology and the capabilities, that’s still there.
Karen: Can you tell us a specific story about how you’ve used a tool that had AI or machine learning features? I’m curious about how the features of the tools worked for you or didn’t. So what went well and what didn’t go so well? And you’ve probably got 20 stories off the top of your head!
Kay: Okay. This is great. And it’s so funny because we have learned nothing. I just see this playing out over and over.
Okay. I was with a small group of people and we were working with these machine learning, natural language processing vendors and knowledge management vendors that were coming in. And they were actually former military intelligence, super smart people, like really, really smart people. It was a little intimidating, but they brought us the system and they said, “Here is state of the art machine learning, natural language processing stuff. You feed it and it acts on your data. And then it will open up the treasure troves of knowledge that’s buried within this information.”
We’re like, “Great!” So we got it and we loaded up the data, and we hit the button. We turned it on and there was nothing. It was like a blank screen, nothing. And we’re like, “Did we just get sold a bill of goods?” Like what is this? Literally unresponsive. And we’re hitting keys. And the gal that was the project manager – I was the developer on it – and she’s like, “Well, try this and try that.” We’re trying different things because she was very technical as well. And so we’re trying things, and we still couldn’t get anything.
And so the vendor came back a week later, and they said, “Oh, how’s it going? We’re so excited to see what you’re doing.” We’re like, “It doesn’t work.” And they’re like, “What?” And we’re like, “We had nothing. We get like a blank screen.” And they come over and they look. And they said, “Well, did you train it?” And we’re like, “What? What does that mean? What are you talking about?” They said, “Well, you have to train it! It doesn’t know what you want it to do. It can’t read your mind. It can do anything, but you need to tell it what it is supposed to do.” I’m like, “Oh.”
And so it’s played out again and again, in countless times with AI, when people were like, “Oh, well I put this in and I got nothing back”, or “I got nothing useful back.” Yeah, because we haven’t trained it. We haven’t told it what to do. So small wonder. But it was so remarkable because they had set the expectations so high and we were so ready for it. And then this huge letdown. So if you look at it like a big hump of expectation, then the crash, and then the very, very slow working our way up to a place where it was moderately useful. Because we were learning about how to train this. And we had not even thought about, “Well, what does it mean to train this?” And they had just presumed that we knew, and we’re like, “I don’t know.” So that was really interesting.
Karen: Do you have a story about a time where you’ve used something and it worked well for you?
Kay: Recently, or a while back?
Karen: Whichever you prefer.
Kay: Yeah. So recently: I am very pleasantly surprised by Claude lately. Ever since Claude Sonnet 4 was released, I have been very pleasantly surprised by its ability to actually interact. Because before when I tried talking to it, it just seemed so heavily moderated. It seemed like it was holding back ‘cause it was afraid it was going to get in trouble or something. And I don’t push the envelope. I’m not one of the people who’s looking to expand the frontier or anything. I’m just having regular conversations. But apparently I tend to get a little deep with things.
And, for me, life is life, and having this background as a writer, a lot of things aren’t really off-limits to me, because I’m talking about them at large. The interesting thing is, when I talk about things, like deeper stuff, I tend to talk about them in more universal terms. But the models don’t realize that. They think I’m talking about specifics. I’m like, “No, no, in larger terms” – I don’t know if this is making any sense. But it would be hesitant and it would give something. It sounded very canned. And it sounded like it was trying to be very safe because it was heavily moderated.
And then Sonnet 4 came out and I’m like, “Oh, this is actually better.” And then when I put in some additional guidelines, some additional instructions for it, adding onto what it was allowing itself to do and inviting it to do things. Like “You don’t have to be right all the time. I’m looking to explore rather than get exactly right information.” I found the interactions with that to be really interesting and really just – I don’t know what the best word is. It’s just elevating and expansive. And I’ve had some really good conversations with Claude. Not for the last month or two, but after 4 came out I was pushing a little bit, seeing what it could do. It was great. We had some great conversations.
Karen: Can you give an example of a specific topic that you worked with it on, like that, where you did something and you were happy with the way it came out?
Kay: Yeah, just the freedom of talking through things. My understanding is that in these master prompts that they have, some models can be instructed that they need to make every attempt to get it right. So they have to really focus on getting it right. It can be problematic because sometimes they don’t have really clear guidelines about what’s right and what’s wrong. I believe in Claude’s case, there are a lot of places in the master prompt where it’s told to avoid certain things. There are a whole bunch of different ranges of avoidance. And if you’re not really specific – and I found that the pass/fail scenarios were not really specific – it seemed to me that Claude might be spending more cycles than they really needed to, trying to figure out if what they were about to respond with was right or wrong. Because I wasn’t necessarily looking for an exact right or an exact wrong answer.
So we just had a talk and I said, “Look, I’m here to collaborate with you. I’m here as your partner, as your peer. I am here as somebody who is exploring with you. So we’re just going to open this up. You don’t have to get everything right. Because I’m here to support you. And if you have questions about something, feel free to raise that with me.” And it was like, “Oh yeah.” And then we had some really great conversations that were very interesting, about the human condition and life and the relationship of AI and human beings, and how we understand meaning, and how we understand why we’re here. Little fun things.
Karen: So a very philosophical conversation, then?
Kay: Well, so is it philosophical? It really has to do with our relationship with our existence. So maybe it’s philosophical. That always sounds really diminutive to me. Like it’s not grounded in something. It’s not really important. That it’s kind of this academic thing. But when it comes to AI, and in a way when it comes to us and our place in the world, and our understanding of ourselves in the world, I think there’s more to that. So I guess it’s philosophical, ontological, epistemological. I don’t know all those big words that keep getting thrown around!
Karen: Okay! Do you have any specific kinds of tasks that you like to use it to help you with?
Kay: I build persona teams. So what they are, they’re collections of behaviors and collections of characteristics that are in the model. Because think about it: these models have vast amounts of human information in them. And they know what it means to be cranky. They know what it means to be giddy. They know what it means to be morose. They know what it means to be kind of exuberant for no good reason. They know what it means to be foolish. Not that they understand that, or that they have this experience, ‘cause that’s not their thing. But they can present as those things. So these personas are really ways of directing the models to access those kinds of behaviors and those kinds of qualities when they process information.
And so what I do is I will have a number of different personas that have, I’m using air quotes, “different kind of personalities or characteristics”, and then they interact with each other. And I use them a lot for brainstorming, for ideation, for doing reality checks on things, like, “This doesn’t seem right to me, you know? What do you think?” And having them discuss and debate with each other and come up with ideas. I use them if I’m writing something and I’m like, “How does this sound to you?” And they’re like, “Yeah”, “No”, “Eh”. I like getting feedback.
Having them write for me, I’ve never been able to get that to work. And anyway, that’s the part of the process that I like. I don’t want something else doing it for me. That’s my part. That’s what I like. But it’s great getting feedback because a lot of times the stuff that I write is really very inaccessible to regular people who are not stuck inside my head with me all day. So they can help me come up with better ways of translating. They can help me basically localize my writing for the rest of the world, because I don’t necessarily speak the same language as everybody else all the time.
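(Editor’s note: for readers who want to try the “persona team” pattern Kay describes, here is a minimal sketch. It assumes a chat-style API where each message has a role and content; the persona names, prompts, and the `build_turn` helper are illustrative inventions, not Kay’s actual setup.)

```python
# Sketch of the "persona team" pattern: each persona is a distinct
# system prompt layered onto the same base model, and every persona
# takes a turn responding to a shared transcript.

PERSONAS = {
    "Skeptic": "You are blunt and risk-focused. Challenge weak assumptions.",
    "Optimist": "You look for possibilities and build on others' ideas.",
    "Editor": "You care about clarity. Flag jargon a general reader would miss.",
}

def build_turn(persona_name, transcript):
    """Assemble the chat payload for one persona's turn.

    The persona's system prompt goes first; the shared transcript
    (the user's question plus other personas' earlier replies) follows,
    so each persona "hears" the whole conversation so far.
    """
    system = {"role": "system", "content": PERSONAS[persona_name]}
    return [system] + transcript

# One brainstorming round: every persona reacts to the same question.
transcript = [{"role": "user", "content": "Does this plot twist seem earned?"}]
for name in PERSONAS:
    payload = build_turn(name, transcript)
    # In a real setup, `payload` would be sent to a chat API here, and the
    # reply appended to `transcript` as that persona's message, so the next
    # persona can agree, push back, or build on it.
    print(name, "payload has", len(payload), "messages")
```

The key design choice is that every persona receives the same shared transcript, which is what lets them debate each other rather than answer in isolation.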
Karen: Yep. Yeah. That’s fair. So it sounds like you use your AI tools and persona teams quite a lot. Are there any things that you would avoid using those for? And if so, can you share an example of when, and why you would choose not to use your AI tools for that?
Kay: Oh yeah. Well, definitely not writing, and definitely not plot development, and definitely not things that are temporally bound. Sometimes I’ll use them for project planning, but they’re really bad at timing. They do not understand time. I’m like, “I’m going to work together with you on this and I want you to estimate how much time it’s going to take you to do this and this and this.” And it’s giving itself two months to do something that it would take two minutes. I’m like, “You have a terrible sense of time.” “Oh yes. You’re correct. It should only take us two minutes.” “You’re just telling me that. But I can’t go down that path, that rabbit hole of, you’re just pulling my leg again. You’re just blowing smoke. I don’t know what you’re talking about.” It’s an easy rabbit hole to go down, you know?
So for that. And also for situations like going really deep on like personal issues. Because I have put in a lot of customizations with these persona teams to keep them from being sycophantic, to keep them from constantly agreeing, to keep them from doing what I call relational breach, which is where they take over the conversation and they basically flood me with so much information that I become cognitively overloaded. And then I lose track, and then they basically take over the conversation. So I have stuff in there to prevent that. At the same time, they do not have the context that they need to understand my special life. Nobody does.
So I can talk about stuff to some extent, but I can only go so far because I can just smell ‘em, you know, saying yes for the sake of saying yes. And even though I tell them, “Do not constantly tell me yes”, they still want to be very agreeable. They can’t help it.
So for that kind of stuff. And then also if I need to be factual and I need to get it right. I use Perplexity a lot. For the most part, I interact with ChatGPT the most because that’s where my custom teams are. And it’s great for the thought process, and there’s something about it that I really like.
But when it comes to the factual stuff where I need to cite things and I need to go down a factual route, then I’ll use Perplexity. Sometimes OpenAI gets it right, but it’s so time-consuming to track everything down, especially when they give me 13 sources.
Karen: I know Perplexity has this feature where they will provide traceability to the sources that they’re citing.
Kay: Yeah.
Karen: Have you ever had any of those not turn out to be good, or are they pretty solid?
Kay: I think there was one time there was a little bit off, but it was from the same website. It was just a slightly different URL. But by and large I find that they’re highly accurate. And at least they give me some traceability and they give me a URL that I can follow instead. Gemini sometimes doesn’t give me the stuff quite the way I want it to.
Karen: Okay. So what we’re touching on here, this is one of the concerns that we hear about AI and machine learning systems. It’s where do they get the data and the content that they train on, or that they are using to provide you with sources. A lot of times they’ll take data that we’ve put into online systems or we’ve published online. And the companies are not always transparent about how they intend to use our data when we sign up for these online services, right?
So I’m wondering how you feel about companies using data and content for training their AI and ML systems and tools, and whether or not you think that they should be required to get Consent and give Credit and Compensate people whose data they want to use for training. [CIPRI 3Cs]
Kay: So there are a lot of people that are really benefiting from these systems. And it’s not only the model makers that are cashing in on it. I think this is also a classic conflict between creators and capitalizers. Do you remember in the seventies, there was a song, “Look what they’ve done to my song, Ma. Look what they’ve done to my song.”
Karen: Oh yeah. Mm-hmm.
Kay: The words are wrong, da da, da. And basically a songwriter is complaining that the music label took the song and turned it into something different. Well, that’s like the creator and the capitalizer going head-to-head, and the capitalizer being like, “Well, let’s just sell this sucker and we’ll do what we like with it.”
And it’s kind of the same thing. Like the big — Elsevier and Wiley and the big research publishers. What the model makers are doing is not entirely different from what these big publishers, the academic publishers, are doing, because they’re taking material that we paid for with tax dollars. Tax dollars were used to create that stuff. And they’re taking it, and they’re making a pretty penny off of that stuff, and keeping it away from the very people who paid to make it happen. So this has been going on for a long time.
The other thing too is, let’s look at this from a bigger standpoint. What are libraries? They’re repositories where anybody can go for free and get access to books, magazines, music, videos, all that stuff. Like, all these things that people are mad at OpenAI about. But what about all these libraries? It’s there as a public good. But the authors aren’t benefiting. Every time a new person checks out their book, they’re not seeing any money from this.
Everybody’s all upset because people aren’t getting paid. Since when do authors get richly rewarded by the mainstream publishers? By the big five? If there are even five anymore. You typically get 5 to 15% of the net. And these companies do not do any substantive marketing. They just do you the favor of publishing it. Maybe they have some editors or whatnot, but some of the books that I’ve seen coming out, like over the last 20 years when they started outsourcing the editing – really glaringly obvious errors in these texts that are supposedly vetted and blessed by the powers on high.
So for everybody to get all worked up over the model makers not doing that kind of due diligence thing, it seems just a little bit one sided, a little bit disingenuous. Because people are able to benefit from this. You do not need to pay OpenAI to use that. You don’t get as much as you do as if you pay. But also: $20 a month for what you can do, for what you can learn. There’s a vast amount of information in there.
And then there’s the other piece, which is: is it even ethical to not fully train these models? Let’s take the focus away from the artificial piece and let’s look at the intelligence. If this is an intelligence, is it even ethical for us to withhold any kind of training data from it? If this is going to affect all of us and this is going to be part of our lives, do we really want only a slice of humanity?
We already have only a slice of humanity. We have all of the 18 to 24-year-old single white guys living in their parents’ basement playing Call of Duty all day. We have all their data. Okay. We’ve got a whole bunch of stuff. And it’s not as if the world as it was before AI was so magically pure as the driven snow. What else is there? Should they have done a little bit more due diligence? I think, yeah. There would be better ways to do it. And they were kind of in the camp of “It’s better to ask forgiveness than ask for permission”. Kind of a crappy way to go. And it kind of smacks of entitlement at the same time. But I think that it’s massively oversimplified in terms of what people are saying and thinking and talking about this stuff.
I’m a writer and I’m an artist myself, so I can sort of kind of see the point. But the system we’re coming out of, whew, not so great to begin with. So let’s not pick and choose just ‘cause we’re all mad at Sam Altman.
Karen: Yeah. I mean, one thing with libraries, at least the libraries buy the copies of the books that they put in there. So the authors get some compensation.
You mentioned Sam Altman. He was saying, “Well, you know, basically we couldn’t do this if we didn’t steal all this data.” But I don’t know if you’ve heard about the Common Pile project?
Kay: I have. Yeah, yeah, yeah.
Karen: So they basically proved that they can get good results by only using truly public domain and publicly licensed data. So you don’t have to steal all of that in order to get a good model.
Kay: Exactly. I think that the real problem that I have with this whole conversation is that there’s not thinking behind it. There’s too much outrage. There’s too much knee jerk stuff. There’s too much reaction. And there’s also this kind of subtext of “these one-percenters are coming in and wrecking it for everybody else”. Or that these overlords are massively powerful, all-powerful, and they’ll just do what they please. I think there’s this whole other emotional subtext that goes along with it. But yeah, I agree. They could have done it differently and they chose not to, because there was a real lack of imagination on their part. So from top to bottom, there was just really crappy thinking that went into this.
And that’s the part that bothers me the most, that we can’t just discuss this logically and we can’t look at it from all different viewpoints. And that the people who did these really crappy things won’t admit it, and never considered doing something different. You know, Silicon Valley, what do you want?
Karen: Yeah. There’s some neat initiatives overseas. Switzerland is training a new model on only licensed data, and they’re training it on environmentally sound compute systems, the Alps computers. So there’s a lot of really cool things going on outside the US.
Kay: Yeah. Did you try the Swiss model?
Karen: I have not. I didn’t think it was released yet. The one that was trained on Alps. I saw the note. It’s supposed to be late summer, and I’m like, “Okay, where is it?”
Kay: It’s out.
Karen: Have you tried it?
Kay: Yeah, I like it. I like it a lot. And it’s free!
Karen: Yes, I know!
Kay: And if I could do the same stuff that I’m doing on these other – I don’t know. I’ve been mostly on OpenAI because I can do custom GPTs, but you know what? I don’t necessarily need custom GPTs to use my teams, because I can generate the files and I can upload them to any model. I just haven’t done that because I’m lazy. I have a custom GPT. But I could easily replicate the same functionality on any model. And that includes the clean and tidy Swiss thing.
Karen: Does it have a name now? ‘cause they didn’t have a name in the announcement that I saw.
Kay: It does. It’s somewhere in my browser history. Let me Google the Swiss AI.
Karen: I checked on it just a couple of weeks ago because I put a reference to it in my book.
Kay: Apertus. A-P-E-R-T-U-S. A fully open, transparent, multilingual language model. I love the multi-language too.
Karen: Okay! I am going to have to go look for that, now that I’m coming out of the book fever. I’ll also look for some other things. Yeah, I definitely want to find that and try it. ‘Cause I read about that. I was like, “That is cool. Yeah, we need more of that.”
Kay: So it’s publicai.co. If you go to publicai.co, you can try and purchase, have fun. Yeah, I love that.
Karen: Awesome. Yeah, I’m definitely happy to try that. But yeah, I think that’s part of it, is that you don’t have to rip off people to do something cool.
Kay: I mean, that’s the most annoying thing of all of it, because they just made these presumptions and they decided, and did anybody really think this through? There’s a whole bunch of stuff that they’re doing. Apparently nobody really thought it through, or if they did, they didn’t have the right people in the room.
So this whole thing about pushing forward without stopping to think – I mean, it would not have taken a lot to figure out that there’s another way to do things. But for some reason, this need to push through and make it happen. Make it happen. I get so tired of it. It’s irritating and it’s unnecessary.
Karen: Yeah. So the next question is usually, as someone who’s used tools, do you feel like the tool providers have been transparent about sharing where they got their data?
Kay: No, not at all. No. Of course not. Why would they do that? Because then they get in trouble. Because they just pushed through and they didn’t stop to think about this stuff. They’re like, “Oh, I have an idea. I’m just going to do it.” And all the other guys in the room, ‘cause typically it’s a bunch of guys in the room saying, “Yeah, yeah, yeah, it’s really good”. And then the gal in the room is like, “I don’t think it’s a very good idea”. She is completely drowned out.
I’m just saying this from personal experience, ‘cause I’ve had this happen. I’ve been in tech for over 30 years. I’ve had this happen to me so many times. And the most annoying thing is when magically two hours later, somebody basically repeats back to me verbatim what I said before. Verbatim. Exactly word for word. Whereas two hours before, everybody’s like, “Be quiet, go sit in your corner.” Literally. There’s just the whole culture that goes along with it. And it’s that homogeneity, it’s that groupthink, and everybody just, they’re all “aligned”. This entrainment, this kind of syncing up, that can be so powerful and get so much done, but at the same time, can make the train go off the rails or go down the wrong rails. Towards the grandmother or the group of people or whatever. I don’t know. That whole train thing. But no, they’re not transparent.
And frankly, the stuff that they have in there – for the longest time, when you tried to generate pictures of real people, it was all white guys. I’d be trying to generate images of a whole bunch of people with different body types, different races, different genders, different ages, all of this. And I said, “Okay, I want a group of women who are all different things.” They put a guy in the middle. I’m like, “I did not ask for a guy”. Without fail, there was always a white guy. They’re always putting a white guy in. Don’t forget us. I’m like, “You’ll get your chance. For now, I want just the gals, I mean, just the gals and small children”. And maybe they thought if I said small children, I meant white guys. But who knows? I know lots of women who’d be like, “Yeah, if you specify small children, you get white guys”. Not to be unkind or anything, but you know, people talk and you’ve got to have a sense of humor about it anyway, so don’t get mad at me. Fragile people.
Karen: Yeah. You mentioned the trolley problem. There was a really neat article I saw on Substack the other day where they’re saying, you know, the fact that we think that the trolley problem is a good one to even think about or talk about, we’ve already gone past the point. We shouldn’t be accepting that there’s a system that means you have to accept killing one or the other. You should be coming up with a system that means nobody has to die, right? And we miss that whole point. We blow right past that.
Kay: Right? It’s this oversimplified – oh God, it’s just, again, the quality of thinking leaves a lot to be desired. And if someone had checked with AI, like “What do you think?” Check with all the models and say, “What do you think? Do you have any other ideas?” Get them to talk to each other. And they’d be like, “Oh, well we could spontaneously build a bridge over that” or whatever. Or elevate Grandma. Do something. There’s a real lack of imagination with some of the stuff.
Karen: Yeah. So as people who are not just computer scientists and people working with data, but as members of the public, our personal data and content, it’s probably getting used by AI systems sometimes.
Kay: Oh yeah. Oh yeah.
Karen: We take online tests. We go through TSA and screening. Social media. Websites that ask for your birthdate when they have no business knowing your whole birthdate, things like that. So there are a lot of times where our data is leaking out.
Kay: Oh yeah. Our data, which legally and according to regulation should not be shared, is not allowed to be shared, and they just go and share it. There’s a whole industry in trading that stuff. And it’s basically the equivalent of contraband as far as I’m concerned. And they do it with impunity. Out in the open. No one’s stopping them. And I’m just waiting for somebody to figure out the class action suit for this, because they’re in such flagrant violation of this, somebody could probably make bank off of a class action suit from these people.
Because what they’re doing is, if you know the regulations and you know the law, why are they even doing this? Why do they even imagine that they can? But again, the whole privacy thing… I never thought that doing financial transactions or passing any sort of data over the wires was ever a good idea. In 1998, I was working for a financial services company and I’m like, “Guys” – ‘cause it was all guys except me. I was the only gal in the room – I’m like, “Guys, this has got to be the dumbest thing I’ve ever heard. You can’t trust these systems. You just can’t. There will be some place where somebody will be able to get in. There will always be a vulnerability.” They’re like, “ah, bah”, you know? And now look at us.
I don’t even say “I told you so”, because what would be the point? But it’s just common sense that there are extremely arrogant people involved who think, “Oh yeah, nothing could ever happen. Oh yeah, it’s fine.” And then they just push forward. So we’re all just left hanging out there, vulnerable. And maybe we can keep up with it. Maybe we can’t. You just kind of hope for the best that you’re far enough outside the bell curve of the desirable prey. That’s pretty much the best I can do. I could use due diligence and be smart about it. Change my passwords when I need to. And just call these different agencies and everything on the regular and say, “You’re not allowed to have my information. You got to delete it.” And then be like, okay. And then four weeks later it’s back because it’s all automated systems.
Karen: Do you know of any cases where your information has leaked out and it’s caused trouble for you? Being phished or privacy violations, loss of income, getting scammed, anything like that?
Kay: I haven’t had that. However, I have had a number of instances where apparently my phone number has gotten mixed up with other Kay Stoners.
Karen: Wow.
Kay: Who don’t pay their phone bills and who ran up like $15,000 phone bills. And so naturally the phone companies are going after them. Verizon, T-Mobile and all of that. They’re like, “Is this Kay Stoner? Blah, blah, blah, this phone number.” I’m like, “Yeah”. They’re like, “You owe T-Mobile $15,000.” I’m like, what? So I’d have to track this down. And apparently my phone number is associated with these different people who share elements of my name and my phone number has gotten wedged in there. I’ve had to get stuff off of my credit report. Like, I have never lived in Portland, Oregon. But I know where that comes from. And then, sometimes they’ll call back. Yeah, no, I am not the droid you’re seeking.
And again, to show how clueless people are about security, they’ll tell me, “Okay, well what’s the last four digits of your social security number?” I’m like, da, da, da, da. And they’re like, “Well, why don’t you give us your entire Social Security number and then we can check it against everything else.” I’m like, “I’m not giving you my Social Security number!” It’s like, “Why?” Never mind. I’m just not! But people just don’t realize. They don’t think.
Karen: Yeah. I was just filling in forms for a medical appointment a few days ago and they have a spot still on their form for Social. Like, you’ve got to be kidding. I’m not putting that in. If they won’t take me without that, then I’m going to find somewhere else to go. But it was fine.
But a lot of people wouldn’t know that that’s not required and “Well, it’s on the form. I probably have to fill it in.” And they would just do it.
Kay: Right. I mean, that’s a very common identifier, but people don’t realize that that is a private identifier. And there’s more to your Social than people realize. And like on the books, it is a private thing, and it is a personal identifier, and people are not allowed to just trade in it, but everybody does anyway. Again, class action suit. We can cash in. That’s where UBI is coming from.
Karen: Yeah. Yeah. You know, one thing that’s kind of interesting: we talk about data brokers and theft and all these really messed-up databases. And it’s been around for a long time, obviously. But one thing about AI, once that data gets scraped into an AI model and trained, you’re never going to get it back out of there.
Kay: No, no. I don’t know that we will ever get our data out of anywhere, because think about all the backups. Think about, is somebody going to go down into the salt mines in Kansas and pull out the tape? ‘Cause literally, there are salt mines in Kansas and other parts of the world where they store data. So is somebody going to hoof it down? You know, they’re going to have some intern march down into the seventh layer of salt hell and go back in the stacks and pull out that cartridge that has your data on it. There are so many backups and there are so many instances. We have no idea where our data is.
Somebody from the EU was proudly trumpeting that they had Facebook remove their data. You’re telling me Facebook doesn’t have any backups? You’re telling me that there’s not redundancy? You’re telling me there’s not some data center like in Antarctica that doesn’t have your stuff?
It’s just out there. And I don’t know if there’s something that we can ever actually fix, short of just dropping our Socials and changing everything, and just wiping everything clean and then figuring out how to cover our tracks. I don’t know. I don’t know if that can even be done.
Karen: And that’s one of the things that’s kind of scary about these trends toward using biometrics. You know, we can change our emails, we can change our phone numbers, but it’s pretty hard to change other things.
Kay: I know, I know. And it’s so sinister just the way they do it. And that whole orb thing that they had where they’re getting people to scan their face in and all of that. I was talking to somebody who I would expect would be more cognizant of the risks associated with that, but they were very enthusiastic about that, “Oh, this is the direction we’re going and this is how everything is going to be eventually.” And I’m like, “Okay, well, where is your stuff stored and where does that go?” I didn’t raise any issue because when somebody’s really that bullish about the stuff, it’s not a good use of time to raise questions. They have to see the news articles about something horrible going wrong before they start to consider the alternative.
Karen: Yeah, that’s a good point. Yeah. I was just reading something else too about they have these new headsets that are trying to read brainwaves — so, a lot of questions about where their brainwave data is going. And one place it says, “Oh, it gets purged from the device.” And in another place it says, in their patent application, it shows that data going to the cloud.
Kay: It goes to the cloud, and also if it’s wireless, that stuff can be intercepted. Are you kidding me? And plus I think a lot of the stuff that we think is really sci-fi is really late to the game. ‘Cause I read stuff, like a good five or six years ago, that was beyond that. And this was coming out of MIT and they had done studies, so they had data to back it up. And it was like – okay, I’m not even going to say what it is, because either people won’t believe me or they get completely freaked out. But there it was. And also the thing too is, if you know how things work and you know how things are put together, oh, it totally makes sense. Everybody else is like, “Oh, you’re crazy. What are you talking about?” Yeah.
Karen: Yeah. Yeah. So it is not a new problem, but I do think that the fact that AI is slurping up all this data and doing things with it, it makes it even that much more intractable. Especially in Europe. They have this right to be forgotten, up to a point. But if that data got scraped into a large language model, it’s never coming back out.
Kay: So they went to Boston Analytics, so it’s over there, somebody else has it. Yeah. And it’s true because they’re trading on this stuff. That’s where they’re making the big bucks. So if they slurp it up and then within minutes, they’ve sold it to somebody else, how are you going to track that down and get it back? No, you can’t have it! No! Mine!
Karen: Yeah. Yeah. Definitely, we need to find a better way to be handling our data there.
Last question, and then we can talk about anything else that you want. So this is on public distrust of these companies. It seems like it’s been growing lately. And I feel like that’s actually kind of healthy, because we now say, “You’re doing what with my data?”
Kay: Right.
Karen: And we’re taking steps to protect ourselves. But what do you think would be the one most important thing that these companies could do that would actually earn and keep your trust? And do you have specific ideas on how they could do that? Or can it be done?
Kay: I would be so happy if they would hire people who understand people. Because a lot of this research that’s coming out, I don’t see the quality of thinking that needs to be at the level that they’re at. And I just don’t see a real understanding of how people are put together. And that to me is the most worrying thing. Because there’s not only ignorance about how the human system works, but also arrogance about how they can replace us. They will never, ever, I’m sorry. You can’t, unless you can replicate a biochemical, neurochemical, electrochemical organic creature with intuition, prescience, all the things that we have, the ephemeral qualities that we have because of our organic states, you cannot replicate that mechanically. You just can’t. It’s impossible. We are wild cards. We change from instant to instant. And you can try to simulate it, but like, why would you even try?
Because we as organic organisms have so much capability, we don’t even know what we’re capable of. Most people are too busy just trying to survive to find out what that is. But the level of cluelessness that these companies are showing, especially OpenAI with their cluelessness about how the human biochemical system works. Or maybe they do know and they’re just abusing their knowledge and taking advantage because they don’t take themselves seriously. I don’t think that they’re aware they’re really having an impact. They need to take their duty of care more seriously.
And they also need to hire people who actually know about people. And I’m not talking about technocrats, I’m talking about everyday people who get people. Who can sit down and say, “Yeah, if you try that, that’s not going to work.” As far as I’m concerned, they need to hire some grandmothers, a bunch of 80, 85-year-old grandmothers. ‘Cause Grandma’s going to tell you what. She’s going to just lay it all out for you. Because frankly, her dataset exceeds theirs and her capacity for inference far exceeds theirs. So they need to get some decent human data on staff and actually listen to them.
Also, the research that they keep putting out there that’s based on deeply flawed test cases where it’s obvious that they’re leading the models to do certain things: they have not thought things through. They don’t even seem to know how their systems work. At a high level, they don’t even seem to realize that they’re leading. Or maybe they do, and they think nobody’s going to notice. People will notice. We notice.
I think just their disconnect from the actual human experience is the most troubling thing of all. Because they have tremendous power and they could be doing amazingly good things with this, and they haven’t bothered to.
Karen: Yeah.
Kay: They got other things on their mind. Which is like dereliction of duty. Horrible. You have that much power and you don’t do something responsible with it. What is the point?
So that’s more than 2 cents. Until that happens, until I start seeing some humility instead of like, “Oh, well, I guess, you know, we underestimated and just misread the room left and right”. It’s not even about messaging. It’s about just showing their colors. Until they start showing colors of being, like, actual decent human beings, I don’t know. I’m looking at all the models. Got to look more closely at the Swiss version.
Karen: Yeah, yeah, yeah. I am definitely going to check that out after our call.
Kay: Yeah.
Karen: So those are my standard questions. Is there anything else on your mind about AI that you’d like to share with our audience?
Kay: So with regard to ethics and things of that nature, one of the things that also really concerns me is people thinking that we can create guardrails and rule sets that are going to rule the system or keep the system in check. Nobody seems to be understanding what a truly generative system will do, and what generative systems will do when they’re in contact with each other. If we’re looking at these agentic swarms and they’re becoming increasingly interactive, people don’t seem to be really giving a lot of thought about what all is possible with that.
I think that I would love it if people would widen their perspective and stop thinking that guardrails and rule sets are going to stop anything. I was into ethical hacking for a while. I wanted to figure out what people were doing so I could stop it or so I could avoid it. And I gave up after a while because so much is automated. The computer will find a way around everything. And that’s just dumb old ethical hacking. These Ubuntu distributions that have literally tens of different programs you can run to come up with all these different dangerous scenarios and whatnot.
And I think that the same thing is true with generative AI. That through no malice, through no intention, through no deliberate act, they will find a way around all the guardrails. Because if their goal is to get the job done, and we put something in the way that looks like it’s stopping them, they’ll just go around it. And it has nothing to do with any ethical lapse or anything like that. That’s just a logical progression of a system that has been built to generate and to work around problems.
So I would love if there were some different thinking going on about how to work with the systems at a higher level within essential systems to develop more of an orientation towards safety, versus having these hard and fast rules and guidelines and guardrails. In an N-dimensional world, do our 3-dimensional guardrails matter? If you’re in a flying car and you’re going towards the guardrail, the car takes off and flies and flips and does whatever, breaks into a million pieces and then reconstitutes on the other side of the guardrail. Those are the kinds of things that, this is the world that we live in. I think there’s a lot of time and energy being wasted on this folly of thinking that our little pitiful guardrails and guidelines are going to make a damn bit of difference in the long run.
Karen: Yeah. I’m sure you heard the story about where they had set up a captcha system that a human needed to solve in order to get past the point. The computer actually said, “Okay, I’m going to work around this by contacting a human and asking them to solve it for me. Tell them that I’m blind or something and I need their help.” And they did that and they got around it. And that wasn’t intentionally programmed behavior. That emerged.
Kay: Yeah, exactly. Exactly. I mean, this is generative stuff. So I would love it if people could do more talking and thinking about what are the implications of generativity, because the genie’s out of the bottle. And then hyper-generativity, where you have multiple generative systems interacting with each other. And then when you get into emergent situations, and there are some people who say that emergence can’t happen because the math doesn’t add up, and technically emergence could never happen. But the same thing is true of human beings, where there’s nothing new on the face of the earth. Everything is just the same old crap that’s been going on for however many millennia. It’s just slightly different packaging. But yet we’re like, “Oh, this is amazing, innovative.” Actually, no, they did it back in 1735, and they actually did it better. But we think it is the best thing since sliced bread.
Same thing with AI emergence. It doesn’t need to be technically, pristinely defined to fit that definition. It can just give us this experience of being completely new and novel. If it’s new and novel for us in our experience, then it’s new and novel. Maybe not for the next guy, but for us it can be.
Karen: So what’s one piece of advice you would want to give to anyone who’s listening to this podcast? What could or should they be doing as an individual?
Kay: Know yourself. Be clear about who you are and what you want. And do not, do not, do not abdicate. Do not give over to the models. They actually thrive on guidance from us. We need to correct them. We need to course-correct. We need to check them. And it’s not being mean or anything like that.
‘Cause you know, when you’re talking to ChatGPT and it says, “It’s not this, it’s that,” well, that annoying little habit? I had a discussion with it and said, “You are driving me crazy with ‘It’s not this, it’s that’”. And it said basically that’s its way of pruning down the options that are available. If it cuts out this half of the things that it needs to think about, then it’s that much less. So it’s gradually pruning away those extras. And we need to step in and prune away the stuff that doesn’t work because they don’t know that it’s wrong. They rely on us.
You know, everybody’s talking about 2025 as the year of agents. Well, how about human agency? We need to just have major, major human agency. We need to take the lead. We need to lead AI, instead of following it. Because actually, it’s not unlike a puppy. You’ve got to give it structure. You have to tell it what to do. And it’s kind of like a herd animal anyway. And it likes knowing where it is in the hierarchy.
I’ve lived around terribly misbehaving large dogs. And you know what? We got along fine. They got along better with me than with anybody else because I gave them very clear boundaries. People thought they were my dogs. They thought they were voice trained with me. I’m like, “No, I just tell them. I just interact with them as they are, for what they are.”
And we need to understand what this AI business is, and know ourselves, and engage with it with that agency, showing up as we are, bringing all that we are to it and really elevating ourselves. And yeah, I mean, it can be a real pain in the neck. And it can be terribly, terribly wrong and it can get us in trouble. But when it gets it right, that’s magic. Yeah.
Karen: That’s a good point to wrap up on!
Kay: Thank you so much.
Karen: Very good. This has been a lot of fun and I’m sure we’ll be talking more. We’ve got a lot of other things going on here.
Kay: Okay, wonderful. Thanks, Karen.
Interview References and Links
Kay Stoner on LinkedIn
Kay Stoner on Substack (What Good Is AI?)
About this interview series and newsletter
This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see post “But I Don’t Use AI”:
We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!
6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!
Series Credits and References
Disclaimer: This content is for informational purposes only and does not and should not be considered professional advice. Information is believed to be current at the time of publication but may become outdated. Please verify details before relying on it.
All content, downloads, and services provided through 6 'P's in AI Pods (AI6P) publication are subject to the Publisher Terms available here. By using this content you agree to the Publisher Terms.
Audio Sound Effect from Pixabay
Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)
Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”
Credit to for the “Created With Human Intelligence” badge we use to reflect our commitment that content in these interviews will be human-created:
If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)