Introduction - Natalie Phillips
This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript.
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.

Interview -
I’m delighted to welcome Natalie Phillips from the UK as my guest today on “AI, Software, and Wetware”. Natalie, thank you so much for joining me for this interview! Please tell us about yourself, who you are, and what you do.
Natalie: Oh, thank you so much indeed, Karen. It's a pleasure to be on the show with you. Just to let everyone know, I am a marketer with over 20 years' experience in the digital arena. I've literally grown up on the internet. I remember the days of MySpace and chat, when it was all new and exciting. When actually using Word was a big thing. The pre-Internet era before eBay became big. And Amazon. And you still had 'snake' on your phone.
So yes, I'm ancient according to generation Alpha. I'm one of the original millennials. However, it also means that I've got a bird's eye view almost on how the Internet's grown up. And I dare say where we're at the moment, it's almost the adolescent stage. What we're seeing now with the rise of AI, the use of deep fakes, the way that social media is defining us all, it's an incredibly exciting time. It's also very nerve wracking. So when I started mouthing off about this on Substack where I do my writing, Karen very kindly said, "Why don't you come on and discuss this with me?"
So just to give you a bit of background about my technical aspects and how I'm seeing it affect my own life: I do marketing on a big scale. In my day-to-day business, I run my marketing agency. And I look after some of the world's top brands, and I do this through email marketing, which requires one-to-one mass personalization. That's marketing speak for making sure someone gets the right message at the right time without sounding like a creepy stalker. And I love what I do. I absolutely love being able to do it in an ethical way.
And in my Substack I write fantasy and fiction, and I interview people for that. And I'm seeing AI and the internet in general transform both areas in different ways. So I've got two different perspectives here. And Karen, I know, has a few questions, so please ask away, Karen.
Karen: Thanks for that introduction. That was a good overview. So you talk about the internet as a teenager. That's funny. I think there are some parts of it that are still in the toddler stage!
Natalie: Quite possibly. Quite possibly. But that's the human race in a nutshell for you. Some of the parts will never grow up!
Karen: Well, that may be true. No, I was just thinking more of having different ages. Do you have children of different ages?
Natalie: I do. I have got one in primary school and one secondary. And I've got my husband who adamantly refuses to grow up ever.
Karen: Yep. So you've got the whole spectrum there. At some point I do want to talk about your work experience in using AI for tailoring to people in a non-creepy way. But I would also like to hear any insights that you have as a parent and as an entrepreneur about how AI is affecting your life in these different areas. Do you want to say a few more words about your Substack blog? You are not writing about the marketing there. You're writing, I believe it's Cornish fantasy?
Natalie: Yes. So just to explain, because if people try and find me off the back of this interview, it's going to be very confusing: I keep my life on Substack quite separate, if you like, from my marketing work. If you want to find out who I am, go look me up on LinkedIn or go to my website, which is write as in writing, hyphen click hyphen sell dot com [write-click-sell.com], and you can see it there.
If you want to know about my fiction, I publish under "Plotted Out" and I do serialized fiction, and I'm co-authoring a novel with another author. So that side of things is really exciting for me as well.
And even here we're seeing the very sharp end of AI. And the reason for that, the reason why AI in some respects is going after the creative arts, is because it's one of the areas where you don't have to be accurate. You don't have to worry about hallucinations. You can create everything. And there's a massive freedom in that.
There's also a massive disparity. Because most authors, most artists, most singers, most writers and yeah, most actors too, are not rich. So being able to steal or plagiarize their work on a massive scale, as we're seeing the AI engines do, is reasonably easy.
And also imitating is incredibly easy, because AI is pattern matching. That's pretty much what it is: a mathematical pattern-matching algorithm, which you can run as a system. And the fact that you don't have to be able to code makes it accessible to anyone. And that's its power. It's going to transform things almost as much as the printing press did in our world.
But at the same time, it could very well break some of the industries that we rely upon as humans to keep us accountable and human. And that for me is its biggest danger. It could break the world we know in ways we haven't even thought of yet.
Going back to the artists and writers, what I'm seeing are two very distinct lines of thought here. There are those who go, "Yay, AI, let's use it as a shortcut". And I've seen some beautiful things created with AI. I've seen artists and graphic designers, for example, using AI to fill in details, or blend in the seasons, or do technical artistry they couldn't have done before.
The flip side is I'm seeing a huge amount of AI, for example, at Amazon, and it's being used to manipulate the algorithm. It's being used to try and control Kindle. It's being used in deep fakes. And of course it's being used in plagiarism. I'm sure you can think of examples. "Write me a sonnet like Shakespeare". "Rewrite Hemingway." All of that. "Write your own novel in 10 seconds."
The problem you've got with all of that is twofold. The first one is: if you don't put the hard yards in for writing, you don't put the hard yards in for thinking. If you outsource your thinking to AI, you're lost. And this is the key thing.
Anyone, once they hit primary school and beyond, can write. It is a case of knowing the words and putting them in the right order. But to make a true writer, a true author, someone who loves doing it, you've got to think. And that's something which is vastly underestimated, 'cause you never see it. When you pick up a book, you never see the years of labor that have gone into it. And, trust me, some of the writers I've spoken to spent 10 years thinking about the story before they picked up the pen. They will easily spend 10 years honing their craft before they write something that's worthy of publication. And that goes out the window with AI.
So the first question is, how much are we going to devalue ourselves and what we're capable of producing if we think we can take the shortcut of AI? Because we'll never take the hard way, the long way, we'll never value what we produce.
The second problem I've got is knowing what we can trust with AI. And this goes back to the copyright issue. Because one of the ways AI is working so well for us at the moment, and I include marketing in this, is that it masquerades as human, as long as it doesn't have to be accurate. It can do accuracy within very precise guardrails, which is why it's brilliant, for example, if you train it on a specific data set. And it imitates. It's really good at imitating. It's really good at picking up patterns. But can it think? Can it induce emotion? Can it wrap it in symbolism? Not yet. Maybe in the future, but not right now.
And what, to the point, is the impact it's going to have on us as humans? We are already starting to see it. And it's something I'm fearful of. Because there are people out there that would rather go to AI, and AI-augmented search is now a thing, and try and get an answer from ChatGPT, than, say, go to Brave or Google or one of the other search engines.
And that allows falsehoods to creep in, but also continuity of errors, because you don't know what to trust. AI will lie to you, not because it's trying to create a falsehood, but because it's genuinely innocent. It has no frame of reference beyond what you give it. And that means state actors and bad actors can easily do something with that. There's a heaven and a hell, and AI is exactly like that: everything we've known from the internet, on steroids.
So I've got no idea. And I'm quite fearful for my kids' generation. The kids are already going, "That's AI-generated. I know not to trust that." But they don't actually know. They can't, for example, pick out of a lineup of Google images which ones are AI and which ones aren't.
And it's true. If you cannot trust the internet for truth, you're going to break something really fundamental that the internet requires. Karen, I don't know how old you are, but there was a time, early on, when people would be suspicious of making purchases online, right? This was before Stripe and SSL encryption, before it was just so normal to save your passwords and have two-factor authentication. So many systems that we take for granted now have been put in place brick by brick, step by step, cable by cable, to create the internet we know today, where we will talk to someone on the other side of the world and say, "I trust you" and send them money.
Now the scam industry is massive, but it's built off the fact that the internet is ultimately trustworthy. AI could very well break that 'cause you no longer know if you're speaking to a human or not. You no longer know if you can trust the information you're getting. You don't know what's true and what's not.
So what happens to us then as humans? What happens when we go, "Okay, I'm going to go and trust my AI doctor, even if AI is wrong, over the doctor who has trained for 20 years"? There's so many scenarios to this and there are no brakes and regulation on it whatsoever.
So you've got two very fundamental questions here. What is the value of human labor, of humanity? But also, what's the value of AI? And at the moment, the only way I'm seeing any of these AI businesses turning a profit is to destroy an entire industry. Because salami-slicing it, like they are at the moment, won't get them back the profits they have promised Silicon Valley. They've spent half a trillion dollars on this, and half a trillion dollars is not coming back by selling 15-pound subscriptions to ChatGPT. They have got to transform entire industries.
So the only ones I can see them going after at the moment: creative, 'cause it doesn't require accuracy. Obviously war tech; you're seeing drones now being deployed and used at massive scale. And data and biosecurity, although that's going to be an arms race whatever happens. And medical. Those are the key ones.
So again, I'd ask: how much do you trust them? Because I don't trust them at all. Not the people in charge. So you're suddenly going, "Is it right?" We shouldn’t let AI and the algorithm choose who lives and who dies. Both in civil society on a daily basis, because this is what we're going to be looking at in the long run, but also in war. And war is a lot closer for most of us than it has been in the past 50 years. (That, by the way, is just bloody normal politicians, not 'cause of AI; forget conspiracy theories.)
But what I'm saying is that in order to recoup the massive investments made in the pattern-matching algorithm, they've got to go after the big industries where the money is. And yeah, even ads aren't going to cut it at this stage. There's just not enough money in it. So yes, Karen has now been quiet and I think I've monologued her into complete silence. She's looking stunned, yes.
Karen: Oh, I'm just trying not to interrupt. You're on a roll! I try not to interrupt.
Natalie: Yeah, so I'm a mother, I'm a writer, I'm a marketer and this is what I'm seeing from what I'm reading and researching. I think we're way off a general intelligence, because the general intelligence will require a frame of reference I have yet to see. It could be coming.
Karen: Yeah. AGI is not here.
Natalie: We're not there yet. All the predictions so far have always been five years off. Five years off. At some point, it won't be five years off, it'll be today. But I think they're a little bit optimistic at the moment. To be honest, though, even if we only have what we've got now and all the research stops, we're still in massive trouble, 'cause we've got no way of regulating it. 'Cause it's going to require the ultimate regulation, and that is: how much do we trust ourselves? That's on the theoretical level, right? That's on the philosophical level. There's a massive arms race going on. No one wants to stop it. Nope, everyone is scared, especially politicians, to put barriers on it. They're like, "Oh my God, if we do that, if we stop... Hell, we need to have some copyright. Heck, we need some guardrails. We need to say which is AI and which is human. We need to know where our data's coming from."
I haven't even talked about industry bias here. But what happens, for example, if you implant AI into police searches and there's inherent bias against a certain ethnicity or race? Women, for example, are already badly under-served in medicine. Can you imagine how much worse it could get if we turn all the research over to AI with an inherent male-centric bias? 'Cause I can.
But even putting aside the philosophical problems, there's the issue of what's happening to us now in our daily lives. As I said, Silicon Valley and AI companies in general, especially American companies, are actually desperate to get everyone using AI. So what I'm seeing as a solopreneur is an arms race, with them saying, "You've got to use AI because everyone else is using it. It claims back your time."
Now, to some extent, what they're saying is true. Because AI is the next step forward in turnkey solutions. It used to be you could set up an automation, or run a bit of code, and that would do your job. AI does exactly the same thing in seconds; it just does it on a much broader scope. So you can create bigger systems where you never could before. And that in turn frees up your time to do creative, original thinking.
So my optimism in the long run for business: the bad businesses will use AI and become cookie cutters. The good businesses will use it and unleash their people. Because copywriters are still being hired. Graphic designers are still being hired. Just because you use AI doesn't mean you have a good eye. Just because you use AI doesn't mean you've got a good grasp of psychology. And in fact, that's going to become more important when everyone uses AI and they all sound the bloody same. And I'll explain why that's a problem in a minute.
But for example, say I create a piece of video marketing where I explain, "This is how you set up X, Y, Z." Just to explain to everyone: I do email marketing, so it's one-to-one mass-touch communication, but nobody wants to be treated like a number. So what I use is my database and my knowledge to pull together an email where I look at you and go, "I can see you bought this. I can see you've done that. You're not going to be interested in this; that would be disrespectful. I think you might like this." And AI can help me in those selections, a bit like going to a shop and having your own personal shopper. So yes, the trick, as I said, is being respectful and not creepy, and keeping that human connection. That's where you blend the two together.
Going back to the little video I've made, I'm going, "Great. I've got this little video. There's only one of me, there are only so many hours in a day, and I've got 14 clients. Therefore, I'm going to take this video, and I can turn it into a blog post. I can turn it into some social content." There are far too many social channels out there. I can tell AI, "Take these points and make me a checklist". Make something which people can reuse and read and follow along with. "Hey, AI, can you make me a course?"
Now, the thing is, none of these things are actually that good. They're just a first draft, a possible draft. You can train AI in your voice, but it's still a draft. It has no humor, no reflection, no voice. Now, I understand that people are taking it at that level and publishing it as-is, but I personally prefer to read it over and edit it. Because, again, it lies. It tries to put in stuff that you never said, or to quote statistics and cite facts, because according to AI, that's what everyone else does on the internet. You have to edit it. You have to oversee it. What it does is give you a possible first draft, and it cuts down the time. But it's still very telling that I set AI in a race, with all the prompts, against my human assistant, and she still came out on top, because the stuff the AI produced was crap.
Again, it goes back to frame of reference. It had all the same stuff she had. It had access to all the same color palettes, the same images, and yet the end result was awful. Because she had the one thing it didn't: outside frames of reference that she could pull in, that were unique to her.
I did that, by the way, not 'cause I was being cruel or because I wanted to replace her, but to show that she was irreplaceable. That there were things I value from her that I could not get from AI. Not least because she sits there, makes some cups of tea, and tells me to get over myself. AI doesn't do that. It's way too polite.
So it saves time. It can be a massive shortcut. The problem is when you use it as a substitute for thinking, which goes back to the whole conundrum of what makes us human. And at what point can AI replace us, or do an imitation close enough to pass as human? And I don't have the answer for that. I think that's going a little bit beyond the scope of this subject. Let me know if you find it.
Karen: You've wandered into some of the topics that we have in the later questions. Question number two is, what is your level of experience with AI and machine learning and analytics, and have you used it professionally or personally and studied it? So you've talked a little bit about using it for this mass personalization. Maybe talk a little bit more about that?
Natalie: Yes, I can. I'd say I use AI on a daily basis, in small and large ways. As I said, it gets thrust down our throats. And it really ticks me off, for example, that if you buy something like Google Workspace, AI is included. Grammarly, AI is included. MailChimp, AI is included. So it's very hard to avoid it now. It's a bit like plastic; they're trying to make it as ubiquitous as plastic. And yes, that does tick me off, that I don't seem to have a choice anymore on that.
However, how much you use those individual components is up to you. So I tend to use it for turnkey solutions and automation in my professional life. And that's where I have control of the process. It is part of a process, but it's not something which I turn the entire strategy over to. And trust me, I've picked up clients off the floor who did try turning over the entire strategy, and then wondered why it didn't work.
The other problem you have with AI, and it's something I have yet to see anyone have an answer for, is what's known as slop. I'm trying to think of a nice way of saying this, which will be okay to American ears. You've heard of the phrase "Garbage in, garbage out"?
Karen: Sure.
Natalie: Okay, this is the pollution of the internet. That's the nicest way I can say it without actually swearing, okay? AI is polluting the internet in a big way. Because every day, hundreds of thousands of people are churning out blogs, SEO content, what have you, with the aid of AI. GPTs are trained on the open sources of the internet, and we're talking about trillions upon trillions of words. Absolutely everything. I think at this point, one of the estimates I've read is that 75% of the internet has already been consumed. And one of the reasons why Musk is going after all the data in the American data sets, and why they've fought for the power to use all the copyrighted books, is 'cause they're running out of training data. If they want AI to have a unique voice or do more, they need to find more data. And they're just running out.
And this leads to the next problem, which is that AI doesn't really have a memory. Not unless you build your own personal chat bot or data bot, and you tell it to have a memory, and you store that separately as a cache, in effect training it as a mini system, an ecosystem in its own right. But the major ecosystem that the majority of the players rely on, for every reason (that's where the powerful stuff is), is drawn from the internet of today. The internet that, for the past 18 months, has been churning out AI. So AI is eating itself.
So what you're now getting is more and more dumbed-down work, where everyone is basically converging on the same common denominator. You look at LinkedIn, you can see it. You look at Instagram, you can see the same trends going on there. It's homogenization on a massive scale.
So for anyone who wants to stand out, the best bet is to rewrite all that damn AI output and do something different. Just think human, something we can connect with, 'cause we still do that better than AI. We'd better, anyway. However, that's my frustration as a marketer. If you don't stand out, if you're all samey, the first question is: why would we hire you? 'Cause you clearly used AI to write all of this, so there's no point.
Karen: Yeah, exactly. And there's a lot of terms around this. There's one that refers to 'enshittification', but I think that has more to do with the processes for how companies start out. They have noble purposes and they are meaning to do well. But then they start to take advantage of their customers, and then they start targeting other ways of making money, and then the whole thing goes down the tubes and loses any quality that it had to start with.
But I have read about the consumption of data and needing more data. There's so much AI-generated content out there, which isn't necessarily labeled or tagged as such. So it's hard to avoid, even for the tools that are starving for new content that they can hoover up and use for training. Eventually they will start consuming it.
And taking the content of creators and not compensating them, and not crediting them, and destroying their livelihoods: if you make it so that people can't make a living creating, then they will stop sharing their creations where they can be used for those purposes. Like you said, it's eating itself. It's like the snake eating its own tail.
Natalie: Exactly. Exactly that. So I think there will come an accommodation. I don't know what it will be, but as you correctly said, there will come a point where we get bored, in fact. And I think we're almost already there. We get bored of polished TV productions. We'll get bored of AI creations, the smoothness of it. That's what's happened with CGI video. We want raw, visceral, something that we can connect with.
I find it very telling that all the major streaming channels are losing market share to YouTube, where people, and my kids are doing this a lot, go online and watch other kids talking about their stuff. Unboxing, playing, just doing normal kid stuff. Then I chase 'em off, of course, and they'll go outside. But that's what they want. They're sitting there watching little Minecraft stories being made up by other kids. And for me, that's fundamental.
Going back to Substack, humans have always been story-led creatures. We can't resist it. We can't stop doing it. Which makes it very sad, for me, that we don't have a way of distinguishing between the AI, the computer, and the human. As I said, putting a higher value on what we create ourselves.
Karen: Yeah. And there are a couple of initiatives. There's one called the Content Authenticity Initiative that I think Adobe has been working on, as part of a consortium. And so there are some activities to try to make it clear what is AI-generated, or where it's been at least involved to some extent.
But on the other hand, as you said, it's getting to be like plastic. It's everywhere. And at some point there probably won't be anything that wasn't in some way touched by an AI-based tool.
Natalie: You've actually triggered a thought for me: there's now a push-back against plastic, in the same way we're seeing push-back against oil and gas. And it wouldn't surprise me if in the future there will be cachet in being AI-free. For example, I'm writing AI-free. I'm very explicit about that. I stopped using AI-generated pictures 'cause I didn't like them. I do use AI turnkey solutions, as I said, in my professional work, but I'm very blatant about that. Everything I write, however, is still from me. And I think you're going to see a lot of people move back towards that in the future, because it's the only way to make themselves stand out.
Karen: Yeah, the Authors Guild just put out something recently where they are offering their authors an opportunity to self-certify that they did not use AI, other than trivial things like spelling correction and such, in creating their works. And I don't know if you've seen Beth Spencer on Substack. She has this "Created With Human Intelligence" initiative. I use one of her badges online, because I made the same decision more than a year ago when I started writing on Substack. For me, thinking and writing are very much intertwined, and I don't want to give up my thinking to anyone else.
Natalie: Yeah.
Karen: And I have the same feelings about AI-generated art and such, and music.
Natalie: Yeah. Yeah. I mean, art is a tricky one, because as I've said, I have seen artists do some amazing things with AI. But the question is when it stops being a tool and starts taking over the vision. And that for me is the key point. Because when you let AI decide what should go where, what the placement is, we stop thinking. I don't think you're really in control anymore as an artist. Who's got control?
Karen: Exactly. Yeah. In an interview that's coming out this week, I interviewed someone named . She's also based in the UK. And one of her points was that it's not so much deciding what to use AI for, it's deciding when to stop, knowing when your human intelligence, your wetware, has to take over. And where to draw that line and how to make those decisions. We talked a lot about that. That was a fun conversation.
But I want to go back to, you talked about using AI as part of your customization work. Are there any other tools that you use with AI as part of maybe your personal toolkit or anything like that?
Natalie: Okay. AI is really good for mining conversations and data within a very specific tool set. For example, because I'm partially deaf, full disclosure here, I tend to record most of my conversations, with people's consent, and afterwards I'll go over them. Now, at this moment in time, I have over a hundred hours' worth of meetings this year alone. There's no way I can go through all of that as a human being on my own. But I can get AI to go through and pick things up, for example, "How many times has this question been asked?” ”What are the most popular questions that get asked?"
So for things like research on a specific, relevant subset, and it's your own customers' data, it can be bloody amazing. For doing custom research, basically, it's one step further than your standard Google search, because, as I said, it's on a specific subset of data. So knowing what people are asking, what their pain points are, extracting when they're most happy, tone of voice, you can do all of that.
And from there you can create some really good products, copy, communications, and even reach out to them and do initial surveys based on that. So I'm using it, basically, as a dumb assistant at the moment. That's the best way to describe it, because my human system's got better things to do. This is the sort of grunt work you would, in the past, maybe hand off to a work-experience kid, an 18-year-old.
In fact, that's work I did when I was 18. But these days I would much rather my 18-year-old sat there getting some coding experience or learning how to actually write properly. Or doing proper market research, right? "You've got this; now I need a report. Tell me what I need to know and why." Strategic thinking. 'Cause one thing that scares me about the people coming out of universities and schools: if they've used ChatGPT as the shortcut, how much thinking have they actually done? Because at some point you're going to have to sit down and, instead of cheating your way through it, produce your findings. You have to make this work in the real world, and you don't necessarily have your little computer backup to make it work. Because if something breaks, ChatGPT won't necessarily be able to tell you what broke.
Karen: Exactly. That's a good point. I've heard different perspectives on meeting summarization. Some people have reported that they'll have it record a meeting that they were on. And the summary just didn't capture some of the really important things that a human who was paying attention would've caught. So they have learned not to trust them.
I just interviewed somebody earlier today who had that perspective on meeting summaries. But at the same time, we're on Zoom, and I'm using the captions myself because it helps me make sure I understand people's words. Your Fireflies note taker joined the first time I started up the call, so you obviously have been using that. So there are times where just having the tools live and capturing things afterwards is really useful. I think a lot of people can see value in those types of tools, while also realizing their limits. I'll see words come up here, like ChatGPT; one of my transcription tools always gets ChatGPT wrong. You'd think it would've learned that one by now! So they're definitely not perfect, but they can be useful in some of those areas.
Natalie: They can. They've been an absolute lifeline for people with disabilities. My business would not have taken off in the way it has if I couldn't use Zoom and captions, or note-taking. I'm sitting here lip-reading you at this moment in time, Karen, with captions on underneath, and that makes this conversation possible.
Karen: Yeah.
Natalie: And just for the record, I still make physical notes, and I'll actually show one here; this is one I had from earlier. The act of writing things down cements it in my brain.
Karen: Yes. Yes, I've done that too.
Natalie: Don't discount the old tools. Just find a new way to use them with the new.
Karen: Yeah, it's not in focus, but you can see I take notes too, little paper notes. If I tried to type my notes while I was on a call, the keyboard noises would be very disruptive, so writing with a pen is quiet and quicker.
Natalie: Yeah. One of my favorite things is, this is a treat I sometimes do, say on a Sunday morning, I'll go along to my favorite cafe. I'll have my notebook, I'll have a coffee. I don't even take my phone with me. I just sit there for an hour, a blissful hour. And I doodle, and I scratch, and that's how plot ideas come out. That's how new ideas for my business or my writing come out. It's just me, a pen, and a coffee.
Karen: Very cool. You mentioned that you've avoided using AI for things like images because you're not comfortable with them. Can you talk a little bit about that and what you do instead?
Natalie: I'm sure people say, "Oh, I use it all the time. It's amazing." But there are two problems. The first one is, I feel that you have the 'uncanny valley' problem. I used it, for example, in fantasy art because it's so hard to find stock images of fantasy art, and at the end I thought, "No, this isn't working." So I've gone back to using composite creations of my own in Canva, where you take certain things and put them together: some stock photos, images, graphics. And that's actually worked better, even though it didn't feel as fluid and it didn't look in some ways as polished. But people responded more to it, and I felt more comfortable with it. So that just might be me being a Luddite. I'll say that out loud now. I am older. I'm more resistant to this. I'm trying to keep an open mind, but I can also see the dangers.
The second problem I've got is: I don't want the internet to become this polished reflection of a world that doesn't exist. That's really hard to explain. These days you've got celebrities walking around with their Botox heads and their clean white teeth; people use filters all the time now on the internet, and glamor-pusses; and you've got these creative streams going on Facebook with the memes. And you've got Instagram of course, and TikTok with these 30-second moments. And that's not real life. What you're seeing right now with me, with my slight off colors and my squinty eyes and my slightly shiny head, that's real life. And I think it's important, somehow, that we reflect that, 'cause it's, again, going back to the idea of honesty and honor.
And yes, one of the weirdest things about me: I've just painted my walls, but I usually have on my wall a little sticky note which says what I stand for, the stoic virtues I value: honesty, honor, and courage. And I think that's more important today than ever before.
But going back to that, ChatGPT doesn't have that. So how do you display it? And for me, it's doing what I said I would do. Writing in the way that's me, even if it takes longer, even if it's slower. And using images that I think are honest.
The other downside of ChatGPT, of course, is the convenience, or inconvenience shall we say, especially when it comes to this generation of tools. 'Cause you can go to NightCafe, you can put in the prompt, and whatever you come back with, it won't be what you asked for, and you will spend a lot of time prompting it very specifically to go, "No, I want a tiger cub. No, not a tiger with that. Not a tiger with a club. No, I don't want a tiger on a dinner plate. No, please go back to the original." "Okay, we're going to start again. I just want you to alter this little bit," and watch it take the dark storm clouds out. And there's some things that it'll never do at all. For example, I have yet to see ChatGPT accurately manage to portray a picture of a left-handed person writing. It won't do it!
Karen: Oh, really? Oh, interesting.
Natalie: Yeah. Yeah. Basically it just depicts someone writing in reverse, a mirror image of a right-hander. But if you've ever seen a left-hander write, you know that's not possible because it smudges the ink. So all us lefties know, and I'm one of them: you either write underhand, so the pen is sticking up from the hand, or you write overhand, so the hand curls around, which means that you protect the writing. Even then, you're still likely to smudge the damn thing. Yeah, I tell you, sometimes I wish I was born Chinese, 'cause at least that way you only have to go up and down the page. However, Western handwriting is a curse if you're a lefty. But that is one example of the difference between real life versus fake life. As I said, it always comes down to where we draw the line, which you said quite accurately earlier.
Karen: That's a good observation. And as far as your concerns about using images and the inefficiency, I've heard that from a lot of my interview guests. They feel likewise. So you're in a lot of good company.
And I've also been seeing some posts on Substack, just people saying, "Hey, did you know, here's a great source like Library of Congress and all these other places." I think the Smithsonian is one of them. I've got a whole list that I link to this ethical shoestring page that I keep on my site because I want to keep track of all the places where you can find ethically sourced images to use. And sometimes it's just giving 'em credit. Why would you not give credit to somebody? So those are some great sources to use.
Natalie: There's another thing as well I'm doing: I'm working with an artist, an Argentinian artist actually, for a serial. And I've got to brief him, actually, on my latest character. But I ask him to do pen-and-ink etchings, and yes, it costs money. And yes, it takes time. But you end up with something really exciting.
Karen: And unique, right?
Natalie: It's unique. It is totally unique. And they're responsive, and it's so much easier than trying to prompt, frankly, because you're having a back and forth conversation with someone that gets it.
Karen: Yes. That's awesome.
Natalie: If we're not here to do human to human connections, what the hell are we here for, Karen?
Karen: Great questions. Yep. I agree. Yep.
Natalie: Yeah, all this sounds like I'm really digging at ChatGPT, but I think it's important at this stage to define our personal line in the sand. I'm not saying everyone should be like this. And as I said, it's got a lot of utility and it is going to change our world. The question is, and we are in control here and always have been, how far do we want to change our world?
Karen: Natalie, we've talked a lot about where companies are getting their data and their content for training their tools. I'm wondering if you could just recap how you feel about companies that use the data and content, and whether they should be required to get consent from and give credit and compensate the people whose data they want to use for training.
Natalie: Yes. We need to have a framework. It's outright theft at the end of the day, Karen. If I were to go to your Substack right now and just copy everything over and then rearrange the words, you would rightfully be angry, because that's your work. That's your time, that's your thinking I've just stolen, okay? And it doesn't matter that you might have just rephrased everything that's happened a hundred times over, or that you've reused stuff from other authors. It's still the case that I have stolen your work.
And as you correctly said earlier in this interview, if no one gets paid or valued for their creative input, we are eventually going to run out of it on the internet. And yes, people like me will continue to be hobbyists that write, but we will paywall it. And I feel very strongly about this.
The other thing as well, as I said, comes down to a matter of ethics. Now I'm starting to see that already: ChatGPT clearly doesn't give a damn. Meta really doesn't give a damn. I have no idea what Twitter's going to do, but that's just a flaming bin fire at this stage. Then you've got the few which are clearly still trying to give a damn, although they're still chasing the profits. And I think part of this does come down to late stage capitalism, which is a whole other topic I won't go into.
We've got Perplexity. Perplexity is trying to be a little bit more ethical in that it always cites its sources. So it is going after the research market basically. And to be fair to it, it's very good. It's a lot more accurate. It's also, how can I describe it? It's grown-up, but it's also a bit duller than ChatGPT, if that makes sense, because it's more grown-up.
And then you have Claude. Claude is trying its best, I think. And as I said, I can't pronounce its maker's name, but Anthropic should try not to be ChatGPT, and try not to be Google.
Google, by the way, is definitely embracing the whole evil ethos vibe. It should really rename itself Satan, I think, and just go for it. If you're going to do the whole "let's thrust it down our users' throats" thing, you might as well just embrace the whole vibe: start wearing black and stroking a cat. "Dr. Evil, your name is on the door."
So yes, companies are doing this and embracing it, with the aim of efficiency, but no one is asking about the cost at the same time. There is a cost, both to these companies as well as to us. Some of it's terror. You look at OpenAI and you can see the CEO is clearly terrified. He's made all these promises and so far he hasn't delivered. Have you seen ChatGPT 5? I still haven't. And AGI is definitely not here, even though it was predicted about three years ago that they'd have it.
Karen: Yeah, I don't know if you follow on Substack or anywhere else, but he's been advocating for quite a while that we are not going to get to AGI by making bigger LLMs and feeding them more data. That is just the wrong path to try to get there and we should be focusing elsewhere. There's so many aspects to this, but yeah, that's one. Meta and the way that they stole so much content. And even Perplexity, they're giving credit and that is good, but they're still working with content that was stolen in the first place.
And there's a very small handful of companies, I think it's still less than 20, that have been certified as Fairly Trained, which means basically that they used ethically sourced data. And that's even just one small part of ethical behavior. There's also the aspects of: How did you get that data labeled? Did you exploit laborers in Africa in order to do that? There are just so many other dimensions of ethics that just using ethically sourced data is a low bar.
Natalie: It's really hard to tell. You don't know, for example, what does Grammarly use? What's the data set Grammarly trains on? And yet most of us will say that we've come across that sometime or another. But yes, I think in the future, almost like we have fair trade chocolate and free range hens and ethical eggs, there will be ethical AI.
Karen: You had mentioned earlier about the value of human, AI-free things coming back with art. I spoke with someone recently who was talking about how much he hates what they've done to music. And so now he goes out of his way to attend live music concerts and buy things directly from the musicians at those concerts, just to try to put his money where his values are. And I thought that was cool.
Natalie: Yeah. Yeah. And that's one of the things which the internet has done well. I've worked with some musicians, some amazing people in the hop and blues industry, and I'm on the lists of several more, big stars and small, for when they tour around the countries. And what I will say is, an email list at this stage means that they're not dependent on Spotify. They're not dependent on social media algorithms, of course. But it means that they connect straight to the fans. And you are seeing a lot more peer-to-peer connection, as you said: live music, YouTube live streams — they're still a thing — and yes, private events.
And the other thing I've noticed, at least here in the UK, which is a much smaller, denser country than yours, is that street band acts are coming back. Now, most artists really struggle, at least on the big touring stages, because the industry has been depressed a long time. There's only so much you can charge for tickets, and the cost of living is high. Unless you're Taylor Swift or Beyonce, it's really hard to break even.
What you're seeing though, are the secondary bands, the smaller touring bands who are just doing it for the love of the craft. And they're taking over the village for the night and it's packed. It's absolutely packed because people want that connection. Sometimes it isn't particularly good, but you've still got people sitting there dancing, laughing, because there's nothing like that. And it's not polished. It's the opposite of polished.
Karen: Yeah. Genuine experiences. There's really no substitute for that.
Natalie: I agree with you. Yeah. No substitute.
Karen: I think we've talked pretty much about the tools and where they get their data. You mentioned that you work with building some AI-based tools or systems. Where do you get the data that you use for working with AI?
Natalie: Only from the company's records. It's a very ethical subset and we use it for very distinct purposes. That is, to interrogate the data to find out, for example, why people are leaving us. Where is customer dissatisfaction? What's going on with our drop in open rates? So it's not within the wider AI system. It's not within the internet itself. It's basically on the reporting side, on the flow side, the analytics. It's a very specific AI-trained dataset we use in most cases. And it's unique to each company, because it has been trained on their data and it does have to have a memory.
However, I will say it's still pattern matching and it'll certainly do what you ask it to do. But what it does do is it saves time by pulling together a lot of statistics very quickly. You can then compare and analyze and go "That's where the problem is". So it's a diagnostics tool.
Karen: Yeah, that's great. We talk a lot about the large language models, but there's some of what people call small language models, which are very specific to a company, like trained on that company's personnel manuals, or their maintenance manuals for the equipment that they manufacture. Being able to answer questions about that, those seem like some of the more useful and more ethical applications.
Natalie: Yeah. Yeah, that's where you are offering more service. Although I have to say, you have to be careful. You can do trained chatbots, and I've seen ones that are really good. You literally can create a brand voice around a chatbot if you're careful with it. And obviously you've got to be very upfront with the consumer that that's the case. Nevertheless, if you do it badly, it's equivalent to those 1980s hellhole-ish phone lines that you'd always be on: “Press 1 for this option, press 2 for this, press 3 for that.” I can see it sending a shiver down your spine, just saying that. There's a reason why.
Karen: I still hate those systems.
Natalie: Exactly. So you've gotta be careful that AI doesn't go that way. In fact, in some cases it has. Back in the 1990s we had the automated telephone systems. In the 2000s, it was offshoring. 2010s, it was onshoring, because we suddenly realized that was inefficient. I think AI is just the latest trend in that respect. It's always the company looking for the next technological leap or efficiency.
Karen: You probably heard about Klarna, the company that announced that they were going to replace their entire customer support team with AI bots. This was maybe six months ago or so. And there was just a recent announcement saying, "Yeah, we're going to hire some humans back." [link]
Natalie: Interesting. Yes.
Karen: It didn't take that long.
Natalie: Yeah. Yeah. I said there are very definite limits to AI. And the more you work with it in whatever capacity, and the more you ask of it, the more you see those limits. And the biggest one for me is I don't trust it. It's a very useful, but very dumb system.
Karen: So as consumers and just members of the public, on an individual level, our personal data and content has probably been used by an AI-based tool or system. Do you know of any specific cases that have affected you? Obviously without disclosing any personal information.
Natalie: I do know my website and blog have been scraped, and have certainly been used by a competitor. Because that AI prompt's now out there, which is "Go and find your competitors and make it into something I can use", and that, to a certain extent, does tick me off. But to be honest, what I sell is myself, as much as the tools and the techniques and what have you. Because most people can learn those things, and how I did it. I don't have a gated moat around what I teach. It's how I teach it, which is very human to human, very person-centric, and very personal to the companies I work with.
And yes, I'm in the process of selling courses, but again, that comes with a human component. So that's where my moat is, and that's not something AI can take away or replicate at this stage. Maybe in the future. I certainly wouldn't say I'm irreplaceable. It could well be that my entire industry is going to be replaceable. We'll see. But at the end of the day, if you are going to get to the point where your entire business is run by AI and you don't have any humans in there apart from yourself, the question is: what makes you irreplaceable?
Karen: Excellent question!
Natalie: And why should I buy from you instead of getting an AI bot to do it myself?
Karen: Excellent question. Yeah. So I'm curious, on your Substack newsletter where you write your fiction, there's an option that Substack gives us to turn off AI training. Do you have it turned off on yours?
Natalie: Yes I do. I do. Partly 'cause I think it's my duty not to contribute to the slop out there. And it is pure fiction, what I'm writing at the moment. I'm still improving and working at my craft. And I feel very strongly that artists should be remunerated, so I'm not going to take part in that side of things if I can avoid it, at least. As I said, I don't use AI for fiction. So these are my own personal ethical guidelines, okay? I use AI as a turnkey solution. I use it to create systems. I use it on my own work, which I think is ethical. And I use it only for interrogation of customer stuff, where that, again, is ethical. So for example, I wouldn't put words in the customer's mouth through AI.
But for me, those go back to who I am as a person. And I want to be trustworthy, honest, valuable, and courageous. And that means occasionally standing up and going, "This is my line in the sand." Yes, I know a lot of my competitors are stealing an advantage by getting AI to churn everything out. But in the long run, and I'm in this for the long run, they're going to replace themselves, because they're going to sound like everyone else.
Karen: Exactly. And someone pointed out to me once on an interview that people stealing other people's content — for, say, a LinkedIn post — that's not new with AI. People have actually been doing that for years, just stealing somebody else's post and posting it like they wrote it themselves, and not even tagging the other person or crediting them. So people behaving unethically is not new, and I am sure it's not going to necessarily stop any time soon. It's just that AI gave them a different tool to do it more pervasively than they were able to do it before.
Natalie: Yeah. It is.
Karen: As you said, it's not the content, it's the way you teach it that can't be stolen.
Natalie: That's correct. And it's also a case of trust, which, as I said at the beginning, it always comes back down to. Trust. If I'm sitting there reading your website, which is clearly based on AI algorithms you've stolen from somewhere else:
A, what value are you giving to me? and
B, why should I trust you?
Karen: Yeah. There are some people that have said that if they see an AI-generated image on a post, that they won't read it because they assume that the text is also AI-generated. And if it's going to be something that came from everybody else, why should I bother reading it? Now, other people have said that the images are separate and, even if they write the entire article themselves, they just struggle to find images.
So I've heard different opinions about that. Do you have a personal philosophy about, if you see an article with an AI-generated image, do you skip over it, or will you actually click in and give it a try?
Natalie: On Substack, I'll give the author the benefit of the doubt. Or try to, okay, 'cause I want to support artists where possible. And again, it would be a very bleak, cold world if we didn't have our graphic designers or artists and beautiful sketch artists. At the same time, I know that time is short. It can be incredibly hard to generate those images. And if you're just a blogging hobbyist, which most of us are, and you need something which makes you stand out, because there are so many blogs and so many good stories out there: I get why they do it, I really do.
So it comes down to, I'll read that. I will give that author the benefit of the doubt. I'll read the story and I'll try and find out more about them. But I've got a very finite amount of time, only 24 hours in the day, and I want to spend it on stuff that makes my heart sing. And so far to date, that has been human-centric stuff.
Karen: That's a great summary. Thank you.
Natalie: Thank you. As I said, I think AI's got its space. But what do you think? Where do you think the whole world of AI is going to go? How do you think it's going to change us?
Karen: It's funny, you mentioned a few times not knowing how old I am, so: I'm definitely older than you, since I've been around since the even earlier days of before the internet was a thing! And so if you look back at 30 or 40 years ago before the internet came on, or before mobile phones became a thing, could we have imagined how much they would end up impacting our lives and in what ways? There were probably a few visionaries who could have imagined it, but for the most part, it's beyond what we can think of.
And I think AI is probably going to be the same way — that 20 or 30 years from now, it's going to be just unimaginably different than what we have today and what we've had in the past. So I don't speculate into that area too much. I do like to read visionaries, like Alan Turing wrote something years and years ago that is surprisingly relevant now. There are some people that have had that kind of vision. But I think, most of us, it's just we have to expect that things are going to change. And I'm personally motivated to try to make sure that it changes in a way that feels honest and ethical to me.
That's one of the reasons that I think sharing stories like yours is important: because people need to understand, how does this affect someone who's running a marketing personalization business and writing fiction? And it's a mixed bag. There are some pluses, there are some definite minuses. And we all have some degree of choice, I think, that we can make, about how we use it, how we interact with it, and where we put our money. It's not as much control as we would like, but we can keep pushing for more control.
There's actually a bill, I think in the UK right now that's under really heavy discussion. There's a Baroness who's been getting up and talking, right, about how things are and trying to make the case.
Natalie: The data rights bill. It's going back and forth between the Lords and the Commons at the moment, and it's really intense. A lot of our celebrities, songwriters, artists, musicians, what have you — they've been to Parliament, they've written open letters. This is being debated very publicly at the moment. Now, for most people, it goes whoosh over their heads. But you ask the average person: I spoke to my father-in-law the other day, and he said things like, "Of course we shouldn't steal their livelihood." The basic feeling there, at least in Britain, is that there shouldn't be free rein. People do something, they expect to get paid. There should be an element of fairness. It should not be the ‘wild west’.
But like I said, we're British and we've been fighting back against this. This has a long cultural lineage for us, right back to the industrial ages, back to the Victorian age, when you had people in the factories, and the slum workers being packed 12 to a room, and no healthcare, and the chemists selling these poisoned sweets. And there was no clean air to breathe, and there was no clean water to drink. And we've had to fight every step of the way, ever since the Industrial Revolution, to win those protections back. This is just the latest fight. This is the pollution of data. We're going to have to fight for that cleanliness as well.
Karen: That's a great point. Us humans, we have not done a very good job of stewarding the resources that we've been blessed with. AI pollution is, I think, a good way to think about it. We've polluted our earth environment, our air, and our water, and now we're polluting our internet, our data.
Natalie: And we're just doing it out of greed. And that's the saddest thing. It's got the ability to be amazing, but if we don't get this right, it won't be. However, as I said, I'm an optimist. We can step away from the internet. There are things out there beyond Twitter, yeah.
Karen: Yeah, I left Twitter months ago. I just couldn't really take it anymore. I was really happy to find Bluesky. I don't spend much time there, though. I spend most of my time on Substack nowadays. Especially for writers, it feels like it's just a better environment. It's like the old days of Twitter and some of the early sites where people actually were nice and supportive and responded like humans. There's some bots creeping in there, but
Natalie: Yeah. Yeah. It happens though. It's the ecosystem; everything decays in the end. But keep moving on. Keep the faith, I think. And yes, know the limits of AI. I don't think it's going to come for all our jobs, by the way, at least not in the next five years. Not if any of the current billionaires want to keep their wealth. Do you want to keep going? That is a whole new can of worms.
Karen: That is a whole new can of worms! We maybe need to save it for a follow up call. We talked a lot about how we can't really trust these companies and you don't feel that you can trust them. What would be the one thing that you would want them to do first to try to earn your trust? What would be a good first step for them to take?
Natalie: An AI copyright tool? Basically a timestamp saying this has been produced by AI, so you can make an informed decision.
Karen: Okay, so traceability, where it came from and how it was created, yeah.
Natalie: It's very telling at the moment that the companies go "Look, you can't tell this is AI." And I'm sitting there going, "Surely if you're so proud of your product, you should be able to tell it's AI."
Karen: Yes.
Natalie: You know, that for me says everything. Why do you have to start with a lie? Why are you trying to imitate humanity? If anything, we're thinking too small.
Karen: Yeah. Yeah. One of my principles I set up early on was that I think everybody should be transparent about, individually, “Here's what I use AI for. Here's what I don't use it for. Here's how I use it.” And maybe even, “This is why.” I put up my own AI usage policy last year when I first started writing.
I always wonder when someone's hesitant to say "Yes, I use AI for this." If you're ashamed of it, maybe reflect on why you're ashamed of using it? In some cases it might be that people attach stigma to it. For instance, some people may say, "Oh, you should never use automated closed captioning tools." But there are good reasons to use them, and that can be a very ethical decision. So I'm not here to judge somebody for that. But there are other cases where it's "Yeah, you know what, I want to make some money, and I have a way to generate images and sell T-shirts with the pictures on them." And okay, there are some people that aren't in a position to make a living some other way. But at the same time, it is exploiting the people whose works are going into the pictures they're putting on their T-shirts. If nothing else, I think maybe it would just help people to reflect on what they're using and why, and whether it's really fair and in accordance with the way they want to live their lives and the principles that they say they believe in. I think reflecting on that is a useful exercise for pretty much anybody.
Natalie: This is actually a very good story I'd like to end on. This reminds me of email marketing back in its early days, actually, on a much bigger scale. So go back to the turn of the century, and you could email anyone. You could mass spam them, right? You could buy a list of a hundred thousand names and just send out an email, okay, saying "Buy my stuff". Fast forward to today, and that's not possible anymore. You can still buy the list. But two problems come in.
The first one is that, under the law in America and in Europe, in India, in Asia ('cause China's got it as well), and obviously over in Australia, where all the laws pretty much align now, you've got to show that you've got that person's consent to email them, business to consumer, okay? You cannot spam them. You've got to give them a way to unsubscribe. If you fail to follow those laws, you can be fined, really badly fined.
To my mind it's a no-brainer: if you are emailing someone who has not consented to it, the chances of them buying are really low. Incredibly low. I'd much rather have a tiny, very active list of people that really want to be there, which goes back to Substack, than a massive list of people who'd never heard of me, who don't want to buy, and who are rightly pissed off that I've gotten into their inbox.
And secondly, and this is the key killer thing for the industry and the reason why it's now cleaned up so much: the spammers came in and ruined email in a big way. So it wasn't even the ethical companies, or even the standard companies that were doing the gray hat stuff; it's because the spammers were overwhelming Microsoft Outlook, Gmail, Yahoo, AOL, the old granddaddies of the internet. And so they basically came together and said, "We can't have this any more. If someone's not whitelisted, they're not getting in. If they have a higher spam rate than we allow, we're just going to block them from the entire platform." The email marketers had to get wise very quickly to how clean their lists were, 'list hygiene' we call it, and who consented to be on there. Because if we don't, we get blocklisted. We get banned from Gmail across the entire platform, not just that one email address, but everything. At which point, bye list.
Now, email marketing is the second most profitable type of marketing there is, okay. It's direct, it's accountable, it's straight to the person. And you can tell, for example, who has spent because of an email marketing campaign, 'cause you can track it.
The only one that actually surpasses email marketing is SEO. And these days, of course, maybe AEO or GEO, depending how you say it. Purely because it's fishing in a much larger sea. And you have the pattern matching again, as I said. But even that is slowly being surpassed because of AI. So it goes back to having your own closed environment and your own list. You can see where I'm going with this.
The point is: go back 20 years, and everyone would be like, "Oh God, you'll never be able to regulate this industry. You'll never be able to get rid of all the spam. You'll never be able to tell the companies they've got to watch what they're doing with their lists, because it's their own list. They've got the data." The companies were in control of the data, not the consumers. But these laws were passed. These things happened. The regulatory environment changed. There's no way you could get away now with what you did 20 years ago. So if you want to ask why I'm an optimist that AI will be regulated and controlled, it's 'cause we're going to force it on ourselves. We're going to be so sick of it. And I think in some ways we're at peak AI hype at the moment. We're going to be so sick of AI polluting our water supply that we're going to do something about it.
Karen: That's a great summary, and a great analogy too, I think: now we've got very good spam filters, almost all email tool providers offer that feature, and it works way better than it used to. A few get through from time to time, but it's definitely improving. So that's a great way to look at it. You said you're an optimist, and I think I am an optimist too in a lot of regards. I believe that we as humans can do this. We can handle it. We just need enough people to care about it. It's like everything else with the environment, right? We need enough people to care.
Natalie: I think it'll come. The other thing to bear in mind, and it sounds weird because I feel like I've been thinking and writing about it forever, but I'm really not typical, and neither are you. Most people aren't thinking deeply about this at all. There's not that many of us. It sounds odd, but with the internet and the online society as we know it, people do go online with their phones and what have you, but they interact as consumers. They do it through scrolling. They'll read the occasional email. Obviously they'll use YouTube, or maybe see it on the computer. But not to the extent that you and I do.
We are living it, and wading into it to a far greater extent than the average consumer does. It's only when it hits home in their workplace, when it hits home in their life, when they're questioning the school kid, "Have you done that homework, or did ChatGPT do it?" That's when they're going to start getting pissed off. It's coming.
Karen: That's a great summary. Natalie, I want to thank you for making time for this interview. I know it's late in your evening, so I appreciate that. Is there anything else that you would like to share with our audience?
Natalie: Thank you. If you're listening to this, you've been a great audience. And thank you, Karen, for being an absolutely great host.
Karen: Oh, my pleasure. It has been a lot of fun, Natalie. Thank you.
Interview References and Links
Natalie Phillips on Substack (Plotted Out)

About this interview series and newsletter
This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see post “But I Don’t Use AI”:
We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!
6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!
Series Credits and References
Disclaimer: This content is for informational purposes only and does not and should not be considered professional advice. Information is believed to be current at the time of publication but may become outdated. Please verify details before relying on it.
All content, downloads, and services provided through 6 'P's in AI Pods (AI6P) publication are subject to the Publisher Terms available here. By using this content you agree to the Publisher Terms.
Audio Sound Effect from Pixabay
Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)
Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”
Credit to for the “Created With Human Intelligence” badge we use to reflect our commitment that content in these interviews will be human-created:
If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)