Introduction - Paul Chaney
This post is part of our 6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
This interview is available in text and as an audio recording (embedded here in the post, and in our 6P external podcasts).
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
Interview - Paul Chaney
I’m delighted to welcome Paul Chaney from the USA as our next guest on “AI, Software, and Wetware”.
Paul, thank you so much for joining me on this interview. Please tell us about yourself, who you are, and what you do.
Sure. My day job is as President of Prescriptive Writing. It's a B2B content writing and editing agency. I use the title President a little bit gratuitously, because it's a solo effort on my part. I do outsource some things from time to time. But I have been doing digital marketing since before Google. And if anybody wants to know how long Google's been around, they can Google it, I guess! In all forms, variants, you name it, I've probably done it, working on both the agency and client side. I've got quite a bit of agency experience as well.
However, I will tell you that my current focus is really more about AI, especially as it deals with the intersection of AI ethics and marketing. And I publish the AI Marketing Ethics Digest. It's a weekly newsletter. It is the only one of its kind, so far as I know. But that takes up pretty much all of my time these days, those two efforts.
Yeah, it's a really interesting and very timely area to be in, so, great to have a newsletter focused on that. Tell us a little bit about your level of experience with AI and machine learning and analytics. For instance, have you used it professionally or personally, or have you studied the technologies?
Well, my focus is on using it in my business. As I said, I'm more focused on B2B content writing and editing. And so I tend to generally focus on the generative tools and that kind of thing.
For example, one of my agency clients has me writing a great deal about technical aspects of AI - though mainly through SME interviews with data scientists, IT people, and others who are bona fide AI experts. And so I'm learning a little bit about it from that perspective.
But I would not say that I have any great grasp of the more technical or scientific aspects of it. I like to say: what I learn, I kinda pick up through osmosis, through these interviews and so forth. But, again, my efforts, my day-to-day business, I use it regularly - again, from a generative aspect, in terms of creating content, helping me to edit content, helping me to brainstorm content, whatever the case might be.
And do you mostly use the generative tools for writing text? Or do you also use those for images and for music or any other types of content?
I do both, or I guess you could say all 3. You mentioned music. Mainly it's text. I do very little with image editing, except with my newsletter. I try to provide some type of image. And often I'll use ChatGPT for that, or sometimes I'll use Adobe Firefly to create an image. I used to use Midjourney, which is an image creation AI, but I don't use it as much anymore.
In terms of music, that's more on the personal side of things. I am a songwriter - or have written songs, anyway - and I use a tool called suno.ai to help me create these songs. So I guess you could say yes in all respects, but primarily through the use of text.
Okay. That makes sense. And so that’s your professional life. How about in your personal life? Other than the music, is there any other area where you would use it? For instance, do you use ChatGPT to help you write emails to friends or family members, or for any other purposes like that?
No, I don't use ChatGPT for that. But I do use Grammarly, and I have it plugged into pretty much everything. So Grammarly inserts itself into my email writing, my texts, and all of that kind of thing. So it's constantly correcting me. Sometimes I pay it no attention, however. But really, I guess you'd say it's more of a business tool.
And then the songwriting aspects with Suno, and some of the other tools I use for editing as well. And I can break those down and give you some examples if you like.
Sure. Yeah. And actually, the next question is if you can share a specific story on how you've used a tool that has AI or ML features. I'd like to hear your thoughts about what AI features worked well for you and which ones didn't.
Sure. And let me just give you a quick rundown. I use a trifecta of tools right now. Primarily, it's ChatGPT 4o, Claude, and then Perplexity. Occasionally, though infrequently, I'll use Gemini. I used to use a tool called copy.ai, and then one called Writer, and a few others, but I had to trim my budget. So I pay for the premium versions of the 3 I mentioned, and I use them in various ways.
Perplexity - which, if people don't know, is an AI search engine, although it really positions itself (and I'd describe it) as an answer engine. So I use it really for that purpose.
ChatGPT, for creating custom GPTs that I use with my writing.
Claude, I'm using less often - mainly for comparison purposes. Maybe I'll create something in ChatGPT, and then I'll go over to Claude and just see what Claude might do with the same thing.
I also use several editing tools. I've been using Grammarly for years. I use tools like Quillbot and one called EasyBib - these are plagiarism checkers, to make sure I'm not plagiarizing when I'm paraphrasing. And then various AI detection tools as well.
But just to give you a couple of examples: one is custom GPTs. One of my clients - the one where I'm doing a lot of the technical writing - has a process where they want me to create an outline first, to show to their client. These are things I'm ghostwriting, by the way; I don't get my name in the byline. So they show the outline to the client. The client makes some edits or checks it off, says it's all good, and then I write the article.
So I use a custom GPT that I created to help me create those outlines. And the way I do that is I've given it specific parameters and I've said, ”You can't do this, you can't do this”, that kind of thing. And then I will take the transcript of the interview that I've done with them. I'll upload that along with some interview summaries. Like, for example, Zoom has a great summarizing tool that's part of its AI. I'll upload that into ChatGPT. And then it will give me an outline, which I may feel comfortable with, or I may not. I may work with it; probably in most cases I will.
Another example of a custom GPT I created is to write SEO titles and meta descriptions. A couple of my clients want that information as well. So I've given it, again, specific parameters. And I will simply copy and paste or upload the article. And in 2 seconds flat, it spits out an up-to-60-character SEO title and an up-to-150-character meta description.
Again, sometimes I have to, you know, modify it a little bit here and there. I have noticed that ChatGPT likes to use the word ‘discover’ a lot. So I've kind of told it “maybe don't use that so much”. But those are just some examples of how I use it in my day-to-day.
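Note: for readers curious what a workflow like Paul's SEO-title GPT might look like as code, here's a minimal sketch using the OpenAI Python API. The model name, prompt wording, and post-processing are illustrative assumptions on our part, not Paul's actual custom GPT configuration.

```python
# Illustrative sketch only - not Paul's actual custom GPT configuration.
# Requires `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

INSTRUCTIONS = (
    "You are an SEO assistant. Given an article, return exactly two lines:\n"
    "TITLE: an SEO title of at most 60 characters\n"
    "DESCRIPTION: a meta description of at most 150 characters\n"
    "Avoid overused words such as 'discover'."
)

def seo_title_and_description(article_text: str) -> dict:
    """Ask the model for a length-constrained SEO title and meta description."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": article_text},
        ],
    )
    result = {"title": "", "description": ""}
    for line in response.choices[0].message.content.splitlines():
        if line.startswith("TITLE:"):
            result["title"] = line.removeprefix("TITLE:").strip()
        elif line.startswith("DESCRIPTION:"):
            result["description"] = line.removeprefix("DESCRIPTION:").strip()
    # Models don't reliably respect character limits, so enforce them here.
    result["title"] = result["title"][:60]
    result["description"] = result["description"][:150]
    return result
```

The final truncation step mirrors Paul's point just above: the model's output often still needs a tweak, because models don't reliably respect character limits on their own.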
That's funny. I've heard some people talk about how there are certain words that are being recognized now as potential flags that AI was used to do something.
Oh yeah. Yes, absolutely. There are several. Once you've used it for a while, you begin to see some of the patterns - because it does what it does based on patterns, obviously. You can kinda work against those. But I also use, like I say, AI detection tools. And if something comes up that seems just a little over the top, then I go back and I rework things myself more manually.
But then let me just say this about AI detection tools. I don't think they're 100% accurate, and I would use them guardedly, or let's say “with a grain of salt”, okay?
Yeah, there's a lot of talk about these AI detection tools and how they're being used in schools. And they're accusing students who didn't use AI of using it, and marking them down or disqualifying them. And it's really causing quite a bit of trouble. I mean, AI tools have their benefits, but if you think about applying them across thousands or hundreds of thousands of people, even a small error rate is going to cause pain for some people.
True that, yeah.
Definitely tradeoffs there.
Someone actually took the Declaration of Independence and put it in one of those tools. And it showed that it was 100% written by AI. So, unless our forefathers knew something about technology that we didn't, I don't think they used AI!
Right! Yeah. So what they're keying on there is that that was part of the training set, and it showed as an original source. And they're recognizing that if someone's claiming now that they wrote that, then yes, it is pulling from…
From the training data, yeah. Yeah. That's a good point, yeah.
You mentioned that you end up reworking some of the SEO titles and meta descriptions. Do you also use that when you generate the SEO titles and meta descriptions for your Substack articles?
I do. I use it pretty much all the time - unless something is just so obvious that I don't have to use it. Sometimes the titles in my Substack newsletter are pretty self-evident, so I wouldn't need it. Meta descriptions, give or take.
I'm curious, how often do you find that you have to tweak what comes out of the tool?
I would say half to 75% of the time. Now, that doesn't necessarily apply to SEO titles and meta descriptions, that kind of thing. But let's just take those outlines, because they're very technical. Some of these people are talking about using AI and ML in ways that I absolutely do not understand - they're data scientists, for example. Then I probably will go back in and maybe just rework things, if I see a need to do that. And very often I do.
This is a bit of my philosophy, if you don't mind me interjecting it. And that is: I see it as a partnership, okay, with machines. And I do some of the work. Machines do some of the work. And if I don't like what the machine does, based on maybe subjective thinking, then I'm going to take it and do with it what I think ought to be done with it. And again, the more you work with these things, though, the better you get at creating quality-level prompts.
Now there may come a day when we don't need to do prompts, but I don't think we're there yet. I don't think AI is that smart. But the more you work with it, the better you get at it. It's like the old adage - you remember “garbage in, garbage out”, right? Well, that still applies in this venue as well. The better you get at defining what constitutes a useful prompt, the better the quality of the output that the AI is going to give you. Now, that is a matter of trial and error as much as anything else.
Everybody's offering all these “$49.95 for 100 prompts” deals, or whatever. I find it's just a matter of having a conversation. They call it ChatGPT for a reason. So I have a conversation with it. And we go back and forth sometimes till I get the output I want. And if, in the case of ChatGPT, I don't get the output I want, then I may go to Claude and try it there. And generally speaking, between the two, it works out.
Now let me say this about Claude too. I don't know why they have chosen to keep Claude from seeing the web, unless that's changed very recently. That to me is a drawback for using it. I think from just a writing and content creation standpoint, it's probably better. I certainly think it's more ethical in its approach than ChatGPT. But ChatGPT enables you to see the web, and I tend to have to do that pretty much all the time.
Yeah. I'm curious as to what your experience has been with these generative AI text tools hallucinating or confabulating or making up things that don't actually exist, and how often do you find that when you look into what it gives you that it's not accurate?
I've found that much less than when I first started. And the fault there was probably mine, because I didn't know what I was doing to begin with, right? I would give these inane prompts, and it's trying to please you - it really is. Again, I think the better you get at it, the less you're going to see that. And hopefully, as the tools have developed - we're at version 4o of ChatGPT now - you see less of those kinds of things.
But I'll give you a little bit of a horror story. And this goes back to my early days, nascent understanding of how the tool should be used. I wrote an article. I included some information that ChatGPT had fed me that I assumed was correct. My editor, when I submitted the article, said, “Where did you get this information? Because I can't substantiate it anywhere.” And I went, “Oops”. [laughter]
And so from then on, I always fact check. And I advise people, don't take everything for granted. Fact check. Make sure that you can substantiate. And so one of the things I do now is I ask for source material. And generally speaking, I would say literally 9 times out of 10, it's going to give me specific articles or reports, or that kind of thing. Statistical data that is accurate based on maybe something that Gartner had written, or McKinsey, or somebody like that. Always looking for substantive materials. Not just a third-hand blog post where somebody quotes somebody else, who quoted somebody else, and you can't find the source. That's not going to work.
So it does take a little bit of effort on your part, but all in all, I think I'm seeing less of that kind of activity taking place. I won't say it's non-existent. But I will say it's much less now than it was then. And I think that's just part of the learning process.
Yeah. One of my earlier guests, Stella Fosse, was using ChatGPT to help her write her marketing plan for her new book that was going to be coming out. And she asked the tool for a list of podcasters she should contact about going on to promote her book. And I think of the 10, maybe 8 of them didn't exist, or something like that. So, not very useful.
Oh yeah, I’ve seen that, yeah. Oh no. But, again, that's why you have to vet everything, and just make sure you check the facts. And, you know, that is part of the human element. I think more of my job these days seems to fall on the editing side, as opposed to the actual composition side.
Now, let me just say a quick word about that, if you don't mind. You know, there are people who rail against that. The kind of writing I do, there's not a lot of story to it. It is very technical kinds of stuff, based on what these SMEs tell me. So I'm relying on their expertise and knowledge more than anything else. So, for me, it's a very valuable tool. For others who might be writing something different - more character-driven, let's say - I kinda get that it's not going to be as useful to them as it would be for somebody like me.
Can you talk a little bit about your experience with the tools that generate images, and what your experience has been with that and learning how to get better results from them?
Yeah. Again, it's a learning process. And I don't use images a lot, except for kind of a hero image, if you want to call it that, for the newsletter. I'm not a graphics person. Years ago, I used to try to design websites and found out very quickly that I'm not a graphic artist by any means. So I'll leave that to other people.
My experience has been trying Midjourney. Midjourney is very popular - a lot of people use it. But there was a difficulty in using it, unless it's changed recently - and I think it has, actually - where you had to use it via Discord, and that just seemed like a lot of work to me. And I've found some limitations in its use.
I've tried Adobe Firefly, which is okay. I'm never quite fully happy with the results I get. Again, that may be more on me than it is on the tool. So I kinda default to DALL-E, which is, you know, part of ChatGPT, and it's a back-and-forth process. I do like the fact you can have a chat with it and say, “okay, well, that was good, but, you know, can we do it this way?” And it doesn't give me the results I want every time. Again, that may fall to me: the better I get at giving it good, descriptive kinds of information, the better the output it's going to give me.
So my experience doing that has been, very often, less than satisfactory. I'm not really a good judge on that side of things because I don't have that graphics, artistic acumen that other people may have.
So other than the reasons of cost, which you mentioned earlier, are there any reasons that you have avoided using AI-based tools for some things? And if so, can you share an example of when and why you chose not to use AI for that?
Yeah, I'm pretty pro AI tool use, I will tell you. I really don't know of one I've avoided, other than for the cost factor - because I just couldn't afford it, and some of them are more expensive than others. But there's really nothing I won't try to use them for.
Now, I won't say they perform well 100% of the time. But I've learned enough to make things work most of the time within the bounds of what I need for my business.
Now I will say - let me just give you one example, and I'm calling them out only for example purposes, not to in any way discredit them, because I think they're a great tool - it's called Writer, writer.com. And it takes a more structured approach. So you kinda have to follow what it wants you to do, more so than ChatGPT or Claude, which are a little bit more free-form and conversational. So there might be a blog post writing template that Writer would want you to use. Well, that doesn't always work for me.
But really, I'm going to be game for trying anything, and see if I can make it work. And so far, it's really again come down to that trifecta of ChatGPT 4o, Claude, and Perplexity.
Alright. So one common and growing concern nowadays is wondering where AI and machine learning systems get the data and content that they train on in order to do the generations. Oftentimes, they're using data that users put into online systems or that they published online. And the companies are not always very transparent about how they plan to use our data when we sign up.
Grammarly is one example: unless you have a professional account - a paid account, where you can turn it off - they're going to use whatever you put in, whatever you're letting it read, to train their system.
So I'm wondering how you feel about companies that use your data and content for training their AI and ML systems and tools. You write about ethics for AI. Do you feel that, for a company to be ethical, they ought to be getting consent from, and compensating, the people whose content they're using for training their tools? What are your thoughts about that?
Yeah. I think it comes down to purpose. Now keep in mind, I'm approaching everything from a marketing standpoint. So let's just use, for example, SEO or maybe AIO optimization, right? Would you want to make your content available as source material? I think in most cases, yeah.
Let's go back to Perplexity. I use Perplexity all the time now for in-depth research, and I go to the source material it gives me. Well, if I'm getting my name out there, getting my brand out there, getting my products or services out there, then I want it to be sourcing me. So that's a case where it is appropriate, and certainly even desirable.
But I will give you sort of the converse of that. I did an interview for one of the issues with David Meerman Scott - he's a popular marketing author. He told me that ChatGPT had trained on all his books, and they didn't ask for consent, nor has he been compensated. His publisher wasn't asked for consent either. They just took that information and absorbed it.
And that got me thinking about my own books - I've written a few. And it appears they've done the same thing with mine. Now, David's books and my books are all published by well-recognized publishers like Wiley and McGraw-Hill. And you would think these publishers are going to be up in arms about all of that kind of thing - because, obviously, their job is to sell books, not give away material.
And, you know, I see that beginning maybe to change eventually. But I don't know that it's going to change wholly until we get some government intervention that says, “You're violating copyright laws, and you've gotta stop”. And these things have to be in place - consent, some kind of compensation.
Yeah. The current status, the last time I checked, was that there were over 30 active lawsuits in the US on copyright infringement against the major generative AI companies, and some book publishers are certainly in there. Some are also making deals with the companies to license the contents of the books that they publish. There are some questions around whether the authors will ever see a dime of the money that's being negotiated, which is another issue.
Right. Well, I believe that, from a responsible-use and ethical standpoint on the part of the developers, that is the model that needs to be followed - the licensing, and that kind of thing. And, you know, I think the developers have the responsibility. The onus is on them to say, “Would you allow us to use this material?” And if not, then, okay.
Now let me say also, in defense of Writer - and also one called Lately, which is more of a social media platform - they don't do all of that kind of thing. They're very guarded about what data they train on. And I can't give you chapter and verse, necessarily, but I know some of these tools are trying harder to be ethical. Let's put it that way.
Also, you know, we talked about music a minute ago. Suno, I know - and another one called Udio - have had all the major labels suing them, saying that they're infringing on copyright by training on these materials. And I think Suno finally came to the point where it said, “Well, okay. We're scraping content, but it's fair use according to copyright laws.” That's what they're asserting.
I'm not an attorney, so I don't get into that aspect of things. But the way I look at it is like this. Some years ago, if you remember, there was a whole upheaval or uproar over sponsored content, where people - these influencers, for example - were touting a product, or maybe a brand or something like that, without revealing that they were being paid for that purpose. And the FTC weighed in and said, “no, you can't do that”.
Well, I think, again, it's probably going to require some kind of intervention on the part of the government - maybe it's the FTC, I'm not sure who - to get all of this stuff figured out, and some laws or requirements or regulations laid down, to iron out these kinks. But let's face it - isn't that the case, generally, when anything new comes along? There's always that period of confusion. Again, it's a bit of a Wild West now. It won't always be that way. And then we'll be complaining about too much intervention at some point, I think.
Yeah, there's definitely a balance to be found. And right now, all we have are the courts to try to hash it out. But there's a joke going around that “If you steal one book, it's copyright infringement. And if you steal all of them, it's fair use.”
Yeah. Right. That's good.
Not really, obviously.
I’ll have to remember that. Yeah. Yeah. It's good.
As a user of AI-based tools, do you feel like the providers of all the different tools that you've been using have been transparent about where they've gotten the data they use? You mentioned Suno, and what they finally admitted in the lawsuit - that, yes, they scraped publicly available content. But other than Suno?
I really think it depends on the platform. I don't think ChatGPT is the poster child for ethical use. I think I've already mentioned Writer. I mentioned Lately - again, a social media platform. I think they're more selective in where they get their data. I think Claude has what they call “constitutional AI”; I think it's a little bit more discerning in where it draws data from, and that kind of thing.
But I really can't say, on the whole, that we're seeing a lot of transparency taking place. And I think, again, it's going to require somebody, an external force. You know, is the industry going to govern itself? Maybe that's a “fox in the hen house” kind of supposition. Or is it going to be more industry-related regulation through, say, associations? Or is it going to require the federal government to weigh in on this kind of thing?
And I don't know that the upcoming administration is really all that focused on it - this is my opinion, okay? That administration may just be leaning more toward keeping hands off a little bit, where that's concerned. So I think it's a wait-and-see proposition.
But to answer your question more directly, I think we've got a ways to go before we see these platform developers and operators being fully transparent, in terms of where that data is coming from. And then, as we talked about, asking permission to utilize it, and even giving compensation for that purpose.
Yeah. I'm going to have to check out writer.com and the others that you mentioned. If I find someone who is trying to operate ethically, I like to call them out and give them some credit, because it's definitely harder. They're climbing a steeper hill than everybody else if they're trying to operate ethically.
I agree with you.
I will have to check them out - the ones that you do use that you feel are operating ethically. So thank you.
Zoom does a good job with its summaries, and Fathom does as well. So I use those, you know, in helping me create content.
Has a company's use of your personal data and content ever created any specific issues for you, such as privacy or phishing or loss of income? You mentioned that you've written books and that you think they've been used as input for a tool. Or maybe the content that you've created for your clients? Any concerns in that area?
Not to date - though I do believe ChatGPT scraped my content and trained on it. But sort of in defense of my content, I created a custom GPT: I uploaded one of my books, called The Digital Handshake, and basically told ChatGPT to create this engine where people can post questions, and it only draws information from the book. However, the book is 15 years old. So I did add the option to allow it to draw from the web, so it'd have more current information, because some of the stuff I include in the book no longer exists. It was just sort of my little passive-aggressive way of saying, “Okay, I'll get you back, ChatGPT, for doing that. I'll use you against you and create my own custom GPT”.
But nothing nefarious, nothing that I feel like would be injurious to my own well-being, financial or anything like that.
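Note: a custom GPT like the one Paul describes is, at heart, a retrieval layer over a document. Here's a toy sketch of that idea in plain Python; the file name is a placeholder, the keyword scoring is a naive stand-in, and a real custom GPT would use the platform's own file-search and grounding tooling rather than this matching.

```python
# Toy sketch of "answer only from the book": naive keyword retrieval over a
# local text file. A real custom GPT uses embeddings / file search instead.
import re
from collections import Counter

def load_chunks(path: str, chunk_size: int = 200) -> list[str]:
    """Split the book text into roughly chunk_size-word passages."""
    with open(path, encoding="utf-8") as f:
        words = f.read().split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def best_passage(question: str, chunks: list[str]) -> str:
    """Return the passage sharing the most terms with the question."""
    q_terms = Counter(re.findall(r"\w+", question.lower()))
    def overlap(chunk: str) -> int:
        c_terms = Counter(re.findall(r"\w+", chunk.lower()))
        return sum(min(count, c_terms[term]) for term, count in q_terms.items())
    return max(chunks, key=overlap)

# Usage (the file name and question are placeholders):
# chunks = load_chunks("digital_handshake.txt")
# print(best_passage("What is a digital handshake?", chunks))
```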
So I'm curious - you mentioned that most of your writing is ghostwriting for a client. Do your clients have any concern about the fact that what you're writing for them is being exposed to these tools, and that they're likely to be picking it up before they even get their hands on it?
Nobody has ever said anything to me about it. And most of the clients I work with are agencies, so they're outsourcing content creation to me. The only thing that I really run across with any of them is that some are more pro-AI use than others. So I have to adjust my approach to fit their desires and preferences. But as far as any of what you're asking about - not to date.
That's good. I've been watching some discussions on Substack, which I'm sure you've seen, where someone did an analysis of Substack posts: which ones are the most popular, how often AI was being used in generating them, and which publications were doing it.
And some sites are starting to restrict whether AI-generated content has to be flagged or whether it's even allowed to be used and posted there.
I will say this: sometimes I've referred, in prompts in ChatGPT, to a specific article or newsletter issue, and it says it can't see it. So maybe Substack has put in some kind of blocks or something.
Yeah, in our newsletter settings, there is a place where we can say whether we do or don't want to allow AI tools to train on our content. I know some people have exercised that, and I've actually got it set on mine, because I don't want to assume that my guests want their interview content to be trained upon. But the downside of that - and even Substack warns us - is that, yes, we can set this status to say, “Do not train on this content”, but some unethical bots do it anyway. So unless it's paid content behind a paywall, there's really no way to ensure that it doesn't get picked up. But they do warn us about that.
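Note: opt-out settings like Substack's are typically published in a site's robots.txt file, which well-behaved crawlers honor and which, as noted above, unethical bots can simply ignore. Here's a minimal sketch, using only the Python standard library, of checking which known AI crawlers a site's robots.txt allows; the crawler names are real user agents, but the domain is a placeholder.

```python
# Illustrative sketch: see which known AI-training crawlers a site's
# robots.txt disallows. Well-behaved bots honor these rules; others may not.
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended", "anthropic-ai"]

def ai_crawler_access(site: str) -> dict[str, bool]:
    """Return {crawler: whether it may fetch the site's front page}."""
    parser = RobotFileParser(f"{site}/robots.txt")
    parser.read()  # fetch and parse the live robots.txt
    return {bot: parser.can_fetch(bot, f"{site}/") for bot in AI_CRAWLERS}

if __name__ == "__main__":
    # Placeholder domain - substitute any publication you want to check.
    print(ai_crawler_access("https://example.com"))
```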
Well, I wouldn't be surprised. I'm not surprised by anything that happens on the Internet where marketers are concerned! That's for sure.
Yeah.
Which is one of the reasons I created the newsletter: to talk about the ethical, responsible use of AI.
Yeah. One thing that I think we're seeing - you mentioned patterns. There's an emerging pattern that public distrust of the AI and tech companies has been growing. And I think it's in part because we're starting to realize just how much of our personal data and professional data they are scraping, and using for purposes that we never envisioned when we gave consent to provide that data. So I'm curious what you think from your perspective in AI and ethics of marketing.
What do you think is the most important thing that AI or tech companies would need to do to earn and to keep your trust that they'll use your data responsibly? And do you have any specific ideas on HOW they could do that?
Yeah. Well, I feel like this: transparency breeds trust, and trust breeds loyalty. So, you know, there's been distrust of technology ongoing since before AI ever entered the picture, and AI perhaps has only added to that. But I think when we get to the point where we're willing to be honest and transparent - when we disclose, for example, that we ARE using AI, how we're using it, and how we plan to use the information that is gathered by virtue of using AI - and then do things like ask for consent and give the consumer the right to opt out, that's going to engender more trust as well. So it does come down to putting ethics and responsible use at the forefront, instead of it being an afterthought.
“Transparency breeds trust, and trust breeds loyalty.”
And I think it really also comes down to just creating an ethics-oriented or ethically minded culture. And that starts at the top. So CEOs, CMOs, and other C-suite members need to be focused on this kind of thing, because if they're not going to do it, then the organization's not going to do it. And I think that includes taking steps to form an AI ethics council or committee to oversee its use in the company. I think that also includes creating an AI ethics policy - both of which I talk about in the newsletter. And I also think, specific to marketing again (that's my focus), the marketing department needs to have its own policy, due to its unique relationship with the public.
And so I think all of these things go into making this happen. And again, is it going to happen without some kind of external pressure or force? That I can't tell you. I would like to think that the better angels of these companies might govern, as opposed to the devil on the other shoulder, so to speak.
I have a feeling that it mostly just comes down to business, and they do what makes business sense, without necessarily thinking about the angel versus devil aspects of it.
Right. Exactly. Yeah. Well, it does come down to, I think at this point, what the company is willing to take on, or refrain from, as the case may be, in terms of ethical and responsible use.
Yeah. And there are 2 aspects that I think are interesting. One is that you can talk about the US and what we have as far as the laws we're working on - the NO FAKES draft act and things like that. But we live in a global world, and content is being sourced globally and reused globally. So it's not just the US that we have to think about. We have to think about how we interact with other countries, and they may have different standards - tighter or looser - than we have, or than we decide to put in place.
Well, you raise a great point, and that is: how do you make all of this work? I mean, the EU has an AI law now, and there's GDPR. You know, California is trying to enact some things. But what if it's left up to the states? Other states are looking at what California is doing. And then you get into other countries - how about in the Middle East, for example, or in Asia? How are we going to be able to manage all of this? It's almost like a house of cards, in a sense, if that's an appropriate analogy. What works in one country may not work in another country. And how do you come to terms with what is really a base level of what's acceptable and what's not?
And I think this is going to take a while to work out; it's not going to be easy. That's why I think it behooves these platform developers, and it behooves the marketers on the other end who are using these tools - and maybe even consumers themselves - to weigh in and say, “You've got to do this in a way that is transparent, in a way that is accountable, in a way that's honest”. And just, again, put that at the forefront. And I don't know that it has to be all that complicated, frankly, if we will just take the responsibility ourselves.
Yeah. And that’s one thing we can talk about: what's ethical versus what's legal. Some people say, “well, it's legal, so I can do it”. But something that's unethical shouldn't have to be illegal too for us to refrain from doing it.
Right. Right.
One of my earlier interview guests, Dr. Julie Rennecker, she's a consultant who works with startups. At one point, she raised this concept of “imprinting” from founders, where the attitudes and the policies of the founders can be seen for decades (if the company survives). And if they start out with an approach of ignoring ethics and just charging full speed ahead - it's very hard to correct that after the fact. So it's really important to try to get that right from the beginning.
Yeah. Absolutely. You're absolutely right about that.
Paul, it's been really great talking with you. Is there anything else that you'd like to share with our audience today? You've mentioned your books and your newsletter - anything else that you want to talk about?
I am working on a book that focuses on AI marketing ethics, shooting for it to come out in the first quarter of 2025. I think this next year will be a very signal year in terms of people realizing the need to embrace ethical and, again, responsible use.
I will say, if you don't mind, that I also offer a half-day course for marketing teams on AI marketing ethics, to help them develop policies and understand the fundamentals. It's very interactive - they do a lot of the work themselves. I'm more of a facilitator, almost.
And I'm working on a webinar to introduce ethical and responsible use concepts. Hopefully that'll also come out early next year, and it'll be available live and on demand when I do produce it.
That sounds great. You sound like you're pretty busy! Where's the best place for someone to find out about your book, your course, your webinar? Is it through your Substack newsletter? Do you have another website, or how would people find out more about this?
No, they can just go to aimarketingethics.com. That redirects to the Substack URL, but it's simpler to remember. And I would say everything you would want to learn in that regard, you will find there.
My business website is prescriptivewriting.com. I do not write prescriptions for people, but my type of writing is prescriptive, right? So I chose that term: prescriptivewriting.com. That's my B2B content writing and editing agency, if you will. And I put it right out there that we use AI tools in the creation of content. So I'm not trying to hide anything from anybody.
If you don't mind me adding this as kind of a last word: if you're ignoring its use, you're going to suffer the consequences. If you're just looking at it a bit askance, you really need to begin to “dip your toe in the water”. You don't have to “jump in the deep end of the pool” like I kinda did - an all-or-nothing sort of embrace of these things - but you do need to start.
And if you're already pretty much utilizing it in your company, I would say: do an audit, take stock of how it's being used. If you don't have an AI ethics committee or council, you need to form one - and it needs to be interdepartmental, across the board. If you don't have an AI ethics use policy, you need to establish one.
And, again, not trying to sound too self-promotional, but we have all kinds of resources on that site, aimarketingethics.com, that can help you do that. And, you know, you're certainly welcome to reach out to me - anybody who wants to talk further about it.
Alright. Yeah. You mentioned the importance of having people define their policies. And it's great that you're transparent about your own writing and have your policies - that you do use it, and this is how, and this is why. I created a policy page in my Substack when I started it earlier this year to say, “here's what I do and don't use, and why”. And mostly, I'm trying to walk the line of: I'm open to using it, but I want to use tools that are ethical. So I've always been on the lookout for the more-ethical tools, and trying to help promote the tools that are being ethical. That's the way that I approach it.
Your point about marketing - that if people are already using AI tools, it needs to have oversight - I think goes back to the idea of imprinting. The best time to do it would have been when you started, but the second best time is now.
Right.
Like planting a tree, right?
Right. Well, I always say, you know, AI ethics is human ethics. You're either a responsible, ethical person or you're not. And that's going to guide your use of anything, not just AI tools. So it really comes down to personal decisions, and to whoever the decision makers are in a particular company - their use is going to tend to reflect that, generally speaking, I would think.
Alright. Any final thoughts?
There's probably a whole lot more we could talk about, but I realize we're up against time. So we'll leave it there!
Ok, great. Really glad to have you on the interview series, Paul! Thank you so much for your time.
Thank you for the opportunity. Yeah, I appreciate it very much.
My pleasure.
Interview References and Links
Prescriptive Writing (Paul’s B2B business website)
AI Marketing Ethics Digest on Substack (reachable via aimarketingethics.com)
About this interview series and newsletter
This post is part of our 2024 interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools or being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see post “But I Don’t Use AI”!
We want to hear from a diverse pool of people worldwide in a variety of roles. If you’re interested in being a featured interview guest (anonymous or with credit), please get in touch!
6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read (and listen to). To automatically receive new 6P posts and support our work, consider becoming a subscriber (it’s free)!
Enjoyed this interview? Great! Voluntary donations via paid subscriptions are cool; one-time tips are deeply appreciated; and shares, hearts, comments, and restacks are awesome 😊
Series Credits and References
Audio Sound Effect from Pixabay