Introduction - Phil Pallen
This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript.
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.

Interview - Phil Pallen
I’m delighted to welcome Phil Pallen from the USA as my guest today on “AI, Software, and Wetware”. Phil, thank you so much for joining me today! Please tell us about yourself, who you are, and what you do.
It is such a pleasure to have this conversation with you. I've been looking forward to it. I am Phil Pallen. I'm a brand strategist. I sometimes say I have 2 jobs. I have my adult job. I have owned a branding agency for 14 years. So I help people and companies position, build, and promote their brands. And my fun job really has actually kind of taken over my focus nowadays, which is working as a content creator.
I love teaching people about cool tools, apps, workflows, productivity hacks. I don't really like the word “hacks”. But if we, you know, if we can add more joy into our day, through our work, through discoveries, through spending time doing things that we love in areas where we make high impact, then I think people in general are just gonna be a whole lot happier. So I love teaching. I get really inspired by self-starters, people that want to do the work themselves to be able to grow their brands. And AI has really shaken up my world in a very exciting way as a strategist and as an educator.
Great. That's an excellent overview. So thank you for sharing all that. I'd like to hear about your level of experience with AI and machine learning and analytics and how you've used it professionally or personally, or whether you studied the technology, taking courses - what you've done with it so far.
Yes. So my earliest foray into this world was video creation on YouTube, specifically, forming partnerships with brands that have hired me to create and review marketing platforms and software. Notably, one of the first ones I did that was AI-specific was a brand called Synthesia. I was one of the first YouTube creators to use their fancy new technology at the time to create a custom avatar of myself and have some fun with it in a YouTube video. It's like, “This is the real me, and then this is the AI avatar of me.” People got a kick out of that, and I've done a handful of videos for them that have done quite well.
I believe, although I don't know exactly, that it's through some of those videos that I've attracted a ton of inbound interest, really, from AI and marketing brands.
So how fun, Karen, that I have this front row seat into trying all of these cool tools and apps. And maybe what's different in my approach is that, yeah, I've got a lot of content creator friends that will create content around the tools that they use and love.
For me, I don't assume that I'm necessarily the demographic of the app that approaches me. I recently did a video that was specifically for Amazon and e-commerce brands. That's not really my world, but it doesn't mean that I can't try it, highlight what I love about the tool, and get it in front of the right person who that tool might be useful for.
So that has kind of driven this, and, obviously, ChatGPT became generally available. And I, like every small business owner, was excited by the possibilities, but also overwhelmed by my existing to-do list. It's kind of like, “How do we figure out this new technology without taking on a new hat? Honey, I already got enough hats that I gotta look good in all of them!” As a small business owner, right, social media and all these other things have really put some pressure on the average small business owner to get a lot done in the day.
So for me, I'm not a chip guy. I'll read the emails, but I don't follow the latest advancements and developments. I didn't study AI. I didn't work for one of the big AI companies. But I have found my interest, maybe my nerd tendencies, specifically in AI tools: being able to make recommendations and try these out. I've been at it for 3 years, really.
And the book came about because the publisher actually found me on YouTube, and they said, “We like how you communicate some of this complex stuff for people that are overwhelmed by this. We like how you communicate it, and we'd like you to write a book.” They gave me that deal before they knew I could even write because, really, they had only seen me in video form.
And an exciting challenge, you know, to tackle a project like that when really I'm actually more of a video guy and a stage presentation person. It's been fun and, equally, challenging.
Yeah. That sounds awesome. So what's the title of your book, for our audience?
It is called “AI for Small Business”. And I describe it as really equal parts workflows, recommendations, and things that I've learned independently for myself, my own business, and for my clients' businesses, working in a branding agency. You know, a project might be initiated because someone needs a website or a brand identity. But really when I'm in the thick of it with my clients, we're building their business. It's not just the wrapping paper for the gift. It's figuring out what are your goals and how are we going to use branding as a vehicle to get you there.
So by nature of my world, I've worked across so many different industries, businesses of all types and sizes. And so this book was exciting because I was able to draw on real-life clients in a variety of industries, paired with the tools that I've been trying and also researching.
You know, I don't even like saying that I'm an AI expert. I think I'm an AI strategist, but how can I claim to be an expert in something that I've really only taken an interest in, you know, and a fascination with, since it's been generally available with tools like ChatGPT? I'm sure I've had interesting access through tools and apps and collaborations, Adobe being my biggest brand partner.
But I'm learning this along with everyone else. And later chapters in the book go into things like operations, research, security. I am not an expert in those things. In fact, I want someone who identifies with me on that to be able to open the book and go, “Oh, yeah. I should try that tool.” Like, open the book at any point, you know? Just take it. You don't have to read in order. Just read it, and maybe there's a tool or a workflow or a prompt example that can inspire you to implement it for your business. And that's going to be stress-alleviating, instead of stress-inducing.
That sounds good. And I know one of the things that was called out by Melanie, who introduced me to you and to your book, is that you also focus on how to use it ethically, and which tools are ethical for people to use. We'll talk more about that later.
Yeah. That's a hot button topic for me. I know you and I have similar stances on that. I love how much you care about that. I care about it a lot.
Great. Yeah. And we will definitely talk more about that. For now, can you share a specific story on how you used one of the tools that includes AI or ML features? You mentioned a few. You mentioned YouTube and ChatGPT and Adobe. Can you talk a little bit about how the AI features of those tools worked for you and didn't? What went well? What didn't go so well? What was cool and not so cool about the avatar that it made for you?
Yeah. Oh, I have so many, but let me share a workflow that I think is maybe exciting for someone who might feel overwhelmed. Like, where do you even begin? I remember interacting with ChatGPT for the first time and not even knowing really, where do you start?
It's funny. We go to Google, and we don't think twice. We ask a question, and we get an answer. And yet with ChatGPT, I've come to learn that you don't necessarily even need to bring a well-formulated question. You just start a conversation. In the same way, Karen, that you and I started this call, you're asking me questions. You were kind enough to share them ahead of time, but I actually said to you, I didn't read or plan for them, in the way that I'm not going to plan the exact words that I'm gonna have if you and I are sitting down and having coffee.
I'm not a believer in the perfect prompt or someone subscribing to this idea that, like, “Here's the best way to interact with ChatGPT”. That's like me saying, “You can only communicate in English with me the way that I tell you to”. It's just wrong. We learn by doing.
And so one of the things I've learned is that I don't have to go to ChatGPT with a well-formulated question. Sometimes I can just open a channel and say, “I'd like to ask your advice on a good starting point for staying consistent at the gym. It's January 2025, and I'm good at a lot of things, but going to the gym every day ain't one of them, honey.”
So I could start and just say, “Listen, I'm gonna use this channel to get some advice. Don't take action yet, but here are a few things that I've got on my mind.”
You asked for a tool. The tool I'm gonna give you is Text Blaze, which is one of my favorite tools for saving text snippets. AI is really exciting when we can lean on it to compute information, you know, like, a great deal of information at record speeds, far faster than our human brains can. But there are still things that I think we need humans for - imagination, storytelling, drawing on the human experience. AI can't do that, and I'm not really sure it ever will, unless you're taking the time to input a great amount of information that represents those types of things, storytelling, you know, et cetera. So Text Blaze, I love it because it alleviates pressure for me to memorize a prompt.
Anytime that I find myself writing something out more than 2 or 3 times, I consciously make an effort to save that text snippet. So this Text Blaze tool, and I mentioned it because it's also free, or freemium - it might be free up to a certain point, and then you can subscribe if it's something that you find valuable. But I use this tool - I don't even want to say every day, I probably use it every hour that I'm sitting at my desk.
An example: I go to create content for a platform like Instagram. I would create what I would call an Instagram carousel. What does that mean, though, to the average person? That might mean something different. So I have a text snippet that I use that is ‘backslash igcarousel’.
I'm actually going to do it right now, and I'll read it out loud. I don't have this memorized, but I tell ChatGPT what an Instagram carousel means to me. I just typed it. “Help me create an Instagram carousel, which for me means up to 10 slides total, text only, no images. It starts with a reminder or tip, and a strong tease or hook on the opening. And the second slide can provide more context. The remaining slides should provide actionable advice, and the final slide should leave the user with a question”. So rather than memorizing that, which I would never be able to do, I've got a working prompt that I can easily finesse. I can turn that into 15 or 20 slides now on Instagram. They increased the limit. Like, that's an example where my human brain gets to workshop and finesse and tweak instead of memorize.
Oh gosh. There are so many tools that I could give you. You know, any of these tools that we can use to help transcribe calls. I enjoy client calls now, because I used to have to scramble to write notes. And now I can just interact with my client, either get what I need to say to them or listen to what they say. And being able to use AI to compute that, it's just been so powerful in my business.
So in the example where you were using this text snippet to prompt for creating an Instagram carousel, when you've used that prompt or something like it, what kind of output do you get from the tool? And how do you have to iterate on it to get it to what you want it to be?
Yeah. That's a really good question. I find myself going back and adding little snippets to that prompt that I shared, in the event that I evaluate that it takes me too long to arrive at what I need. But, generally, because this is an iterative process, I arrive there very quickly.
So let me give an example of how I plan to use that this afternoon. I have recorded 3 podcasts today. I could record my audio locally. So I obviously wouldn't be able to record your answers, but I could record my own answers. I could upload that to Descript, which is my favorite tool for transcription. And I could download that transcription, and I could go into ChatGPT.
And if I'm uploading - you're gonna appreciate this, Karen, I know you are - if I'm uploading anything that has sensitive information, I will toggle the temporary chat feature on. I pay for ChatGPT, and it's well worth it.
I could also use Adobe Acrobat AI Assistant, which is my preferred AI tool, particularly in instances when I need privacy. So Adobe tools are not going to train on the data - specifically, I'm talking about Acrobat. And so I can interact with that document, up to 600 pages, and have the same kind of conversation that I could have in ChatGPT.
So those are two examples. I find generative AI is amazing at computing, condensing, summarizing, identifying things. Not so great at copywriting, unless you give a lot of input. But this is an example where I could say, you know, I'm just going to communicate with chat like I would with you, Karen. I would say, “I had this really enlightening conversation earlier in this podcast interview. I've recorded my side of the interview answers, transcript attached.” It's always good to use numbers. “Help me identify 5 opportunities for post ideas that I could expand on.” And just like a conversation, I'm just going to go bit by bit.
It's always good to give it a number instead of saying, you know, “Expand this for me.” Say, like, “Expand this into 4 to 6 sentences”, you know, or 2 paragraphs. I find it does really well with numbers, specific parameters. It's gonna give me 5 ideas there, and I'm just gonna give myself the gift of choice, and find 1 or 2 that I think are inspiring for me in that moment. And then I would define that Instagram carousel. And I'm not doing it right now, but I can pretty much guarantee with that quick workflow that it's gonna get me what I need in about 10 minutes.
Awesome. Yeah. That sounds like a good example of how you -
Is that helpful? I love examples. I'm like a nerd for examples. I always try to talk specifics.
I do. I love examples, and I love specific stories where people say “Well, this is what I tried, and this is where it did something stupid, and this is where I made a change.” So, yeah, stories are great.
That's, like, one of my favorite things I've heard all day, because I resonate so deeply with that. I have examples, because by nature of what I spend my time in a week doing, but I am not an expert at this. Maybe Sam Altman is, but, like, we're all just doing our best to figure this out. And I learned just as much from someone else that has been on their own AI journey, and that is what I love about this. I love how much we're all learning and sharing. And I'm not a believer in gatekeeping by any means. I think it's really quite an exciting time where we can all share. “Here's what I've learned. Here's what I've experienced. Here's what's worked, and here's what hasn't.” That's the energy I want this year.
Sounds great. Yes. So you've given some good examples of when you've used different AI-based tools. Are there any cases where you have avoided using AI-based tools? And if so, can you give an example of when, and why you chose not to use AI for that?
I hinted at this a second ago. I would say that I've hesitated to use it for copywriting, caveat, unless I have a good structure or workflow to collect strong input.
So here's an example. I won't write a landing page for a website for my client until I actually get on the phone with them for half an hour. And it's not so much about what I say on that call. I'm extracting the input, the perspective, to be able to get a solid output. So I won't use it for copywriting, unless I've got a lot of input to guide it in a way that makes it still sound human.
It won't sound human unless you provide a generative AI tool with specific parameters, like, “Don't say ‘unlock’. Don't say ‘game-changing’. Don't say ‘unleash your creativity’.” I apologize to the people that had these terms in their vocabulary before ChatGPT came out. But now we hear them and we go, “Yuck. That person is, quote, lazy”. Even though I don't think using generative AI is necessarily lazy. Yes, it's exciting to think about the ways in which it can save us time, but there are still certain things that we shouldn't be cutting corners on.
I'll tell you a quick story. I had a client that updated her bio using ChatGPT. And I went through it, redlining it like a teacher, and said, “You can't say this word even if it's in your own vocabulary. We gotta avoid this, because people will read it and go, that sounds like ChatGPT.” I'm pretty sure ‘groundbreaking’ was one of them! And I said to my client, I said, “I want you to do me a favor. Read this bio out.”
And I have a link. I use a tool called SpeakPipe, which is not really AI in any particular way, but it allows you to create a URL where someone can go and leave you, like, a voice note, basically. I've used it for podcasting, and the limit is, I think, up to 5 minutes. Or you can purchase it, and it's up to 15. But it allows my clients to send me an audio note. I find that if it's, like, through iMessage or something like that, then it might disappear by accident. But this way, I get an email. I can set up a workflow if it needs to go to one of my team members.
But I said to my client, “Read your bio out loud, but I'm giving you permission to go on a tangent. Expand on the sentence as you read it to me, and tell me why it's important to you. Tell me through your experiences why what you do is different than someone else who has the same job title as you.”
And this exercise is from a few weeks ago. This exercise enabled me, through the use of generative AI, to do some incredible copywriting. And I'd love to take credit for it, but the incredible part was what my client added through her content: what you say, your personality, the unique way that you and only you deliver that information.
That's a really good example of, like, technology continues to evolve, but my work in branding, I mean, that's just what I've been doing for 14 years, independent of AI. And so we lean on the technology, but in some cases, we gotta keep this shit balanced.
So we've got technology here, but there's always gonna be this craving for ‘human’. We see this reversion back to analog. I've got my little notebook here with a pen. Can you imagine? That's, like, prehistoric! As humans, I think that we like this balance. And I think, even as we have more and more and more technology in our day to day, we are gonna crave this human interaction. That is never gonna go away. If anything, we're gonna crave it more.
Yeah. It makes total sense. And I've been hearing that from other people as well that they are looking for ways to not lose our humanness in the brave new world of AI. And I've got a little notebook too - I don't know if you can see that.
I can see it. Yeah. I love it.
Because typing during an interview makes a lot of noise!
Yes.
So I'm trying to take notes on things I want to ask for follow-ups, but I'm doing it on paper.
That's great. Yeah. There's something too about just, like, the process of writing that note that helps you commit it to memory. And I'm a big fan of anything analog.
Yeah. It's funny. When I was working in corporate, I would take notes in meetings and I would type them. But when I was a student, I always found that actually writing out my notes by hand in my notebooks helped me learn it.
It makes you a better listener.
Interesting the way the brain works!
It really is. But by the same token, like, I'm not a good notetaker. I really struggled with it in university. And so now there are tools like Fireflies.ai, a popular meeting notetaker. Zoom now has one integrated into their platform. But there are many, many of these. I'm working on another video right now for one called Notta, N O T T A. They're all really great AI tools that will not only transcribe the interaction, but also give you takeaways, follow-ups, et cetera. And that's, like, alleviated so much pressure on me as a bad notetaker, to be able to be fully present with you today and not stress about those things.
Yeah, I'm taking advantage of that as well with Zoom. We've got the transcripts, and the captions on. So, yeah, all those tools are helpful.
That's a good perspective on when you don't use AI-based tools and how you are careful with using them for copywriting and keeping that human connection and human input. I'd like to hear a bit about your thoughts about how companies are getting the data and content that they use for training their AI and ML systems. One concern that we're hearing is that companies are scraping data that we've put into online systems or that we’ve published online, and they're not always transparent about how they intend to use our data when we sign up for their services or on their websites.
So I'd like to hear how you feel about the companies that are using this data and what your thoughts are about ethical AI tool companies, what they need to do to be ethical, and whether they should be getting consent and compensating the people whose data they're using for their training? Or what path do you see for them to be ethical?
Yeah. It's a big question, and I have a lot of thoughts on this. I try to focus on the variables that I can control. And so some of those variables would be what tools I choose to use and pay for as part of my workflow. As much as I love ChatGPT, I'm not comfortable uploading every piece of information about my business into a chat. You know? Sure, there are features like turning on the temporary chat, et cetera. But OpenAI, like many of the companies early to the finish line in this race to release features and technology to the masses - it's my belief that a lot of them have budgets for lawsuits. That's just the nature of being first: you sometimes break the rules.
There are rules that I care a lot about. I have a personal bias, by nature of my work: I have made a living through creativity. I find it highly concerning that companies, you know, for AI image generation have scraped millions of images off the Internet without permission - images created by people that are artists and creatives and graphic designers that make a living doing this. Now images are being generated from source images that were basically taken illegally. That is of huge concern to me.
And so I'm very selective about the tools I use and the tools I pay for. I am biased because I work with Adobe, but I'm incredibly proud of the stance, the effort, the time, and the methodical approach that a company as big as Adobe has taken. Even as a leader in technology, how do you compete with the likes of other companies that, as I said, know they're gonna get sued but care more about being first?
And Adobe has somehow masterfully navigated being a world leader in technology, but also prioritizing the needs and the wants and the desires of one of their most important communities, which are creatives, people that use products like Photoshop and Illustrator to make a living. Those products are synonymous with anyone who is a creative around the world.
And so I'm really proud that they've launched technology like Adobe Firefly, which is its generative AI engine integrated into a lot of its products. Namely, ‘text to image’ or ‘generate image’ is the feature you'll find in tools like Adobe Express.
I love that I can interact with this feature, create an image that is safe for commercial use. Sometimes we forget about that. That's not sexy to talk about, but it's certainly important when a company wants to go create an image that then is a thumbnail on a blog post.
You know, my focus is small business. But as you mentioned, when we get to, like, the enterprise level - you know, those bigger corporations that have huge amounts of money and time going into training a custom model to serve customers, to serve employees, et cetera - like, the stakes are high. And so you need to know that you're able to create with safety.
That's one example. Adobe Firefly is image generation that's been trained only on what is legally permitted: Adobe Stock, which they obviously own, as well as images in the public domain. That's all legal.
Another one is Adobe Acrobat AI Assistant I gave as an example earlier. I can't upload the manuscript of my book to ChatGPT because I don't own it; the publisher owns it. So, as handy as that would be to interact with the book that I wrote, I don't have permission to do it. And yet I can open a PDF of the book and interact and have a conversation with the book securely in a tool like AI Assistant.
So I've had the privilege of working alongside and helping promote and teach these tools to people. While Adobe maybe isn't first to market with a lot of these tools, I would say they're very early to market without compromising safety and security. And I'm very, very passionate about that, in part because it's been a huge part of how I've made a living and how I've set my clients up for success in their respective industries.
I remember last year, I think it was right at the end of February or very early March, there was a big announcement about Adobe and what they were doing with this. And I was pretty impressed at the time. I've heard a few things since then that have been a little concerning, and I don't know if these have gotten much attention. I pay attention because it's my area.
There was one news announcement that said that a small percentage, maybe 5 to 10%, of the training base did come from Midjourney, which was NOT trained on a full set of ethically-sourced material. And so there's a sense that it may have poisoned the enterprise safety of images generated with Firefly because of that.
The other thing that I had heard was - I interviewed a technical artist last year, Kris Holland. One thing that he said is that he'd had his content in Adobe for years, and it was used in the training without his consent. Some artists have said that Adobe didn't really do right by them in that regard. And I think those stories didn't get a lot of play, not nearly as much as the big announcement last February. Adobe has certainly done better in a lot of regards than some companies, but they still are not perfect.
Oh, yeah. I don't think anyone's perfect. I remember seeing a handful of those headlines. And what I appreciate too is that some of these instances or loopholes that are being discovered in the process, they're generating really important conversation.
So I don't work for Adobe, so I wouldn't be in a position to give an official response on what went wrong. I can say that it's led to some really important clarifications, Karen, like in the terms of use. There was, a few months ago, some confusion around “opting in” and certain language in the terms, which they actually adjusted based on people's response. When you are that big, there's gonna be a target on you. But I would say it's not just a target - these are igniting important conversations. I appreciate that even if mistakes have been made, it is highly important and prioritized to do right. And I think, going back to what I said earlier, it's not about being perfect. It's about us navigating this world with transparency and humility. And even a company that big, I think they're doing that well.
It was very good to see that they did change their position about the opt-in and some of the language and the terms.
That one was really confusing. To synthesize it: basically, the way that it was written, it sounded to some people like you had to, like, opt out, or the model would be trained on your content. But, actually, the wording was improved to make clear that a certain level of opt-in was required for the features built into the tools to actually function - it wasn't about training the model on intellectual property. So I think, yeah, it's requiring humility. It's requiring conversation, transparency, and action around some of these things.
And the other thing I would say is, like, there's a lot of anger, justifiably. It's a lot of rapid change. There are people without jobs or there are people that have been impacted positively and negatively by this, and there's just a lot of anger being hurled around.
And so it's understandable that emotions are high. But at the same time, I'm really an optimist at heart, sometimes to a fault. I think there are a lot of things that are very exciting about this new age. For people like me, and for people like you, that are small business owners, it's really quite exciting.
You know, change is always inherently disruptive. And I think there are a lot of good intentions, and a lot of people who are interested in adapting. But they want to do it in a way that makes sense for them and makes sense for the rest of the world. And so it's always good to see some companies who are at least trying to do the right thing and, as you said, responding when problems are flagged to their attention.
Yes.
Still a ways to go.
You've mentioned a lot of AI-based tools that you have used. And I'm wondering if you feel like the tool providers have been transparent with you as a user about sharing where the data came from that they used for building the models that went into the tools? And whether or not the original creators did consent to its use?
It's a great question. I'm an optimist, and I like to give people the benefit of the doubt, but this is a race. And the cost, in some instances, of being first or being early, you know, is taking shortcuts. So it is a concern of mine. It's definitely a concern of mine. At the pace at which we're moving and even the requirement for me in creating content, there's a certain level of trust that I put into people, just like when I'm doing business, independent of AI or technology.
But it's a concern of mine. It's something I think a lot about. And so, it's almost something that I want to, like, formalize as part of my evaluation, almost like a rubric or criteria that I'm able to evaluate when I say yes to a brand deal. I've also said no to a brand deal. If I'm not 100% clear on a company's ethics or values, then I just won't take the risk. I don't want to recommend something.
But at the same time, even since I wrote the book and it's been published, some of the tools have actually gone under. Like, they're defunct now. And I also knew that was coming. So at the very end of the book, I say, “There's a chance that companies have changed, restructured, or rebranded since this went into print. So go to my website, where I have a little more control over my tool directory and what I'm recommending right now.”
I knew that would happen, and people are like, “How did you write a book about AI in the traditional publishing route?” And my answer is, “I did the best I could, you know?” I just made my best effort to do the best job I could, focusing on what is most evergreen, which is not the latest advancements in chip development. It's tools and workflows and examples.
As consumers, or as members of the public, our personal data has almost certainly been used by different AI-based tools or systems. I'm wondering if you know of any specific cases that you can share? And, obviously, without disclosing any sensitive personal information.
One that concerns me, as someone who works in branding and works with creatives, is big media companies scraping websites like Behance, which is now owned by Adobe. But it's the one that I used to start my own branding agency and even hire my graphic designer in the first year of forming my company. But there are instances where images have been downloaded, at scale, from tools like that. Because, obviously, there are high-quality visuals there that would be perfect for training a model. But that's been done without permission, and it's highly concerning. If we had to make a prediction, there's gonna be even more instances of this, this year and moving forward.
One of my huge concerns is privacy. I mean, deep fakes is very scary to me. They're getting very advanced as we see advancement this year with AI video. Deep fakes and privacy, you know, so willingly giving personal information and then having that end up in a place that you don't want it, is a concern to me. At all levels - not just enterprise - also, you know, small business.
Yeah. That's one thing with the terms and conditions that you mentioned earlier, with regard to Adobe. They're not always written in such a way that people can actually understand what they are and aren't opting into. Or they're not written for people to understand. They're 10, 20 pages, and over 90% of people don't read them. And, honestly, if they can't be understood, then I can't really fault them for not reading them.
I know! If you have to be a lawyer to be able to understand the terminology - not just understand the terminology, but understand, like, the legality around how something is being stated, it's tough. It’s tough. I think we're all doing our best, but it's a new frontier. And it is kind of scary what we're opting into without our knowledge.
So in cases where you've looked at a tool - and I don't know if this comes up in your evaluation - I don't know if you ever have looked at terms and conditions, and what they're saying about how they would or would not protect your intellectual property, what you're building with the tool. Do you have any concerns in that area that you've noticed? Or have you seen anyone who's doing it well?
Yeah. That's a good question. I haven't gone super deep with training my own model that would include sensitive information. I'm naive, in some respect, to this battle between, how do you keep up with technology, but how do you also stay compliant and respect privacy? I've had so many people say to me, “Phil, I love all the tools and apps that you share, but my company won't let us use or access anything like that.” There are companies that won't even let employees access ChatGPT on a company computer.
By nature of my work, you know, I haven't had to deal with a ton of that. A few clients in health care. But even then, it's like website creation, social media strategy. This stuff is generally public-facing. So I haven't yet. That's not as much of my world, but it's something that I empathize with clients or people that are dealing with that, because it's really tricky.
Yeah, I interviewed someone who advises health care startups. And one thing she pointed out was, with HIPAA and medical records, a lot of the information that is most valuable to data brokers is not so much your diagnosis, but all the personal information about you, and who your relatives are, and everything else.
Yes!
They use that to connect people and draw conclusions. And that data is in some ways more valuable to a broker than your actual medical condition. They may not care about your medical condition. Other people do, because they want to market to you.
Exactly. Yeah. Exactly. I mean, we can all also relate to this idea that, like, you know, you're having a conversation with a friend about, “Oh, I want to travel to Italy before the end of the year.” And then all of a sudden you hop on Instagram and you have an ad for Italian vacation packages. It's tough. I don't have the answer for this, but it's a question that lingers in my mind, and it's something I'm concerned about. And I feel for people that are also trying to navigate this.
It's scary. I think we're gonna see a rise in ‘no-technology’ environments, like people going on retreats or people, like, disconnecting from technology, and I think that's also wonderful. I worked really hard last year on a variety of projects, and I essentially took November and much of December off. I mean, I'm a small business owner, so I don't actually fully shut off anytime. That sounds more stressful to me! But I was very conscious about just really taking time to, like, reset and come into the new year fresh.
But I think we're gonna see that. It is this kind of balance, right? It's like leaning in and technology and staying up to date and getting through our to-do list, but then also intentionally disconnecting. And I think it's great.
And you mentioned the ads related to things that you said when you weren't online. I don't know if you heard the news that just broke on January 2nd about Apple settling that lawsuit, with regard to how they were using Siri to capture conversations, even when people had Siri turned off, and then selling that data to advertisers. I mean, Apple is one of the companies that I had thought was more concerned about privacy and doing things the right way, so it's disillusioning to hear that they've been doing this for many years.
It's disillusioning, and it's upsetting because, again, it's this fight to be first, but at what cost? A cost that they've likely budgeted for in their legal - it's really concerning. It feels almost like if a friend violates your trust. It feels similarly. We rely on these brands and these tools, and we trust them. We spend a lot of money on them, and it's tough. I think we're gonna continue to see that, unfortunately. That's definitely one of the downsides. So when you're first in the race, some people get there by cheating.
That is true. Yeah. So one thing I think that we're all noticing is that there's been growing distrust by the public of these AI and tech companies. And I think in a way, it's healthy, because we're becoming more aware of what exactly they're doing with our data when we opt into using something, or use a website, or allow our pictures to be taken, things like that. So I think that distrust is actually healthy, and the important thing is that we do something about it. And that we, as consumers, exert the pressure to say that we want them to do something about it. So I'm curious what your thoughts are about that, and what you think these companies would need to do to, say, earn (or earn back) and then keep your trust?
Yeah. That's the golden question right there. If I had the answer to that, I feel like I'd be retired on an island somewhere. So here's what I'll say. I appreciate people like you that are concerned about this, and even creating content around this discussion. Because I think, yes, it's unfortunate when these things happen, but as you said, if it ignites a conversation, that's enlightening and it's productive. It's hard. Yeah. If it leads to a conversation, but then a company also demonstrates that they take action, I think that's what we need to come to expect. It's not just the talk, but also the execution. If a brand has done wrong, then what are they doing to make it right?
Again, you know, my world is a lot of Adobe, and I think they know they made some mistakes, but they took swift action. They included the community around that dialogue. And people have felt much better about the progress made as a result. So I think we're leaning on community. We're leaning on action, and I think consumers are expecting that, and they should.
One thing you mentioned about the companies that are sacrificing things and cutting corners in order to get to market first. One thing that I'm always looking out for is trying to find companies that are doing things ethically or trying to do things the right way. And the vast majority of AI tools out there weren't trained on ethically-sourced data, but there are some that are. So I always try to give them a shout out if I can find them.
That's wonderful.
One thing I remember Melanie mentioning, from the session that you had with the graphic designers, was that you were identifying some tools that you felt were more ethical or that were safe for them to use. So I like to see that. I'd like to hear a little bit more about your book, actually, and talk about that aspect of it.
Sure. So, I was actually overwhelmed when they asked me to write a book. Because I told you, writing is something I can do, and it's something I did in university. I did self-publish, back in 2014. I have a book about Twitter, actually. But writing is kind of secondary to what I do day-to-day, which is, “Okay, maybe write a script for a teleprompter.”
But, yeah, much of my world is presenting, video production, and delivering information on the stage. But in that moment of overwhelm, I leaned into what excites me, and what I do know, just from practice. And recommending tools that I've tried, and even tools I haven't tried as much (there are over 150 in the book), ones that I've at least researched enough to know I feel good recommending them.
You know, there's only so much I can do as an individual, as a small business owner, in those recommendations. But I also feel, Karen, like because I've been doing this for 3 years, I can leverage the time that I've taken to try a tool, recommend it, feel good about the conversations I've had with the brand. That's not something that I think I could have done in a year at the scale that I was able to with this book, because it's been something I've been chipping away at every single week. The book was equally a research project, as it was a thought leadership exercise. Because there are later chapters in the book that, you know, aspects of business that I really don't know that much about.
And so I took a lot of time to learn, to try, to have discussions with tools, with clients of mine in a variety of industries, to be able to work out, “Okay, here's an example of a workflow that would benefit someone.” So the earlier chapters were much easier. Talking about social media, marketing, even operations, sales, these are things that have been on my mind as a small business owner for well over a decade.
But other chapters like, data analysis, security, research - those are things I haven't done as much in my work. So I really enjoyed the process of learning, I guess? It sounds funny, because normally people write a book about their expertise. But in this case, a percentage of it was my expertise, but also a percentage of it was a learning moment.
And I feel really good about where we landed on that. People think, “Oh my gosh, writing a book, that's such a big project.” But, actually, the writing wasn't the biggest part of the project. It was the researching, you know, fact checking.
I signed a contract also, as part of this deal, saying that I wouldn't use AI to write the book. First thing I did was, when I got the book deal, I ordered about 4 or 5 books from Amazon on AI, and I flipped through them. And my lord, they were bad. It was very clearly quickly AI-generated text with no soul. And I swore from that moment, you know, even if it takes longer, I need to negotiate with the publisher. Like, I just wouldn't create that. I still want it to be in my voice, and I want to leverage what I know, and also what I don't know.
And so that's how I would describe the process of the book. I still find the idea of AI or being an expert in AI overwhelming. I have imposter syndrome on that. But I think in a way it's actually good, because it keeps me humble, and I think we should all be humble, as we navigate this.
It was interesting that your publisher stipulated not to use AI for the writing. Do you know if they have any deals in the works or any policies regarding whether or not they will allow the content of your book to be used for training some other AI-based system?
Yeah. That's a really good question. I signed that contract about a year ago, so I'm sure the stance has slightly changed. They have also said to me, like, “Phil, even we're experimenting with some AI tools, within our own workflows of reviewing and analyzing, et cetera.” I did negotiate with them because I said to them, from the early days, “If I just sign this, I'm not going to be able to write the best book that I'm capable of writing. How do you write a book about AI, and not use AI in its creation?”
So we did land in a place that was a happy medium. We landed in a place where I was able to include examples of prompts, outputs based on inputs that I created, even research on tools. You have to use AI to verify some of that information. So I'm happy with where we landed on it.
But at the same time, they know that there's opportunities like you've described, where it could be used. But the publishing world, they'll even admit it. It's so slow, slow, slow. So, I don't even know if they're that far yet.
Yeah, I’m just curious because I've been hearing some news about different publishers that are making deals with the large AI companies to license out the content of the books that they, I'll say, “own” - the ones for which they are at least a rights holder in some regard. And some of the authors either weren't allowed to opt out, or given a choice about opting in, or weren't directly compensated.
One author, who has written a lot of great books, had at least 3 of her books used by her publisher and made available (the whole content) for training an AI-based system without her permission. She had evidence that someone used a Google AI tool, and it came back with a huge snippet out of her book, with no credit whatsoever. And she had no control over that. And that's pretty distressing for an author.
It is distressing. But I'm also aware of, like, how it's evolving quickly for search engine optimization. There's a certain amount where you don't want to be found, like, if it's intellectual property or, you know, courses, books, things that are sold at, let's say, a premium or as a product.
Again, I'm an optimist, Karen. It's actually a really exciting time to be found as a thought leader and to have your information. The attribution is definitely a concern. But I'm also happy to see that the world is shaken up a little bit in terms of search engine optimization. Actually, it's kind of archaic. Like, how do we rank on the first page of Google?
Now the new version of this is, like, how do you strategically position your content to be found in generative AI tools? It's interesting. All of these stories, there's gonna be so many of these, like, landmark cases that hopefully set a precedent that respects the creator - be it an author of a book, be it a graphic designer of a branding project.
We're gonna see this pop up across the board, and I'm hopeful. You know, the legal system also moves at a pretty slow pace, but, hopefully, there are these parameters put in place to protect people. I think that's the human part that we can't miss.
I've seen an observation that when Google was indexing everything in the world, their search drove people TO the book or TO the site. And so it was a benefit to people to have their content indexed and searched.
But when they're providing that content without a citation, without a credit, without a link, then it's basically just stealing from those people.
Exactly.
They're not getting the exposure that they ought to be getting, that they deserve to get. And that's where we need better solutions. One group, the Cultural Intellectual Property Rights Initiative (CIPRI), calls it the 3Cs of creative rights: the Consent, Credit, and Compensation that all creatives are entitled to.
And we need two things: one is regulations, and the other is the mechanisms for supporting them. When we use content for training an AI-based system, we need to ensure that we can and do give the creators who provided that content the right to Consent or not, give them Credit, and Compensate them appropriately. It's technically hard, but, you know, AI is already hard. So being able to do that as well is certainly feasible.
I mean, another good example is the Content Authenticity Initiative, which was started by Adobe, but has really become a force in the world of attribution and content credentialing. And I'm really excited about the advancements that have been made on that. So, metadata that actually tells you which images were sourced in the creation, the transparency around that.
Actually, I gave Synthesia as an example. Years ago, I said, “How do we make sure that my avatar isn't used for government propaganda in another country?” You know? And they said, “We're a part of this organization called the Content Authenticity Initiative, along with 800 other organizations. And this is our commitment to transparency in the use of AI.” And that was before I was even working with Adobe. And now that program, don't quote me on it, I think includes over 3,000 companies and media organizations, large ones around the world.
I've joined the Content Authenticity Initiative as an individual member, but it is a commitment to transparency in the use of AI. And so, yes, as all these exciting developments pop up, we need to counterbalance it with safeguards, safeties for the types of people that you've outlined as well - authors, creators, intellectuals, any of these people.
I had read about the Content Authenticity Initiative, and I think there are 1 or 2 others that are similar. I hope they'll converge so that we have interoperability across them all, and not a fragmented set of systems.
Yeah, I want that too. Yes. I think it will happen. It’s just, some of this stuff takes time, which - it's daunting to reflect on that, at the speed that it's evolving. Yeah, it's a concern of mine. We need the safeguards in place almost quicker than the technology evolving itself.
Yep. Alright. Well, thank you so much for making the time for the interview with me today, Phil! I appreciate it. Is there anything else that you'd like to share with our audience?
No. Other than, I really appreciate that you're asking a lot of the hard questions. I don't know if it makes for a good podcast interview. I think it does, but I am just as quick to say, “You know, I don't know the answer to that. It's a really good question.”
Because with the creation of content like this, and having conversations around not just the sexy evolution of this or that AI platform or tool, we need to be asking the hard questions. So I really appreciate the way in which you focused on that, and help increase people's awareness of “the good, the bad, and the ugly”. Because I think it's going to lead to a well-balanced, you know, or better-balanced world, where people can enjoy the use of this technology and reap the benefits, but still make a living and be happy in their jobs.
Yeah. The awareness is one of the goals from this series! The other part, I think, is just there's so much hype around AI and, oh, it can do all these things. Everything about AGI and how it's going to solve everything for us. I think the reality is a lot more interesting and complex than that, in terms of how people are using it, and have been using it in some cases for years - using machine learning, using algorithms, using recommender systems and optimizations and everything else that falls under the AI umbrella. And I think it's important for people to say, “Okay, well, all that hype, that's not real, but this other stuff is real. This is what real people are doing with it and how they're using it in different countries and all, maybe different than how we're using it here in the US and how people in different industries are using it.” So I think there's not as much visibility into that.
I love hearing how you're using it for your branding and marketing as your strategy business, and that's a great set of insights for me. Yeah, that's not a world that's been familiar to me, so I appreciate you sharing that.
That's really cool. Well, equally, I appreciate, as I said, the work that you're doing, and I've also learned from you. And that's, I think, why I articulated that. Like, that's, I think, the right attitude. This is such a BIG world with so many facets that there's so much for all of us to learn. I will get on stage, and I'll field questions, and I might be able to answer half of them. But the other half, I might not know the answer. And sometimes all I'm going to say is, “That's a really good question. You know? That's something we should be tracking.” So that's a really cool output from this, and I really appreciate it.
Great. Well, thank you so much, Phil!
Thank you!
Interview References and Links
Phil Pallen on LinkedIn
Phil’s book “AI For Small Business”
About this interview series and newsletter
This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I Don’t Use AI”.
We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!
6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!
Series Credits and References
Audio Sound Effect from Pixabay
If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊