AISW #001: Kathy Gerstorff, USA-based Guerilla Marketing Coach 🗣️ (AI, Software, & Wetware interview)
An interview with USA-based author, poet, blogger, and Guerilla Marketing Coach Kathy Gerstorff ('WriterKat') on her personal stories of using AI (audio; 9:35)
Introduction
I’m delighted to welcome Substacker Kathy Gerstorff as our first guest in this interview series! Kathy is a US-based independent author, poet, blogger, and certified Guerilla Marketing Coach known as ‘WriterKat’ 1. Her newsletter, Indie Author Insider, features news, events, and support for the business side of writing. Today she’s sharing her experiences with using AI to help herself and her clients.
This interview is available in text and as an audio recording (embedded here in the Substack post, and in our 6P external podcasts). Use these links to listen: Apple Podcasts, Spotify, Pocket Casts, Overcast.fm, YouTube Podcast, or YouTube Music.
Interview
Kathy, thank you so much for joining me today! Please tell us about yourself, who you are, and what you do.
I am a writer, poet, blogger, podcaster, and founder of the Indie Authors Association. I am currently working on my first historical fiction novel. I love to write and publish books to help good causes. I have published a collection of poetry, a children's book, a devotional, plant-based cookbook, and several anthologies including Break The Cycle, Naturally Yours, Remember When, Nature Matters, SCARS, Who Told You That, and Lighthouse 117. When I’m not writing, publishing, or helping authors, you can find me reading, swimming, hiking, gardening, or planning my next adventure! You can learn more about me and my creative projects at writerkat.com.
Your work in writing and publishing sounds awesome. Let’s talk more about that.
What is your experience with AI, ML, and analytics? Have you used it professionally or personally, studied the technology, built tools using the technology, etc.?
I am an early adopter of new technology. I have experimented with AI and used it as a tool for self-publishing.
Can you share a specific example of how you have used AI for self-publishing?
I’ve used DALL-E to create book covers. Canva has DALL-E integrated, which is convenient. I’ve used ChatGPT for writing prompts to get past writer’s block. I’ve used Grammarly for proofreading and editing. I’ve also used Speechify to read my stories out loud, which helps me catch typos. I use Wondercraft to create text-to-speech podcasts. I’ve also experimented with ElevenLabs, Scribophile, Sudowrite, Rytr, Jasper, Midjourney, and many other similar programs.
Wow, that’s a lot of examples, thank you!
What are your thoughts on how well the AI features of those tools worked for you, or didn’t? What went well and what didn’t go so well?
Wondercraft works great for me to help busy authors promote their books. It still takes time to assemble and edit the conversation, but it can be done at my convenience. Also, a lot of authors are shy and prefer to write out their thoughts rather than answer impromptu questions live on audio or video. I like having this helpful alternative.
When I first used DALL-E it was a bit rough around the edges, but like all AI, it has improved. It’s wonderful for authors who do not have the budget for cover design or illustrations. I still haven’t figured out how to get consistent designs on illustrations. I’m sure it’s just a learning curve issue.
I have played around with a few AI tools for writing books, but haven’t been satisfied with the output compared to the amount of input. It’s easier and quicker just to write it myself. I do like Marlowe for analyzing my novel and suggesting what I need to improve.
Marlowe is new to me; I’ll have to check it out!
A common and growing concern nowadays is where AI/ML systems get the data and content they train on. They often use data that users put into online systems or publish online, and they’re not always transparent about how they intend to use our data when we sign up.
How do you feel about companies using data and content for training their AI tools? Do you believe that they should get consent from, and compensate, people whose data they want to use for training? (Examples: musicians, artists, writers, software developers, medical patients, students, social media users)
I have mixed feelings about it. I have been writing online since the AOL dialup days. When I publish online, I understand what I write is available to the public. Of course I would love to be compensated if my work is used to train AI, but if it’s used for the greater good I’m willing to make the sacrifice. I am a technophile. I love technology. I know not everyone feels this way. I’m sure regulation will eventually catch up to this new use of technology.
As a member of the public, there are probably cases where your personal data or content may have been used, or has been used, by an AI-based tool or system. Do you know of any cases that you could share?
I get spam calls every day which means my personal data has been exposed. I’m hoping AI can be used to curb spammers and scammers!
Stopping fraud is definitely one of the strong application areas for AI and ML. It’s a bit of an arms race, though. And it’s hard to know exactly where those spammers get our info. About 5 years ago, I was at a home show and reported on what I found out then about what those companies do with the data they collect at those kiosks 2.
Do you know of any company you gave your data or content to that made you aware that they might use your info for training AI/ML, or was it a surprise? If so, were you credited or compensated in any way for the usage? How do you feel about how it was handled?
The only time I’m consciously aware of how my data is used is when I sign up for Apple Apps because it’s disclosed upfront instead of buried in the terms of service. I have never been compensated for use of my data.
Has a company’s use of your personal data and content created any specific issues for you, such as privacy or phishing? If so, can you give an example?
Yes. I'm constantly dealing with phishing calls, texts, and emails! Security breaches expose our personal information all the time. I get calls almost every day from someone wanting to help me resolve back taxes. I'm not aware of owing any back taxes, so those are easy to ignore. Then I sign on to my banking app and see a warning that scammers are pretending to be the bank via text and phone calls to access customers' accounts. I've had my bank account compromised before, and it's very frustrating. They were slick though, taking a few dollars at a time instead of one big amount. Fortunately, they didn't get much money before I noticed what was happening. The bank shut it down immediately and put the money back in my account within a few days.
I read in tech goddess Kim Komando's newsletter that hackers now have millions of patients' medical information and are wreaking havoc in their lives. You can see if your health care provider was affected by visiting data.usatoday.com/health-care-data-breaches. Privacy and phishing are big issues that I'm hoping AI technology can be used to help resolve.
In the meantime, I'm hyper-vigilant about downloading files, clicking links, and answering phone calls. It's exhausting, and unfortunately part of our online world.
It sucks that you had to deal with your bank account being compromised. I’m glad it got resolved so quickly for you. That’s a great tip on checking if our health care data has been exposed. Thank you, Kathy!
Public distrust of AI companies has been growing. What do you think is the most important thing that AI companies need to do to earn and keep your trust? Do you have specific ideas on how they can do that?
Transparency comes immediately to mind. I asked ChatGPT this question and it agreed that transparency is necessary. It had a lot more to say on the subject.
(Text of ChatGPT’s response is in the footnotes 3; you can also listen below.)
Anything else you’d like to share with our audience?
AI is a tool that offers invaluable benefits for independent authors who have more time than money to get their stories out into the world. AI is not going away. Authors will adapt or go the way of traditional publishing, which is going kicking and screaming while the technology that created Amazon removed gatekeepers, leveled the playing field, and changed the game!
Google Docs Smart Compose suggested this edit to my response:
“AI presents invaluable opportunities for independent authors with limited budgets but an abundance of time, enabling them to share their stories with the world. The advent of AI is irreversible; authors must embrace it or face the fate of traditional publishing, which has been challenged by technological advancements that have empowered authors, broken down barriers, and revolutionized the literary landscape.”
AI has indeed revolutionized the literary landscape. Ready... or not!
It has indeed! I like your response better than what Google Docs suggested, though. 🙂
Conclusion
And that’s a wrap! Kathy, thank you so much for inaugurating our interview series, and for sharing how you’re taking advantage of AI tools for yourself and your clients.
And folks: in the audio of this interview, the questions are my natural voice, manually recorded, and the answers are AI-generated text-to-speech audio that Kathy created with Wondercraft AI 4. Wondercraft is one of the three ‘voice cloning’ tools we covered in our May 10 article 5. Kathy also used different standard Wondercraft voices for reading the snippet from Google Docs Smart Compose.
How many of you listeners would have guessed that Kathy’s audio was AI-generated? (If you’ve only been reading, give it a listen sometime. The sound quality is pretty impressive.)
About this interview series and newsletter
This post is part of our 2024 interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains) with AI-based software tools or being affected by AI.
We want to hear from a diverse pool of people worldwide in a variety of roles. If you’re interested in being featured as an interview guest (anonymous or with credit), please get in touch!
6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read (and listen to). To automatically receive new 6P posts and support our work, consider becoming a subscriber! (If you like, you can subscribe to only People, or to any other sections of interest. Here’s how to manage sections.)
Credits and References
Kathy Gerstorff’s website (writerkat.com)
Here’s how ChatGPT responded to Kathy’s question about transparency. You can listen to the audio Kathy generated with a standard Wondercraft voice here (length: 1:49).
“Public distrust of AI companies is a significant concern, and rebuilding trust requires a multi-faceted approach. Here are some important steps AI companies should take to earn and keep trust:
Provide more transparency into the algorithms and data sets used. This includes publishing the methodologies, decision-making processes, and the data sources.
Allow for third-party audits of AI systems to ensure they are operating as claimed and are free from biases and other issues.
Establish and adhere to clear ethical guidelines for AI development and deployment. This includes creating frameworks for responsible AI use.
Proactively comply with relevant regulations and work with governments to shape effective policies that protect users.
Implement robust data security measures to protect user data from breaches and misuse.
Give users control over their data, including options for data access, correction, and deletion.
Regularly audit AI systems for biases and take steps to mitigate any found. This should include diverse data sets and inclusive training processes.
Ensure diverse teams are involved in AI development to bring multiple perspectives and reduce biases.
Communicate clearly and honestly with the public about what AI can and cannot do, its risks, and its benefits.
Engage with the public through education initiatives, open forums, and transparency reports to build understanding and trust.
Design AI systems with a focus on ethical use and societal benefit, avoiding applications that could cause harm.
Prioritize user interests and well-being in AI development and deployment decisions.
By focusing on these areas, AI companies can work towards building a more trustworthy relationship with the public. Implementing these practices not only fosters trust but also ensures that AI technologies are developed and used responsibly.”