AISW #017: Steph Fuccio, Denmark-based media consultant 🗣️ (AI, Software, & Wetware interview)
An interview with media consultant Steph Fuccio on her stories of using AI and how she feels about how AI is using people's data and content (audio; 18:18)
Introduction - Steph Fuccio interview
This post is part of our 6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
This interview is available in text and as an audio recording (embedded here in the post, and in our 6P external podcasts). Use these links to listen: Apple Podcasts, Spotify, Pocket Casts, Overcast.fm, YouTube Podcast, or YouTube Music.
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
Interview - Stephanie Fuccio
I’m delighted to welcome Steph Fuccio as our next guest for “AI, Software, and Wetware”! Steph, thank you so much for joining me today. Please tell us about yourself, who you are, and what you do.
Thank you so much, Karen. I'm excited to be here. I'm an independent media consultant, mostly working in podcasting, but then branching out to newsletters on Substack and YouTube as well to support the podcast. And I usually work with business folks who want a dynamic yet genuine online presence to attract the right kind of clients.
But I'm not just a solopreneur, I'm a content creator myself. I actually started creating content in 2017 and did that for a few years before I actually got into the media business. That's when I started building Coffeelike Media.
Because I started out as a content creator, I tend to use a lot of tools first on my own projects, and then I'll advise my clients on whether it's something that would help them, how they can use it, and train them on it. And that's partly what I've been doing with ChatGPT over the past year or 2 as well. And I'm an American who currently lives in Denmark of all places.
That actually sounds very cool. How did you end up in Denmark?
I've actually been overseas for over 20 years. My husband and I met when we were both teaching English in Asia, and then we decided, for a change of pace, to come to Europe just before the pandemic hit. He actually got a job 2 years ago in Denmark, and so we wandered over here. So that's why we're here right now.
Thank you for sharing that background, Steph. Very cool that you've lived abroad and you're now based in Europe. The AI regulations and attitudes are definitely different there than in the US.
So what is your experience with AI and machine learning and analytics? Have you used it professionally or personally? Have you studied the technology? It sounds like you use it professionally.
I dabbled in AI in a linguistics PhD program about 8 years ago. That usage wasn't too exciting for me. It was more like red-penning someone's language in a fancier way to help them "write better" (in air quotes), and I wasn't too excited about that. It actually wasn't until ChatGPT came along that I started to really appreciate large language models for writing purposes. And I started to use them a lot for my own writing, whether it be for marketing copy or literally anything I write online. I actually use ChatGPT for a lot of it now, and I've been starting to help others do that as well. And now I'm branching out into things other than just writing.
It's that interactive part of large language models that really, really struck a chord with me. It wasn't just changing or correcting, but it was pinging off of each other. I don't know how else to say that. Like, prompt chaining is my favorite thing in the world right now 1.
Very cool. Can you share a specific story on how you have used AI? What are your thoughts on how well the AI features worked for you? What worked or what didn't? What went well and what didn't go so well?
Yeah. One of the first things I started to do was to take, like, for example, a podcast description. I know you know this, Karen, but a lot of folks who are listening might not be as familiar with the back end of podcasting. It's that description that you see when you first click on a podcast, like 3 or 4 sentences or so describing it.
And I would take that and I'd put it into ChatGPT. And I would do the prompt chaining and say, okay, ask me questions about this, so we can refine this and get as many of the keywords in there as need to be there. Make it as clear as possible, make that first sentence really hook in the people who would want to listen to it - not to deceive them or play the kinds of tricks that pull people in, because they're gonna know it's not the podcast for them if I do that. But I wanted to have that back and forth with the description and refine it so that when people saw the description, they'd go, that's a podcast for me, and then when they press ‘play’, they're more likely to listen and to want to be in that podcast ecosystem.
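[Readers: For anyone curious what this kind of "prompt chaining" could look like outside the ChatGPT app, here's a minimal, hypothetical sketch using the OpenAI Python SDK. Steph works interactively in the chat interface; the model name, prompts, and placeholders below are illustrative assumptions, not her actual workflow.]

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Step 1 of the chain: share the existing description and ask the model
# to interview us about it instead of rewriting it outright.
messages = [
    {"role": "system",
     "content": "You help podcasters refine their show descriptions."},
    {"role": "user",
     "content": ("Here is my current podcast description:\n\n"
                 "<paste description here>\n\n"
                 "Before rewriting anything, ask me three questions that would "
                 "help clarify the audience, the keywords, and the hook.")},
]
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print(reply.choices[0].message.content)

# Step 2 of the chain: feed the model's questions and our answers back in,
# then ask for a revised 3-4 sentence description that uses them.
messages.append({"role": "assistant", "content": reply.choices[0].message.content})
messages.append({"role": "user",
                 "content": ("My answers: <answers here>. Now draft a 3-4 sentence "
                             "description that front-loads the hook and works in the "
                             "keywords we discussed, without overpromising.")})
revised = client.chat.completions.create(model="gpt-4o", messages=messages)
print(revised.choices[0].message.content)
```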
So that was one of the first uses that I did with it. A lot of podcasters also like to look at their podcast downloads quite often to see if there's an increase or a decrease or what's happening from episode to episode, and I've recently been playing with the visualizations that ChatGPT can do with ChatGPT-4o. You can upload a CSV file of data, and it will make visualizations for you. And it just blew me away how beautiful the visualization was.
It can be a little messy. If you give it everything, it will make everything, and then it's really hard to see the details. So you have to, as with everything with ChatGPT, give it a piece at a time and have it analyze and visualize that piece, and then kinda come back and ask other questions and have it visualize those too. But it's a really beautiful way for someone who's not very good at making visualizations to write or talk through their podcast data and really see it in a different way.
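[Readers: When ChatGPT makes these charts, it typically does so by writing and running a bit of Python behind the scenes. A rough, hypothetical equivalent for a podcast downloads export might look like the sketch below; the file and column names are assumptions for illustration, not Steph's actual data.]

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export: one row per episode with a download count.
downloads = pd.read_csv("podcast_downloads.csv")  # assumed columns: "episode", "downloads"

# Plot downloads per episode so increases or decreases are easy to spot.
fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(downloads["episode"], downloads["downloads"], marker="o")
ax.set_xlabel("Episode")
ax.set_ylabel("Downloads")
ax.set_title("Downloads per episode")
plt.tight_layout()
plt.show()
```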
All right. Very good.
But even with all of these cool things, my favorite favorite thing ever is the voice feature on the mobile app. It's on Android and Apple. It's on everything. And it's literally just talking to - I call ChatGPT Chatty. And so it's just talking to Chatty.
[Readers: See this video of Steph training someone how to use the mobile phone voice function 2.]
And, yeah, there's a lot of hallucinations, and there's a lot of miscommunication, so to speak. There are times where it just kinda stops. Like, when it's talking and I'm like, that's too much or that's not what I want to hear, and I interrupt it and hit the white bubble that comes up. A lot of times it'll just pause indefinitely, and you have to kinda x out and come back in. But even with all of that, especially for talkers like me, there's something really beautiful about being able to say what you're thinking and talk through a problem or a question and have that kind of interaction come back at you.
Okay. Very good. And you mentioned hallucinations. It's definitely wise to be wary about that. It's really interesting to me to hear about how talkers can benefit from working through thoughts by talking with a large language model. I'm not that kind of a talker myself. I find the same benefits from writing. But it sounds like a useful application. And the way you're using Chatty, you're not asking it to generate words for you. You're using it to ask questions and help you clarify your thoughts.
I love that you said Chatty in that. Thank you. Yeah. Because there is a part of me that's, and I know you've covered this with a lot of past guests, that there's a certain discomfort with where a lot of the information's coming from.
Pretty early on, I made a decision not to necessarily do research or have it create things for me. I wanted to have it help me with what I already had. And part of that is with the back and forth talking bits where I'm asking it to ask me questions, which still might come from somewhere, but it feels less slippery, less dangerous.
And part of it then is I'm feeding in my own content, whether it be transcripts or copy or something like that, and then asking it to change it according to things that I needed to do. Those feel like safer ways to use it right now while things are still being figured out.
And I would imagine that in the end that you feel like it still sounds like you.
Yeah. Because, oh my gosh, I mean, ChatGPT clearly was trained on a lot of academic language, and the vast majority of what I do is not that formal. So I'm constantly asking it, no, less formal, less formal. And then it starts bringing up slang that I've never even heard of, and then I have to go, okay, wait, wait, wait. That was a bridge too far. Let's go back one. Because I end up on, like, Urban Dictionary looking up what things mean. I'm like, how did you even know that?
Oh, that's funny.
So it sounds like you're making good use of the large language models and the AI tools. Are there any things that you've avoided using AI tools for?
Absolutely.
Can you share an example?
Yeah. Absolutely. Except for one instance: when the AI artwork tools first kind of went poof and everybody went, wow, this is amazing, I did, like, one profile picture and then just went, this doesn't feel right.
I just feel like the art side of it is one of the more slippery aspects of AI, in terms of where the training data is coming from. I mean, words are words, and we do use the same ones over and over and over, but an artistic creation feels more unique than that. And so I just haven't felt comfortable using it for any visuals. So I'm still in - I don't use Canva, I use Stencil - I'm still in Stencil making my own clunky artwork until everything gets sorted out, because I just don't really feel like that's a safe place for me to be yet.
Yeah. I agree. I'm staying away from those tools too, and I'm finding that a lot of people feel that way, not just artists. Likewise, a lot of musicians feel the same way about their music, and many writers feel that way about their words. Although, as you said, words are (different) - maybe there's a sense of them not being quite as unique as art. But everyone seems to really feel that. I don't know if it's self-selection, that the people who agree to do these interviews already have that slant! But so far, almost everyone says that's where they draw the line - that they don't like or don't want to use it for art.
Yeah. I don't know what it is. I mean, it's true. Writers do have a unique tone, and you can actually ask Chatty, write it like this. Write it with that person. Write it like that person. And it can sound really close to their tone. But that still doesn't feel as slippery as artwork that might be a direct representation of somebody who hasn't authorized their stuff to be trained on. I don't know. It just feels different to me.
You've alluded to this a bit. There's a common and growing concern nowadays about where AI systems get the data and the content that they train on. So I've been really happy to see the emergence this year of ethically trained large models for various types of content, and I hope this continues to grow. But most of the big tools in the market today aren't ethically trained.
And what we're seeing is that companies often use data that users put into online systems or publish online, and they're not always transparent about how they intend to use our data when we sign up. So how do you feel about companies using data and content for training their AI systems and tools? Particularly, do you think that ethical AI tool companies should get consent from, and compensate, the people whose data they wanna use for training?
This is such a sticky wicket for me. Yeah.
The thing is, because I'm not a tech creator - I'm a consumer and on some level a coach or trainer or what have you - I don't really see the back end of things being created or shared, or how they build up to the point of being decent technical tools that are not doing harm. And I get overly enthusiastic about something. So I have this back and forth thing in my brain where I'm like, yes. But when those consent boxes pop up, I am most likely, if I'm honest, one of those people who doesn't read everything and who clicks yes because I wanna get to the tool and play with it. I am unfortunately that person.
And I had a professor early on who said that when he had to do things that were sensitive, he unplugged his Internet. This was in the late nineties, so you had to plug it in and unplug it and that kind of thing, and he literally would unplug it, work on it, take that, put it on a hard drive, put it away, and then get back on his computer to do something else that was online. And he just said, assume everything is viewable, and I've never let that go.
And even though there's passwords and there's encrypted this and I've used VPNs in multiple countries, I still have the sense that everything I'm doing online can be massaged, used, manipulated, seen. I'm not comfortable with it. It just feels like one of the prices of all of this stuff that we're doing online. I don't know how to square that box, but that's kind of where I sit. Like, I think that it should be obvious what we're consenting to, but I also assume that it won't be and it'll be used anyway.
Yeah. So the content that you're generating - the podcasts - you've kind of accepted that it may be used. For instance, there are a lot of stories about how companies have scraped YouTube videos and used them for training, so they've probably picked up your podcast.
Undoubtedly. Undoubtedly. There have also been stories of entire podcasts being duplicated and then just re-read with an AI bot voice. And that is definitely a bridge too far, because that's the exact same content. And there are legal issues around that which are very clear, I think, at this point. But as far as what we put online and what's done with it afterwards when it can be scraped so easily, it's tricky. It's very tricky.
And you mentioned also about not reading the terms and conditions. The way that website terms and conditions are handled, it's really not suitable for true informed consent anyway. I heard recently about a case where some lawyers who were looking at AI tools to help them with law-related work didn't understand the ramifications of the AI-related terms and conditions on the tools that they were evaluating.3
Oh, wow.
Yeah. And if they can't understand, the hope for the rest of us is going to be a lot lower.
Right.
Getting to informed consent and opt-in is definitely a challenge. Even assuming that intentions are good and companies are being transparent in their terms and conditions, it's still hard for people to actually consent and say: yes, I understand, I'm accepting that I'm giving you this information in exchange for being able to use this feature of a tool, for instance.
Yeah. Yeah. And when possible with tools, I will opt in to the paid version because I feel like that would give me more protection than the free version, because “when things are free, you are the product”. Right?
So I try to do that as much as possible, but it's such an interesting time right now. I'm very curious where we're going to be 10, 20 years down the line with all this.
You're speaking primarily as a user of AI-based tools. As a tool user, do you ever feel that you're aware of, or that the tool providers are being transparent about, where the data they train on comes from?
I assume they will do what is good for the pockets of the people in charge. That's probably very pessimistic of me, but that's where I feel like capitalism leads us. The few companies that I have seen, not necessarily in the AI space, that seemed to start something as a passion project and turn it into a company - it seems like that only lasts so long. And then things go awry, whether it be when they sell the company to somebody else, or they end up not being able to turn a profit and keep going, and things go badly. So I'm not terribly optimistic about companies' intentions with all of this.
Yeah. There's a process that Cory Doctorow calls ‘enshittification’4 that a web company goes through. They set up a portal, they get people bought into the network effect. And then they start selling ads. And then the advertisers become the primary customer. And then it gets to the point where they are not even treating those customers very well. And I'm not going to name names, but you probably know examples of these platforms that have taken that path.
Experiencing this with the platform that I use on the daily, right now, as we speak.
And I just had a client who did an episode on this for his business podcast, and he was using fast food as an example of how not to treat clients, and just the escalation of the price of American fast food over the past 4 years. And I was shocked to hear how the size of the food has gotten smaller and the price has gotten exponentially higher. I'm just like, there you go.
It just really seems to be that the longer a product or service exists, the less valuable it is. And I don't know why that is.
I think we've covered the questions then. Is there anything else you'd like to share with our audience today?
Despite all of my hesitation and distrust in companies and technology and the greediness of humanity, I actually am optimistic about AI and what these tools can do for us. And I really feel like, right now, they can help free us up mentally to put our energies into other tasks. I feel like they can do the things we don't want to do, or help us do them in a way that saves our mental energy so we can spend it on other things.
And this is where I'm pitching my AI tent at the moment. It may change in 5 minutes just like everything in the tech field seems to be. That's where I'm pitching my tent, and that's where I'm doing stuff in that space.
And that's what I'm covering in Chatty and Me - the podcast, Substack, and YouTube channel. And I hope that, if folks are interested, they'll come over and check it out.
Steph, I’m definitely looking forward to hearing about your ongoing adventures with Chatty! Thank you so much for joining our interview series! It's been really great learning about what you're doing with AI tools and how you decide when to use them and when to use your human intelligence and how you feel about use of your data. So thank you so much for joining.
Thank you so much. It’s been fun.
Interview References - Steph Fuccio
“Chatty And Me” newsletter on Substack: https://chattyandme.substack.com/
About this interview series and newsletter
This post is part of our 2024 interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools or being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see post “But I don’t use AI”!
We want to hear from a diverse pool of people worldwide in a variety of roles. If you’re interested in being featured as an interview guest (anonymous or with credit), please get in touch!
6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read (and listen to). To automatically receive new 6P posts and support our work, consider becoming a subscriber (free)! (Want to subscribe to only the People section for these interviews? Here’s how to manage sections.)
Enjoyed this interview? Great! Voluntary donations via paid subscriptions are cool, one-time tips are appreciated, and shares/hearts/comments/restacks are awesome 😊
References
Prompt chaining episode from ‘Chatty and Me’ podcast:
Training someone to use the mobile phone voice function
From “Ethical Risks & Challenges in GenAI music [Unfair use? series, PART 2]”, 2024-04-06:
“Even law firms don’t always understand AI software T&Cs on privacy and confidentiality, as reported by journalists Isha Marathe and Cassandre Coyer in this story about Microsoft’s Azure OpenAI and leakage of client confidential info. How could we possibly expect that most ordinary people will understand T&Cs for genAI music tools?”
Reference: Isha Marathe and Cassandre Coyer, LinkedIn post on Microsoft’s Azure OpenAI Service’s ‘abuse monitoring’ policy and its impact on law firms, 2024-03-20.
“The ‘Enshittification’ of TikTok: Or how, exactly, platforms die.”, Cory Doctorow / Wired magazine, 2023-01-23