AISW #011: Riccardo Vocca, Italy-based marketing researcher 🗣️ (AI, Software, & Wetware interview)
An interview with marketing researcher Riccardo Vocca on his stories of using AI, and how he feels about AI using his data and content (audio; 10:51)
Introduction - Riccardo Vocca interview
This post is part of our 6P interview series on “AI, Software, and Wetware”! Our guests share their experiences with using AI, and how they feel about AI using their data and content.
This interview is available in text and as an audio recording (embedded here in the post, and in our 6P external podcasts). Use these links to listen: Apple Podcasts, Spotify, Pocket Casts, Overcast.fm, YouTube Podcast, or YouTube Music.
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary for reference.
Interview
Today I’m delighted to welcome Riccardo Vocca as our next guest in this 6P interview series. Riccardo, thank you so much for joining me today! Please tell us about yourself, who you are, and what you do.
Hi Karen, thanks for inviting me. I'm Riccardo, I am a Marketing Research Assistant and author of the newsletter The Intelligent Friend, where I talk about the psychological, social, and relational aspects of AI. For example: how can AI help us when we have to buy embarrassing products? Or, do we judge a person who created a work with the help of ChatGPT in the same way? And much more. These topics are covered only through scientific papers. So I report insights and results from scientific papers that I personally read and cover in my newsletter.
And that’s great - I like reading about the papers that you’re reading and the things that you come up with. I think that’s a very interesting perspective that you bring. So thank you.
What is your experience with AI, machine learning, and analytics? Have you used it professionally or personally? Have you studied the technology?
Basically I approached AI thanks to the professor I’m working with, who gave me the chance to read studies related to AI and especially consumer behavior. Since it was a project related to management and marketing, and since I have a marketing background (I just got a master's degree in Marketing), I focused on that perspective. However, I also had the opportunity to read a lot about AI and economics, psychology, and much more, thanks to my newsletter, because I try to explore many perspectives and bring something new into what I write. So I have often used and experimented with AI, especially for my newsletter, for example to say something in English in a better way, or to structure the newsletter in a more effective way for readers, and so on.
Okay, very good. And congratulations on your master’s degree!
Ah, thank you!
Can you share a specific story on how you have used AI or machine learning? And what are your thoughts on how well the AI features of those tools worked for you, or how they didn’t? What went well and what didn’t go so well?
I love this question because basically I think the most personal and interesting case for the use of AI was planning the trip to New York City that I am taking next week.
Of course, I’m Italian, from Italy, so New York City is a bit far from where I am now. Everyone on a trip like that has a series of attractions and things to do in mind, but you never know the distances and how to fit them in day by day. And where to have lunch. Where to have dinner. Whether the national museum, for example, is far from or close to Central Park, and this kind of thing.
So instead of following pre-built itineraries, like the itineraries that influencers share, and so on, I gave ChatGPT the list of things I wanted to do - also attractions that are, for example, less known or more known, okay. And I asked it to organize them based on the days and the time I had, putting in the same days what could be reached in the same area, also including possible recommended restaurants, for example, or clubs, or any bars, or these kinds of things I wanted to visit. It was very useful and really fun, I must say, because I built this kind of perfect itinerary for my expectations and what I needed. So it was a very engaging experience, I must say.
Yeah, that sounds like a very practical application, so I’m very glad it worked out for you, and I know you’re looking forward to your trip!
If you have avoided using AI-based tools for some things, do you have any examples of when, and why you didn’t use it? Or do you really not avoid it?
As a general approach, I use AI when I find it useful or inspiring, or when I think it could stimulate something useful. Basically I don’t have a standard approach to avoiding or doing things through AI. When I think it’s something stimulating - maybe it’s some sort of personal feeling.
Ok, fair enough!
One common and growing concern nowadays is where AI and ML systems get the data and the content that they train on. So they often use data that users put into online systems or publish online. And companies aren’t always transparent about how they intend to use our data when we sign up with them.
How do you feel about companies using data and content for training their AI systems and tools? Should ethical AI tool companies get consent from people, and compensate them, when they want to use their data for training?
I think the topic of intellectual property is a crucial issue and it should be treated with care. As also indicated in many papers I have read by illustrious scholars, for example Luciano Floridi, one of the most important scholars in AI ethics, we must focus attention on the ethical implications that these practices have. For musicians, for artists, for writers - and I would add also for consumers, readers, and users in general.
I think that starting to talk to these categories maybe is THE step, and not just one step, for OpenAI, Google, and these kinds of companies to begin to address this issue. For example, Google - I know that it has a kind of agreement to train its models on Google Books. So for each book uploaded, Google is allowed to train AI on it. Some agreements like this are maybe a good start for the conversation.
Yeah, these are definitely very important considerations, so I’m glad you are thinking and talking about that.
Do you know of any company that you gave your data or content to that made you aware that they might use your info for training their AI systems? Or were you surprised to find out someone was using it for training AI?
I must be sincere - I am not aware of the use of my data by companies that deal with creating AI systems, like OpenAI - in this case, I say hi to Sam Altman and Co. However, I remember that Meta made a big announcement to users - it was announced in Italy and got big coverage in the news and in digital content. In fact, users showed public anger over this. Basically, Meta announced that for training its AI, it could use the data on its platforms - so, WhatsApp, Instagram, Facebook. I don’t know the specific point, or specific data, or specific platforms that Meta referred to. It was a big deal for users. And I deactivated the option in that case. But I think that disclosure on this topic is central. And it’s crucial to trust in these companies.
Yeah, that’s great. You all in the European Union, you’re lucky that Meta is required to respect your opt-out requests. We do not have that same protection here in the US. So I put in my opt-out request and I got a reply that they weren’t required to honor it. So I ended up deleting all of my FB posts and personal photos because I had no other way to stop them from using my content.
Yeah, this is something that Europe regulates now. Maybe in this case it is a good thing, in this sense.
What I think this points to is an example of how public distrust of AI and tech companies has been growing. What do you think is THE most important thing that AI companies would need to do to earn and to keep your trust? Do you have specific ideas on how they can do that?
Also in my newsletters, I covered many papers talking about trust. I think that trust is a very important topic. I do not know if it is THE topic, okay, but it is certainly among the crucial topics and the crucial issues.
And surely, trust, as shown by several studies that I have also covered in my newsletter, has many hidden implications for consumer choice. So it is also a matter of what consumers want from AI, and what consumers expect from it.
And this is why, according to scholars - and not only those mentioned above - we need to work on ethical issues, and we need to bring the consumer closer, step by step. But we also need to work together with him, or with her, or with them, and try to go beyond the high barriers that can exist, caused by low trust.
Thank you for elaborating on that, Riccardo.
Is there anything else that you’d like to share with our audience today?
Yeah, as a final note, I’m really happy to have had this discussion with you, I must say. And I know that we also have topics in common with our newsletters. So if you are interested in the topics Karen covers regarding AI, I would be honored to have you among my subscribers - subscribers of The Intelligent Friend, where I talk about the psychological, social, and relational aspects of AI only through scientific papers. Sometimes there is my personal discussion or opinion, but basically it is based on scientific papers.
And if you like, we can also connect on LinkedIn! So if you want, let’s connect.
And finally, Karen, a special thanks to you for your kindness and for being so proactive with this kind of initiative, which maybe is also symbolic of the very sense of the Substack community.
Aw. Well, thank you. It’s really been fun and I’m learning a lot from talking to you and all of the other guests!
Thank you, Riccardo, for joining me today. And it’s been great learning about what you’re doing with AI, and how you’re using your human intelligence for some things! So thank you.
References
Riccardo Vocca on LinkedIn
Riccardo Vocca on Substack - writer of The Intelligent Friend
About this interview series and newsletter
This post is part of our 2024 interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools or being affected by AI.
We want to hear from a diverse pool of people worldwide in a variety of roles. If you’re interested in being featured as an interview guest (anonymous or with credit), please get in touch!
6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read (and listen to). To automatically receive new 6P posts and support our work, consider becoming a subscriber! (If you like, you can subscribe to only People, or to any other sections of interest. Here’s how to manage sections.)
Credits
Audio Sound Effect from Pixabay
Thank you a lot, Karen - what you are doing with your newsletter and podcast is absolutely valuable and admirable!