AISW #008: Yanmeng Ba, Canada-based product manager 📜(AI, Software, & Wetware interview)
An interview with Canada-based high-tech product manager Yanmeng Ba on her stories of using AI and how she feels about AI using people's data and content.
Introduction - Yanmeng Ba
This post is part of our 6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary for reference.
Interview - Yanmeng Ba
I’m delighted to welcome Yanmeng Ba as our next guest for “AI, Software, and Wetware”. Yanmeng, thank you so much for joining me today! Please tell us about yourself, who you are, and what you do.
I was born and raised in China and am currently living in Ottawa, Canada. I work in product management at high-tech companies and am interested in technology in general and its relationship with business. I have been amazed by the recent breakthrough progress made by AI and have been closely watching how AI could fundamentally change our lives.
Thank you for that background, Yanmeng.
What is your experience with AI, ML, and analytics? Have you used it professionally or personally, or studied the technology?
I’m a newcomer to AI. I have been using ChatGPT since it became available in Canada, a little more than a year ago, and am interested in trying out other new AI tools.
Can you share a specific story on how you have used AI or ML? What are your thoughts on how well the AI features of those tools worked for you, or didn’t? What went well and what didn’t go so well?
The only paid AI product I use is ChatGPT. I use it a lot.
One example: As my trip to Paris approached, excitement filled the air, but so did a bit of anxiety. With only a few days to explore one of the most beautiful cities in the world, I wanted to make sure every moment counted. I didn’t want to miss any must-see landmarks, but I also wanted to discover hidden gems and experience Paris like a local. That’s when I decided to enlist the help of ChatGPT to craft the perfect itinerary before I even packed my bags. ChatGPT did a great job drafting the itinerary for my 6 days in Paris. I could also have ChatGPT customize the format depending on which information I needed for the itinerary, e.g. each attraction’s hours, the recommended visiting time, and whether it is included in the Paris Museum Pass. ChatGPT greatly reduced the time I spent on planning and became the best travel assistant I could get.
As another example: ChatGPT does a far better job explaining things to my 7-year-old daughter than I do. For example, I know what "anxiety" means, but I don't know how to explain it to a 7-year-old. ChatGPT does a great job explaining it to my daughter with more details and examples. It can be used in a conversational context and could serve as a personal tutor in a way. If ChatGPT offers a tutor robot in the future that can interact with children, talk to them, and answer them in a more natural conversational way, I will definitely get one. Currently, it operates more in a mode of listening, then pausing, then generating answers, which is a little less natural than a real conversation.
Those are two great examples of effective ways to use a large language model tool, Yanmeng. Thank you for sharing them.
If you have avoided using AI-based tools for some things (or anything), can you share an example of when, and why you chose not to use AI?
In situations that require a personal touch, such as writing a heartfelt letter to a friend or having a deep conversation, I choose not to use AI. I believe that genuine human interaction is important, and AI might lack the emotional nuance or understanding needed for meaningful communication.
That’s a sentiment I’m hearing a lot.
A common and growing concern nowadays is where AI systems and tools get the data and content they train on. They often use data that users put into online systems or publish online. And companies are not always transparent about how they intend to use our data when we sign up.
How do you feel about companies using data and content for training their AI systems and tools? Should ethical AI tool companies get consent from (and compensate) people whose data they want to use for training? (Examples: musicians, artists, writers, actors, software developers, medical patients, students, social media users)
I believe that AI companies should be transparent about what data they use and how they use it to train their AI models. They should follow the 3C principles in this process:
Consent: Artists should be asked for their permission before their work is used to train AI systems. This ensures that their rights and intentions regarding their creations are respected.
Compensation: Artists should be fairly compensated for the use of their work, acknowledging the value they bring to the AI training process.
Credit: Artists should be credited for their contributions when their work is used in AI systems or outputs. This not only recognizes their work but also maintains transparency about the sources of inspiration and data used in AI models.
I agree completely - lots of people are advocating now for the 3Cs, or in some cases, 4Cs (consent, control, credit, and compensation).
When you’ve used AI-based tools, do you as a user feel that you’re able to know where the data used for the AI models came from, and whether the original creators of the data consented to its use?
No, I don’t think the AI tools that I have been using explicitly provide this type of information.
That’s not surprising. Many AI tool providers are not transparent about sharing this information.
As members of the public, our personal data or content may have been used by an AI-based tool or system. Do you know of any cases that you could share?
I haven’t encountered any cases where my personal data has been used in AI-based systems.
I really think the way that some facial recognition systems use people’s personal data can be scary, and it should be approached with caution. It raises some very legitimate concerns around ethics.
Do you know of any company you gave your data or content to that made you aware that they might use your info for training AI/ML? Or have you been surprised by finding out they were using it for AI? If so, did you feel like you had a real choice about opting out, or about declining the changed T&Cs? Overall, how do you feel about how your info is being handled?
I haven’t received any such notifications, nor have I found out that any company was using my info for AI. Maybe my personal info has already been used somehow … I don’t doubt it. It’s just that I am not explicitly aware of it.
Yeah, it’s pretty hard to avoid AI nowadays, and it’s even harder to know all of the data brokers and others who are using and selling our data.
Public distrust of AI and tech companies has been growing. What do you think is THE most important thing that AI companies need to do to earn and keep your trust? Do you have specific ideas on how they can do that?
The most important thing AI companies need to do to earn and keep public trust is transparency. I wrote a few paragraphs around that topic and used ChatGPT to improve them. It definitely did a great job refining them:
Transparency in how AI systems are developed, how training data is collected and used, and how decisions are made by these systems is crucial for building trust. When AI companies are transparent about their processes, it allows the public to understand how AI systems work, which reduces fear and misconceptions. It also ensures that companies can be held accountable for their actions and decisions, fostering a sense of security among users.
Transparency can help ensure that AI systems are developed and deployed in line with ethical standards. When companies are open about their methodologies, it’s easier for external auditors, ethicists, and the public to evaluate whether the AI is being used responsibly and fairly.
How AI companies can achieve transparency:
Open Communication: AI companies should clearly communicate how their AI systems work, what data is used, and how decisions are made. This could be through detailed documentation, public reports, and user-friendly summaries. They should also be open about the limitations and potential biases of their AI systems.
Regular Audits and Reporting: Implementing regular, independent audits of AI systems and publishing the results can help build trust. These audits should assess the AI’s fairness, accuracy, and adherence to ethical standards. Reports should be accessible to the public, outlining both strengths and areas for improvement.
User Control and Informed Consent: Companies should give users control over how their data is used and ensure that they have a clear understanding of what they are consenting to. This includes easy-to-understand terms and conditions, opt-in and opt-out options, and mechanisms for users to manage their data.
That’s a good example of using an AI tool for non-personal communication, and of being transparent about its use. Thank you for sharing that.
Anything else you’d like to share with our audience?
Yeah, using ChatGPT to refine text is another use case I rely on quite often. It is a language model to begin with (LoL) and is supposed to excel with words and text.
That’s a wrap! Yanmeng, thank you so much for joining our interview series. It’s been great learning about what you’re doing with artificial intelligence tools, how you decide when to use human intelligence for some things, and how you feel about use of your data!
References
Yanmeng Ba on LinkedIn
About this interview series and newsletter
This post is part of our 2024 interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains) with AI-based software tools or being affected by AI.
We want to hear from a diverse pool of people worldwide in a variety of roles. If you’re interested in being featured as an interview guest (anonymous or with credit), please get in touch!
6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read (and listen to). To automatically receive new 6P posts and support our work, consider becoming a subscriber (free)! (Want to subscribe to only the People section for these interviews? Here’s how to manage sections.)
Enjoyed this interview? Great! Voluntary donations via paid subscriptions are cool, one-time tips are welcome & appreciated, and shares/hearts/restacks are awesome 😊