AISW #002: Anonymous1, USA-based medical student 📜(AI, Software, & Wetware interview)
An interview with an anonymous 2nd year medical student in Pittsburgh, PA on their stories of using AI and how they feel about AI using their data and content.
Introduction
I’m delighted to welcome our second guest in this 6P interview series on “AI, Software, and Wetware”. They have chosen to remain anonymous. Today they’re sharing with us their experiences with using AI, and how they feel about their data and content being used by AI.
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary for reference.
Interview
Thank you so much for joining me today! Please tell us about yourself, who you are, and what you do.
I am currently a second-year medical student at the University of Pittsburgh School of Medicine. A typical day for me includes attending class most mornings, with the rest of the day typically spent self-studying to stay on top of schoolwork.
Growing up in the suburbs just outside of Pittsburgh, I feel very fortunate to attend school in my hometown and look forward to continuing on a career path as a physician that offers such a unique and fulfilling opportunity to help others.
Pitt has a great medical school. Congratulations to you on your studies!
What is your experience with AI, ML, and analytics? Have you used it professionally or personally, studied the technology, built tools using the technology, etc.?
I don’t personally have significant experience using artificial intelligence or machine learning, but there have been many times when friends and classmates have expressed how useful AI has been for them. Specifically, I know many who use AI to aid in conducting scientific research. As a medical student, I have also been involved in clinical research over the past several years and have seen this use of AI increase firsthand, particularly with chatbots like ChatGPT.
However, I haven’t found a consistent use for it in my own experience with clinical research. My main complaint with AI in general is the frequency with which it provides a response or answer that is either partially or entirely incorrect. I find that drawback significant enough to make me hesitant to use it strictly to help me answer a question that I do not already know the answer to.
It’s smart of you to be cautious about inaccurate answers! Especially with generative AI tools like chatbots or search.
Can you share a specific story on how you have used AI/ML? What are your thoughts on how well the AI features [of those tools] worked for you, or didn’t? What went well and what didn’t go so well?
I do find that AI can be extremely helpful for explaining a topic for me when I do ultimately know the end “answer”. This is a situation that my classmates and I run into on almost a daily basis when studying in medical school – we know the question, we know the answer, but for a complete understanding of the content we need to fill in the gaps of exactly why that answer is correct.
With the foundation of knowledge we have, I think AI like ChatGPT often does a good job filling in these content gaps for us while we’re studying. Still, I prefer to be cautious with it and typically use AI only when I have a solid knowledge base on the subject I'm exploring.
That’s a great insight into a way AI tools can be useful even when we know it’s not wise to trust them completely.
If you have avoided using AI-based tools for some things (or anything), can you share an example of when, and why you didn’t use it?
I typically still avoid using AI as a significant aspect of my research work, as I find it hard to justify relying on it when I know that any research that I am a part of needs to be entirely original work conceptualized, performed, and analyzed by me and the research team. I do, however, think AI can be helpful for generating broader research ideas or brainstorming strategies for analyzing data in some cases.
Limiting your use of AI tools to inspiration makes sense.
A common and growing concern nowadays is where AI/ML systems get the data and content they train on. They often use data that users put into online systems or publish online. And companies are not always transparent about how they intend to use our data when we sign up.
How do you feel about companies using data and content for training their AI/ML systems and tools? Should ethical AI tool companies get consent from (and compensate) people whose data they want to use for training? (Examples: musicians, artists, writers, actors, software developers, medical patients, students, social media users)
I think that, in general, it should be a requirement that companies be as transparent as possible with users and the general public in every aspect of training their AI/ML systems. Admittedly, this is not my area of expertise, so I would have to learn more about how these companies use people’s data for training purposes before deciding whether or not those users should be compensated.
I completely agree with you that transparency is important!
As a member of the public, there are probably cases where your personal data or content may have been used, or has been used, by an AI-based tool or system. Do you know of any cases that you could share?
I can’t say that I’m aware of a specific instance of this, but particularly with the popularity of social media these days, this is something that my friends, family, and I have thought a lot about in recent years.
Although I’m not regularly active on social media, I do have accounts on many of the large, well-known platforms (YouTube, Instagram, Twitter), mostly for staying up to date with news or keeping in touch with friends whom I may not see on a regular basis. Even just my personal information, like my name, date of birth, hometown, and school, is linked to each of these platforms. Every day I see targeted advertisements and recommended media and news that seem personalized to me based on my interactions with content. All of this happening in real time, every day, can be a little disconcerting.
Do you know of any company you gave your data or content to that made you aware that they might use your info for training AI/ML? Or have you been surprised by finding out they were using it for AI? How do you feel about how your info was handled?
Just recently, I was searching for a photo on my phone and was amazed to see that it had sorted many of my photos with friends by identifying who was captured in each picture. After doing some research, I discovered that this technology uses AI to organize and sort photos by people's faces, relying heavily on user-uploaded content for training.
While I find this feature pretty useful and definitely fascinating, it also raises concerns about privacy. It makes you wonder how much of what is on your phone is truly private and kept just between you and those close to you.
Has a company’s use of your personal data and content created any specific issues for you, such as privacy or phishing? If so, can you give an example?
While I haven't personally encountered a severe situation like this, I've definitely noticed a significant increase in phishing emails, texts, and phone calls over the past few years. It seems like almost every day I receive an email or text on my personal device that turns out to be a phishing scam. I think this issue is going to continue to grow, and people need to be aware of and educated about it to protect themselves.
Public distrust of AI companies has been growing. What do you think is THE most important thing that AI companies need to do to earn and keep your trust? Do you have specific ideas on how they can do that?
I think the most important thing is transparency. Companies must be transparent with all users (and even the general public who may not regularly use AI) about exactly how the AI model is built and trained, and how the AI software itself works.
Conclusion
And that’s a wrap. To my anonymous guest, thank you so much for joining our interview series! It’s been great hearing about what you’re doing with artificial intelligence, and why you still use human intelligence for some things. Best of luck to you in your medical studies at Pitt and in your career as a doctor 😊!
About this interview series and newsletter
This post is part of our 2024 interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains) with AI-based software tools or being affected by AI.
We want to hear from a diverse pool of people worldwide in a variety of roles. If you’re interested in being an interview guest (anonymous or with credit), please get in touch!
6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read. To automatically receive new 6P posts and support our work, consider becoming a subscriber! (If you like, you can subscribe to only People, or to any other sections of interest. Here’s how to manage sections.)
The article was a fascinating read. I'm dismayed by how little utility the medical student seemed able to derive from AI tools. It's quite straightforward to ask ChatGPT and similar models to pull excerpts from scientific papers to support an answer or provide relevant text chunks for specific questions. While hallucinations can occur, detecting them is often a fairly mechanical process.
I believe it takes significant training, experience, and creative thinking to use tools like ChatGPT effectively. Those of us who can code and develop custom tools on top of large language models are at a distinct advantage. For instance, I can create processing pipelines that check for hallucinations, automate much of the prompting, and format results in a way that directly supports my research.
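To make that idea concrete, here is a minimal, hypothetical sketch in Python of the kind of mechanical hallucination check described above. It is not any specific pipeline or vendor API: the `call_model` callable stands in for whatever chat model is being called, and the `<quote>` tagging convention is simply an assumption for illustration.

```python
import re
from typing import Callable

def answer_from_excerpt(
    question: str,
    excerpt: str,
    call_model: Callable[[str], str],
) -> tuple[str, list[str]]:
    """Ask a model to answer only from the given excerpt, quoting its evidence,
    then mechanically flag any quoted span not found verbatim in the excerpt
    (a simple hallucination check)."""
    prompt = (
        "Answer the question using only the excerpt below. "
        "Wrap every phrase you copy from the excerpt in <quote>...</quote>.\n\n"
        f"Excerpt:\n{excerpt}\n\n"
        f"Question: {question}"
    )
    answer = call_model(prompt)

    # Extract the model's quoted evidence and normalize whitespace and case
    # before checking each span against the source excerpt.
    def normalize(text: str) -> str:
        return re.sub(r"\s+", " ", text).strip().lower()

    quotes = re.findall(r"<quote>(.*?)</quote>", answer, flags=re.DOTALL)
    suspect = [q for q in quotes if normalize(q) not in normalize(excerpt)]
    return answer, suspect
```

Any quoted span that can't be located in the source text gets flagged for human review, which lines up with the "fairly mechanical" detection mentioned in the comment above.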
Great post, Karen. It was fascinating to learn that for a med student, using AI to plow new knowledge ground is treacherous—enter at your own risk. But if one has expert knowledge, a bot can help fill in empty slots. This point squares with research that finds experts in an area use bots more productively than novices and underscores the importance of understanding more about bots and novice learners. Thank you for this.