An interview with an anonymous second-year medical student in Pittsburgh, PA, about their experiences using AI and how they feel about AI using their data and content.
The article was a fascinating read, though I'm dismayed by the apparent lack of utility the medical student was able to derive from AI tools. It's quite straightforward to ask ChatGPT and similar models to pull excerpts from scientific papers to support an answer or provide relevant text chunks for specific questions. While hallucinations can occur, detecting them is often a fairly mechanical process (most of the time).
I believe it takes significant training, experience, and creative thinking to use tools like ChatGPT effectively. Those of us who can code and develop custom tools on top of large language models are at a distinct advantage. For instance, I can create processing pipelines that check for hallucinations, automate much of the prompting, and format results in a way that directly supports my research.
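To make the "mechanical" part concrete, here is a minimal sketch in Python of one such check. It assumes the model's answer carries verbatim quotes from a source paper and flags any quote that doesn't actually appear in the source text; the function names are illustrative, not from any particular library.

```python
import re

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so minor formatting
    differences don't cause false mismatches."""
    return re.sub(r"\s+", " ", text).strip().lower()

def verify_excerpts(answer_quotes: list[str], source_text: str) -> dict[str, bool]:
    """Check that each quote the model attributes to a paper
    actually appears verbatim in that paper's text.

    A quote that is absent is flagged as a likely hallucination.
    This is the 'mechanical' check: it catches fabricated excerpts,
    not subtler errors like misleading paraphrase.
    """
    haystack = normalize(source_text)
    return {q: normalize(q) in haystack for q in answer_quotes}

# Example: two quotes supposedly drawn from a paper's abstract.
paper = "We found that treatment A reduced symptoms in 60% of patients."
quotes = [
    "treatment A reduced symptoms in 60% of patients",  # genuine
    "treatment A cured 95% of patients",                # fabricated
]
for quote, found in verify_excerpts(quotes, paper).items():
    print("OK  " if found else "FLAG", quote)
```

A real pipeline would layer on fuzzier matching and citation checks, but even a check this simple catches outright fabricated excerpts.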
Thanks for sharing your experience, Arman! It's cool that you're able to make processing pipelines that make AI tools safer and more productive for you to use. I suspect that most people who want to use the tools either aren't going to be able to do that coding, or won't find it a worthwhile investment of their time. The ideal, and most efficient, way would seem to be for the LLM tool providers to automatically detect and flag (or correct?) hallucinations without requiring users to be code-proficient and do it themselves. I'm hoping they're working on that!
Great post, Karen. It was fascinating to learn that for a med student, using AI to plow new knowledge ground is treacherous: enter at your own risk. But if one has expert knowledge, a bot can help fill in empty slots. This point squares with research that finds experts in an area use bots more productively than novices and underscores the importance of understanding more about bots and novice learners. Thank you for this.
Hi Terry, that's what jumped out at me as well: their savvy about the risks of AI giving false info, and how to use it carefully. And their integrity in not using AI for their original research. I was impressed.
Indeed. This post provides a powerful example of a student in the middle of an advanced professional program who has no inherent bias toward AI and approaches it with clear eyes and a critical and analytical perspective. This attitude among medical students is a model. It's cool that someone learning to practice human medicine grasps the significance of human doctors-in-training doing their own research.
Great interview series, Karen. Iβm happy to support your research!
Thank you so much for your support, Kathy!! You got the series off to a great start!
You are most welcome. Iβm happy to contribute. I love your idea to interview real people using AI and those affected by it.