8 Comments
Aug 15 · Liked by Karen Smiley

The article was a fascinating read. I'm dismayed by what seems to be a lack of utility the medical student was able to derive from AI tools. It's quite straightforward to ask ChatGPT and similar models to pull excerpts from scientific papers to support an answer or provide relevant text chunks for specific questions. While hallucinations can occur, detecting them is often a fairly mechanical process (most of the time).

I believe it takes significant training, experience, and creative thinking to use tools like ChatGPT effectively. Those of us who can code and develop custom tools on top of large language models are at a distinct advantage. For instance, I can create processing pipelines that check for hallucinations, automate much of the prompting, and format results in a way that directly supports my research.

author

Thanks for sharing your experience, Arman! It's cool that you're able to make processing pipelines that make AI tools safer and more productive for you to use. I suspect that most people who want to use the tools either aren't going to be able to do that coding, or won't find it a worthwhile investment of their time. The ideal, and most efficient, way would seem to be for the LLM tool providers to automatically detect and flag (or correct?) hallucinations without requiring users to be code-proficient and do it themselves. 🙂 I'm hoping they're working on that!


Great post, Karen. It was fascinating to learn that for a med student, using AI to plow new knowledge ground is treacherous: enter at your own risk. But if one has expert knowledge, a bot can help fill in empty slots. This point squares with research that finds experts in an area use bots more productively than novices and underscores the importance of understanding more about bots and novice learners. Thank you for this.

author

Hi Terry, that’s what jumped out at me as well - their savvy about the risks of AI giving false info, and how to use it carefully. And their integrity in not using AI for their original research. I was impressed.


Indeed. This post provides a powerful example of a student in the middle of an advanced professional program who has no inherent bias toward AI and approaches it with clear eyes and a critical and analytical perspective. This attitude among medical students is a model. It’s cool that someone learning to practice human medicine grasps the significance of human doctors-in-training doing their own research.


Great interview series, Karen. I’m happy to support your research!

author

Thank you so much for your support, Kathy!! You got the series off to a great start 😊


You are most welcome. I’m happy to contribute. I love your idea to interview real people using AI and those affected by it.
