Discussion about this post

Arman Anwar:

The article was a fascinating read, though I'm dismayed by how little utility the medical student seemed to derive from AI tools. It's quite straightforward to ask ChatGPT and similar models to pull excerpts from scientific papers to support an answer or to provide relevant text chunks for specific questions. While hallucinations can occur, detecting them is a fairly mechanical process (most of the time).

I believe it takes significant training, experience, and creative thinking to use tools like ChatGPT effectively. Those of us who can code and develop custom tools on top of large language models are at a distinct advantage. For instance, I can create processing pipelines that check for hallucinations, automate much of the prompting, and format results in a way that directly supports my research.
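For what it's worth, here is a minimal Python sketch of that kind of hallucination check, assuming the model's output has already been parsed into a list of quoted excerpts. An excerpt is accepted only if it appears verbatim, or near-verbatim by difflib similarity, in the source paper's text. The function name `verify_excerpts` and the 0.9 threshold are illustrative choices, not a standard.

```python
# A sketch of the "mechanical" hallucination check described above: verify
# that each excerpt a model claims to quote actually appears (verbatim or
# near-verbatim) in the source paper's text. The model call itself is out
# of scope; this operates on its parsed output.
import difflib

def verify_excerpts(excerpts: list[str], source_text: str,
                    threshold: float = 0.9) -> dict[str, bool]:
    """Map each excerpt to True if it matches a window of source_text."""
    results = {}
    for excerpt in excerpts:
        if excerpt in source_text:  # exact quote: trivially grounded
            results[excerpt] = True
            continue
        # Fuzzy fallback: slide a window of the excerpt's length over the
        # source and keep the best similarity ratio found.
        n = len(excerpt)
        best = 0.0
        for i in range(0, max(1, len(source_text) - n + 1), max(1, n // 2)):
            window = source_text[i:i + n]
            best = max(best,
                       difflib.SequenceMatcher(None, excerpt, window).ratio())
        results[excerpt] = best >= threshold
    return results

if __name__ == "__main__":
    paper = "We observed a 12% reduction in relapse rates in the treated cohort."
    print(verify_excerpts(["a 12% reduction in relapse rates"], paper))
    # {'a 12% reduction in relapse rates': True}
```

Exact substring matching catches most fabricated quotes outright; the fuzzy fallback tolerates the whitespace and punctuation drift that PDF extraction tends to introduce.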

Terry Underwood:

Great post, Karen. It was fascinating to learn that for a med student, using AI to plow new knowledge ground is treacherous: enter at your own risk. But if one has expert knowledge, a bot can help fill in the empty slots. This squares with research finding that experts in an area use bots more productively than novices, and it underscores the importance of understanding more about bots and novice learners. Thank you for this.

