Patients Are Turning to AI to Interpret Medical Test Results
Tech Still Has Limits

With the rise of online patient portals and immediate access to electronic health records, individuals are increasingly turning to generative AI tools to help make sense of their medical data. Technologies such as OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini are being used to analyze lab results, clinical notes, and test summaries—often within minutes of release.
This trend is fueled by regulatory changes that require health organizations to release electronic medical information without delay. As a result, patients can now view test results and physician notes in real time, often before a provider has reviewed or commented on them. A 2023 study found that 96% of surveyed patients want this kind of immediate access.
To bridge the knowledge gap while waiting for professional interpretation, patients are increasingly uploading their data to large language models (LLMs). These models can generate simplified explanations, offer contextual information, and even help form questions for follow-up with healthcare providers. Unlike static health websites or forums, AI chatbots provide interactive, tailored responses.
However, experts caution that the accuracy and safety of these tools vary widely. While LLMs are capable of offering insightful analysis, they are also prone to errors—or "hallucinations"—where incorrect information is presented with apparent confidence. The reliability of responses often depends on how questions are phrased and whether users prompt the model to take on a specific role, such as a medical professional.
A 2024 KFF poll found that 56% of adults interacting with AI tools in healthcare lack confidence in the accuracy of the information they receive. Furthermore, concerns about data privacy remain significant. Most consumer AI tools are not governed by healthcare privacy laws such as HIPAA, and any personal data entered may be retained or processed by the companies operating the models.
Despite the limitations, AI use in patient self-advocacy is expanding. Studies have shown that LLMs can effectively clarify radiology reports and clinical summaries, although results are mixed. In one analysis of ChatGPT-generated summaries, most users reported improved understanding, but some experienced confusion due to misinterpretation or uneven emphasis on findings.
Some health systems are also experimenting with AI internally. For instance, AI assistants are being deployed to help clinicians draft patient-facing summaries of test results. This dual use—by patients and providers—signals a shift toward integrating AI as a layer between raw medical data and human decision-making.
Proper use of these tools requires a new form of digital health literacy. Experts recommend verifying AI responses with other trusted sources, consulting healthcare providers before acting on AI-generated insights, and omitting identifiable information from AI prompts to protect privacy.
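As a rough illustration of that last recommendation, the sketch below strips a few obvious identifier patterns (dates, phone numbers, record numbers) from a note before it is pasted into a chatbot. This is a minimal, hypothetical example, not a HIPAA-grade de-identification tool; real de-identification must also handle names, addresses, and free-text identifiers that simple patterns cannot catch.

```python
import re

# Illustrative patterns only; a real de-identification pipeline needs
# far more than regexes (names, addresses, dates embedded in free text).
PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(f"[{tag}]", text)
    return text

note = "Patient MRN: 4827104, seen 03/14/2025. Callback 555-867-5309. Hemoglobin 10.2 g/dL."
print(redact(note))
# The clinical value (the hemoglobin result) survives; the identifiers do not.
```

The point of the sketch is that clinically useful content, such as a lab value, can usually be kept while the fields that identify a person are dropped before the text ever leaves the patient's device.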
While generative AI cannot replace professional medical advice, it is becoming a useful supplement. When used carefully, it can help users understand test results faster, prepare for appointments more effectively, and participate more confidently in their healthcare decisions.