Thursday, March 12, 2026

Research Shows Physicians' Cancer Detection Skills Decline After Using AI Support

A study published in The Lancet Gastroenterology & Hepatology has found that doctors who use artificial intelligence (AI) tools to help detect early signs of colon cancer perform worse when those tools are later removed. The research raises concerns about “de-skilling” in clinical practice, where reliance on technology may erode a professional’s ability to work independently.

Decline in Detection After AI Support Is Removed

Researchers examined performance at four endoscopy centers in Poland, comparing cancer detection rates for three months before and three months after AI was introduced. Once the technology was available, colonoscopies were randomly assigned to be performed with or without AI assistance. Doctors who worked without AI after having access to it recorded detection rates about 20 percent lower than before the technology was implemented.

The study involved 19 experienced physicians, each of whom had performed more than 2,000 colonoscopies. Despite this level of expertise, the findings indicate that exposure to AI assistance reduced their ability to identify potentially cancerous growths when they returned to unaided work. The authors warn that such effects could be even greater among less experienced clinicians.

These results suggest that while AI can boost performance when it is in use, it may have unintended consequences for long-term diagnostic skills. The challenge lies in balancing the benefits of technological support with the need to maintain independent clinical judgment.

Broader Risks of Overreliance on AI

AI has proven useful in various medical applications, from cancer screening to reviewing patient histories. By analyzing large volumes of past cases, AI systems can help identify patterns that humans might overlook. Studies in clinical environments have found that doctors using these tools often achieve higher accuracy and better patient outcomes.

However, the Polish study reflects a wider concern about reduced critical engagement when people become accustomed to AI assistance. A Microsoft study earlier this year found that knowledge workers using AI were less likely to think critically about their tasks. Similarly, research from MIT showed that students relying on AI for writing assignments engaged less deeply with the subject matter.

These findings highlight the risk that AI can encourage passivity, leading users to trust the system’s outputs without sufficient verification. In medicine, this could have significant implications for patient safety and the quality of care.

Implications for Healthcare Practice

According to the American Medical Association, around two-thirds of physicians already use AI to support their work. While the technology can improve efficiency and accuracy, experts caution that safeguards are needed to ensure that human skills remain sharp. Even advanced AI systems can make mistakes, including producing false or misleading information.

Maintaining a high standard of clinical decision-making may require ongoing training, structured evaluation, and protocols that encourage doctors to verify AI-generated results. Limiting reliance on the technology in certain contexts could also help preserve core diagnostic skills.

The Lancet study serves as a reminder that integrating AI into healthcare is not only a question of improving performance but also of managing long-term impacts on professional competence. Striking the right balance between technology and human expertise will be essential as AI becomes more embedded in medical practice.