Saturday, March 14, 2026

AI Instead of Scientists? RFK Jr.’s Health Agenda Is a Dangerous Experiment

In a wide-ranging interview with Tucker Carlson this week, Health and Human Services Secretary Robert F. Kennedy Jr. outlined a sweeping vision for the department he leads—one that places artificial intelligence at the center of federal health decision-making. Kennedy described what he called an “AI revolution” within HHS, proposing the use of machine learning tools to replace expert-driven processes in areas ranging from drug approvals to vaccine surveillance. His comments have prompted serious concern among scientists, medical professionals, and public health policy experts.

Throughout the 92-minute conversation, Kennedy repeatedly questioned the legitimacy of scientific consensus and urged the public to “stop trusting the experts.” Instead, he advocated for expanding the use of AI tools to analyze government health data, claiming such technologies could root out waste and improve decision-making. He cited the Centers for Disease Control and Prevention (CDC) and the Food and Drug Administration (FDA) as agencies where AI could be implemented to accelerate approvals and monitor vaccine safety—claims that closely mirror long-standing talking points from anti-regulation and anti-vaccine circles.

Kennedy’s remarks have raised alarms not only for their vagueness but also for their potential to fundamentally alter the nation’s approach to biomedical oversight. At a time when trust in public health institutions is already fragile, experts say replacing scientific evaluation with AI systems—especially when those systems are deployed by administrators hostile to mainstream medicine—could have long-lasting consequences for public confidence, data integrity, and regulatory standards.

AI for Drug Testing and Vaccine Oversight

One of Kennedy’s core proposals involves phasing out animal testing in pharmaceutical development and replacing it with AI-based computational models. While this shift is not entirely new—Congress passed the FDA Modernization Act 2.0 in 2022 under President Biden to allow limited alternatives to animal testing—Kennedy suggested a much broader reliance on AI tools to evaluate drug safety. He implied that such technologies could fully eliminate the need for animal models, despite scientific consensus that no complete substitute currently exists.

“There is currently no full replacement for animal models in biomedical research and drug development,” the National Association for Biomedical Research stated in April. While researchers continue to explore promising alternatives like organ-on-chip systems and organoid cultures, most experts agree that AI is only capable of supplementing—not replacing—existing safety protocols. Overreliance on computational models, critics warn, could undermine patient safety and reduce the robustness of drug evaluation processes.

Kennedy also referenced ongoing efforts to integrate AI into the Vaccine Adverse Event Reporting System (VAERS), a national surveillance tool co-managed by the CDC and FDA and designed to detect rare risks associated with vaccines. While he has previously supported automating parts of VAERS, experts caution that doing so without proper oversight could lead to misinterpretation of reports, which are already frequently misunderstood by the public. VAERS allows anyone to submit a report, and entries do not establish causation between vaccines and adverse events—a distinction that AI systems may not adequately account for without rigorous constraints.

Experts Warn of Misuse and Misinformation

Public health experts are particularly concerned about Kennedy’s plans for VAERS, given the system’s long history of being misused by vaccine skeptics. “There’s nothing about VAERS that allows us to determine whether a vaccine caused the reported adverse event,” said Dr. Kawsar Talaat, an infectious disease physician and vaccine safety researcher at Johns Hopkins University. Paul Offit, a pediatrician and vaccine expert at Children’s Hospital of Philadelphia, echoed that view, warning that data from VAERS has often been taken out of context to support false claims.

Despite its limitations, VAERS has successfully identified rare vaccine-related side effects, such as myocarditis associated with mRNA COVID-19 vaccines and blood clotting linked to the Johnson & Johnson shot. These events were detected post-authorization and validated through more robust systems like the Vaccine Safety Datalink and the Clinical Immunization Safety Assessment Project. Experts stress that VAERS functions best when complemented by human-led review—not when used in isolation, especially by automated tools that could be biased by their training data.
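To make the signal-detection step concrete: passive surveillance systems like VAERS typically flag candidate safety signals with simple disproportionality statistics such as the proportional reporting ratio (PRR), which compares how often an event is reported for one vaccine versus all others. The sketch below uses invented counts purely for illustration—and, as the experts quoted here stress, a high PRR is only a prompt for human-led review, never evidence of causation.

```python
def prr(a: int, b: int, c: int, d: int) -> float:
    """Proportional reporting ratio for one vaccine-event pair.

    a: reports of the event of interest for the vaccine of interest
    b: reports of all other events for that vaccine
    c: reports of the event of interest for all other vaccines
    d: reports of all other events for all other vaccines
    """
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts, not real VAERS data: 24 of 10,000 reports for
# vaccine V mention event E, versus 10 of 90,000 reports for all
# other vaccines combined.
ratio = prr(24, 9976, 10, 89990)
print(f"PRR = {ratio:.1f}")  # prints "PRR = 21.6"
```

A commonly cited screening rule (the Evans criteria) treats a PRR of at least 2 with three or more reports as worth investigating—at which point analysts turn to the more rigorous, population-based systems the article mentions, such as the Vaccine Safety Datalink, to test whether the association is real.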

Kennedy’s suggestion that AI should drive this process worries researchers who fear that the system could be reconfigured to validate preconceived conclusions. Earlier this year, the top vaccine regulator at the FDA reportedly stepped down over concerns that Kennedy and his appointees would gain unfettered access to VAERS data. Critics say the danger lies not in the technology itself, but in how it might be used to amplify misleading narratives under the guise of data-driven analysis.

Political Influence, Public Health Risks

Kennedy’s emphasis on AI is consistent with broader trends in the tech sector, where artificial intelligence is increasingly being promoted as a tool for efficiency and oversight. But experts caution that these tools must be grounded in evidence and implemented by those with a firm understanding of both the technology and the science it seeks to augment. In Kennedy’s case, critics argue that his history of promoting discredited medical claims undermines confidence in his ability to manage such a transformation responsibly.

Concerns over Kennedy’s plans are amplified by recent reports that his health advisory committee—officially titled the “Make America Healthy Again Commission”—relied on an AI-generated report containing fabricated citations. The use of generative AI tools without adequate vetting has raised additional questions about the administration’s approach to accuracy, accountability, and transparency in shaping public health narratives.

As AI continues to evolve, it may indeed play a meaningful role in certain aspects of health policy and medical research. But according to a 2024 review of 120 studies, the technology remains vulnerable to bias, privacy issues, and manipulation. For now, scientists and physicians broadly agree on one point: while AI has potential, it cannot—and should not—replace expert judgment when public safety is at stake.

Kennedy’s efforts to sideline that expertise in favor of automation signal a larger shift in how government might operate under his leadership. Whether framed as innovation or disruption, the risks of relying on AI in critical areas of health policy are already clear to those watching closely.