In This Issue
HHS funds research into speedy testing for antibiotic resistance
New FDA-approved test for inherited cancer genes (plural)
Improving autism diagnosis with an app and AI
New and Noteworthy
Speedy testing for antibiotic resistance gets $104 million from HHS
HHS’s Advanced Research Projects Agency for Health (ARPA-H) has dedicated funds for a deep dive into identifying bacteria and testing them for antimicrobial resistance - in minutes rather than the days required by traditional microbiology. This will add to the partial solutions in the clinic today that distinguish bacterial infections from viral ones, identify specific bacterial families, and/or test for resistance.
The lead investigator, Johan Paulsson (Harvard Medical School), will coordinate a multi-institution team. The aim: to combine single-cell genomics and microscopy into an AI-embedded, microfluidics-based diagnostic test (for more, see papers from 2020 and 2021) while also exploring how to accelerate the discovery of novel antibiotics. Paulsson experienced the trial-and-error inadequacy of current clinical techniques firsthand when he contracted sepsis.
Commentary: We cannot help noting that today, clinician resistance to available tests compounds antibiotic resistance, especially for the most critical cases.
Test for a range of heritable genes that increase cancer risk gets de novo approval
Five to 10% of cancer cases are considered hereditary, meaning a mutated gene was passed down from parent to child. But just because a close relative has a hereditary form of cancer does not mean that everyone in the family carries the associated mutation.
The FDA recently approved a first-of-its-kind test that looks for these heritable genes - and not just for one gene or one type of cancer, but for 47 genes associated with cancers in nine different organs. The test is not intended as a broad screening tool but as a targeted test for those who meet testing criteria based on their family history of cancer.
Commentary: We can’t predict the use of this test, but we note a recent Wall Street Journal article on the topic entitled “Cancer Runs in Families. Too Few Are Getting Tested.”
Adding an app to autism screening could improve diagnosis
When you take your toddler for a well visit with their doctor, you’ll probably have to fill out a questionnaire that serves as a screening test for autism. As screening tests go, the questionnaire (the Modified Checklist for Autism in Toddlers-Revised with Follow-Up) is, shall we say, sub-optimal: In a large study, only 14.6% of the kids it flagged turned out to be autistic - that is, a positive predictive value of just 14.6%. The questionnaire is also known to work less effectively for girls and children of color under real-world conditions (as opposed to its performance in the lab).
But things might be about to get a bit better. A recent article in Nature Medicine describes a tablet-based app called SenseToKnow, which plays movies “designed to elicit a wide range of autism-related behaviors, including social attention, facial expressions, head movements, response to name, blink rate, and motor behaviors.” The tablet’s camera records the kiddo’s responses to the movies, quantifies them, and - because 2023 - uses machine learning to tell you how well the test was administered and how confident the app is that the kiddo has autism.
In a set of 475 toddler well visits, the app did much better than the questionnaire: 40.6% of the kids it flagged as potentially autistic were ultimately diagnosed with the condition. Combining the app with the questionnaire was even more effective, raising that figure to 63.4%.
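For readers who like to see the arithmetic: a screener’s positive predictive value is driven as much by prevalence as by the test itself, which is why even a reasonable questionnaire can flag mostly non-autistic kids. Here is a minimal sketch via Bayes’ rule - the sensitivity and specificity figures are hypothetical, chosen only to illustrate the effect, not taken from the study:

```python
# Illustrative only: positive predictive value (PPV) as a function of
# prevalence, sensitivity, and specificity, via Bayes' rule.
# All numbers below are hypothetical, not figures from the study.

def ppv(prevalence: float, sensitivity: float, specificity: float) -> float:
    """P(has condition | flagged by screener)."""
    true_pos = prevalence * sensitivity                # flagged and truly autistic
    false_pos = (1 - prevalence) * (1 - specificity)   # flagged but not autistic
    return true_pos / (true_pos + false_pos)

prevalence = 0.02  # assume ~2% of screened toddlers are autistic
for sens, spec in [(0.85, 0.90), (0.85, 0.97), (0.95, 0.99)]:
    print(f"sens={sens:.0%}, spec={spec:.0%} -> PPV={ppv(prevalence, sens, spec):.1%}")
```

At ~2% prevalence, even modest specificity gains move the PPV dramatically - one plausible way a richer instrument like the app could outperform the questionnaire.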
Commentary: Part of the problem with the questionnaire on its own is that in the real world, clinicians may not complete the follow-up interview that’s supposed to go with it. We can’t blame them - well visits are packed as it is. We hope that this app or its descendant(s) helps clinicians find more of these kids without adding to their burden in the exam room.
Food for Thought
AI in imaging is only as good as the humans who create it: Two cautionary tales
AI’s value to medicine is a hot topic these days, but what it can do is defined and limited by how it was created and trained. That includes all the biases that may have been present - in both the people who set the AI’s parameters and in the data set that was used to train the algorithm. Two recent papers highlight these issues.
#1: Where do you draw the line?
One paper in Radiology pitted four commercial AI systems against human radiologists in the evaluation of chest X-rays. All four systems had quite similar, middling performance: 80% average sensitivity (i.e., 20% of patients with disease were missed) and 59% positive predictive value (i.e., 41% of patients labeled as diseased were in fact normal). The study concluded that these AI systems were far inferior to human judgment.
The issue in this case is the operating parameters set by the systems’ designers. It is understandable that designers would not want to miss life-threatening conditions, so they tune the decision threshold toward high sensitivity. The consequence, however, is a higher false-positive rate, as in these four systems. False positives are not without harm - they generate anxiety, not to mention more invasive and expensive follow-up procedures.
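To make the trade-off concrete, here is a toy sketch (synthetic scores, not data from the Radiology study): lowering the decision threshold catches more true disease, but sensitivity rises at the expense of PPV.

```python
# Toy illustration of the sensitivity/PPV threshold trade-off.
# Synthetic data only - not from the Radiology study.
import random

random.seed(0)
# Hypothetical model scores: diseased cases tend to score higher than normals.
diseased_scores = [random.gauss(0.7, 0.15) for _ in range(200)]
normal_scores = [random.gauss(0.4, 0.15) for _ in range(800)]

for threshold in (0.3, 0.5, 0.7):
    tp = sum(s >= threshold for s in diseased_scores)  # diseased, flagged
    fp = sum(s >= threshold for s in normal_scores)    # normal, flagged
    sensitivity = tp / len(diseased_scores)
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    print(f"threshold={threshold:.1f}: sensitivity={sensitivity:.0%}, PPV={ppv:.0%}")
```

A designer who must not miss disease will pick the low threshold - and inherit the false positives that come with it.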
Commentary: Eighty percent would be a fine result for generating “You may also like to watch” recommendations from Netflix, but it is not good enough for clinical diagnostics. The best use for these systems would be to reduce false negatives by suggesting suspicious features for human review. (That still leaves a risk of over-reliance on the part of those humans: All too often, we learn to defer to a system that is mostly right.)
#2: Who defines “normal” and “abnormal”?
Another paper, this one in Radiology: Artificial Intelligence, examined race and gender bias in a chest X-ray foundation model (a general-purpose model intended to serve as the basis for more specific applications). All things being equal, we might expect image interpretation to be equally objective across race and gender. Apparently not: Women received an inaccurate “no finding” result (i.e., a false negative) 7% to 16% more frequently than men, while Black patients received an inaccurate diagnosis of pleural effusion 20% more often than Asian patients.
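Auditing for this kind of bias is mechanically simple once ground truth is available: compute the miss rate separately for each subgroup. A minimal sketch, using hypothetical records rather than the study’s data:

```python
# Minimal subgroup bias audit (hypothetical records, not the study's data).
from collections import defaultdict

# Each record: (subgroup, ground_truth_has_finding, model_said_no_finding)
records = [
    ("female", True, True), ("female", True, False), ("female", True, True),
    ("male", True, False), ("male", True, False), ("male", True, True),
]

misses = defaultdict(int)
totals = defaultdict(int)
for group, has_finding, model_no_finding in records:
    if has_finding:               # false negatives only exist among true cases
        totals[group] += 1
        if model_no_finding:      # model reported "no finding" despite disease
            misses[group] += 1

for group, n in totals.items():
    print(f"{group}: false-negative rate = {misses[group] / n:.0%}")
```

If rates diverge materially across groups, the training data (or labels) deserve scrutiny - which is exactly what this paper found.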
Commentary 1: Here the issue was the data set on which the model was trained. While it was large (800,000-plus radiographs), roughly 80% of the images were sourced from India; the paper doesn’t specify the percentage of those images that came from male patients.
Commentary 2: Curated, relevant, balanced, labeled training data is critical in medical AI. Just increasing the quantity of (biased) training data does not help.
Quick Hits
Cytomegalovirus (CMV) is a leading cause of birth defects. It is common and asymptomatic in adults but dangerous for pregnant women, as it can be passed to their babies with serious consequences. New York recently joined a growing number of states in piloting universal CMV screening of newborns within hours of birth.
The FDA is looking to create a nine-member external committee to advise the agency on the uses, misuses, challenges, and opportunities associated with emerging digital technologies in clinical care. The remit appears to be broad, including digital therapeutics, AI, and remote patient monitoring devices. Applications will be accepted through December 11, 2023.
Following up on our Special Report: It comes as no surprise that many are predicting lawsuits opposing the FDA’s proposed rule concerning laboratory-developed tests. The American Clinical Lab Association has already issued a statement in opposition.
We are taking next week off as Mara will be leading Arizona State University's Annual Diagnostics Summit on October 19th. Please consider attending this hybrid event - registration is free!
The agenda is strong, with Dr. Tim Stenzel - FDA Diagnostics Chief - as keynote speaker, plus three provocative panel discussions on all things diagnostics. The closing speaker is Mark Massaro, diagnostics analyst at BTIG. We hope to see you there. Questions? Email mara.aspinall@asu.edu
We will be back with the next issue of our newsletter on Thursday, October 26th.