ALSO IN THIS ISSUE
Tests for NCI pilot MCED study announced
Could an app Dx early-stage dementia?
Bird Flu Update - Pay attention to the cats
NCI announces tests to be evaluated in MCED pilot study
Multi-cancer early detection (MCED) blood tests aim to detect a broad range of cancers before they cause symptoms. But as we reported this past July, we don’t yet know whether using these tests is really a good idea. Will they detect enough truly dangerous cancers to outweigh the inevitable false positives? Will they save lives?
To help answer those questions, the UK’s NHS is conducting a large trial to evaluate one MCED (Grail’s Galleri). In the US, the NIH’s National Cancer Institute (NCI) is heading in a different direction: it’s running a moderately sized study of two MCEDs now to prepare for a much larger study of MCEDs later. Run by the Cancer Screening Research Network, this pilot study, called Vanguard, is set to start next month.
Last week, the NCI announced the two tests Vanguard will look at: Avantect, by ClearNote Health, and Shield, by Guardant Health. The tests were selected based on the results they returned on specimens that NCI provided.
COMMENTARY: It’s notable that both tests include epigenomics in their algorithms - in other words, they don’t just look for genetic mutations, but also for which genes are being turned on and off. That addition should make these tests more accurate, because it is epigenomics that indicates which genes are actually driving tumor growth, whether by up-regulating cancer-causing genes or by down-regulating tumor-suppressor genes.
While it will be interesting to see Vanguard’s results, we have to remember that it’s just a pilot. Its stated goal is to “inform the design of a much larger study” that will “evaluate whether the benefits of using MC[E]D tests to screen for cancer outweigh the harms, and whether they can detect cancer early in a way that reduces deaths.” Seems like we will have to wait a while to get the answers we’re really looking for.
Two ways AI makes mammograms even more useful
Mammograms are a good screening tool - a fact backed up by research as well as conventional wisdom. But two recent studies show how the use of AI could make them even better.
Compare past and present scans to predict future risk
One of the models isn’t used to diagnose cancer but to predict whether a given patient is at high risk of developing breast cancer over the next five years. Traditionally, that risk level is assigned based on a person’s answers to a questionnaire that asks about age, race, family history, and more. The model uses the patient’s past mammograms instead, looking at how the images change over the course of three years.
Of those the model labeled high-risk, 53 out of every 1,000 developed breast cancer during the next five years. That may not sound great, but it’s a big improvement over the standard technique. Of those the questionnaire labeled high-risk, only 23 out of every 1,000 developed breast cancer during that time frame. The model also did well on the other end of the spectrum, with only 2.6 per 1,000 of the people it labeled low-risk developing cancer.
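To see how those numbers stack up, here’s a quick back-of-the-envelope calculation. The per-1,000 rates are the ones reported above; the relative-risk ratios are simple arithmetic we derived from them, not outputs of the study.

```python
# Reported 5-year breast-cancer rates per 1,000 people in each group
# (figures from the study as described above).
rates_per_1000 = {
    "AI model, high-risk":      53.0,
    "questionnaire, high-risk": 23.0,
    "AI model, low-risk":        2.6,
}

for group, rate in rates_per_1000.items():
    # A rate of n per 1,000 is n/10 percent.
    print(f"{group}: {rate}/1,000 = {rate / 10:.2f}% over 5 years")

# Derived ratios (our arithmetic, not study outputs):
print(f"AI high-risk vs questionnaire high-risk: {53.0 / 23.0:.1f}x")  # ~2.3x
print(f"AI high-risk vs AI low-risk:             {53.0 / 2.6:.0f}x")   # ~20x
```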
The model worked just as well for Black patients as it did for white patients. Researchers are now testing the algorithm to make sure it works in people of other racial and ethnic backgrounds, too.
Add AI to radiologist review for even more accurate diagnosis
The other study looked at breast-cancer diagnosis. The current standard of care is to have two different radiologists look at a mammogram. Researchers wanted to know whether using two radiologists plus an AI model would catch more cancers - without causing more false positives.
The answer was yes. With AI assistance, radiologists caught roughly 18% more cancers. And the radiologist-plus-AI team flagged mammograms as abnormal at essentially the same rate as radiologists alone. (Translation: The addition of AI didn’t lead to more unnecessary follow-up exams.)
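To make “18% more cancers at the same recall rate” concrete, here’s an illustrative sketch. The baseline detection and recall rates below are hypothetical assumptions for the sake of the arithmetic; only the 18% relative improvement comes from the study.

```python
# Illustrative arithmetic only. baseline_detect and recall_rate are
# hypothetical assumptions, NOT figures from the study; the 18%
# relative gain is the number reported above.
baseline_detect = 5.0   # cancers found per 1,000 screens (assumed)
recall_rate = 25.0      # screens flagged abnormal per 1,000 (assumed)
relative_gain = 0.18    # reported improvement with AI assistance

ai_detect = baseline_detect * (1 + relative_gain)  # 5.9 per 1,000

print(f"radiologists alone: {baseline_detect:.1f} cancers per 1,000 screens")
print(f"radiologists + AI:  {ai_detect:.1f} cancers per 1,000 screens")
# The recall rate stays the same in both arms, so the extra cancers
# are found among screens that would have been flagged anyway, not by
# flagging (and recalling) more people.
print(f"recall rate (both arms): {recall_rate:.1f} per 1,000 screens")
```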
COMMENTARY: We’re pleased to see both of these improvements. Using AI to identify who is truly high-risk should save money by sparing low-risk patients unnecessary follow-ups and biopsies, and reducing the false-positive rate of any screening program can only be a good thing.
Bird Flu Update
CDC issues Health Advisory to speed discovery of human H5N1 cases
Another California child infected, source unknown
Cats could be H5N1 mixing vessels, and more are getting infected
Infected raw milk makes monkeys sick if it gets in their airways
The CDC has issued a Health Advisory recommending that hospitals subtype all influenza A-positive specimens from their patients as quickly as possible. It also reminded clinicians to test for influenza in patients with flu symptoms.
A California child who had a fever and conjunctivitis was diagnosed with H5N1 flu last week. The source of the infection is unknown, and the child has since recovered. No other people were affected.
New research shows that cats have receptors that allow them to be infected with influenza viruses from both birds and humans. (Pigs have the same ability.) The discovery is concerning because if a cat were infected with both H5N1 and a human flu virus at the same time, the two viruses could swap genes, creating a new version of H5N1 that spreads more easily among people. This news comes just as the USDA reports that 11 more domestic cats have been infected with H5N1, several after drinking raw milk or eating commercially available raw pet-food diets.
A preprint H5N1 study in monkeys (not yet peer-reviewed) showed that drinking infected raw milk didn’t make the monkeys sick, though it did infect them and cause them to shed virus. If the monkeys got raw milk in their noses, they got mildly sick; if they breathed the milk into their lower airways, they got severely ill.
An early-stage app to diagnose early-stage dementia
A recent Nature Medicine article predicting a wave of new dementia cases over the next few decades has gotten a lot of press this week. So how are we going to diagnose these people early enough to treat them effectively? Especially the ones who don’t have easy access to a clinician who can diagnose them?
A study in PLOS Digital Health offers one possibility, using a handy tool we all carry these days - a cell phone. It looked at people with subjective cognitive decline (SCD; people who feel like they’re having some memory loss or abnormal confusion) who scored normally on tests used to diagnose dementia. The question the researchers asked: Could a wayfinding app help diagnose these folks? After all, trouble with spatial navigation is a known symptom of Alzheimer’s disease.
Study subjects were asked to find five buildings on a medical campus, using a smartphone app that tracked their movements and how they used the software. In the end, there was only one significant difference between subjects with SCD and either younger, healthy adults or older adults without SCD. The folks with SCD had more “orientation stops,” meaning they had to stop moving and check the app more frequently.
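For the curious, here’s a toy sketch of how an “orientation stop” metric might be computed from app telemetry. This is our own illustration, not the study’s actual code; the data fields and thresholds are assumptions.

```python
# Toy sketch of an "orientation stop" counter. This is our illustration,
# not the study's code; the field names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Sample:
    t: float         # seconds since the task started
    speed: float     # walking speed in m/s (e.g., from GPS)
    map_open: bool   # whether the map screen was being viewed

def count_orientation_stops(samples, speed_thresh=0.3, min_dur=2.0):
    """Count episodes where the subject stops moving while checking the map."""
    stops, start = 0, None
    for s in samples:
        in_stop = s.speed < speed_thresh and s.map_open
        if in_stop and start is None:
            start = s.t            # a candidate stop begins
        elif not in_stop and start is not None:
            if s.t - start >= min_dur:
                stops += 1         # long enough to count as a stop
            start = None
    return stops

# Example: two map checks, only one long enough to count as a stop.
walk = [Sample(0, 1.2, False), Sample(1, 0.1, True), Sample(4, 0.1, True),
        Sample(5, 1.1, False), Sample(6, 0.2, True), Sample(7, 1.3, False)]
print(count_orientation_stops(walk))  # -> 1
```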
COMMENTARY: The study is quite small (only 72 subjects), but the concept is intriguing. Neurologists probably won’t like to hear this, but this app feels like the beginning of a new kind of at-home diagnostic. We’ll stay tuned.
Addendum
Last week we discussed a report presenting interim findings of the GUARDIAN trial (Genomic Uniform-screening Against Rare Diseases in All Newborns). The trial is a project of the New York State Newborn Screening program, looking at newborn whole-genome screening.
The interim results of the trial seemed to indicate that newborn whole-genome screening would produce a high rate of false positives (19%). This number was based largely on the cystic fibrosis diagnoses made during the course of the trial. Our article stated that all 11 of these diagnoses turned out to be false.
One of the study’s authors drew our attention to the fact that those 11 diagnoses were based on the presence of two genetic variants that are not well correlated with cystic fibrosis disease. (In other words, just because a baby has those variants doesn’t mean they’ll get sick.) If those 11 diagnoses are excluded - which the study did after its first 1,000 subjects - the false-positive rate drops dramatically. Instead of a 19% false-positive rate for whole-genome newborn screening, the rate would be 9%.
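For readers who want to see the arithmetic, here’s how excluding 11 calls can cut the rate roughly in half. The total screen-positive count (~100) is our back-calculation from the two reported rates, not a figure from the study.

```python
# Back-calculated GUARDIAN false-positive arithmetic. The screen-positive
# total (~100) is inferred from the reported 19% and 9% rates; it is not
# a number taken from the study itself.
positives = 100                       # inferred screen-positive calls
false_pos = round(0.19 * positives)   # 19 false positives -> 19% rate

excluded = 11                         # the poorly correlated CF variant calls
fpr_before = false_pos / positives
fpr_after = (false_pos - excluded) / (positives - excluded)

print(f"before exclusion: {false_pos}/{positives} = {fpr_before:.0%}")  # 19%
print(f"after exclusion:  {false_pos - excluded}/{positives - excluded}"
      f" = {fpr_after:.0%}")          # 8/89, about 9%
```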