Medical imaging is the economic artery of modern diagnostics, with AI-driven advancements prompting clinicians to rethink how they interpret scans. For instance, the global medical imaging AI market [1] is projected to expand eightfold by 2030, reaching an estimated $8.18 billion.
This growth signals the sector’s accelerating adoption of AI-driven diagnostics, shifting what was once a supplementary tool into a linchpin of modern radiology. Yet the fidelity checks between the original images used for AI algorithm development and the rendered images used for diagnosis are often overlooked.
The methodologies governing image fidelity assessment remain mired in conventional techniques. Metrics like mean squared error (MSE) and peak signal-to-noise ratio (PSNR), long considered gold standards, are now facing scrutiny over their inability to capture the true essence of diagnostic quality.
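To make the limitation concrete, here is a minimal sketch of how these pixel-based metrics are typically computed. The NumPy implementation and the 8-bit data range are illustrative assumptions, not a reference implementation from any particular vendor or toolkit.

import numpy as np

def mse(original: np.ndarray, rendered: np.ndarray) -> float:
    """Mean squared error: averages squared per-pixel intensity differences."""
    diff = original.astype(np.float64) - rendered.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, rendered: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB, derived directly from MSE."""
    err = mse(original, rendered)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / err)

Because both scores reduce every discrepancy to an average intensity error, a clinically irrelevant misalignment of a few pixels can score as poorly as a genuine loss of anatomical detail.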
Under scrutiny
As medical imaging platforms evolve, even subtle changes in how images are rendered on new viewers can alter how they appear to radiologists, despite the underlying data remaining unchanged. These variations, while often imperceptible to the untrained eye, can influence clinical interpretation.
Any difference between the rendered image used for diagnosis and the original image used for AI algorithm training can affect the performance of the resulting model. Ensuring that the fidelity of the newly rendered image matches the original diagnostic intent is therefore becoming a critical priority.
Furthermore, image fidelity between original and rendered images has traditionally been evaluated using pixel-based metrics like MSE and PSNR, which prioritize mathematical precision and often fall short of capturing perceptual or diagnostic relevance. Meanwhile, more than three-quarters of AI-based medical devices authorized by the U.S. Food and Drug Administration (FDA) are dedicated to analyzing CT scans, MRIs, and x-rays. This compels a deeper examination: Do conventional fidelity assessments align with real-world clinical decision-making?
The industry is already signaling a shift. A recent sentiment analysis [2] revealed that 55% of discussions around AI in medical imaging expressed positive views, highlighting its potential to enhance diagnostic accuracy and efficiency. This optimism reflects a growing reliance on AI-driven insights. Advanced image fidelity evaluation methods are proving essential in tackling fidelity challenges and ensuring consistency in how diagnostic images are interpreted across systems and software platforms.
Illusion of perfection
Conventional full-reference image quality assessment (FR-IQA) models, especially those based on pixel-wise comparisons, are still widely used in medical imaging to evaluate rendering pipelines, acquisition methods, and image transformations. However, by treating image quality as a purely numerical problem, these models fail to account for the perceptual and diagnostic complexities of medical imaging. This disconnect introduces critical blind spots in AI-assisted diagnosis.
So, there is a clear need for advanced techniques that tackle these challenges and set new standards for medical image fidelity.
Pixels to perception
Radiologists rely on context, medical expertise, and years of pattern recognition experience to identify structural distortions, contrast variations, and textural inconsistencies that could indicate critical conditions. Inspired by this, the industry is now embracing perception-based IQA frameworks, designed to emulate human vision.
These advanced IQA methods use a computerized human vision model that replicates retinal optics, spatial contrast sensitivity, and visual cortex frequency processing. The model generates a spatial map of visual differences between two images, identifying distortions that affect clinical interpretation. Such methods include the perceptual difference model (PDM), the structural similarity index (SSIM) family, learned perceptual image patch similarity (LPIPS), and deep image structure and texture similarity (DISTS), among others.
Unlike pixel-based metrics, perception-driven IQA prioritizes the structural, contrast, and textural cues radiologists actually rely on, rather than raw numerical error, as the sketch below illustrates.
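As a simple illustration, a structure-aware score such as SSIM can be reported alongside PSNR for the same image pair. This sketch assumes the scikit-image library and single-channel arrays with a known data range; it is not tied to any specific vendor pipeline.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare_fidelity(original: np.ndarray, rendered: np.ndarray, data_range: float = 255.0):
    """Report a pixel-based score (PSNR) next to a structure-aware score (SSIM)."""
    psnr_score = peak_signal_noise_ratio(original, rendered, data_range=data_range)
    ssim_score, ssim_map = structural_similarity(
        original, rendered, data_range=data_range, full=True
    )
    # ssim_map is a per-pixel similarity map, loosely analogous to the
    # visual-difference maps produced by perception-based IQA models.
    return psnr_score, ssim_score, ssim_map

The similarity map shows where structure has changed, information that a single averaged error value hides.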
Key drivers
Deep learning has transformed medical imaging by making image fidelity assessments more clinically meaningful and adaptable to AI-driven workflows. From feature maps to similarity indexes, the technology has been setting the bar higher with models such as LPIPS and DISTS.
Each metric has distinct advantages and limitations, and its suitability is determined by factors such as the specific application, nature of distortions, computational demands, and perceptual accuracy requirements.
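For the learned metrics, a minimal sketch of computing a perceptual distance is shown below. It assumes the third-party lpips PyTorch package, images already scaled to the [-1, 1] range, and an AlexNet backbone; all of these are illustrative choices rather than recommendations.

import torch
import lpips  # third-party package: pip install lpips

# A learned perceptual metric compares deep feature activations rather than raw pixels.
loss_fn = lpips.LPIPS(net="alex")  # backbone choice is an assumption for this sketch

def lpips_distance(original: torch.Tensor, rendered: torch.Tensor) -> float:
    """Inputs: (N, 3, H, W) tensors scaled to [-1, 1]. Lower values mean closer agreement."""
    # Grayscale medical images would first need to be replicated to three channels.
    with torch.no_grad():
        return loss_fn(original, rendered).item()

Because backbones like AlexNet were trained on natural photographs, any clinical use of such a distance would still need validation against radiologist judgments.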
Why perception-driven IQA outperforms
For AI-assisted diagnostics to be reliable and effective, image fidelity must maintain anatomical accuracy and diagnostic clarity. Perception-based IQA delivers this reliability and effectiveness by mirroring how radiologists interpret visual data, ensuring that medical images reflect true-to-life anatomical structures rather than just mathematical approximations.
Simultaneously, it aligns with how AI systems analyze imaging inputs, refining their ability to detect clinically significant patterns and deviations. This leads to greater diagnostic accuracy, workflow efficiency, and cross-modality consistency, and advancements in perception-driven IQA are redefining what fidelity means in medical imaging.
Next-gen image fidelity
As the healthcare industry undergoes rapid digital transformation, medical image fidelity standards must evolve to support AI-driven innovation and clinical accuracy alike.
The next generation of medical imaging will not be defined by what’s visible but by what’s actionable. As AI systems become more autonomous and decision-support tools grow in sophistication, the demand will shift from mere image clarity to diagnostic intent recognition. Perception-based IQA plays a critical role here by quantifying fidelity in ways that reflect both how radiologists see and how algorithms reason.
It enables consistency in image interpretation across diverse systems, user environments, and AI models, helping standardize diagnostic outcomes even when different tools are in play. By embedding perception-aligned fidelity checks into the imaging workflow, healthcare providers can confidently scale AI adoption without compromising diagnostic integrity. Organizations that prioritize perception-based IQA today will improve diagnostic precision and lead the way in defining standards for AI-integrated healthcare.
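One way to embed such a check is a lightweight gate in the rendering pipeline that compares each rendered image against its source data before it reaches a viewer or an AI model. The sketch below reuses SSIM via scikit-image; the 0.95 threshold and the function name are purely illustrative placeholders that a real deployment would have to calibrate clinically.

import numpy as np
from skimage.metrics import structural_similarity

# Illustrative threshold only; a real system would calibrate it per modality,
# display pipeline, and viewing condition against radiologist assessments.
FIDELITY_THRESHOLD = 0.95

def fidelity_gate(original: np.ndarray, rendered: np.ndarray, data_range: float) -> bool:
    """Return True if the rendered image is faithful enough to pass downstream."""
    score = structural_similarity(original, rendered, data_range=data_range)
    # Below threshold, flag for review instead of silently passing a degraded rendering.
    return score >= FIDELITY_THRESHOLD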
Ravinder Singh is senior vice president of Citius Healthcare Consulting and Vaibhavi Sonetha, PhD, is assistant vice president.
The comments and observations expressed herein do not necessarily reflect the opinions of AuntMinnie.com, nor should they be construed as an endorsement or admonishment of any particular vendor, analyst, industry consultant, or consulting group.
References
[1] https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-medical-imaging-market
Until recently I had no real idea what a diphthong was. I had the vague notion it was some form of sprocket or perhaps part of an internal combustion engine. You see, Dear Reader, I didn’t have a classical education. When I was at school, "Grammar" was what we called my mother’s mother. I’m therefore a self-taught writer, almost entirely lacking any technical knowledge of English literature. Writing in the vernacular is all I have. But it worked for Hemingway and Vonnegut, so it is good enough for me.
Anyway, my knowledge of grammatical arcana got an upgrade recently. When looking up the correct medical spelling of "fetus/foetus," I spent a happy hour or two down a rather deep internet rabbit hole, bouncing around related topics, completely divorced from reality. As it turns out, there is an accepted spelling of the term in British medical practice. And that is "fetus."
As I was brought up spelling it "foetus," this was a minor shock. Unbeknownst to this adult radiologist, it has officially been "fetus" for over a decade not just on this side of the Atlantic but globally so. Our American cousins may celebrate this as a win, thinking the Brits have come to their senses and started using the simpler American spelling. But it isn’t quite the win they might think it is.
It is simply that fetus is closest to the original Latin word fētus (meaning breeding or birth). But why did foetus ever arise in the first place? Therein lies the entrance to the aforementioned rabbit hole. You see, during the 16th century, a whole language of English medical words derived from Latin and Greek came into use to describe new medical discoveries. It wasn’t an intellectual flex, just that they were the scholastic languages of the time. As a result, a large number of words were introduced that were supposed to retain some of the grammatical features of their source languages.
Hypercorrections
Except translators often hypercorrected matters, introducing prestigeful spellings based on etymological fallacies. Which led to multiple different spellings of fetus, foetus, phoetus, and fætus before finally settling on foetus. Except they settled on the wrong spelling. The original Latin fētus was pronounced with a long "e," denoted by the little line above the letter (called a "macron"). So fetus should really have been spelled "feetus" if we are going to be utterly logical about it. But it is a bit late for that now.
The "oe" bit in the middle of foetus is supposed to be, I learned, a diphthong. This is where two letters create a syllable that glides across the mouth. The exclamation “Ah!” is a monophthong, whereas “Ow!” is a diphthong. Perhaps the original thinking was, presumably, that "foetus" should have been pronounced "foe-ee-tus." However, it was always pronounced "fee-tus," so the diphthong argument for the "oe" in foetus doesn’t stand up.
More: the "oe" bit is also a digraph. This is where two letter combine to form a sound, potentially unrelated to the spelling. The digraph "oe" usually denotes a long flat "o" such as in "toe" or "poet." Fetus was never pronounced "fow-tus," so justifying foetus as a digraph doesn’t hold water either. The same is true with the words "fetor" and "fetid," from the Latin fētor, meaning "stinking." We Brits should have never used "foetor" or "foetid," and it has been largely dropped. Feetor, anyone?
I began to wonder about all the other different spellings between British and American medical English. Were haematology/hematology, hydrocoele/hydrocele, and tumour/tumor all originally misspelled by scientists of the Enlightenment? Well, no, as it turned out. It is more complex than just dropping redundant diphthongs and ignoring etymology.
Many American English medical spellings follow the principles set forth by U.S. lexicographer Noah Webster. Webster’s 1828 American Dictionary of the English Language sought not just to simplify spelling but also to unify it across the then-fledgling U.S. However, many modern medical words (like "paediatrics/pediatrics") were coined well after Webster’s time. We Brits can’t point the finger directly at Webster on this one. Nor can we blame the Merriam brothers, who bought the rights to Webster’s work after his death.
I’ve read many arguments that British medical spelling is more accurate because it reflects the etymology of the word. Hence "oesophagus" should retain its initial "o" because the original Greek word was οἰσοφάγος or oisophagos. But we don’t call Egypt "Aegypt" just because the Greeks and Romans spelled it with an "A." I’ve also read that we should retain the digraphs "oe" and "ae" as they are pronounced subtly differently to "e." Well, that might once have been true but not now.
Does it matter?
Does it really matter about there being one ultimately correct medical spelling of the word denoting the unborn child or gullet? Not really. I feel lucky that English is the lingua franca of medicine. As a Brit, I can go to international conferences and everything is in my birth tongue. But as a global language, English is ever evolving and will keep evolving. Local variations of English exist in multiple dialects across the world. But the spoken word and the written word evolve at different speeds. The written word lags behind by some distance. Several hundred years in many cases.
Aside from my new love of all things fetus, I’m largely averse to changing the spelling of British English medical terms. We’ve never had the equivalent of the Académie Française, a 400-year-old French institution solely dedicated to regulating French grammar, spelling, and literature. English has a 1,600-year history from its roots as a West Germanic language brought by Anglo-Saxon invaders. It then absorbed many words from others, largely Norse and French invaders. This mish-mash of odd words gives a richness and depth, while its loose grammatical structure allows flexibility and ease of use. It’s too late to change the spelling of words wholesale. Moreover, we don’t want to. We like it as it is -- albeit a tad messy.
So what if we now pronounce many words quite differently from their spelling? So what if they are hard to spell and confusing for nonnative speakers? All languages have oddities that way. But if my American cousins want to spell things differently, you guys go for it. Knock yourselves out. Whatever works for you. Just don’t expect us to change. Or agree that one way is somehow "better."
Because if we start changing the spelling of British English words to match modern global pronunciation, we’d be absolutely screwed. For example, the sentence “Worcester knight Colonel Geoff sliced the tough sugar cake using a sword” makes complete sense to Brits. But if you change it to "Wuster nite Kernel Jeff slysed thu tuff shugar cayk yoosing ay sord," it makes phonetic sense but reads as absolute gobbledygook.
My overall thoughts? Ignore the grammar pedants, ignore the nationalists, ignore international standardization committees. Minor spelling and grammatical differences cause no harm. I say leave things be and just celebrate our differences. Let language evolve naturally. Don’t fight over dialects or the correctness of spelling. It comes across as sneering cultural snobbery. No snobbery is good, but that is definitely the worst sort.
Fred Astaire and Ginger Rogers had it right in 1937 when they sang “potato, potahto; tomato, tomahto, let’s call the whole thing off!” We need each other too much to squabble over words; we can be happy and work together productively, spelling notwithstanding.
Paul McCoubrie, MBBS, is a consultant radiologist at Southmead Hospital in Bristol, U.K. Competing interests: None declared.
His new book -- More Rules of Radiology -- is available via its publisher Springer and also local bookstores (ISBN-13: 978-3031640933).
Columbia University researchers in New York City have reported that worsening depression in older adults is related to higher levels of Alzheimer’s disease pathology, specifically the accumulation of tau protein.
The finding is from an analysis of 300 participants who underwent multiple F-18 flortaucipir tau PET scans between 2015 and 2022 and suggests depression could be a modifiable risk factor for the disease, noted lead author Daniel Talmasov, MD, and colleagues.
“Depressive symptoms may have particular relevance as an indicator of Alzheimer's pathology progression and could represent a target for research into modifiable risk factors for Alzheimer's disease,” the group wrote. The study was published April 9 in the American Journal of Geriatric Psychiatry.
Despite evidence suggesting a link between depression and faster clinical progression in patients with Alzheimer’s disease, the biological mechanisms underlying this association are poorly understood, the authors explained.
To address the knowledge gap, the group explored whether there is a longitudinal relationship between depressive symptoms and increased neurofibrillary tau tangles in the brain, a hallmark of the disease.
The researchers gathered data from 300 patients (median age, 74) from the Alzheimer's Disease Neuroimaging Initiative (ADNI), a project launched at the University of California, San Francisco in 2003 to study the progression of the disease. All patients had at least two F-18 flortaucipir PET scans to quantify tau levels between July 2015 and April 2022, as well as several Geriatric Depression Scale (GDS) scores over the period.
Clinically, 162 individuals were cognitively unimpaired, 99 had mild cognitive impairment, and 42 had dementia due to Alzheimer's disease. The majority of participants (n = 260) had minimal depressive symptoms, while 37 participants showed mild depressive symptoms (GDS 5-8) and six had moderate symptoms (GDS 9-11).
According to the analysis, GDS scores were positively correlated with annualized tau accumulation rates. Specifically, a 5-point increase in GDS score (approximately the difference between minimal and mild depression) corresponded to a 1.2 percentage point difference in annual tau accumulation, the group reported.
“These results support the hypothesis that the burden of depressive symptoms is associated with the rate at which tau accumulates,” the group wrote.
However, the study sample was limited to the existing ADNI cohort as of 2023, in which most participants were white, non-Hispanic, and college-educated, and thus the findings should be replicated in a more diverse cohort, the authors noted.
Moreover, the retrospective design of the study prevents inferring a causal or directional relationship, the researchers added.
“Future studies could ultimately explore whether managing depression mitigates [Alzheimer’s disease] progression by influencing the trajectory of tau accumulation,” the group concluded.