
Chen Qian, Haoyu Zhang, Junnan Ma, Liuhong Zhu, Qingrui Cai, Yu Wang, Ruibo Song, Lv Li, Lin Mei, Xianwang Jiang, Qin Xu, Boyu Jiang, Ran Tao, Chunmiao Chen, Shufang Chen, Dongyun Liang, Qiu Guo, Jianzhong Lin, Taishan Kang, Mengtian Lu, Liyuan Fu, Ruibin Huang, Huijuan Wan, Xu Huang, Jianhua Wang, Di Guo, Hai Zhong, Jianjun Zhou, Xiaobo Qu

arXiv preprint · Oct 17, 2025
Clinical adoption of multi-shot diffusion-weighted magnetic resonance imaging (multi-shot DWI) for body-wide tumor diagnostics is limited by severe motion-induced phase artifacts from respiration, peristalsis, and other physiological motion, compounded by multi-organ, multi-slice, multi-direction, and multi-b-value complexities. Here, we introduce a reconstruction framework, LoSP-Prompt, that overcomes these challenges through physics-informed modeling and synthetic-data-driven prompt learning. We model inter-shot phase variations as a high-order Locally Smooth Phase (LoSP), integrated into a low-rank Hankel matrix reconstruction. Crucially, the algorithm's rank parameter is set automatically via prompt learning trained exclusively on synthetic abdominal DWI data emulating physiological motion. Validated across 10,000+ clinical images (43 subjects, 4 scanner models, 5 centers), LoSP-Prompt: (1) achieved twice the spatial resolution of clinical single-shot DWI, enhancing liver lesion conspicuity; (2) generalized to seven diverse anatomical regions (liver, kidney, sacroiliac, pelvis, knee, spinal cord, brain) with a single model; (3) outperformed state-of-the-art methods in image quality, artifact suppression, and noise reduction (11 radiologists' evaluations on a 5-point scale, $p<0.05$), achieving 4-5 points (excellent) on kidney DWI, 4 points (good to excellent) on liver, sacroiliac, and spinal cord DWI, and 3-4 points (good) on knee and brain tumor DWI. The approach requires neither navigator signals nor supervision with real data, providing an interpretable, robust solution for high-resolution multi-organ multi-shot DWI. Its scanner-agnostic performance signifies transformative potential for precision oncology.
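As a rough illustration of the structured low-rank machinery behind this kind of reconstruction (a generic sketch, not the authors' LoSP-Prompt code), the snippet below builds a Hankel matrix from a signal with a locally smooth phase and projects it onto a fixed rank — the parameter the paper sets automatically via prompt learning:

```python
# Hypothetical sketch of one low-rank Hankel enforcement step, as used in
# structured low-rank MRI reconstruction; not the authors' LoSP-Prompt code.
import numpy as np

def hankel_matrix(signal: np.ndarray, window: int) -> np.ndarray:
    """Stack sliding windows of a 1-D (k-space) signal into a Hankel matrix."""
    n = len(signal)
    return np.stack([signal[i:i + window] for i in range(n - window + 1)])

def low_rank_project(H: np.ndarray, rank: int) -> np.ndarray:
    """Project a Hankel matrix onto the set of rank-`rank` matrices via SVD."""
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    s[rank:] = 0.0  # hard-threshold singular values beyond the chosen rank
    return (U * s) @ Vh

# Toy usage: a smooth-phase signal is approximately low rank in Hankel form.
x = np.exp(1j * 0.01 * np.arange(64) ** 2)   # slowly varying (locally smooth) phase
H = hankel_matrix(x, window=16)
H_lr = low_rank_project(H, rank=4)           # rank set by prompt learning in the paper
```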

Lavanya Umapathy, Patricia M Johnson, Tarun Dutt, Angela Tong, Madhur Nayan, Hersh Chandarana, Daniel K Sodickson

arXiv preprint · Oct 17, 2025
Temporal context in medicine is valuable in assessing key changes in patient health over time. We developed a machine learning framework to integrate diverse context from prior visits to improve health monitoring, especially when prior visits are limited and their frequency is variable. Our model first estimates initial risk of disease using medical data from the most recent patient visit, then refines this assessment using information digested from previously collected imaging and/or clinical biomarkers. We applied our framework to prostate cancer (PCa) risk prediction using data from a large population (28,342 patients, 39,013 magnetic resonance imaging scans, 68,931 blood tests) collected over nearly a decade. For predictions of the risk of clinically significant PCa at the time of the visit, integrating prior context directly converted false positives to true negatives, increasing overall specificity while preserving high sensitivity. False positive rates were reduced progressively from 51% to 33% when integrating information from up to three prior imaging examinations, as compared to using data from a single visit, and were further reduced to 24% when also including additional context from prior clinical data. For predicting the risk of PCa within five years of the visit, incorporating prior context reduced false positive rates still further (64% to 9%). Our findings show that information collected over time provides relevant context to enhance the specificity of medical risk prediction. For a wide range of progressive conditions, sufficient reduction of false positive rates using context could offer a pathway to expand longitudinal health monitoring programs to large populations with comparatively low baseline risk of disease, leading to earlier detection and improved health outcomes.
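A minimal sketch of the two-stage idea (an initial risk estimate from the current visit, refined with prior context); the features, labels, and logistic models below are illustrative placeholders, not the authors' pipeline:

```python
# Two-stage "current visit risk, refined by prior context" sketch; everything
# here (feature shapes, labels, logistic models) is an assumption for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_now = rng.normal(size=(500, 8))          # features from the most recent visit
X_prior = rng.normal(size=(500, 4))        # digest of prior imaging/biomarkers
y = (rng.random(500) < 0.2).astype(int)    # synthetic outcome labels

stage1 = LogisticRegression().fit(X_now, y)
risk_now = stage1.predict_proba(X_now)[:, 1]   # initial risk from current visit

# Stage 2 refines the initial estimate using prior-visit context.
X_refine = np.column_stack([risk_now, X_prior])
stage2 = LogisticRegression().fit(X_refine, y)
risk_refined = stage2.predict_proba(X_refine)[:, 1]
```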

Feifei Zhang, Zhenhong Jia, Sensen Song, Fei Shi, Dayong Ren

arXiv preprint · Oct 17, 2025
Despite the remarkable success of the end-to-end paradigm in deep learning, it often suffers from slow convergence and heavy reliance on large-scale datasets, which fundamentally limits its efficiency and applicability in data-scarce domains such as medical imaging. In this work, we introduce the Predictive-Corrective (PC) paradigm, a framework that decouples the modeling task to fundamentally accelerate learning. Building upon this paradigm, we propose a novel network, termed PCMambaNet. PCMambaNet is composed of two synergistic modules. First, the Predictive Prior Module (PPM) generates a coarse approximation at low computational cost, thereby anchoring the search space. Specifically, the PPM leverages anatomical knowledge (bilateral symmetry) to predict a 'focus map' of diagnostically relevant asymmetric regions. Next, the Corrective Residual Network (CRN) learns to model the residual error, focusing the network's full capacity on refining these challenging regions and delineating precise pathological boundaries. Extensive experiments on high-resolution brain MRI segmentation demonstrate that PCMambaNet achieves state-of-the-art accuracy while converging within only 1-5 epochs, a performance unattainable by conventional end-to-end models. This dramatic acceleration highlights that by explicitly incorporating domain knowledge to simplify the learning objective, PCMambaNet effectively mitigates data inefficiency and overfitting.
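To make the bilateral-symmetry prior concrete, here is a hedged numpy sketch of a "focus map" that scores how much each pixel differs from its left-right mirror; the PPM in the paper is learned, so this captures only the underlying intuition:

```python
# Sketch of the bilateral-symmetry idea behind the Predictive Prior Module:
# score each pixel by how much it differs from its mirrored counterpart.
# This is an illustration of the prior, not the paper's learned module.
import numpy as np

def focus_map(slice_2d: np.ndarray) -> np.ndarray:
    """Highlight asymmetric regions of a (roughly midline-aligned) axial slice."""
    mirrored = slice_2d[:, ::-1]                 # flip across the midsagittal axis
    asym = np.abs(slice_2d - mirrored)           # symmetric anatomy cancels out
    return asym / (asym.max() + 1e-8)            # normalize to [0, 1]

img = np.random.rand(128, 128)                   # stand-in for a brain MRI slice
fmap = focus_map(img)                            # high values ~ candidate pathology
```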

Daniela Vega, Hannah V. Ceballos, Javier S. Vera, Santiago Rodriguez, Alejandra Perez, Angela Castillo, Maria Escobar, Dario Londoño, Luis A. Sarmiento, Camila I. Castro, Nadiezhda Rodriguez, Juan C. Briceño, Pablo Arbeláez

arXiv preprint · Oct 17, 2025
Prenatal diagnosis of Congenital Heart Diseases (CHDs) holds great potential for Artificial Intelligence (AI)-driven solutions. However, collecting high-quality diagnostic data remains difficult due to the rarity of these conditions, resulting in imbalanced and low-quality datasets that hinder model performance. Moreover, no public efforts have been made to integrate multiple sources of information, such as imaging and clinical data, further limiting the ability of AI models to support and enhance clinical decision-making. To overcome these challenges, we introduce the Congenital Anomaly Recognition with Diagnostic Images and Unified Medical records (CARDIUM) dataset, the first publicly available multimodal dataset consolidating fetal ultrasound and echocardiographic images along with maternal clinical records for prenatal CHD detection. Furthermore, we propose a robust multimodal transformer architecture that incorporates a cross-attention mechanism to fuse feature representations from image and tabular data, improving CHD detection by 11% and 50% over image and tabular single-modality approaches, respectively, and achieving an F1 score of 79.8 $\pm$ 4.8% on the CARDIUM dataset. We will publicly release our dataset and code to encourage further research in this unexplored field. Our dataset and code are available at https://github.com/BCV-Uniandes/Cardium and at the project website https://bcv-uniandes.github.io/CardiumPage/.
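A toy PyTorch sketch of cross-attention fusion between image tokens and a tabular embedding, in the spirit of the architecture described above; the dimensions, pooling, and module layout are assumptions, not the released CARDIUM code:

```python
# Illustrative cross-attention fusion of image and tabular embeddings;
# all dimensions and design choices here are assumptions for demonstration.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        # Tabular tokens query the image tokens (the roles could be swapped).
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 1)  # binary CHD prediction logit

    def forward(self, img_tokens: torch.Tensor, tab_tokens: torch.Tensor):
        fused, _ = self.attn(query=tab_tokens, key=img_tokens, value=img_tokens)
        return self.head(fused.mean(dim=1))  # pool fused tokens -> logit

model = CrossAttentionFusion()
img = torch.randn(2, 49, 256)   # e.g., 7x7 patch embeddings from a CNN/ViT
tab = torch.randn(2, 1, 256)    # projected clinical-record embedding
logit = model(img, tab)         # shape (2, 1)
```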

Mahta Khoobi, Marc Sebastian von der Stueck, Felix Barajas Ordonez, Anca-Maria Iancu, Eric Corban, Julia Nowak, Aleksandar Kargaliev, Valeria Perelygina, Anna-Sophie Schott, Daniel Pinto dos Santos, Christiane Kuhl, Daniel Truhn, Sven Nebelung, Robert Siepmann

arXiv preprint · Oct 17, 2025
Structured reporting (SR) and artificial intelligence (AI) may transform how radiologists interact with imaging studies. This prospective study (July to December 2024) evaluated the impact of three reporting modes on image analysis behavior, diagnostic accuracy, efficiency, and user experience: free-text (FT), structured reporting (SR), and AI-assisted structured reporting (AI-SR). Four novice and four non-novice readers (radiologists and medical students) each analyzed 35 bedside chest radiographs per session using a customized viewer and an eye-tracking system. Outcomes included diagnostic accuracy (compared with expert consensus using Cohen's $\kappa$), reporting time per radiograph, eye-tracking metrics, and questionnaire-based user experience. Statistical analysis used generalized linear mixed models with Bonferroni post-hoc tests at a significance level of $P \le .01$. Diagnostic accuracy was similar in FT ($\kappa = 0.58$) and SR ($\kappa = 0.60$) but higher in AI-SR ($\kappa = 0.71$, $P < .001$). Reporting times decreased from $88 \pm 38$ s (FT) to $37 \pm 18$ s (SR) and $25 \pm 9$ s (AI-SR) ($P < .001$). Saccade counts for the radiograph field ($205 \pm 135$ (FT), $123 \pm 88$ (SR), $97 \pm 58$ (AI-SR)) and total fixation duration for the report field ($11 \pm 5$ s (FT), $5 \pm 3$ s (SR), $4 \pm 1$ s (AI-SR)) were lower with SR and AI-SR ($P < .001$ each). Novice readers shifted their gaze towards the radiograph in SR, while non-novice readers maintained their focus on the radiograph. AI-SR was the preferred mode. In conclusion, SR improves efficiency by guiding visual attention toward the image, and AI-prefilled SR further enhances diagnostic accuracy and user satisfaction.
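For readers unfamiliar with the agreement metric used here, a minimal example of computing Cohen's kappa between one reader and the expert consensus (the labels below are synthetic):

```python
# Cohen's kappa: chance-corrected agreement between a reader and the expert
# consensus; these per-radiograph labels are invented for illustration.
from sklearn.metrics import cohen_kappa_score

consensus = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]    # expert reference, per radiograph
reader    = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]    # one reader's calls in one mode

kappa = cohen_kappa_score(consensus, reader)
print(f"kappa = {kappa:.2f}")
```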

Yang, B., Earnest, T., Bilgel, M., Albert, M. S., Johnson, S. C., Davatzikos, C., Erus, G., Masters, C. L., Resnick, S. M., Miller, M. I., Bakker, A., Morris, J. C., Benzinger, T. L., Gordon, B. A., Sotiras, A., for the Alzheimer's Disease Neuroimaging Initiative, for the Preclinical Alzheimer's Disease Consortium

medRxiv preprint · Oct 17, 2025
Predicting the likelihood of developing Alzheimer's disease (AD) dementia in at-risk individuals is important for the design of, and optimal recruitment for, clinical trials of disease-modifying therapies. Machine learning (ML) has been shown to excel in this task; however, there remains a lack of models developed specifically for the preclinical AD population, who display early signs of abnormal brain amyloidosis but remain cognitively unimpaired. Here, we trained and evaluated ML classifiers to predict whether individuals with preclinical AD will progress to mild cognitive impairment or dementia within multiple fixed time windows, ranging from one to five years. Models were trained on regional imaging features extracted from amyloid positron emission tomography and magnetic resonance imaging pooled across seven independent sites and from two amyloid radiotracers ([18F]-florbetapir and [11C]-Pittsburgh compound B). Out-of-sample generalizability was evaluated via leave-one-site-out and leave-one-tracer-out cross-validation. Classifiers achieved an out-of-sample receiver operating characteristic area under the curve (ROC-AUC) of 0.66 or greater when applied to all but one hold-out site and 0.72 or greater when applied to each hold-out radiotracer. Additionally, when applying our models in a retroactive cohort enrichment analysis on A4 clinical trial data, we observed increased statistical power for detecting differences in amyloid accumulation between placebo and treatment arms after enrichment by ML stratifications. As emerging investigations of new disease-modifying therapies for AD increasingly focus on asymptomatic, preclinical populations, our findings underscore the potential applicability of ML-based patient stratification for recruiting more homogeneous cohorts and improving statistical power for detecting treatment effects in future clinical trials.
Highlights:
- Machine learning can predict future cognitive impairment in preclinical Alzheimer's disease
- Models achieved high out-of-sample ROC-AUC on external sites and PET tracers
- Models were able to distinguish cognitively stable individuals from decliners in the A4 cohort
- ML cohort enrichment enhanced secondary treatment effect detection in the A4 cohort
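The leave-one-site-out evaluation generalizes as follows; this sketch uses scikit-learn's LeaveOneGroupOut, with synthetic features, labels, and site IDs standing in for the pooled multi-site data:

```python
# Leave-one-site-out cross-validation sketch; the feature matrix, labels,
# site assignments, and classifier are placeholders, not the study's models.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(700, 20))            # regional amyloid-PET / MRI features
y = (rng.random(700) < 0.3).astype(int)   # progression within a fixed window
site = rng.integers(0, 7, size=700)       # seven independent sites

for train, test in LeaveOneGroupOut().split(X, y, groups=site):
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    auc = roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1])
    print(f"held-out site {site[test][0]}: AUC = {auc:.2f}")
```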

Rose, H. E. L., Thorpe, J. C., Panek, R., Goncalves, E., Morgan, P. S.

medRxiv preprint · Oct 17, 2025
This study investigated whether readily available generative AI models could be used to answer MR safety queries in the role of an MR Safety Expert (MRSE), with "clinical usability" assessed by an expert review panel. This is a mixed retrospective-prospective, proof-of-concept study. A clinical MR safety advice archive (January 2024 to April 2025) was used to curate 30 generic MR safety support requests with associated MRSE responses. ChatGPT-4o (ChatGPT) and Google AI Overview (GAIO) were prompted with these requests to generate AI safety advice. An expert panel assessed all answers for clinical usability. Unusable responses were assigned one of the following reasons: "Unsafe Advice", "Safe but Incorrect", "Incomplete Advice/Key Details Missing", "Contradictory Statements", or "Out of Date". Requests were subcategorised into "specific" and "generic" requests, as well as "passive" and "active" implants, and "other" requests, for post-review analysis. Percentages of usable answers and reasons for unusable responses were compared. Overall, 93% (28/30) of the human responses, 50% (15/30) of the GAIO responses, and 43% (13/30) of the ChatGPT responses were deemed acceptable for clinical use. Usability by subcategory was: "generic": Human 94% (16/17), GAIO and ChatGPT 59% (10/17); "specific": Human 92% (12/13), GAIO 38% (5/13), ChatGPT 23% (3/13); "active": Human 100% (9/9), GAIO 33% (3/9), ChatGPT 22% (2/9); "passive": Human 88% (14/16), GAIO 56% (9/16), ChatGPT 50% (8/16); and "other": Human 100% (5/5), GAIO and ChatGPT 60% (3/5). While both AIs were able to provide clinically acceptable answers for some requests, they did so at a significantly lower success rate than a human MRSE.
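A small, hypothetical tally of the kind of subcategory analysis reported above, with invented records standing in for the panel's usability judgments:

```python
# Usability tally per responder and request subcategory; these few records are
# invented stand-ins for the panel's clinical-usability judgments.
import pandas as pd

panel = pd.DataFrame({
    "responder":   ["Human", "GAIO", "ChatGPT", "Human", "GAIO", "ChatGPT"],
    "subcategory": ["generic", "generic", "generic",
                    "specific", "specific", "specific"],
    "usable":      [True, True, False, True, False, False],
})

# Percentage of clinically usable answers per responder and subcategory.
summary = (panel.groupby(["responder", "subcategory"])["usable"]
                .mean().mul(100).round(0))
print(summary)
```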

Novak, A., Shah, R., Espinosa Morgado, A. T., Robert, D., Kumar, S., Oke, J., Bhatia, K., Romsauerova, A., Das, T., Narbone, M., Dharmadhikari, R., Harrison, M., Vimalesvaran, K., Gooch, J., Woznitza, N., Lowe, D. J., Shuaib, H., Ather, S., AI-REACT Reader Study Group

medRxiv preprint · Oct 17, 2025
Background: Non-contrast CT head scans (NCCTH) are the most frequently requested cross-sectional imaging in the Emergency Department. While AI tools have been developed to detect NCCTH abnormalities, most validation studies compare AI to radiologists, with limited evidence on the impact of AI assistance for other healthcare professionals.
Objective: To evaluate whether an AI-powered tool improves the accuracy, speed, and confidence of general radiologists, emergency clinicians, and radiographers in detecting critical abnormalities on NCCTH, and to assess the tool's stand-alone performance and factors influencing diagnostic accuracy and efficiency.
Methods: A retrospective dataset of 150 NCCTH (52 normal, 98 with critical abnormalities: intracranial haemorrhage, hypodensity, midline shift, mass effect, or skull fracture) was reviewed by 30 readers (10 radiologists, 15 emergency clinicians, 5 radiographers) from four NHS trusts. Each reader interpreted scans first unaided, then with the qER EU 2.0 AI tool, separated by a 2-week washout. Ground truth was established by consensus of two neuroradiologists. We assessed the stand-alone performance of qER and its effect on reader diagnostic accuracy, confidence, and interpretation speed.
Results: The qER algorithm demonstrated strong diagnostic performance across most pathology subgroups (AUC 0.821-0.976). With AI assistance, pooled reader sensitivity for critically abnormal scans increased from 82.8% to 89.7% (+6.9%, 95% CI +1.4% to +10.6%, p<0.001), and for intracranial haemorrhage from 84.6% to 91.6% (+7.0%, 95% CI +3.2% to +10.8%, p<0.001), but specificity decreased from 84.5% to 78.9% (-5.5%, 95% CI -11.0% to -0.09%, p=0.046). Reader confidence AUC did not change significantly. ED clinicians with AI achieved sensitivity comparable to unaided radiologists, with no significant change in specificity.
Conclusion: AI-assisted interpretation increased reader sensitivity for critical abnormalities but reduced specificity. Notably, AI assistance enabled ED clinicians to reach diagnostic sensitivity similar to unaided radiologists, supporting the potential for AI to extend the diagnostic capabilities of non-radiologists. Further prospective studies are warranted to confirm these findings in real-world settings.
Funding: This study was funded by Qure.ai via an NHSX Award.
Ethics: The study has been approved by the UK Healthcare Research Authority (IRAS 310995, approved 13/12/2022). The use of anonymised retrospective NCCTH has been authorised by Oxford University Hospitals.
Trial registration number: NCT06018545.
Research in context
What is already known on this topic: AI-derived algorithms for the detection of pathological findings on non-contrast CT head (NCCTH) images have previously demonstrated strong diagnostic performance when used on retrospective datasets. AI-assisted image interpretation using these algorithms has been shown to enhance the diagnostic performance of general and neuro-radiologists in silico. The potential for AI to enhance the performance of less skilled readers who may encounter and be required to act on these images in clinical practice (e.g. non-specialist radiologists, emergency medicine clinicians, and radiographers) is as yet untested, however.
What this study adds: This large multicase, multireader study demonstrates that AI-assisted image interpretation may be used to enhance the in silico diagnostic performance of Emergency Department physicians to a level comparable to that of general radiologists.
How this study might affect research, practice or policy: This study raises the possibility that AI-assisted image interpretation could be used to assist non-radiologist clinicians in the safe interpretation of NCCTH scans. Further prospective research is required to test this hypothesis in clinical practice and explore the potential for AI-assisted interpretation to support safe discharge of patients with normal or low-risk scans.
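As a worked example of the pooled reader metrics above, this sketch computes sensitivity with a bootstrap 95% confidence interval on synthetic reads (not the study data):

```python
# Pooled sensitivity with a bootstrap 95% CI, mirroring the reader-study
# metrics; the reads below are synthetic, not study data.
import numpy as np

rng = np.random.default_rng(2)
truth = rng.random(1500) < 0.65                    # scan truly critical? (synthetic)
pred = np.where(truth, rng.random(1500) < 0.90,    # readers catch ~90% of positives
                rng.random(1500) < 0.21)           # ~21% false-positive rate

def sensitivity(t: np.ndarray, p: np.ndarray) -> float:
    return (t & p).sum() / t.sum()

boot = []
for _ in range(2000):                              # resample reads with replacement
    idx = rng.integers(0, truth.size, truth.size)
    boot.append(sensitivity(truth[idx], pred[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"sensitivity = {sensitivity(truth, pred):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```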

Nuwagira, B., Rodriguez, A., Li, Q., Coskunuzer, B.

medRxiv preprint · Oct 17, 2025
In this paper, we investigate the integration of topological data analysis (TDA) techniques with deep learning (DL) models to improve breast cancer diagnosis from ultrasound images. By leveraging persistent homology, a TDA method that captures global structural patterns, we enrich the local spatial features typically learned by DL models. We incorporate topological features into various pre-trained architectures, including CNNs and vision transformers (VTs), aiming to enhance screening accuracy for breast cancer, which remains the most common cancer among women. Experiments on publicly available ultrasound datasets demonstrate that combining CNNs and VTs with topological features consistently yields statistically significant performance improvements. Notably, this approach also helps to address challenges faced by DL models, such as interpretability and reliance on large labeled datasets. Furthermore, we generalize the Alexander duality theorem to cubical persistence, showing that persistent homology remains invariant under sublevel and superlevel filtrations for image data. This advancement reduces computational costs, making TDA methods more practical for image analysis.
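An illustrative extraction of cubical persistent-homology features from an image using the gudhi library, showing the kind of TDA features the paper combines with CNN/ViT embeddings; the random image and the lifetime-sum vectorization are placeholders:

```python
# Cubical persistent homology on an image via gudhi (requires the gudhi
# package); the input image and the simple vectorization are illustrative.
import numpy as np
import gudhi

img = np.random.rand(64, 64)                       # stand-in for an ultrasound image
cc = gudhi.CubicalComplex(top_dimensional_cells=img)
diagram = cc.persistence()                         # [(dim, (birth, death)), ...]

# Simple vectorization: total persistence (sum of lifetimes) per homology dimension.
features = [sum(d - b for dim, (b, d) in diagram if dim == k and d != float("inf"))
            for k in (0, 1)]
print(features)                                    # [H0 total, H1 total persistence]
```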
