
Hasanabadi S, Aghamiri SMR, Abin AA, Cheraghi M, Bakhshayesh Karam M, Vosoughi H, Emami F, Arabi H

PubMed · Sep 3, 2025
Lymphoma staging plays a pivotal role in treatment planning and prognosis, yet it still relies on manual interpretation of PET/computed tomography (CT) images, which is time-consuming, subjective, and prone to variability. This study introduces a novel radiomics-based machine learning model for automated lymphoma staging to improve diagnostic accuracy and streamline clinical workflow. Imaging data from 241 patients with histologically confirmed lymphoma were retrospectively analyzed. Radiomics features were extracted from segmented lymph nodes and extranodal lesions on PET/CT. Three machine learning classifiers (Logistic Regression, Random Forest, and XGBoost) were trained to distinguish early-stage (I-II) from advanced-stage (III-IV) lymphoma. Model performance was evaluated using area under the curve (AUC), sensitivity, specificity, and accuracy, together with survival analysis. Among the three models evaluated, the logistic regression model incorporating both nodal and extranodal radiomic features performed best, achieving an AUC of 0.87 and a sensitivity of 0.88 in the external validation cohort. Including extranodal features significantly improved classification accuracy compared with nodal-only models (AUC: 0.87 vs. 0.75). Survival analysis revealed that advanced-stage patients had a roughly fourfold higher mortality risk (hazard ratio for early- vs. advanced-stage disease: 0.22-0.26, P = 0.0036) and a median survival of 84 months. Key radiomic features, such as tumor shape irregularity and heterogeneity, were strongly associated with staging, aligning with the Lugano criteria for extranodal spread. This study demonstrates the potential of PET radiomics features for automated Lugano staging; adding extranodal features significantly improved staging accuracy and informed treatment.
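A minimal sketch of the staging pipeline this abstract describes, assuming a precomputed radiomics feature table (e.g., from PyRadiomics); the synthetic data, feature count, and split below are illustrative assumptions, not the authors' code:

```python
# Radiomics-style staging classifier: logistic regression on nodal +
# extranodal features, evaluated with AUC and sensitivity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score, recall_score

rng = np.random.default_rng(0)
n_patients, n_features = 241, 50          # stand-in for nodal + extranodal features
X = rng.normal(size=(n_patients, n_features))
y = rng.integers(0, 2, size=n_patients)   # 0 = early stage (I-II), 1 = advanced (III-IV)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_train)    # fit scaling on training data only
clf = LogisticRegression(max_iter=1000).fit(scaler.transform(X_train), y_train)

probs = clf.predict_proba(scaler.transform(X_test))[:, 1]
print("AUC:", roc_auc_score(y_test, probs))
print("Sensitivity:", recall_score(y_test, probs > 0.5))
```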

Wang YK, Klanecek Z, Wagner T, Cockmartin L, Marshall N, Studen A, Jeraj R, Bosmans H

PubMed · Sep 3, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To evaluate whether features extracted by Mirai can be aligned with mammographic observations, and contribute meaningfully to the prediction. Materials and Methods This retrospective study examined the correlation of 512 Mirai features with mammographic observations in terms of receptive field and anatomic location. A total of 29,374 screening examinations with mammograms (10,415 women, mean age at examination 60 [SD: 11] years) from the EMBED Dataset (2013-2020) were used to evaluate feature importance using a feature-centric explainable AI pipeline. Risk prediction was evaluated using only calcification features (CalcMirai) or mass features (MassMirai) against Mirai. Performance was assessed in screening and screen-negative (time-to-cancer > 6 months) populations using the area under the receiver operating characteristic curve (AUC). Results Eighteen calcification features and 18 mass features were selected for CalcMirai and MassMirai, respectively. Both CalcMirai and MassMirai had lower performance than Mirai in lesion detection (screening population, 1-year AUC: Mirai, 0.81 [95% CI: 0.78, 0.84], CalcMirai, 0.76 [95% CI: 0.73, 0.80]; MassMirai, 0.74 [95% CI: 0.71, 0.78]; <i>P</i> values < 0.001). In risk prediction, there was no evidence of a difference in performance between CalcMirai and Mirai (screen-negative population, 5-year AUC: Mirai, 0.66 [95% CI: 0.63, 0.69], CalcMirai, 0.66 [95% CI: 0.64, 0.69]; <i>P</i> value: 0.71); however, MassMirai achieved lower performance than Mirai (AUC, 0.57 [95% CI: 0.54, 0.60]; <i>P</i> value < .001). Radiologist review of calcification features confirmed Mirai's use of benign calcification in risk prediction. Conclusion The explainable AI pipeline demonstrated that Mirai implicitly learned to identify mammographic lesion features, particularly calcifications, for lesion detection and risk prediction. ©RSNA, 2025.

Chen IE, Joines M, Capiro N, Dawar R, Sears C, Sayre J, Chalfant J, Fischer C, Hoyt AC, Hsu W, Milch HS

PubMed · Sep 3, 2025
Background: By reliably classifying screening mammograms as negative, artificial intelligence (AI) could minimize radiologists' time spent reviewing high volumes of normal examinations and help prioritize examinations with a high likelihood of malignancy. Objective: To compare the performance of AI, classified as positive at different thresholds, with that of radiologists, focusing on NPV and recall rates, in large population-based digital mammography (DM) and digital breast tomosynthesis (DBT) screening cohorts. Methods: This retrospective single-institution study included women enrolled in the observational population-based Athena Breast Health Network. Stratified random sampling was used to identify cohorts of DM and DBT screening examinations performed from January 2010 through December 2019. Radiologists' interpretations were extracted from clinical reports. A commercial AI system classified examinations as low, intermediate, or elevated risk. Breast cancer diagnoses within 1 year after screening examinations were identified from a state cancer registry. AI and radiologist performance were compared. Results: The DM cohort included 26,693 examinations in 20,409 women (mean age, 58.1 years). AI classified 58.2%, 27.7%, and 14.0% of examinations as low, intermediate, and elevated risk, respectively. Sensitivity, specificity, recall rate, and NPV were 88.6%, 93.3%, 7.2%, and 99.9% for radiologists; 74.4%, 86.3%, 14.0%, and 99.8% for AI defining positive as elevated risk; and 94.0%, 58.6%, 41.8%, and 99.9% for AI defining positive as intermediate or elevated risk. The DBT cohort included 4824 examinations in 4379 women (mean age, 61.3 years). AI classified 68.1%, 19.8%, and 12.1% of examinations as low, intermediate, and elevated risk, respectively. Sensitivity, specificity, recall rate, and NPV were 83.8%, 93.7%, 6.9%, and 99.9% for radiologists; 78.4%, 88.4%, 12.1%, and 99.8% for AI defining positive as elevated risk; and 89.2%, 68.5%, 31.9%, and 99.8% for AI defining positive as intermediate or elevated risk. Conclusion: In large DM and DBT cohorts, AI at either threshold achieved high NPV but higher recall rates than radiologists. Defining positive AI results to include intermediate-risk examinations, rather than only elevated-risk examinations, detected additional cancers but markedly increased recall rates. Clinical Impact: The findings support AI's potential to improve radiologists' workflow efficiency, but strategies are needed to address frequent false-positive results, particularly in the intermediate-risk category.
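The operating-point arithmetic behind these comparisons is simple confusion-matrix algebra. A short sketch follows; the counts are hypothetical, chosen only to roughly reproduce the DM-cohort percentages quoted above:

```python
# Derive sensitivity, specificity, recall rate, and NPV from screening counts.
def screening_metrics(tp, fp, tn, fn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    recall_rate = (tp + fp) / (tp + fp + tn + fn)  # fraction of exams recalled
    npv = tn / (tn + fn)                           # negatives that are truly negative
    return sens, spec, recall_rate, npv

# hypothetical counts at the two AI thresholds on the same cohort
for name, counts in {
    "elevated only": (93, 3600, 22870, 32),
    "intermediate + elevated": (118, 11040, 15430, 7),
}.items():
    s, sp, rr, npv = screening_metrics(*counts)
    print(f"{name}: sens={s:.3f} spec={sp:.3f} recall={rr:.3f} NPV={npv:.4f}")
```

Note how lowering the positive threshold raises sensitivity (more cancers caught) while the recall rate roughly triples, which is the trade-off the conclusion highlights.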

Mingfeng Lin

arXiv preprint · Sep 3, 2025
Coronary artery disease is a leading cause of mortality, underscoring the importance of precise diagnosis through X-ray angiography. Manual coronary artery segmentation from these images is time-consuming and inefficient, prompting the development of automated models. However, existing methods, whether rule-based or deep learning models, struggle with poor performance and limited generalizability. Moreover, the knowledge distillation methods applied in this field so far have not fully exploited the hierarchical knowledge of the model, wasting information and limiting the gains in segmentation performance. To address these issues, this paper introduces Deep Self-knowledge Distillation, a novel approach for coronary artery segmentation that leverages hierarchical outputs for supervision. By combining a Deep Distribution Loss and a Pixel-wise Self-knowledge Distillation Loss, our method enhances the student model's segmentation performance through a hierarchical learning strategy, effectively transferring knowledge from the teacher model. The method pairs a loosely constrained probabilistic distribution vector with tightly constrained pixel-wise supervision, providing dual regularization for the segmentation model while also enhancing its generalization and robustness. Extensive experiments on the XCAD and DCA1 datasets demonstrate that our approach outperforms comparable models in Dice coefficient, accuracy, sensitivity, and IoU.
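A minimal sketch, assuming PyTorch, of a two-part distillation objective of the kind the abstract describes: a softened KL term over probability distributions plus a tightly constrained pixel-wise term between teacher and student segmentation maps. The function names, temperature, and weighting are assumptions, not the paper's implementation:

```python
# Combined distribution-level + pixel-level self-distillation loss.
import torch
import torch.nn.functional as F

def self_distillation_loss(student_logits, teacher_logits, temperature=2.0,
                           alpha=0.5):
    # logits: (N, C, H, W) segmentation maps from two hierarchy levels
    t = temperature
    # loosely constrained term: KL between softened class distributions
    kl = F.kl_div(
        F.log_softmax(student_logits / t, dim=1),
        F.softmax(teacher_logits / t, dim=1),
        reduction="batchmean",
    ) * (t * t)
    # tightly constrained term: match per-pixel class probabilities
    pixel = F.mse_loss(F.softmax(student_logits, dim=1),
                       F.softmax(teacher_logits, dim=1).detach())
    return alpha * kl + (1 - alpha) * pixel

student = torch.randn(2, 2, 64, 64, requires_grad=True)
teacher = torch.randn(2, 2, 64, 64)
print(self_distillation_loss(student, teacher))
```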

Pengyang Yu, Haoquan Wang, Gerard Marks, Tahar Kechadi, Laurence T. Yang, Sahraoui Dhelim, Nyothiri Aung

arXiv preprint · Sep 3, 2025
Accurate skin-lesion segmentation remains a key technical challenge for computer-aided diagnosis of skin cancer. Convolutional neural networks, while effective, are constrained by limited receptive fields and thus struggle to model long-range dependencies. Vision Transformers capture global context, yet their quadratic complexity and large parameter budgets hinder use on the small-sample medical datasets common in dermatology. We introduce MedLiteNet, a lightweight CNN-Transformer hybrid tailored for dermoscopic segmentation that achieves high precision through hierarchical feature extraction and multi-scale context aggregation. The encoder stacks depth-wise Mobile Inverted Bottleneck blocks to curb computation, inserts a bottleneck-level cross-scale token-mixing unit to exchange information between resolutions, and embeds a boundary-aware self-attention module to sharpen lesion contours.
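For readers unfamiliar with the encoder's building block, here is a hedged sketch of a depth-wise Mobile Inverted Bottleneck (MBConv-style) layer as commonly defined; MedLiteNet's exact configuration is not given in the abstract, so channel counts and expansion factor are assumptions:

```python
# MBConv-style block: 1x1 expand -> depth-wise 3x3 -> 1x1 project, residual.
import torch
import torch.nn as nn

class MBConv(nn.Module):
    def __init__(self, channels, expansion=4):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),    # point-wise expand
            nn.BatchNorm2d(hidden), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1,
                      groups=hidden, bias=False),           # depth-wise 3x3
            nn.BatchNorm2d(hidden), nn.SiLU(),
            nn.Conv2d(hidden, channels, 1, bias=False),     # project back
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)   # residual connection keeps the block cheap

x = torch.randn(1, 32, 128, 128)   # a dermoscopic feature map
print(MBConv(32)(x).shape)
```

The depth-wise convolution is what curbs computation: it applies one 3x3 filter per channel instead of a dense channel-mixing kernel.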

Midhat Urooj, Ayan Banerjee, Farhat Shaikh, Kuntal Thakur, Sandeep Gupta

arXiv preprint · Sep 3, 2025
Domain generalization remains a critical challenge in medical imaging, where models trained on single sources often fail under real-world distribution shifts. We propose KG-DG, a neuro-symbolic framework for diabetic retinopathy (DR) classification that integrates vision transformers with expert-guided symbolic reasoning to enable robust generalization across unseen domains. Our approach leverages clinical lesion ontologies through structured, rule-based features and retinal vessel segmentation, fusing them with deep visual representations via a confidence-weighted integration strategy. The framework addresses both single-domain generalization (SDG) and multi-domain generalization (MDG) by minimizing the KL divergence between domain embeddings, thereby enforcing alignment of high-level clinical semantics. Extensive experiments across four public datasets (APTOS, EyePACS, Messidor-1, Messidor-2) demonstrate significant improvements: up to a 5.2% accuracy gain in cross-domain settings and a 6% improvement over baseline ViT models. Notably, our symbolic-only model achieves a 63.67% average accuracy in MDG, while the complete neuro-symbolic integration achieves the highest accuracy among existing published baselines and benchmarks in challenging SDG scenarios. Ablation studies reveal that lesion-based features (84.65% accuracy) substantially outperform purely neural approaches, confirming that symbolic components act as effective regularizers beyond merely enhancing interpretability. Our findings establish neuro-symbolic integration as a promising paradigm for building clinically robust, domain-invariant medical AI systems.
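A minimal sketch, assuming PyTorch, of two mechanisms named in the abstract: a KL-divergence alignment term between domain embeddings and a confidence-weighted fusion of neural and symbolic scores. All shapes, the uniform symbolic prior, and the use of mean embeddings are illustrative assumptions:

```python
# KL alignment across domains + confidence-weighted neuro-symbolic fusion.
import torch
import torch.nn.functional as F

def domain_alignment_loss(emb_a, emb_b):
    # treat each domain's mean embedding as a distribution over dimensions
    p = F.log_softmax(emb_a.mean(dim=0), dim=0)
    q = F.softmax(emb_b.mean(dim=0), dim=0)
    return F.kl_div(p, q, reduction="sum")

def fuse(neural_logits, symbolic_scores, neural_conf):
    # confidence-weighted integration: lean on symbolic rules when the
    # network is unsure (neural_conf in [0, 1], e.g., max softmax probability)
    w = neural_conf.unsqueeze(-1)
    return w * neural_logits.softmax(-1) + (1 - w) * symbolic_scores

emb_src, emb_tgt = torch.randn(16, 128), torch.randn(16, 128)
print(domain_alignment_loss(emb_src, emb_tgt))

logits = torch.randn(4, 5)           # 5 DR severity grades
rules = torch.full((4, 5), 0.2)      # hypothetical uniform symbolic prior
print(fuse(logits, rules, torch.rand(4)))
```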

Hongxu Yang, Edina Timko, Levente Lippenszky, Vanda Czipczer, Lehel Ferenczi

arXiv preprint · Sep 3, 2025
Synthetic tumors in medical images offer controllable characteristics that facilitate the training of machine learning models, leading to improved segmentation performance. However, existing tumor-synthesis methods perform suboptimally when a tumor occupies a large spatial volume, as in breast tumor segmentation in MRI with a large field of view (FOV), because commonly used tumor generation methods operate on small patches. In this paper, we propose a 3D medical diffusion model, called SynBT, to generate high-quality breast tumors (BT) in contrast-enhanced MRI. The proposed model consists of a patch-to-volume autoencoder, which compresses high-resolution MRIs into a compact latent space while preserving the resolution of volumes with a large FOV. Using the resulting latent feature vector, a mask-conditioned diffusion model synthesizes breast tumors within selected regions of breast tissue, producing realistic tumor appearances. We evaluated the proposed method on a tumor segmentation task, demonstrating that this high-quality tumor synthesis improves common segmentation models by 2-3% Dice score on a large public dataset and therefore benefits tumor segmentation in MRI.
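A hedged sketch of the mask-conditioning idea as the abstract describes it: a binary tumor mask is concatenated with the noisy latent so the denoiser synthesizes only within the selected breast-tissue region. The tiny denoiser below is a toy stand-in, not SynBT's architecture, and omits timestep embedding:

```python
# Mask-conditioned denoiser over a compact 3D latent volume.
import torch
import torch.nn as nn

class MaskConditionedDenoiser(nn.Module):
    def __init__(self, latent_ch=4):
        super().__init__()
        # input: noisy latent + binary tumor mask, concatenated on channels
        self.net = nn.Sequential(
            nn.Conv3d(latent_ch + 1, 32, 3, padding=1), nn.SiLU(),
            nn.Conv3d(32, latent_ch, 3, padding=1),
        )

    def forward(self, z_noisy, mask, t):
        # a real diffusion model would also embed timestep t; omitted here
        return self.net(torch.cat([z_noisy, mask], dim=1))  # predicts noise

z = torch.randn(1, 4, 16, 32, 32)             # compact latent volume
mask = torch.zeros(1, 1, 16, 32, 32)
mask[..., 8:12, 10:20, 10:20] = 1.0           # region selected for the tumor
eps_hat = MaskConditionedDenoiser()(z, mask, t=torch.tensor([500]))
print(eps_hat.shape)
```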

Junhao Jia, Yifei Sun, Yunyou Liu, Cheng Yang, Changmiao Wang, Feiwei Qin, Yong Peng, Wenwen Min

arXiv preprint · Sep 3, 2025
Functional magnetic resonance imaging (fMRI) is a powerful tool for probing brain function, yet reliable clinical diagnosis is hampered by low signal-to-noise ratios, inter-subject variability, and the limited frequency awareness of prevailing CNN- and Transformer-based models. Moreover, most fMRI datasets lack textual annotations that could contextualize regional activation and connectivity patterns. We introduce RTGMFF, a framework that unifies automatic ROI-level text generation with multimodal feature fusion for brain-disorder diagnosis. RTGMFF consists of three components: (i) an ROI-driven fMRI text generator that deterministically condenses each subject's activation, connectivity, age, and sex into reproducible text tokens; (ii) a hybrid frequency-spatial encoder that fuses a hierarchical wavelet-Mamba branch with a cross-scale Transformer encoder to capture frequency-domain structure alongside long-range spatial dependencies; and (iii) an adaptive semantic alignment module that embeds the ROI token sequence and visual features in a shared space, using a regularized cosine-similarity loss to narrow the modality gap. Extensive experiments on the ADHD-200 and ABIDE benchmarks show that RTGMFF surpasses current methods in diagnostic accuracy, achieving notable gains in sensitivity, specificity, and area under the ROC curve. Code is available at https://github.com/BeistMedAI/RTGMFF.
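A minimal sketch, assuming PyTorch, of the semantic-alignment component (iii): project text and visual embeddings into a shared space and penalize low cosine similarity between paired embeddings, with a simple norm regularizer. Projection sizes and the regularization weight are assumptions, not values from the paper:

```python
# Regularized cosine-similarity alignment between text and visual embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignmentHead(nn.Module):
    def __init__(self, text_dim=256, vis_dim=512, shared_dim=128):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, shared_dim)
        self.vis_proj = nn.Linear(vis_dim, shared_dim)

    def forward(self, text_emb, vis_emb):
        tp, vp = self.text_proj(text_emb), self.vis_proj(vis_emb)
        t, v = F.normalize(tp, dim=-1), F.normalize(vp, dim=-1)
        align = 1.0 - (t * v).sum(-1).mean()        # 1 - mean cosine similarity
        reg = tp.pow(2).mean() + vp.pow(2).mean()   # keep projections bounded
        return align + 0.01 * reg

head = AlignmentHead()
loss = head(torch.randn(8, 256), torch.randn(8, 512))  # (ROI tokens, visual)
print(loss)
```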

Mattia Litrico, Francesco Guarnera, Mario Valerio Giuffrida, Daniele Ravì, Sebastiano Battiato

arXiv preprint · Sep 3, 2025
Generating realistic MRIs that accurately predict future changes in brain structure is an invaluable tool for clinicians in assessing clinical outcomes and analysing disease progression at the patient level. However, existing methods present some limitations: (i) some approaches fail to explicitly capture the relationship between structural changes and time intervals, especially when trained on age-imbalanced datasets; (ii) others rely only on scan interpolation, which lacks clinical utility, as they generate intermediate images between timepoints rather than future pathological progression; and (iii) most approaches rely on 2D slice-based architectures, thereby disregarding the full 3D anatomical context that is essential for accurate longitudinal prediction. We propose a 3D Temporally-Aware Diffusion Model (TADM-3D), which accurately predicts brain progression on MRI volumes. To better model the relationship between time interval and brain changes, TADM-3D uses a pre-trained Brain-Age Estimator (BAE) that guides the diffusion model to generate MRIs that accurately reflect the expected age difference between the baseline and generated follow-up scans. To further improve the temporal awareness of TADM-3D, we propose Back-In-Time Regularisation (BITR), training TADM-3D to predict bidirectionally: from baseline to follow-up (forward) as well as from follow-up to baseline (backward). Although predicting past scans has limited clinical application, this regularisation helps the model generate temporally more accurate scans. We train and evaluate TADM-3D on the OASIS-3 dataset and validate generalisation performance on an external test set from the NACC dataset. The code will be available upon acceptance.
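A hedged sketch of the Back-In-Time Regularisation idea: one conditional generator is trained to map baseline to follow-up and, with a negated time interval, follow-up back to baseline, summing both losses. The toy convolutional generator and MSE objective are stand-ins, not TADM-3D's diffusion model:

```python
# Bidirectional (forward + back-in-time) training of a progression model.
import torch
import torch.nn as nn

class ToyProgressionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv3d(2, 1, 3, padding=1)  # image + broadcast time gap

    def forward(self, vol, dt):
        t_map = torch.full_like(vol, float(dt))   # interval as an extra channel
        return self.net(torch.cat([vol, t_map], dim=1))

model = ToyProgressionModel()
mse = nn.MSELoss()
baseline = torch.randn(1, 1, 32, 32, 32)
followup = torch.randn(1, 1, 32, 32, 32)
dt_years = 2.0

forward_loss = mse(model(baseline, dt_years), followup)    # predict the future
backward_loss = mse(model(followup, -dt_years), baseline)  # BITR: predict the past
loss = forward_loss + backward_loss
loss.backward()
print(float(loss))
```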

Vachha BA, Kumar VA, Pillai JJ, Shimony JS, Tanabe J, Sair HI

PubMed · Sep 3, 2025
Resting-state functional MRI (rs-fMRI), a promising method for interrogating different brain functional networks from a single MRI acquisition, is increasingly used in clinical presurgical and other pretherapeutic brain mapping. However, challenges in the standardization of acquisition, preprocessing, and analysis methods across centers, along with variability in the interpretation of results, complicate its clinical use. Additionally, inherent problems regarding the reliability of language lateralization, interpatient variability of cognitive network representation, dynamic aspects of intranetwork and internetwork connectivity, and the effects of neurovascular uncoupling on network detection must still be overcome. Although deep learning solutions and further methodologic standardization will help address these issues, rs-fMRI remains generally considered an adjunct to task-based fMRI (tb-fMRI) for clinical presurgical mapping. Nonetheless, in many clinical instances, rs-fMRI may offer valuable additional information that supplements tb-fMRI, especially when tb-fMRI is inadequate due to patient performance or other limitations. Future growth in clinical applications of rs-fMRI is anticipated as these challenges are increasingly addressed. This AJR Expert Panel Narrative Review summarizes the current state and emerging clinical utility of rs-fMRI, focusing on its role in presurgical mapping. Ongoing controversies and limitations in clinical applicability are presented, and future directions are discussed, including the developing role of rs-fMRI in neuromodulation treatment of various neurologic disorders.