
Andrew Bell, Yan Kit Choi, Steffen E Petersen, Andrew King, Muhummad Sohaib Nazir, Alistair A Young

arXiv preprint · Sep 10, 2025
Automatic quantification of intramyocardial motion and strain from tagging MRI remains an important but challenging task. We propose a method using implicit neural representations (INRs), conditioned on learned latent codes, to predict continuous left ventricular (LV) displacement -- without requiring inference-time optimisation. Evaluated on 452 UK Biobank test cases, our method achieved the best tracking accuracy (2.14 mm RMSE) and the lowest combined error in global circumferential (2.86%) and radial (6.42%) strain compared to three deep learning baselines. In addition, our method is $\sim$380$\times$ faster than the most accurate baseline. These results highlight the suitability of INR-based models for accurate and scalable analysis of myocardial strain in large CMR datasets.
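
The core idea, a coordinate network conditioned on a per-case latent code that returns displacement at any continuous location, can be sketched in a few lines of PyTorch. The layer sizes, 2D in-plane output, and concatenation-based conditioning below are illustrative assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class DisplacementINR(nn.Module):
    """Minimal sketch of a latent-conditioned implicit neural representation.

    Maps a continuous spatio-temporal coordinate (x, y, t) plus a learned
    per-case latent code to an in-plane displacement vector. All sizes and
    the conditioning scheme are assumptions for illustration only.
    """

    def __init__(self, latent_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),          # (dx, dy) displacement
        )

    def forward(self, coords: torch.Tensor, latent: torch.Tensor) -> torch.Tensor:
        # coords: (N, 3) normalised (x, y, t); latent: (latent_dim,) per case
        z = latent.expand(coords.shape[0], -1)
        return self.net(torch.cat([coords, z], dim=-1))

# Query displacement at arbitrary myocardial points without re-optimisation:
model = DisplacementINR()
pts = torch.rand(1000, 3)            # continuous sample locations
code = torch.zeros(64)               # latent code produced by an encoder (assumed)
disp = model(pts, code)              # (1000, 2) predicted displacements
```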

Mustafa Khanbhai, Giulia Di Nardo, Jun Ma, Vivienne Freitas, Caterina Masino, Ali Dolatabadi, Zhaoxun "Lorenz" Liu, Wey Leong, Wagner H. Souza, Amin Madani

arXiv preprint · Sep 10, 2025
Effective preoperative planning requires accurate algorithms for segmenting anatomical structures across diverse datasets, but traditional models struggle with generalization. This study presents a novel machine learning methodology to improve algorithm generalization for 3D anatomical reconstruction beyond breast cancer applications. We processed 120 retrospective breast MRIs (January 2018-June 2023) through three phases: anonymization and manual segmentation of T1-weighted and dynamic contrast-enhanced sequences; co-registration and segmentation of whole breast, fibroglandular tissue, and tumors; and 3D visualization using ITK-SNAP. A human-in-the-loop approach refined segmentations using U-Mamba, designed to generalize across imaging scenarios. Dice similarity coefficient assessed overlap between automated segmentation and ground truth. Clinical relevance was evaluated through clinician and patient interviews. U-Mamba showed strong performance with DSC values of 0.97 ($\pm$0.013) for whole organs, 0.96 ($\pm$0.024) for fibroglandular tissue, and 0.82 ($\pm$0.12) for tumors on T1-weighted images. The model generated accurate 3D reconstructions enabling visualization of complex anatomical features. Clinician interviews indicated improved planning, intraoperative navigation, and decision support. Integration of 3D visualization enhanced patient education, communication, and understanding. This human-in-the-loop machine learning approach successfully generalizes algorithms for 3D reconstruction and anatomical segmentation across patient datasets, offering enhanced visualization for clinicians, improved preoperative planning, and more effective patient education, facilitating shared decision-making and empowering informed patient choices across medical applications.
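
The Dice similarity coefficient used to score these segmentations has a simple closed form, DSC = 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch with toy masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks.

    DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Toy example: an automated 3D prediction vs. a manually segmented ground truth
pred_mask = np.zeros((64, 64, 32), dtype=bool)
gt_mask = np.zeros((64, 64, 32), dtype=bool)
pred_mask[10:40, 10:40, 5:20] = True
gt_mask[12:42, 10:40, 5:20] = True
print(f"DSC = {dice_coefficient(pred_mask, gt_mask):.3f}")
```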

Niyogi SG, Nag DS, Shah MM, Swain A, Naskar C, Srivastava P, Kant R

PubMed paper · Sep 9, 2025
This mini-review explores the transformative potential of artificial intelligence (AI) in improving the diagnosis, management, and long-term care of congenital heart diseases (CHDs). AI offers significant advancements across the spectrum of CHD care, from prenatal screening to postnatal management and long-term monitoring. The use of AI algorithms, enhanced fetal echocardiography, and genetic testing improves prenatal diagnosis and risk stratification. Postnatally, AI revolutionizes diagnostic imaging analysis, providing more accurate and efficient identification of CHD subtypes and severity. Compared with traditional methods, advanced signal processing techniques enable a more precise assessment of hemodynamic parameters. AI-driven decision support systems tailor treatment strategies, thereby optimizing therapeutic interventions and predicting patient outcomes with greater accuracy. This personalized approach leads to better clinical outcomes and reduced morbidity. Furthermore, AI-enabled remote monitoring and wearable devices facilitate ongoing surveillance, enabling early detection of complications and prompt intervention. This continuous monitoring is crucial in the immediate postoperative period and throughout the patient's life. Despite the immense potential of AI, challenges remain. These include the need for standardized datasets, the development of transparent and understandable AI algorithms, ethical considerations, and seamless integration into existing clinical workflows. Overcoming these obstacles through collaborative data sharing and responsible implementation will unlock the full potential of AI to improve the lives of patients with CHD, ultimately leading to better patient outcomes and improved quality of life.

Wu R, Cheng J, Li C, Zou J, Fan W, Ma X, Guo H, Liang Y, Wang S

PubMed paper · Sep 9, 2025
Diffusion magnetic resonance imaging (dMRI) often suffers from low spatial and angular resolution due to inherent limitations in imaging hardware and system noise, adversely affecting the accurate estimation of microstructural parameters with fine anatomical details. Deep learning-based super-resolution techniques have shown promise in enhancing dMRI resolution without increasing acquisition time. However, most existing methods are confined to either spatial or angular super-resolution, disrupting the information exchange between the two domains and limiting their effectiveness in capturing detailed microstructural features. Furthermore, traditional pixel-wise loss functions only consider pixel differences, and struggle to recover intricate image details essential for high-resolution reconstruction. We propose SHRL-dMRI, a novel Spherical Harmonics Representation Learning framework for high-fidelity, generalizable super-resolution in dMRI to address these challenges. SHRL-dMRI explores implicit neural representations and spherical harmonics to model continuous spatial and angular representations, simultaneously enhancing both spatial and angular resolution while improving the accuracy of microstructural parameter estimation. To further preserve image fidelity, a data-fidelity module and wavelet-based frequency loss are introduced, ensuring the super-resolved images preserve image consistency and retain fine details. Extensive experiments demonstrate that, compared to five other state-of-the-art methods, our method significantly enhances dMRI data resolution, improves the accuracy of microstructural parameter estimation, and provides better generalization capabilities. It maintains stable performance even under a 45× downsampling factor. The proposed method can effectively improve the resolution of dMRI data without increasing the acquisition time, providing new possibilities for future clinical applications.
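
The angular side of such a representation rests on fitting spherical harmonic coefficients to the diffusion signal measured along a limited set of gradient directions, after which the signal can be resampled at arbitrary directions. The sketch below shows only that standard least-squares SH fit (using SciPy's `sph_harm`), not the learned SHRL-dMRI framework itself.

```python
import numpy as np
from scipy.special import sph_harm

def real_sh_basis(order: int, theta: np.ndarray, phi: np.ndarray) -> np.ndarray:
    """Real, even-order spherical harmonic basis evaluated at gradient directions.

    theta: azimuth in [0, 2*pi), phi: polar angle in [0, pi] (SciPy convention).
    Returns an (n_dirs, n_coeffs) design matrix for even degrees 0..order.
    """
    cols = []
    for l in range(0, order + 1, 2):          # dMRI signal is antipodally symmetric
        for m in range(-l, l + 1):
            Y = sph_harm(abs(m), l, theta, phi)
            if m < 0:
                cols.append(np.sqrt(2) * Y.imag)
            elif m == 0:
                cols.append(Y.real)
            else:
                cols.append(np.sqrt(2) * Y.real)
    return np.stack(cols, axis=1)

# Fit SH coefficients per voxel by least squares; the continuous angular signal
# can then be resampled on any new direction set (angular super-resolution).
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 30)         # 30 acquired gradient directions
phi = np.arccos(rng.uniform(-1, 1, 30))
B = real_sh_basis(4, theta, phi)              # order-4 basis, 15 coefficients
signal = rng.random(30)                       # one voxel's dMRI measurements (synthetic)
coeffs, *_ = np.linalg.lstsq(B, signal, rcond=None)
```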

Khazanchi R, Chen AR, Desai P, Herrera D, Staub JR, Follett MA, Krushelnytskyy M, Kemeny H, Hsu WK, Patel AA, Divi SN

PubMed paper · Sep 9, 2025
To assess the ability of large language models (LLMs) to accurately simplify lumbar spine magnetic resonance imaging (MRI) reports. Patients who underwent lumbar decompression and/or fusion surgery in 2022 at one tertiary academic medical center were queried using appropriate CPT codes. We then identified all patients with a preoperative ICD diagnosis of lumbar spondylolisthesis and extracted the latest preoperative spine MRI radiology report text. The GPT-4 API was deployed on deidentified reports with a prompt to produce translations, which were evaluated for accuracy and readability. An enhanced GPT prompt was constructed using high-scoring reports and evaluated on low-scoring reports. Of 93 included reports, GPT effectively reduced the average reading level (11.47 versus 8.50, p < 0.001). While most reports had no accuracy issues, 34% of translations omitted at least one clinically relevant piece of information, and 6% produced a clinically significant inaccuracy in the translation. An enhanced prompt model using high-scoring reports maintained the reading level while significantly improving the omission rate (p < 0.0001). However, even in the enhanced prompt model, GPT made several errors regarding the location of stenosis, descriptions of prior spine surgery, and descriptions of other spine pathologies. GPT-4 effectively simplifies the reading level of lumbar spine MRI reports. The model tends to omit key information in its translations, which can be mitigated with enhanced prompting. Further validation in the domain of spine radiology needs to be performed to facilitate clinical integration.
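
The abstract does not name the readability metric, but reading levels around 8-11 suggest a grade-level formula such as Flesch-Kincaid; a hedged sketch of that computation (with a rough vowel-group syllable heuristic and hypothetical report text) is shown below.

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Approximate Flesch-Kincaid grade level of a report or its translation.

    Grade = 0.39 * (words/sentences) + 11.8 * (syllables/word) - 15.59.
    Syllables are estimated by counting vowel groups, a common heuristic.
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(max(1, len(re.findall(r"[aeiouyAEIOUY]+", w))) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

# Hypothetical report sentence and a plain-language rewording (not study data):
original = "There is severe degenerative anterolisthesis of L4 on L5 with high-grade central canal stenosis."
simplified = "One spine bone has slipped forward over another, which is squeezing the nerves in the spinal canal."
print(flesch_kincaid_grade(original), flesch_kincaid_grade(simplified))
```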

Moger TA, Nardin SB, Holen ÅS, Moshina N, Hofvind S

PubMed paper · Sep 9, 2025
Objective: To study the implications of implementing artificial intelligence (AI) as a decision support tool in the Norwegian breast cancer screening program with respect to cost-effectiveness and time savings for radiologists. Methods: In a decision tree model using recent data from AI vendors and the Cancer Registry of Norway, and assuming equal effectiveness of radiologists plus AI compared to standard practice, we simulated costs, effects, and radiologist person-years over the next 20 years under different scenarios: 1) assuming a €1 additional running cost of AI instead of the €3 assumed in the base case, 2) varying the AI-score thresholds for single vs. double readings, 3) varying the consensus and recall rates, and 4) assuming reductions in the interval cancer rate compared to standard practice. Results: AI was unlikely to be cost-effective, even when only one radiologist was used alongside AI for all screening exams. This also applied when assuming a 10% reduction in the consensus and recall rates. However, there was a 30-50% reduction in the radiologists' screen-reading volume. Assuming an additional running cost of €1 for AI, the costs were comparable, with similar probabilities of cost-effectiveness for AI and standard practice. Assuming a 5% reduction in the interval cancer rate, AI proved to be cost-effective across all willingness-to-pay values. Conclusions: AI may be cost-effective if the interval cancer rate is reduced by 5% or more, or if its additional cost is €1 per screening exam. Despite a substantial reduction in screen-reading volume, this remains modest relative to the total radiologist person-years available within breast centers, accounting for only 3-4% of person-years.
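
The decision-analytic comparison ultimately reduces to incremental cost-effectiveness ratios and net monetary benefit at a given willingness-to-pay threshold. A minimal sketch with purely illustrative numbers (not values from the study):

```python
def icer(cost_new: float, cost_std: float, effect_new: float, effect_std: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per unit of extra effect (e.g. per QALY)."""
    return (cost_new - cost_std) / (effect_new - effect_std)

def net_monetary_benefit(cost: float, effect: float, wtp: float) -> float:
    """NMB = willingness-to-pay * effect - cost; the strategy with higher NMB is preferred."""
    return wtp * effect - cost

# Illustrative per-woman discounted values only; these are not figures from the paper.
cost_ai, qaly_ai = 310.0, 15.021        # screening with AI support
cost_std, qaly_std = 295.0, 15.020      # standard double reading
wtp = 27_500.0                          # example willingness-to-pay per QALY (€)

print(f"ICER = €{icer(cost_ai, cost_std, qaly_ai, qaly_std):,.0f} per QALY")
print(net_monetary_benefit(cost_ai, qaly_ai, wtp) > net_monetary_benefit(cost_std, qaly_std, wtp))
```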

Liu J, Sun P, Yuan Y, Chen Z, Tian K, Gao Q, Li X, Xia L, Zhang J, Xu N

PubMed paper · Sep 9, 2025
Lateral malleolar avulsion fracture (LMAF) and subfibular ossicle (SFO) are distinct entities that both present as small bone fragments near the lateral malleolus on imaging, yet require different treatment strategies. Clinical and radiological differentiation is challenging, which can impede timely and precise management. On imaging, magnetic resonance imaging (MRI) is the diagnostic gold standard for differentiating LMAF from SFO, whereas radiological differentiation on computed tomography (CT) alone is challenging in routine practice. Deep convolutional neural networks (DCNNs) have shown promise in musculoskeletal imaging diagnostics, but robust, multicenter evidence in this specific context is lacking. This study evaluated several state-of-the-art DCNNs, including the latest YOLOv12 algorithm, for detecting and classifying LMAF and SFO on CT images, using MRI-based diagnoses as the gold standard, and compared model performance with radiologists reading CT alone. In this retrospective study, 1,918 patients (LMAF: 1,253; SFO: 665) were enrolled from two hospitals in China between 2014 and 2024. MRI served as the gold standard and was independently interpreted by two senior musculoskeletal radiologists. Only CT images were used for model training, validation, and testing. CT images were manually annotated with bounding boxes. The cohort was randomly split into a training set (n=1,092), internal validation set (n=476), and external test set (n=350). Four deep learning models (Faster R-CNN, SSD, RetinaNet, and YOLOv12) were trained and evaluated using identical procedures. Model performance was assessed using mean average precision at IoU=0.5 (mAP50), area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. The external test set was also independently interpreted by two musculoskeletal radiologists with 7 and 15 years of experience, with results compared to the best-performing model. Saliency maps were generated using Shapley values to enhance interpretability. Among the evaluated models, YOLOv12 achieved the highest detection and classification performance, with an mAP50 of 92.1% and an AUC of 0.983 on the external test set, significantly outperforming Faster R-CNN (mAP50: 63.7%, AUC: 0.79), SSD (mAP50: 63.0%, AUC: 0.63), and RetinaNet (mAP50: 67.0%, AUC: 0.73) (all P < .05). When using CT alone, radiologists performed at a moderate level (accuracy: 75.6%/69.1%; sensitivity: 75.0%/65.2%; specificity: 76.0%/71.1%), whereas YOLOv12 approached MRI-based reference performance (accuracy: 92.0%; sensitivity: 86.7%; specificity: 82.2%). Saliency maps corresponded well with expert-identified regions. While MRI (read by senior radiologists) is the gold standard for distinguishing LMAF from SFO, CT-based differentiation is challenging for radiologists. A CT-only DCNN (YOLOv12) achieved substantially higher performance than radiologists reading CT alone and approached the MRI-based reference standard, highlighting its potential to augment CT-based decision-making where MRI is limited or unavailable.
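
The mAP50 figures hinge on intersection-over-union: a predicted box counts as a true positive only if its IoU with a same-class ground-truth box is at least 0.5. A small sketch with hypothetical boxes:

```python
def iou(box_a, box_b) -> float:
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# A predicted fragment box matches the annotation only if IoU >= 0.5,
# which is the threshold behind the reported mAP50 values.
pred = (120, 80, 160, 118)     # hypothetical predicted bounding box
gt = (118, 84, 158, 120)       # hypothetical ground-truth annotation
print(iou(pred, gt) >= 0.5)
```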

Łajczak P, Sahin OK, Matyja J, Puglla Sanchez LR, Sayudo IF, Ayesha A, Lopes V, Majeed MW, Krishna MM, Joseph M, Pereira M, Obi O, Silva R, Lecchi C, Schincariol M

PubMed paper · Sep 9, 2025
Myocarditis is an inflammation of heart tissue. Cardiovascular magnetic resonance imaging (CMR) has emerged as an important non-invasive imaging tool for diagnosing myocarditis; however, interpretation remains a challenge for novice physicians. Advancements in machine learning (ML) models have further improved diagnostic accuracy, demonstrating good performance. Our study aims to assess the diagnostic accuracy of ML in identifying myocarditis using CMR. A systematic search was performed using PubMed, Embase, Web of Science, Cochrane, and Scopus to identify studies reporting the diagnostic accuracy of ML in the detection of myocarditis using CMR. The included studies evaluated both image-based and report-based assessments using various ML models. Diagnostic accuracy was estimated using a random-effects model (R software). We found 141 ML model results across 12 studies, which were included in the systematic review. The best models achieved a sensitivity of 0.93 (95% confidence interval (CI): 0.88-0.96) and a specificity of 0.95 (95% CI: 0.89-0.97). The pooled area under the curve was 0.97 (95% CI: 0.93-0.98). Comparisons with human physicians showed comparable diagnostic accuracy for myocarditis. Quality assessment concerns and heterogeneity were present. CMR augmented with advanced ML models can provide high diagnostic accuracy for myocarditis, even surpassing novice CMR radiologists. However, high heterogeneity, quality assessment concerns, and a lack of information on cost-effectiveness may limit the clinical implementation of ML. Future investigations should explore cost-effectiveness and minimize biases in their methodologies.
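
As an illustration of how per-study sensitivities can be pooled, the sketch below applies a univariate DerSimonian-Laird random-effects estimator on the logit scale to hypothetical counts; the review itself fits its random-effects model in R, likely with a bivariate approach, so this is a simplified stand-in rather than the authors' analysis.

```python
import numpy as np

def pool_logit_sensitivity(tp: np.ndarray, fn: np.ndarray) -> float:
    """Univariate DerSimonian-Laird random-effects pooling of study sensitivities.

    Works on the logit scale; the variance of each logit sensitivity is
    approximated by 1/tp + 1/fn. Illustrates the pooling principle only.
    """
    y = np.log(tp / fn)                       # logit(sensitivity) = log(tp/fn)
    v = 1.0 / tp + 1.0 / fn                   # within-study variances
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)        # Cochran's Q heterogeneity statistic
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                 # random-effects weights
    pooled_logit = np.sum(w_star * y) / np.sum(w_star)
    return 1.0 / (1.0 + np.exp(-pooled_logit))

tp = np.array([45.0, 60.0, 30.0, 80.0])       # hypothetical per-study true positives
fn = np.array([5.0, 4.0, 3.0, 7.0])           # hypothetical per-study false negatives
print(f"Pooled sensitivity ≈ {pool_logit_sensitivity(tp, fn):.3f}")
```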

Zhao J, Liang L, Li J, Li Q, Li F, Niu L, Xue C, Fu W, Liu Y, Song S, Liu X

PubMed paper · Sep 9, 2025
Double expression lymphoma (DEL) is an independent high-risk prognostic factor for primary CNS lymphoma (PCNSL), and its diagnosis currently relies on invasive methods. This study is the first to integrate radiomics and habitat radiomics features to enhance preoperative DEL status prediction models via intratumoral heterogeneity analysis. Clinical, pathological, and MRI data of 139 PCNSL patients from two independent centers were collected. Radiomics, habitat radiomics, and combined models were constructed using machine learning classifiers, including KNN, DT, LR, and SVM. The AUC in the test set was used to identify the optimal predictive model. Decision curve analysis (DCA) and calibration curves were employed to evaluate the predictive performance of the models. SHAP analysis was utilized to visualize the contribution of each feature in the optimal model. Among the radiomics-based models, the Combined radiomics model constructed with LR demonstrated better performance, with an AUC of 0.8779 (95% CI: 0.8171-0.9386) in the training set and 0.7166 (95% CI: 0.497-0.9361) in the test set. The Habitat radiomics model (SVM) based on T1-CE showed an AUC of 0.7446 (95% CI: 0.6503-0.8388) in the training set and 0.7433 (95% CI: 0.5322-0.9545) in the test set. Finally, the Combined all model exhibited the highest predictive performance: LR achieved AUC values of 0.8962 (95% CI: 0.8299-0.9625) and 0.8289 (95% CI: 0.6785-0.9793) in the training and test sets, respectively. The Combined all model developed in this study can provide effective reference value in predicting the DEL status of PCNSL, and habitat radiomics significantly enhances the predictive efficacy.
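
A minimal sketch of the modelling step, a logistic-regression classifier over a combined radiomics feature matrix evaluated by test-set AUC, using scikit-learn with synthetic features standing in for the extracted radiomics/habitat features and the actual two-centre split:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins: 139 patients, 50 selected features (numbers illustrative).
rng = np.random.default_rng(42)
X = rng.normal(size=(139, 50))                # combined radiomics + habitat features
y = rng.integers(0, 2, size=139)              # DEL status label (1 = double expression)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print("Test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```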

Liu X, Sun L, Li C, Han B, Jiang W, Yuan T, Liu W, Liu Z, Yu Z, Liu B

PubMed paper · Sep 9, 2025
Mammography is a primary method for early breast cancer screening, and developing deep learning-based computer-aided diagnosis systems is of great significance. However, current deep learning models typically treat each image as an independent entity for diagnosis rather than integrating images from multiple views to diagnose the patient. These methods do not fully consider and address the complex interactions between different views, resulting in poor diagnostic performance and interpretability. To address this issue, this paper proposes a novel end-to-end framework for breast cancer diagnosis: the lesion asymmetry screening assisted global awareness multi-view network (LAS-GAM). Rather than being another image-level diagnostic model, LAS-GAM operates at the patient level, simulating the workflow of radiologists analyzing mammographic images. The framework processes the four views of a patient and revolves around two key modules: a global module and a lesion screening module. The global module simulates the comprehensive assessment by radiologists, integrating complementary information from the craniocaudal (CC) and mediolateral oblique (MLO) views of both breasts to generate global features that represent the patient's overall condition. The lesion screening module mimics the process of locating lesions by comparing symmetric regions in contralateral views, identifying potential lesion areas and extracting lesion-specific features using a lightweight model. By combining the global features and lesion-specific features, LAS-GAM simulates the diagnostic process and makes patient-level predictions. Moreover, it is trained using only patient-level labels, significantly reducing data annotation costs. Experiments on the Digital Database for Screening Mammography (DDSM) and an in-house dataset validate LAS-GAM, achieving AUCs of 0.817 and 0.894, respectively.
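
The patient-level fusion idea, a shared encoder over the four standard views whose pooled embedding drives a single prediction trained with patient-level labels, can be sketched in PyTorch; this omits the lesion-screening branch and is not the LAS-GAM architecture itself.

```python
import torch
import torch.nn as nn

class MultiViewClassifier(nn.Module):
    """Sketch of patient-level multi-view fusion (not the full LAS-GAM).

    A shared encoder embeds the four standard views (L-CC, R-CC, L-MLO, R-MLO);
    the pooled embedding plays the role of the global feature, and a single head
    produces one patient-level prediction trained with patient-level labels.
    """

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.head = nn.Linear(feat_dim, 1)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, 4, 1, H, W) — the four mammographic views per patient
        b, v, c, h, w = views.shape
        feats = self.encoder(views.view(b * v, c, h, w)).view(b, v, -1)
        global_feat = feats.mean(dim=1)        # simple average fusion across views
        return self.head(global_feat)          # patient-level logit

logits = MultiViewClassifier()(torch.rand(2, 4, 1, 256, 256))   # shape (2, 1)
```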