
Artificial intelligence in fetal brain imaging: Advancements, challenges, and multimodal approaches for biometric and structural analysis.

Wang L, Fatemi M, Alizad A

pubmed logopapers · Jun 1 2025
Artificial intelligence (AI) is transforming fetal brain imaging by addressing key challenges in diagnostic accuracy, efficiency, and data integration in prenatal care. This review explores AI's application in enhancing fetal brain imaging through ultrasound (US) and magnetic resonance imaging (MRI), with a particular focus on multimodal integration to leverage their complementary strengths. By critically analyzing state-of-the-art AI methodologies, including deep learning frameworks and attention-based architectures, this study highlights significant advancements alongside persistent challenges. Notable barriers include the scarcity of diverse and high-quality datasets, computational inefficiencies, and ethical concerns surrounding data privacy and security. Special attention is given to multimodal approaches that integrate US and MRI, combining the accessibility and real-time imaging of US with the superior soft tissue contrast of MRI to improve diagnostic precision. Furthermore, this review emphasizes the transformative potential of AI in fostering clinical adoption through innovations such as real-time diagnostic tools and human-AI collaboration frameworks. By providing a comprehensive roadmap for future research and implementation, this study underscores AI's potential to redefine fetal imaging practices, enhance diagnostic accuracy, and ultimately improve perinatal care outcomes.

Deep learning for liver lesion segmentation and classification on staging CT scans of colorectal cancer patients: a multi-site technical validation study.

Bashir U, Wang C, Smillie R, Rayabat Khan AK, Tamer Ahmed H, Ordidge K, Power N, Gerlinger M, Slabaugh G, Zhang Q

pubmed logopapers · Jun 1 2025
To validate a liver lesion detection and classification model using staging computed tomography (CT) scans of colorectal cancer (CRC) patients. A UNet-based deep learning model was trained on 272 public liver tumour CT scans and tested on 220 CRC staging CTs acquired from a single institution (2014-2019). Performance metrics included lesion detection rates by size (<10 mm, 10-20 mm, >20 mm), segmentation accuracy (Dice similarity coefficient, DSC), volume measurement agreement (Bland-Altman limits of agreement, LOAs; intraclass correlation coefficient, ICC), and classification accuracy (malignant vs benign) at patient and lesion levels (detected lesions only). The model detected 743 of 884 lesions (84%), with detection rates of 75%, 91.3%, and 96% for lesions <10 mm, 10-20 mm, and >20 mm, respectively. The median DSC was 0.76 (95% CI: 0.72-0.80) for lesions <10 mm, 0.83 (95% CI: 0.79-0.86) for 10-20 mm, and 0.85 (95% CI: 0.82-0.88) for >20 mm. Bland-Altman analysis showed a mean volume bias of -0.12 cm³ (LOAs: -1.68 to +1.43 cm³), and the ICC was 0.81. Lesion-level classification showed 99.5% sensitivity, 65.7% specificity, 53.8% positive predictive value (PPV), 99.7% negative predictive value (NPV), and 75.4% accuracy. Patient-level classification had 100% sensitivity, 27.1% specificity, 59.2% PPV, 100% NPV, and 64.5% accuracy. The model demonstrates strong lesion detection and segmentation performance overall, although sub-centimetre lesions remain the most challenging (75% detection rate). Although classification accuracy was moderate, the 100% patient-level NPV suggests strong potential as a CRC staging screening tool. Future studies will assess its impact on radiologist performance and efficiency.
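The segmentation and agreement metrics reported above (DSC, Bland-Altman LOAs) follow standard definitions. A minimal NumPy sketch, illustrative only and not the study's actual evaluation code:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total else 1.0

def bland_altman_limits(a, b):
    """Mean bias and 95% limits of agreement for paired measurements."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)
```

For the study's LOAs, `a` and `b` would be the model-derived and reference lesion volumes in cm³.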

Virtual monochromatic image-based automatic segmentation strategy using deep learning method.

Chen L, Yu S, Chen Y, Wei X, Yang J, Guo C, Zeng W, Yang C, Zhang J, Li T, Lin C, Le X, Zhang Y

pubmed logopapers · Jun 1 2025
The image quality of single-energy CT (SECT) limits the accuracy of automatic segmentation. Dual-energy CT (DECT) may improve automatic segmentation, yet the performance and optimal strategy have not been investigated thoroughly. Based on DECT-generated virtual monochromatic images (VMIs), this study proposed a novel deep learning model (MIAU-Net) and evaluated its segmentation performance on head organs-at-risk (OARs). VMIs from 40 keV to 190 keV were retrospectively generated at intervals of 10 keV using the DECT scans of 46 patients. Images with expert delineation were used for training, validating, and testing MIAU-Net for automatic segmentation. The performance of MIAU-Net was compared with the existing U-Net, Attention U-Net, nnU-Net, and TransFuse methods based on the Dice similarity coefficient (DSC). Correlation analysis was performed to evaluate and optimize the impact of different virtual energies on segmentation accuracy. Using MIAU-Net, average DSCs across all virtual energy levels were 93.78%, 81.75%, 84.46%, 92.85%, 94.40%, and 84.75% for the brain stem, optic chiasm, lens, mandible, eyes, and optic nerves, respectively, higher than previous publications using SECT. MIAU-Net achieved the highest average DSC (88.84%) with the fewest parameters (14.54 M) among all tested models. The results suggested that 60-80 keV is the optimal VMI energy range for soft tissue delineation, while 100 keV is optimal for skeleton segmentation. This work proposed and validated a novel deep learning model for automatic segmentation based on DECT, suggesting potential advantages and OAR-specific optimal energies when using VMIs for automatic delineation.

Driving Knowledge to Action: Building a Better Future With Artificial Intelligence-Enabled Multidisciplinary Oncology.

Loaiza-Bonilla A, Thaker N, Chung C, Parikh RB, Stapleton S, Borkowski P

pubmed logopapers · Jun 1 2025
Artificial intelligence (AI) is transforming multidisciplinary oncology at an unprecedented pace, redefining how clinicians detect, classify, and treat cancer. From earlier and more accurate diagnoses to personalized treatment planning, AI's impact is evident across radiology, pathology, radiation oncology, and medical oncology. By leveraging vast and diverse data-including imaging, genomic, clinical, and real-world evidence-AI algorithms can uncover complex patterns, accelerate drug discovery, and help identify optimal treatment regimens for each patient. However, realizing the full potential of AI also necessitates addressing concerns regarding data quality, algorithmic bias, explainability, privacy, and regulatory oversight-especially in low- and middle-income countries (LMICs), where disparities in cancer care are particularly pronounced. This study provides a comprehensive overview of how AI is reshaping cancer care, reviews its benefits and challenges, and outlines ethical and policy implications in line with ASCO's 2025 theme, "Driving Knowledge to Action." We offer concrete calls to action for clinicians, researchers, industry stakeholders, and policymakers to ensure that AI-driven, patient-centric oncology is accessible, equitable, and sustainable worldwide.

An explainable adaptive channel weighting-based deep convolutional neural network for classifying renal disorders in computed tomography images.

Loganathan G, Palanivelan M

pubmed logopapers · Jun 1 2025
Renal disorders are a significant public health concern and a cause of mortality related to renal failure. Manual diagnosis is subjective, labor-intensive, and depends on the expertise of nephrologists in renal anatomy. To improve workflow efficiency and enhance diagnostic accuracy, we propose an automated deep learning model, called EACWNet, which incorporates an adaptive channel weighting-based deep convolutional neural network and explainable artificial intelligence. The proposed model categorizes renal computed tomography images into classes such as cyst, normal, tumor, and stone. The adaptive channel weighting module utilizes both global and local contextual insights to refine the final feature map channel weights, integrating a scale-adaptive channel attention module into the higher convolutional blocks of the VGG-19 backbone. The efficacy of the EACWNet model has been assessed using a publicly available renal CT image dataset, attaining an accuracy of 98.87% and demonstrating a 1.75% improvement over the backbone model. However, the model exhibits class-wise precision variation, achieving higher precision for the cyst, normal, and tumor classes but lower precision for the stone class due to its inherent variability and heterogeneity. Furthermore, the model predictions have been subjected to additional analysis using explainable artificial intelligence methods such as local interpretable model-agnostic explanations (LIME), to better visualize and understand the model predictions.
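The paper's scale-adaptive channel attention module is not specified here, but the general idea of adaptive channel weighting can be illustrated with a squeeze-and-excitation-style sketch in NumPy. Shapes and weights below are hypothetical, not the authors' implementation:

```python
import numpy as np

def channel_weighting(feature_map, w1, w2):
    """Squeeze-and-excitation-style channel reweighting:
    global-average-pool each channel, pass the pooled vector through
    a small two-layer MLP, and rescale the channels by the resulting
    sigmoid gates (each gate lies in (0, 1))."""
    # feature_map: (C, H, W); w1: (C, C//r); w2: (C//r, C)
    squeeze = feature_map.mean(axis=(1, 2))           # (C,) pooled descriptor
    hidden = np.maximum(squeeze @ w1, 0.0)            # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))      # sigmoid gates, (C,)
    return feature_map * gates[:, None, None]
```

In a trained network `w1` and `w2` would be learned; here they simply demonstrate the data flow of channel-wise gating.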

DeepValve: The first automatic detection pipeline for the mitral valve in Cardiac Magnetic Resonance imaging.

Monopoli G, Haas D, Singh A, Aabel EW, Ribe M, Castrini AI, Hasselberg NE, Bugge C, Five C, Haugaa K, Forsch N, Thambawita V, Balaban G, Maleckar MM

pubmed logopapers · Jun 1 2025
Mitral valve (MV) assessment is key to diagnosing valvular disease and to addressing its serious downstream complications. Cardiac magnetic resonance (CMR) has become an essential diagnostic tool in MV disease, offering detailed views of valve structure and function and overcoming the limitations of other imaging modalities. Automated detection of the MV leaflets in CMR could enable rapid and precise assessments that enhance diagnostic accuracy. To address this gap, we introduce DeepValve, the first deep learning (DL) pipeline for MV detection using CMR. Within DeepValve, we tested three valve detection models: a keypoint-regression model (UNET-REG), a segmentation model (UNET-SEG), and a hybrid model based on keypoint detection (DSNT-REG). We also propose metrics for evaluating the quality of MV detection, including Procrustes-based metrics (UNET-REG, DSNT-REG) and customized Dice-based metrics (UNET-SEG). We developed and tested our models on a clinical dataset comprising 120 CMR images from patients with confirmed MV disease (mitral valve prolapse and mitral annular disjunction). Our results show that DSNT-REG delivered the best regression performance, accurately locating the valve landmarks. UNET-SEG achieved satisfactory Dice and customized Dice scores, also accurately predicting valve location and topology. Overall, our work represents a critical first step towards automated MV assessment using DL in CMR, paving the way for improved clinical assessment in MV disease.
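Procrustes-based metrics compare predicted and reference landmark sets after removing translation, scale, and rotation. A minimal NumPy sketch of the classic Procrustes disparity; the pipeline's exact metric definitions are not given in the abstract:

```python
import numpy as np

def procrustes_disparity(X, Y):
    """Sum of squared differences between landmark sets X and Y
    (each n_points x n_dims) after optimally translating, scaling,
    and rotating Y onto X (classic Procrustes analysis; the optimal
    orthogonal map may include a reflection)."""
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    X0 = X - X.mean(axis=0)                # remove translation
    Y0 = Y - Y.mean(axis=0)
    X0 = X0 / np.linalg.norm(X0)           # remove scale (unit Frobenius norm)
    Y0 = Y0 / np.linalg.norm(Y0)
    U, s, Vt = np.linalg.svd(X0.T @ Y0)    # optimal rotation via SVD
    R = (U @ Vt).T
    scale = s.sum()                        # optimal residual scaling
    Y_aligned = scale * Y0 @ R
    return float(np.sum((X0 - Y_aligned) ** 2))
```

A disparity of zero means the two landmark sets are identical up to a similarity transform; larger values indicate shape mismatch.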

Advancing Intracranial Aneurysm Detection: A Comprehensive Systematic Review and Meta-analysis of Deep Learning Models Performance, Clinical Integration, and Future Directions.

Delfan N, Abbasi F, Emamzadeh N, Bahri A, Parvaresh Rizi M, Motamedi A, Moshiri B, Iranmehr A

pubmed logopapers · Jun 1 2025
Cerebral aneurysms pose a significant risk to patient safety, particularly when ruptured, emphasizing the need for early detection and accurate prediction. Traditional diagnostic methods, reliant on clinician-based evaluations, face challenges in sensitivity and consistency, prompting the exploration of deep learning (DL) systems for improved performance. This systematic review and meta-analysis assessed the performance of DL models in detecting and predicting intracranial aneurysms compared to clinician-based evaluations. Imaging modalities included CT angiography (CTA), digital subtraction angiography (DSA), and time-of-flight MR angiography (TOF-MRA). Data on lesion-wise sensitivity, specificity, and the impact of DL assistance on clinician performance were analyzed. Subgroup analyses evaluated DL sensitivity by aneurysm size and location, and interrater agreement was measured using Fleiss' κ. DL systems achieved an overall lesion-wise sensitivity of 90% and specificity of 94%, outperforming human diagnostics. Clinician specificity improved significantly with DL assistance, increasing from 83% to 85% in the patient-wise scenario and from 93% to 95% in the lesion-wise scenario. Similarly, clinician sensitivity also showed notable improvement with DL assistance, rising from 82% to 96% in the patient-wise scenario and from 82% to 88% in the lesion-wise scenario. Subgroup analysis showed DL sensitivity varied with aneurysm size and location, reaching 100% for aneurysms larger than 10 mm. Additionally, DL assistance improved interrater agreement among clinicians, with Fleiss' κ increasing from 0.668 to 0.862. DL models demonstrate transformative potential in managing cerebral aneurysms by enhancing diagnostic accuracy, reducing missed cases, and supporting clinical decision-making. However, further validation in diverse clinical settings and seamless integration into standard workflows are necessary to fully realize the benefits of DL-driven diagnostics.
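Fleiss' κ, used above to quantify interrater agreement, is computed from a subjects-by-categories count matrix. A minimal sketch, not the review's own analysis code:

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa from an (n_subjects x n_categories) count matrix:
    ratings[i, j] = number of raters assigning subject i to category j.
    Assumes the same number of raters for every subject."""
    ratings = np.asarray(ratings, dtype=float)
    n = ratings.sum(axis=1)[0]                    # raters per subject
    p_j = ratings.sum(axis=0) / ratings.sum()     # overall category proportions
    P_i = (np.sum(ratings ** 2, axis=1) - n) / (n * (n - 1))  # per-subject agreement
    P_bar = P_i.mean()                            # observed agreement
    P_e = np.sum(p_j ** 2)                        # chance agreement
    return (P_bar - P_e) / (1 - P_e)
```

Values near 1 indicate near-perfect agreement (as in the κ = 0.862 reported with DL assistance); values near 0 indicate chance-level agreement.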

Radiomics across modalities: a comprehensive review of neurodegenerative diseases.

Inglese M, Conti A, Toschi N

pubmed logopapers · Jun 1 2025
Radiomics allows extraction from medical images of quantitative features that are able to reveal tissue patterns that are generally invisible to human observers. Despite the challenges in visually interpreting radiomic features and the computational resources required to generate them, they hold significant value in downstream automated processing. For instance, in statistical or machine learning frameworks, radiomic features enhance sensitivity and specificity, making them indispensable for tasks such as diagnosis, prognosis, prediction, monitoring, image-guided interventions, and evaluating therapeutic responses. This review explores the application of radiomics in neurodegenerative diseases, with a focus on Alzheimer's disease, Parkinson's disease, Huntington's disease, and multiple sclerosis. While radiomics literature often focuses on magnetic resonance imaging (MRI) and computed tomography (CT), this review also covers its broader application in nuclear medicine, with use cases of positron emission tomography (PET) and single-photon emission computed tomography (SPECT) radiomics. Additionally, we review integrated radiomics, where features from multiple imaging modalities are fused to improve model performance. This review also highlights the growing integration of radiomics with artificial intelligence and the need for feature standardisation and reproducibility to facilitate its translation into clinical practice.
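First-order radiomic features of the kind this review discusses are simple intensity statistics computed over a region of interest. An illustrative NumPy sketch; the definitions below follow common conventions (e.g. IBSI-style first-order statistics) rather than any specific toolkit:

```python
import numpy as np

def first_order_features(roi):
    """A few first-order radiomic features from an ROI's intensities."""
    x = np.asarray(roi, dtype=float).ravel()
    hist, _ = np.histogram(x, bins=32)        # discretize intensities
    p = hist / hist.sum()
    p = p[p > 0]                              # drop empty bins for the log
    return {
        "mean": x.mean(),
        "variance": x.var(),
        "skewness": ((x - x.mean()) ** 3).mean() / x.std() ** 3,
        "entropy": -np.sum(p * np.log2(p)),   # histogram entropy in bits
    }
```

In practice such features are computed per lesion or per anatomical region and then fed into the statistical or machine learning frameworks described above.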

Assessing the diagnostic accuracy and prognostic utility of artificial intelligence detection and grading of coronary artery calcification on nongated computed tomography (CT) thorax.

Shear B, Graby J, Murphy D, Strong K, Khavandi A, Burnett TA, Charters PFP, Rodrigues JCL

pubmed logopapers · Jun 1 2025
This study assessed the diagnostic accuracy and prognostic implications of an artificial intelligence (AI) tool for coronary artery calcification (CAC) assessment on non-gated, non-contrast thoracic computed tomography (CT). A single-centre retrospective analysis of 75 consecutive patients per age group (<40, 40-49, 50-59, 60-69, 70-79, 80-89, and ≥90 years) undergoing non-gated, non-contrast CT (January-December 2015) was conducted. AI analysis reported CAC presence and generated an Agatston score, and its performance was compared with baseline CT reports and a dedicated radiologist re-review. Interobserver variability between AI and radiologist assessments was measured using Cohen's κ. All-cause mortality was recorded, and its association with AI-detected CAC was tested. A total of 291 patients (mean age: 64 ± 19 years, 51% female) were included, with 80% (234/291) of AI reports passing radiologist quality assessment. CAC was reported on 7% (17/234) of initial clinical reports, 58% (135/234) on radiologist re-review, and 57% (134/234) by AI analysis. After manual quality assurance (QA), the AI tool demonstrated high sensitivity (96%), specificity (96%), positive predictive value (95%), and negative predictive value (97%) for CAC detection compared with radiologist re-review. Interobserver agreement was strong for CAC prevalence (κ = 0.92) and moderate for severity grading (κ = 0.60). AI-detected CAC presence and severity predicted all-cause mortality (p < 0.001). AI analysis of non-gated, non-contrast thoracic CTs proved feasible and, if integrated into routine practice, could offer prognostic insight; nonetheless, manual quality assessment remains essential. This AI tool represents a potential enhancement to CAC detection and reporting on routine non-cardiac chest CT.
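The sensitivity, specificity, PPV, and NPV figures above follow directly from confusion-matrix counts. A minimal sketch with illustrative counts, not the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy measures from confusion counts:
    true positives, false positives, false negatives, true negatives."""
    return {
        "sensitivity": tp / (tp + fn),     # detected among truly positive
        "specificity": tn / (tn + fp),     # cleared among truly negative
        "ppv": tp / (tp + fp),             # positive calls that are correct
        "npv": tn / (tn + fn),             # negative calls that are correct
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }
```

For example, `diagnostic_metrics(tp=95, fp=5, fn=4, tn=96)` yields sensitivity ≈ 0.96 and NPV = 0.96, close in spirit to the figures the study reports against radiologist re-review.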

A rule-based method to automatically locate lumbar vertebral bodies on MRI images.

Xiberta P, Vila M, Ruiz M, Julià I Juanola A, Puig J, Vilanova JC, Boada I

pubmed logopapers · Jun 1 2025
Segmentation is a critical process in medical image interpretation. It is also essential for preparing training datasets for machine learning (ML)-based solutions. Despite technological advancements, achieving fully automatic segmentation is still challenging. User interaction is required to initiate the process, either by defining points or regions of interest, or by verifying and refining the output. One of the complex structures that requires semi-automatic segmentation procedures or manually defined training datasets is the lumbar spine. Automating the placement of a point within each lumbar vertebral body could significantly reduce user interaction in these procedures. A new method for automatically locating lumbar vertebral bodies in sagittal magnetic resonance images (MRI) is presented. The method integrates different image processing techniques and relies on vertebral body morphology. Testing was mainly performed using 50 MRI scans that had previously been annotated manually by placing a point at the centre of each lumbar vertebral body. A complementary public dataset was also used to assess robustness. Evaluation metrics included the correct labelling of each structure, the inclusion of each point within the corresponding vertebral body area, and the accuracy of the locations relative to the vertebral body centres using root mean squared error (RMSE) and mean absolute error (MAE). A one-sample Student's t-test was also performed to find the distance beyond which differences are considered significant (α = 0.05). All lumbar vertebral bodies from the primary dataset were correctly labelled, and the average RMSE and MAE between the automatic and manual locations were less than 5 mm. Distances to the vertebral body centres were found to be significantly less than 4.33 mm (p < 0.05), and significantly less than half the average minimum diameter of a lumbar vertebral body (p < 0.00001). Results from the complementary public dataset include high labelling and inclusion rates (85.1% and 94.3%, respectively) and similar accuracy values. The proposed method successfully achieves robust and accurate automatic placement of points within each lumbar vertebral body. The automation of this process enables the transition from semi-automatic to fully automatic methods, thus reducing error-prone and time-consuming user interaction and facilitating the creation of training datasets for ML-based solutions.
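The RMSE and MAE between automatic and manual point locations can be computed from the paired coordinates' Euclidean distances. A small NumPy sketch; the study's exact implementation is not given:

```python
import numpy as np

def rmse_mae(pred_pts, true_pts):
    """RMSE and MAE of the Euclidean distances between paired landmarks,
    e.g. automatic vs manual vertebral-body centre points (in mm)."""
    pred = np.asarray(pred_pts, dtype=float)
    true = np.asarray(true_pts, dtype=float)
    d = np.linalg.norm(pred - true, axis=1)   # per-landmark distance
    rmse = float(np.sqrt(np.mean(d ** 2)))
    mae = float(np.mean(d))                   # distances are non-negative
    return rmse, mae
```

Under the study's criterion, both values would be compared against thresholds such as 5 mm or half the minimum vertebral-body diameter.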