Chen L, Yu S, Chen Y, Wei X, Yang J, Guo C, Zeng W, Yang C, Zhang J, Li T, Lin C, Le X, Zhang Y

PubMed · Jun 1 2025
The image quality of single-energy CT (SECT) limits the accuracy of automatic segmentation. Dual-energy CT (DECT) may improve automatic segmentation, yet the performance and strategy have not been investigated thoroughly. Based on DECT-generated virtual monochromatic images (VMIs), this study proposed a novel deep learning model (MIAU-Net) and evaluated its segmentation performance on head organs-at-risk (OARs). VMIs from 40 keV to 190 keV were retrospectively generated at intervals of 10 keV using the DECT scans of 46 patients. Images with expert delineation were used for training, validating, and testing MIAU-Net for automatic segmentation. The performance of MIAU-Net was compared with the existing U-Net, Attention-UNet, nnU-Net and TransFuse methods based on the Dice Similarity Coefficient (DSC). Correlation analysis was performed to evaluate and optimize the impact of different virtual energies on segmentation accuracy. Using MIAU-Net, average DSCs across all virtual energy levels were 93.78 %, 81.75 %, 84.46 %, 92.85 %, 94.40 %, and 84.75 % for the brain stem, optic chiasm, lens, mandible, eyes, and optic nerves, respectively, higher than previous publications using SECT. MIAU-Net achieved the highest average DSC (88.84 %) and the fewest parameters (14.54 M) among all tested models. The results suggest that 60-80 keV is the optimal VMI energy level for soft-tissue delineation, while 100 keV is optimal for skeleton segmentation. This work proposed and validated a novel deep learning model for DECT-based automatic segmentation, suggesting potential advantages and OAR-specific optimal energies when using VMIs for automatic delineation.
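
As an illustration of the evaluation metric reported in this abstract, the following is a minimal sketch of the Dice Similarity Coefficient computed between a predicted and a reference binary mask; the array names, shapes, and random masks are assumptions for illustration, not part of the original study.

```python
import numpy as np

def dice_similarity_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice Similarity Coefficient (DSC) between two binary masks.

    pred, ref: boolean or {0, 1} arrays of the same shape (e.g. a 3D OAR mask).
    Returns a value in [0, 1]; 1 means perfect overlap.
    """
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Hypothetical masks for one OAR at one virtual energy level.
rng = np.random.default_rng(0)
pred_mask = rng.random((64, 64, 32)) > 0.5
ref_mask = rng.random((64, 64, 32)) > 0.5
print(f"DSC = {dice_similarity_coefficient(pred_mask, ref_mask):.4f}")
```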

Loaiza-Bonilla A, Thaker N, Chung C, Parikh RB, Stapleton S, Borkowski P

PubMed · Jun 1 2025
Artificial intelligence (AI) is transforming multidisciplinary oncology at an unprecedented pace, redefining how clinicians detect, classify, and treat cancer. From earlier and more accurate diagnoses to personalized treatment planning, AI's impact is evident across radiology, pathology, radiation oncology, and medical oncology. By leveraging vast and diverse data-including imaging, genomic, clinical, and real-world evidence-AI algorithms can uncover complex patterns, accelerate drug discovery, and help identify optimal treatment regimens for each patient. However, realizing the full potential of AI also necessitates addressing concerns regarding data quality, algorithmic bias, explainability, privacy, and regulatory oversight-especially in low- and middle-income countries (LMICs), where disparities in cancer care are particularly pronounced. This study provides a comprehensive overview of how AI is reshaping cancer care, reviews its benefits and challenges, and outlines ethical and policy implications in line with ASCO's 2025 theme, <i>Driving Knowledge to Action.</i> We offer concrete calls to action for clinicians, researchers, industry stakeholders, and policymakers to ensure that AI-driven, patient-centric oncology is accessible, equitable, and sustainable worldwide.

Loganathan G, Palanivelan M

PubMed · Jun 1 2025
Renal disorders are a significant public health concern and a cause of mortality related to renal failure. Manual diagnosis is subjective, labor-intensive, and depends on the expertise of nephrologists in renal anatomy. To improve workflow efficiency and enhance diagnostic accuracy, we propose an automated deep learning model, called EACWNet, which incorporates an adaptive channel weighting-based deep convolutional neural network and explainable artificial intelligence. The proposed model categorizes renal computed tomography images into various classes, such as cyst, normal, tumor, and stone. The adaptive channel weighting module utilizes both global and local contextual insights to refine the final feature map channel weights through the integration of a scale-adaptive channel attention module in the higher convolutional blocks of the VGG-19 backbone model employed in the proposed method. The efficacy of the EACWNet model has been assessed using a publicly available renal CT images dataset, attaining an accuracy of 98.87% and demonstrating a 1.75% improvement over the backbone model. However, this model exhibits class-wise precision variation, achieving higher precision for cyst, normal, and tumor cases but lower precision for the stone class due to its inherent variability and heterogeneity. Furthermore, the model predictions have been subjected to additional analysis using an explainable artificial intelligence method, local interpretable model-agnostic explanations (LIME), to better visualize and understand the model predictions.
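
The abstract does not detail the scale-adaptive channel attention module, so as a rough, hedged illustration of channel weighting in general, the sketch below shows a squeeze-and-excitation-style channel attention block in PyTorch. The layer layout and reduction ratio are assumptions and should not be read as the EACWNet implementation.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Generic squeeze-and-excitation-style channel attention.

    Illustrative only: the scale-adaptive module described in the abstract
    is not specified there, so this block is not the EACWNet design.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight the feature-map channels

# Example: reweight a feature map from a VGG-style convolutional block.
feat = torch.randn(2, 512, 14, 14)
print(ChannelAttention(512)(feat).shape)  # torch.Size([2, 512, 14, 14])
```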

Monopoli G, Haas D, Singh A, Aabel EW, Ribe M, Castrini AI, Hasselberg NE, Bugge C, Five C, Haugaa K, Forsch N, Thambawita V, Balaban G, Maleckar MM

PubMed · Jun 1 2025
Mitral valve (MV) assessment is key to diagnosing valvular disease and to addressing its serious downstream complications. Cardiac magnetic resonance (CMR) has become an essential diagnostic tool in MV disease, offering detailed views of the valve structure and function, and overcoming the limitations of other imaging modalities. Automated detection of the MV leaflets in CMR could enable rapid and precise assessments that enhance diagnostic accuracy. To address this gap, we introduce DeepValve, the first deep learning (DL) pipeline for MV detection using CMR. Within DeepValve, we tested three valve detection models: a keypoint-regression model (UNET-REG), a segmentation model (UNET-SEG) and a hybrid model based on keypoint detection (DSNT-REG). We also propose metrics for evaluating the quality of MV detection, including Procrustes-based metrics (UNET-REG, DSNT-REG) and customized Dice-based metrics (UNET-SEG). We developed and tested our models on a clinical dataset comprising 120 CMR images from patients with confirmed MV disease (mitral valve prolapse and mitral annular disjunction). Our results show that DSNT-REG delivered the best regression performance, accurately locating the valve landmarks. UNET-SEG achieved satisfactory Dice and customized Dice scores, also accurately predicting valve location and topology. Overall, our work represents a critical first step towards automated MV assessment using DL in CMR, paving the way for improved clinical assessment of MV disease.
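
The Procrustes-based metric mentioned above is not fully specified in the abstract; as a hedged sketch, SciPy's Procrustes analysis can compare predicted and reference MV landmark sets after optimal translation, scaling and rotation. The landmark counts and coordinates below are illustrative assumptions, not study data.

```python
import numpy as np
from scipy.spatial import procrustes

# Hypothetical 2D landmark sets along the mitral valve leaflets
# (rows = landmarks, columns = image coordinates).
reference = np.array([[10.0, 20.0], [15.0, 24.0], [20.0, 27.0], [25.0, 29.0]])
predicted = np.array([[10.5, 19.5], [15.2, 24.5], [19.8, 27.4], [25.3, 28.6]])

# Procrustes analysis removes translation, scaling and rotation, then
# reports the residual disparity (sum of squared pointwise differences).
ref_std, pred_std, disparity = procrustes(reference, predicted)
print(f"Procrustes disparity = {disparity:.5f}")
```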

Delfan N, Abbasi F, Emamzadeh N, Bahri A, Parvaresh Rizi M, Motamedi A, Moshiri B, Iranmehr A

PubMed · Jun 1 2025
Cerebral aneurysms pose a significant risk to patient safety, particularly when ruptured, emphasizing the need for early detection and accurate prediction. Traditional diagnostic methods, reliant on clinician-based evaluations, face challenges in sensitivity and consistency, prompting the exploration of deep learning (DL) systems for improved performance. This systematic review and meta-analysis assessed the performance of DL models in detecting and predicting intracranial aneurysms compared to clinician-based evaluations. Imaging modalities included CT angiography (CTA), digital subtraction angiography (DSA), and time-of-flight MR angiography (TOF-MRA). Data on lesion-wise sensitivity, specificity, and the impact of DL assistance on clinician performance were analyzed. Subgroup analyses evaluated DL sensitivity by aneurysm size and location, and interrater agreement was measured using Fleiss' κ. DL systems achieved an overall lesion-wise sensitivity of 90 % and specificity of 94 %, outperforming human diagnostics. Clinician specificity improved significantly with DL assistance, increasing from 83 % to 85 % in the patient-wise scenario and from 93 % to 95 % in the lesion-wise scenario. Similarly, clinician sensitivity also showed notable improvement with DL assistance, rising from 82 % to 96 % in the patient-wise scenario and from 82 % to 88 % in the lesion-wise scenario. Subgroup analysis showed DL sensitivity varied with aneurysm size and location, reaching 100 % for aneurysms larger than 10 mm. Additionally, DL assistance improved interrater agreement among clinicians, with Fleiss' κ increasing from 0.668 to 0.862. DL models demonstrate transformative potential in managing cerebral aneurysms by enhancing diagnostic accuracy, reducing missed cases, and supporting clinical decision-making. However, further validation in diverse clinical settings and seamless integration into standard workflows are necessary to fully realize the benefits of DL-driven diagnostics.
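
For readers unfamiliar with the agreement statistic reported above, the sketch below computes Fleiss' κ with statsmodels from a table of ratings; the rater counts, case counts, and labels are made-up assumptions, not the study data.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical ratings: 10 cases, 4 clinicians, binary call
# (0 = no aneurysm, 1 = aneurysm). Illustrative values only.
ratings = np.array([
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
])

# aggregate_raters converts subject-by-rater labels into the
# subject-by-category count table expected by fleiss_kappa.
table, _ = aggregate_raters(ratings)
print(f"Fleiss' kappa = {fleiss_kappa(table):.3f}")
```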

Inglese M, Conti A, Toschi N

PubMed · Jun 1 2025
Radiomics allows the extraction of quantitative features from medical images that can reveal tissue patterns generally invisible to human observers. Despite the challenges in visually interpreting radiomic features and the computational resources required to generate them, they hold significant value in downstream automated processing. For instance, in statistical or machine learning frameworks, radiomic features enhance sensitivity and specificity, making them indispensable for tasks such as diagnosis, prognosis, prediction, monitoring, image-guided interventions, and evaluating therapeutic responses. This review explores the application of radiomics in neurodegenerative diseases, with a focus on Alzheimer's disease, Parkinson's disease, Huntington's disease, and multiple sclerosis. While the radiomics literature often focuses on magnetic resonance imaging (MRI) and computed tomography (CT), this review also covers its broader application in nuclear medicine, with use cases of positron emission tomography (PET) and single-photon emission computed tomography (SPECT) radiomics. Additionally, we review integrated radiomics, where features from multiple imaging modalities are fused to improve model performance. This review also highlights the growing integration of radiomics with artificial intelligence and the need for feature standardisation and reproducibility to facilitate its translation into clinical practice.
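
As a hedged illustration of the kind of feature extraction this review discusses, the sketch below uses the pyradiomics package to extract features from an image and a segmentation mask. The file paths are placeholders and the choice of package is an assumption for illustration, not one made by the review.

```python
# pip install pyradiomics
from radiomics import featureextractor

# Placeholder paths -- substitute a real image and segmentation mask
# (e.g. NIfTI files); these names are assumptions for illustration.
image_path = "brain_t1.nii.gz"
mask_path = "lesion_mask.nii.gz"

# Default settings extract shape, first-order and texture feature classes.
extractor = featureextractor.RadiomicsFeatureExtractor()
features = extractor.execute(image_path, mask_path)

# Keep only the numeric radiomic features (diagnostics_* entries describe
# the extraction itself rather than the tissue).
radiomic_features = {k: v for k, v in features.items()
                     if not k.startswith("diagnostics")}
print(f"Extracted {len(radiomic_features)} radiomic features")
```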

Shear B, Graby J, Murphy D, Strong K, Khavandi A, Burnett TA, Charters PFP, Rodrigues JCL

PubMed · Jun 1 2025
This study assessed the diagnostic accuracy and prognostic implications of an artificial intelligence (AI) tool for coronary artery calcification (CAC) assessment on non-gated, non-contrast thoracic computed tomography (CT). A single-centre retrospective analysis of 75 consecutive patients per age group (<40, 40-49, 50-59, 60-69, 70-79, 80-89, and ≥90 years) undergoing non-gated, non-contrast CT (January-December 2015) was conducted. AI analysis reported CAC presence and generated an Agatston score, and the performance was compared with baseline CT reports and a dedicated radiologist re-review. Interobserver variability between AI and radiologist assessments was measured using Cohen's κ. All-cause mortality was recorded, and its association with AI-detected CAC was tested. A total of 291 patients (mean age: 64 ± 19, 51% female) were included, with 80% (234/291) of AI reports passing radiologist quality assessment. CAC was reported on 7% (17/234) of initial clinical reports, 58% (135/234) on radiologist re-review, and 57% (134/234) by AI analysis. After manual quality assurance (QA) assessment, the AI tool demonstrated high sensitivity (96%), specificity (96%), positive predictive value (95%), and negative predictive value (97%) for CAC detection compared with radiologist re-review. Interobserver agreement was strong for CAC prevalence (κ = 0.92) and moderate for severity grading (κ = 0.60). AI-detected CAC presence and severity predicted all-cause mortality (p < 0.001). The AI tool demonstrated feasibility for analysing non-contrast, non-gated thoracic CTs, offering prognostic insights if integrated into routine practice. Nonetheless, manual quality assessment remains essential. This AI tool represents a potential enhancement to CAC detection and reporting on routine non-cardiac chest CT.
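
The diagnostic-accuracy figures above follow directly from a 2×2 comparison against the reference read; a minimal sketch of those calculations, with made-up per-scan calls, is shown below together with Cohen's κ from scikit-learn.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical per-scan CAC calls (1 = CAC present, 0 = absent);
# these arrays are illustrative, not the study data.
radiologist = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0])  # reference read
ai_tool     = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1])

tn, fp, fn, tp = confusion_matrix(radiologist, ai_tool).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
kappa = cohen_kappa_score(radiologist, ai_tool)

print(f"Sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, "
      f"PPV {ppv:.2f}, NPV {npv:.2f}, Cohen's kappa {kappa:.2f}")
```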

Xiberta P, Vila M, Ruiz M, Julià I Juanola A, Puig J, Vilanova JC, Boada I

PubMed · Jun 1 2025
Segmentation is a critical process in medical image interpretation. It is also essential for preparing training datasets for machine learning (ML)-based solutions. Despite technological advancements, achieving fully automatic segmentation is still challenging. User interaction is required to initiate the process, either by defining points or regions of interest, or by verifying and refining the output. One of the complex structures that requires semi-automatic segmentation procedures or manually defined training datasets is the lumbar spine. Automating the placement of a point within each lumbar vertebral body could significantly reduce user interaction in these procedures. A new method for automatically locating lumbar vertebral bodies in sagittal magnetic resonance images (MRI) is presented. The method integrates different image processing techniques and relies on the vertebral body morphology. Testing was mainly performed using 50 MRI scans that were previously annotated manually by placing a point at the centre of each lumbar vertebral body. A complementary public dataset was also used to assess robustness. Evaluation metrics included the correct labelling of each structure, the inclusion of each point within the corresponding vertebral body area, and the accuracy of the locations relative to the vertebral body centres using root mean squared error (RMSE) and mean absolute error (MAE). A one-sample Student's t-test was also performed to find the distance beyond which differences are considered significant (α = 0.05). All lumbar vertebral bodies from the primary dataset were correctly labelled, and the average RMSE and MAE between the automatic and manual locations were less than 5 mm. Distances to the vertebral body centres were found to be significantly less than 4.33 mm with a p-value < 0.05, and significantly less than half the average minimum diameter of a lumbar vertebral body with a p-value < 0.00001. Results from the complementary public dataset include high labelling and inclusion rates (85.1% and 94.3%, respectively), and similar accuracy values. The proposed method successfully achieves robust and accurate automatic placement of points within each lumbar vertebral body. The automation of this process enables the transition from semi-automatic to fully automatic methods, thus reducing error-prone and time-consuming user interaction, and facilitating the creation of training datasets for ML-based solutions.
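
A brief sketch of the accuracy and significance calculations described above is given below, assuming the automatic and manual vertebral-body points are available as coordinate arrays; the 4.33 mm threshold is taken from the abstract, while the coordinates are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ttest_1samp

# Hypothetical automatic and manual centre points for the five lumbar
# vertebral bodies of one scan (mm coordinates); illustrative values only.
auto_pts   = np.array([[30.1, 52.0], [31.0, 72.3], [32.2, 92.8],
                       [33.5, 113.1], [34.0, 133.9]])
manual_pts = np.array([[30.0, 51.5], [31.4, 72.0], [32.0, 93.2],
                       [33.0, 113.5], [34.5, 133.4]])

# Pointwise Euclidean distances between automatic and manual locations.
distances = np.linalg.norm(auto_pts - manual_pts, axis=1)
rmse = np.sqrt(np.mean(distances ** 2))
mae = np.mean(np.abs(distances))

# One-sided one-sample t-test: are distances significantly below 4.33 mm?
t_stat, p_value = ttest_1samp(distances, popmean=4.33, alternative="less")

print(f"RMSE {rmse:.2f} mm, MAE {mae:.2f} mm, p = {p_value:.4f}")
```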

Li B, Li H, Chen J, Xiao F, Fang X, Guo R, Liang M, Wu Z, Mao J, Shen J

PubMed · Jun 1 2025
To develop and validate a magnetic resonance imaging (MRI)-based deep learning radiomics model (DLRM) to predict recurrence-free survival (RFS) in lung cancer patients after surgical resection of brain metastases (BrMs). A total of 215 lung cancer patients with BrMs confirmed by surgical pathology were retrospectively included from five centres; 167 patients were assigned to the training cohort and 48 to the external test cohort. All patients underwent regular follow-up brain MRIs. Clinical and morphological MRI models for predicting RFS were built using univariate and multivariate Cox regressions, respectively. Handcrafted and deep learning (DL) signatures were constructed from pretreatment BrM MR images using the least absolute shrinkage and selection operator (LASSO) method. A DLRM was established by integrating the clinical and morphological MRI predictors with the handcrafted and DL signatures based on the multivariate Cox regression coefficients. The Harrell C-index, area under the receiver operating characteristic curve (AUC), and Kaplan-Meier survival analysis were used to evaluate model performance. The DLRM showed satisfactory performance in predicting RFS and 6- to 18-month intracranial recurrence in lung cancer patients after BrM resection, achieving a C-index of 0.79 and AUCs of 0.84-0.90 in the training set and a C-index of 0.74 and AUCs of 0.71-0.85 in the external test set. The DLRM outperformed the clinical model, morphological MRI model, handcrafted signature, DL signature, and clinical-morphological MRI model in predicting RFS (P < 0.05). The DLRM successfully classified patients into high-risk and low-risk intracranial recurrence groups (P < 0.001). This MRI-based DLRM could predict RFS in lung cancer patients after surgical resection of BrMs.
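
As a hedged sketch of the survival modelling described above, the snippet below fits an L1-penalised Cox model with the lifelines package and reports the concordance index. The feature names, penalty strength and simulated data are assumptions for illustration, not values from the study.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical cohort: two radiomic-style signatures plus a clinical
# covariate, with recurrence-free survival time (months) and event flag.
rng = np.random.default_rng(42)
n = 120
df = pd.DataFrame({
    "handcrafted_sig": rng.normal(size=n),
    "dl_sig": rng.normal(size=n),
    "age": rng.integers(40, 80, size=n),
    "rfs_months": rng.exponential(scale=12, size=n),
    "recurrence": rng.integers(0, 2, size=n),
})

# L1-penalised (LASSO-like) Cox regression; penalizer and l1_ratio are
# illustrative choices, not values reported in the study.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="rfs_months", event_col="recurrence")

print(f"Harrell C-index = {cph.concordance_index_:.3f}")
```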

Wang J, Yang R, Miao Y, Zhang X, Paillard-Borg S, Fang Z, Xu W

PubMed · Jun 1 2025
Metabolic dysfunction-associated steatotic liver disease (MASLD) is linked to cognitive decline and dementia risk. We aimed to investigate the association between MASLD and brain ageing and to explore the role of low-grade inflammation. Within the UK Biobank, 30,386 participants free of chronic neurological disorders who underwent brain magnetic resonance imaging (MRI) scans were included. Individuals were categorised into no MASLD/related SLD and MASLD/related SLD (including subtypes of MASLD, MASLD with increased alcohol intake [MetALD] and MASLD with other combined aetiology). Brain age was estimated using machine learning on 1079 brain MRI phenotypes. Brain age gap (BAG) was calculated as the difference between brain age and chronological age. Low-grade inflammation (INFLA) was calculated based on white blood cell count, platelet count, neutrophil granulocyte-to-lymphocyte ratio and C-reactive protein. Data were analysed using linear regression and structural equation models. At baseline, 7360 (24.2%) participants had MASLD/related SLD. Compared to participants with no MASLD/related SLD, those with MASLD/related SLD had significantly larger BAG (β = 0.86, 95% CI = 0.70, 1.02), as did those with MASLD (β = 0.59, 95% CI = 0.41, 0.77) or MetALD (β = 1.57, 95% CI = 1.31, 1.83). The association between MASLD/related SLD and larger BAG was significant across middle-aged (< 60) and older (≥ 60) adults, males and females, and APOE ɛ4 carriers and non-carriers. INFLA mediated 13.53% of the association between MASLD/related SLD and larger BAG (p < 0.001). MASLD/related SLD, as well as MASLD and MetALD, is associated with accelerated brain ageing, even among middle-aged adults and APOE ɛ4 non-carriers. Low-grade systemic inflammation may partially mediate this association.
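
As a hedged sketch of the brain-age-gap analysis described above, the snippet below computes BAG and fits a covariate-adjusted linear regression with statsmodels; the variable names and simulated values are assumptions for illustration, not UK Biobank data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated illustration only -- not UK Biobank data.
rng = np.random.default_rng(7)
n = 1000
age = rng.uniform(45, 80, size=n)
masld = rng.integers(0, 2, size=n)   # 1 = MASLD/related SLD, 0 = none
sex = rng.integers(0, 2, size=n)     # 0 = female, 1 = male
brain_age = age + 0.9 * masld + rng.normal(0, 3, size=n)  # toy brain-age estimate

df = pd.DataFrame({
    "bag": brain_age - age,          # brain age gap (years)
    "masld": masld,
    "age": age,
    "sex": sex,
})

# Covariate-adjusted association between MASLD status and BAG.
model = smf.ols("bag ~ masld + age + sex", data=df).fit()
print(model.summary().tables[1])
```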