Page 21 of 1431427 results

MoNetV2: Enhanced Motion Network for Freehand 3-D Ultrasound Reconstruction.

Luo M, Yang X, Yan Z, Cao Y, Zhang Y, Hu X, Wang J, Ding H, Han W, Sun L, Ni D

PubMed · Jun 11, 2025
Three-dimensional ultrasound (US) aims to provide sonographers with the spatial relationships of anatomical structures, playing a crucial role in clinical diagnosis. Recently, deep-learning-based freehand 3-D US has made significant advancements. It reconstructs volumes by estimating transformations between images without external tracking. However, image-only reconstruction struggles to reduce cumulative drift and further improve reconstruction accuracy, particularly in scenarios involving complex motion trajectories. In this context, we propose an enhanced motion network (MoNetV2) to improve the accuracy and generalizability of reconstruction under diverse scanning velocities and tactics. First, we propose a sensor-based temporal and multibranch structure (TMS) that fuses image and motion information from a velocity perspective to improve image-only reconstruction accuracy. Second, we devise an online multilevel consistency constraint (MCC) that exploits the inherent consistency of scans to handle various scanning velocities and tactics. This constraint combines scan-level velocity consistency (SVC), path-level appearance consistency (PAC), and patch-level motion consistency (PMC) to supervise interframe transformation estimation. Third, we distill an online multimodal self-supervised strategy (MSS) that leverages the correlation between network estimation and motion information to further reduce cumulative errors. Extensive experiments demonstrate that MoNetV2 surpasses existing methods in both reconstruction quality and generalizability across three large datasets.
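The drift problem this abstract targets can be made concrete: freehand reconstruction chains per-frame transform estimates, so even a tiny systematic error compounds over a sweep. A minimal numpy sketch (illustrative only, not the MoNetV2 algorithm; the 0.01 mm per-frame bias is an assumed value):

```python
import numpy as np

def compose_trajectory(rel_transforms):
    """Chain estimated frame-to-frame 4x4 rigid transforms into
    absolute poses; small per-frame errors compound into drift."""
    pose = np.eye(4)
    poses = [pose.copy()]
    for t in rel_transforms:
        pose = pose @ t          # accumulate motion frame by frame
        poses.append(pose.copy())
    return poses

# Hypothetical scan: 100 frames, each truly translated 1 mm along z,
# but estimated with a small constant bias (1.01 mm per frame).
true_step = np.eye(4); true_step[2, 3] = 1.00
est_step = np.eye(4);  est_step[2, 3] = 1.01
true_end = compose_trajectory([true_step] * 100)[-1]
est_end = compose_trajectory([est_step] * 100)[-1]
drift = abs(est_end[2, 3] - true_end[2, 3])
```

Here a 1 % per-frame translation bias grows into a 1 mm endpoint error over 100 frames, which is why the consistency constraints and self-supervision above target cumulative error explicitly.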

Non-invasive prediction of nuclear grade in renal cell carcinoma using CT-Based radiomics: a systematic review and meta-analysis.

Salimi M, Hajikarimloo B, Vadipour P, Abdolizadeh A, Fayedeh F, Seifi S

PubMed · Jun 11, 2025
Renal cell carcinoma (RCC) represents the most prevalent malignant neoplasm of the kidney, with a rising global incidence. Tumor nuclear grade is a crucial prognostic factor guiding treatment decisions, but current histopathological grading via biopsy is invasive and prone to sampling errors. This study aims to assess the diagnostic performance and quality of CT-based radiomics for preoperatively predicting RCC nuclear grade. A comprehensive search was conducted across PubMed, Scopus, Embase, and Web of Science to identify relevant studies through 19 April 2025. Quality was assessed using the QUADAS-2 and METRICS tools. A bivariate random-effects meta-analysis was performed to evaluate model performance, including sensitivity, specificity, and area under the curve (AUC). Results from separate validation cohorts were pooled, and clinical and combined models were analyzed in distinct analyses. A total of 26 studies comprising 1993 individuals in 10 external and 16 internal validation cohorts were included. Meta-analysis of radiomics models showed a pooled AUC of 0.88, sensitivity of 0.78, and specificity of 0.82. Clinical and combined (clinical-radiomics) models showed AUCs of 0.73 and 0.86, respectively. QUADAS-2 revealed significant risk of bias in the Index Test and Flow and Timing domains. METRICS scores ranged from 49.7 % to 88.4 %, with an average of 66.65 %, indicating overall good quality, though gaps in some aspects of study methodology were identified. This study suggests that radiomics models show great potential and diagnostic accuracy for non-invasive preoperative nuclear grading of RCC. However, challenges related to generalizability and clinical applicability remain; further research with standardized methodologies, external validation, and larger cohorts is needed to enhance reliability and integration into routine clinical practice.
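For intuition, pooling per-study proportions such as sensitivity is typically done on the logit scale with inverse-variance weights. The sketch below is a simplified fixed-effect stand-in (the study used a bivariate random-effects model, which additionally estimates between-study variance and the sensitivity-specificity correlation); all study values are hypothetical:

```python
import math

def pool_logit(props, ns):
    """Fixed-effect inverse-variance pooling of proportions on the
    logit scale -- a simplified stand-in for the bivariate
    random-effects model used in diagnostic meta-analyses."""
    logits, weights = [], []
    for p, n in zip(props, ns):
        # variance of logit(p) for a binomial proportion, 0 < p < 1
        var = 1.0 / (n * p * (1 - p))
        logits.append(math.log(p / (1 - p)))
        weights.append(1.0 / var)
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))  # back-transform to a proportion

# Hypothetical per-study sensitivities and validation-cohort sizes
sens = pool_logit([0.75, 0.80, 0.78], [120, 200, 90])
```

Larger, more balanced cohorts get more weight, so the pooled estimate lands near the precision-weighted average of the inputs.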

Implementation of biomedical segmentation for brain tumor utilizing an adapted U-net model.

Alkhalid FF, Salih NZ

PubMed · Jun 11, 2025
Magnetic resonance imaging (MRI) is a medical procedure that uses radiofrequency signals within a strong magnetic field to produce images that provide more information than typical scans. Diagnosing brain tumors from MRI is difficult because of the wide range of tumor shapes, locations, and visual features, so a universal, automated system is required to handle this task. Among deep learning methods, the U-Net architecture is the most widely used for diagnostic medical images, and attention-based U-Net variants are particularly effective for medical image segmentation across various modalities. The self-attention structures used in the U-Net design allow fast global context aggregation and better feature representation. This research aims to study the progress of U-Net designs and show how they improve the performance of brain tumor segmentation. We investigated three U-Net designs (standard U-Net, Attention U-Net, and self-attention U-Net), each trained for five epochs, to obtain the final segmentation. An MRI dataset of 3064 images from the Kaggle website is used to give a more comprehensive overview. We also offer a comparison with several studies based on U-Net structures to illustrate the evolution of this network from an accuracy standpoint. The self-attention U-Net demonstrated superior performance compared with other studies because self-attention can enhance segmentation quality, particularly for unclear structures, by concentrating on the most significant regions. Four main metrics are reported: a training loss of 5.03 %, a validation loss of 4.82 %, a validation accuracy of 98.49 %, and a training accuracy of 98.45 %.
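The attention mechanism credited above can be sketched without a deep-learning framework: an additive attention gate computes a per-pixel mask from the decoder's gating signal and uses it to re-weight the skip-connection features. A numpy sketch with random weights (a generic Attention U-Net-style gate, not the paper's exact model):

```python
import numpy as np

def attention_gate(skip, gate, w_x, w_g, psi):
    """Additive attention gate (numpy sketch): skip-connection
    features are re-weighted by a mask derived from the decoder's
    gating signal, suppressing irrelevant regions."""
    def relu(a): return np.maximum(a, 0.0)
    def sigmoid(a): return 1.0 / (1.0 + np.exp(-a))
    # project both inputs to a shared intermediate space, per pixel
    q = relu(skip @ w_x + gate @ w_g)
    alpha = sigmoid(q @ psi)          # attention coefficients in (0, 1)
    return skip * alpha, alpha        # gated features, attention map

rng = np.random.default_rng(0)
skip = rng.normal(size=(8, 8, 4))    # H x W x C encoder features
gate = rng.normal(size=(8, 8, 4))    # upsampled decoder features
w_x = rng.normal(size=(4, 4)); w_g = rng.normal(size=(4, 4))
psi = rng.normal(size=(4, 1))
out, alpha = attention_gate(skip, gate, w_x, w_g, psi)
```

In a real network the projections are learned 1x1 convolutions; here plain matrix products stand in to show the gating arithmetic.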

Towards more reliable prostate cancer detection: Incorporating clinical data and uncertainty in MRI deep learning.

Taguelmimt K, Andrade-Miranda G, Harb H, Thanh TT, Dang HP, Malavaud B, Bert J

PubMed · Jun 11, 2025
Prostate cancer (PCa) is one of the most common cancers among men, and artificial intelligence (AI) is emerging as a promising tool to enhance its diagnosis. This work proposes a classification approach for PCa cases using deep learning techniques. We conducted a comparison between unimodal models based either on biparametric magnetic resonance imaging (bpMRI) or clinical data (such as prostate-specific antigen levels, prostate volume, and age). We also introduced a bimodal model that simultaneously integrates imaging and clinical data to address the limitations of unimodal approaches. Furthermore, we propose a framework that not only detects the presence of PCa but also evaluates the uncertainty associated with the predictions. This approach makes it possible to identify highly confident predictions and distinguish them from those characterized by uncertainty, thereby enhancing the reliability and applicability of automated medical decisions in clinical practice. The results show that the bimodal model significantly improves performance, with an area under the curve (AUC) reaching 0.82±0.03 and a sensitivity of 0.73±0.04, while maintaining high specificity. Uncertainty analysis revealed that the bimodal model produces more confident predictions, with an uncertainty accuracy of 0.85, surpassing the imaging-only model (0.71). This increase in reliability is crucial in a clinical context, where precise and dependable diagnostic decisions are essential for patient care. The integration of clinical data with imaging data in a bimodal model not only improves diagnostic performance but also strengthens the reliability of predictions, making this approach particularly suitable for clinical use.
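The confident/uncertain triage described above can be implemented with any scalar uncertainty measure; a common choice is predictive entropy. A minimal sketch (the 0.5 nat threshold is an assumption, not the paper's criterion):

```python
import math

def predictive_entropy(p):
    """Binary predictive entropy in nats: 0 for certain predictions,
    ln(2) ~ 0.693 at maximum uncertainty (p = 0.5)."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

def triage(probs, threshold=0.5):
    """Split case indices into confident vs. uncertain by entropy
    (a generic scheme, not the paper's exact criterion)."""
    confident, uncertain = [], []
    for i, p in enumerate(probs):
        (confident if predictive_entropy(p) < threshold else uncertain).append(i)
    return confident, uncertain

# Hypothetical predicted PCa probabilities for four cases
conf, unc = triage([0.97, 0.52, 0.08, 0.61])
```

Cases near 0 or 1 are routed to automated reporting; mid-range cases are flagged for radiologist review, which is the workflow benefit the abstract argues for.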

AI-based radiomic features predict outcomes and the added benefit of chemoimmunotherapy over chemotherapy in extensive stage small cell lung cancer: A Multi-institutional study.

Khorrami M, Mutha P, Barrera C, Viswanathan VS, Ardeshir-Larijani F, Jain P, Higgins K, Madabhushi A

PubMed · Jun 11, 2025
Small cell lung cancer (SCLC) is aggressive with poor survival outcomes, and most patients develop resistance to chemotherapy. No predictive biomarkers currently guide therapy. This study evaluates radiomic features to predict PFS and OS in limited-stage SCLC (LS-SCLC) and assesses PFS, OS, and the added benefit of chemoimmunotherapy (CHIO) in extensive-stage SCLC (ES-SCLC). A total of 660 SCLC patients (470 ES-SCLC, 190 LS-SCLC) from three sites were analyzed. LS-SCLC patients received chemotherapy and radiation, while ES-SCLC patients received either chemotherapy alone or chemoimmunotherapy. Radiomic and quantitative vasculature tortuosity features were extracted from CT scans. A LASSO-Cox regression model was used to construct the ES-Risk-Score (ESRS) and LS-Risk-Score (LSRS). ESRS was associated with PFS in training (HR = 1.54, adj. P = .0013) and validation sets (HR = 1.32, adj. P = .0001; HR = 2.4, adj. P = .0073) and with OS in training (HR = 1.37, adj. P = .0054) and validation sets (HR = 1.35, adj. P < .0006; HR = 1.6, adj. P < .0085) in ES-SCLC patients treated with chemotherapy. High-risk patients had improved PFS (HR = 0.68, adj. P < .001) and OS (HR = 0.78, adj. P = .026) with chemoimmunotherapy. LSRS was associated with PFS in training and validation sets (HR = 1.9, adj. P = .007; HR = 1.4, adj. P = .0098; HR = 2.1, adj. P = .028) in LS-SCLC patients receiving chemoradiation. Radiomics is prognostic for PFS and OS and predicts chemoimmunotherapy benefit in high-risk ES-SCLC patients.
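At inference time, a LASSO-Cox signature reduces to a linear risk score that is then dichotomized (commonly at the median) into high- and low-risk groups. A sketch with hypothetical coefficients and feature values (the study's actual features and weights are not given in the abstract):

```python
def risk_score(features, coefs):
    """Linear radiomic risk score: inner product of the selected
    features with their LASSO-Cox coefficients (illustrative
    values; not the published ESRS/LSRS weights)."""
    return sum(f * c for f, c in zip(features, coefs))

def stratify(scores):
    """Dichotomize patients at the median score into high/low risk,
    as is common when applying such signatures."""
    cutoff = sorted(scores)[len(scores) // 2]
    return ["high" if s >= cutoff else "low" for s in scores]

coefs = [0.8, -0.3, 0.5]                              # hypothetical weights
patients = [[1.2, 0.4, 2.0], [0.1, 1.5, 0.2], [2.0, 0.2, 1.1]]
scores = [risk_score(p, coefs) for p in patients]
groups = stratify(scores)
```

The hazard ratios quoted above then come from fitting Cox models with this score (or its dichotomized group) as the covariate.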

Automated Segmentation of Thoracic Aortic Lumen and Vessel Wall on 3D Bright- and Black-Blood MRI using nnU-Net.

Cesario M, Littlewood SJ, Nadel J, Fletcher TJ, Fotaki A, Castillo-Passi C, Hajhosseiny R, Pouliopoulos J, Jabbour A, Olivero R, Rodríguez-Palomares J, Kooi ME, Prieto C, Botnar RM

PubMed · Jun 11, 2025
Magnetic resonance angiography (MRA) is an important tool for aortic assessment in several cardiovascular diseases. Assessment of MRA images relies on manual segmentation, a time-intensive process that is subject to operator variability. We aimed to optimize and validate two deep-learning models for automatic segmentation of the aortic lumen and vessel wall in high-resolution ECG-triggered free-breathing respiratory motion-corrected 3D bright- and black-blood MRA images. Manual segmentation, serving as the ground truth, was performed on 25 bright-blood and 15 black-blood 3D MRA image sets acquired with the iT2PrepIR-BOOST sequence (1.5T) in thoracic aortopathy patients. Training was performed with nnU-Net for bright-blood (lumen) and black-blood image sets (lumen and vessel wall), using a 70:20:10 training:validation:testing split. Inference was run on datasets (single vendor) from different centres (UK, Spain, and Australia), sequences (iT2PrepIR-BOOST, T2-prepared CMRA, and TWIST MRA), acquired resolutions (0.9 mm³ to 3 mm³), and field strengths (0.55T, 1.5T, and 3T). Predictive measurements comprised the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU). Postprocessing (3D Slicer) included centreline extraction, diameter measurement, and curved planar reformatting (CPR). The optimal configuration was the 3D U-Net. Bright-blood segmentation at 1.5T on iT2PrepIR-BOOST datasets (1.3 and 1.8 mm³) and 3D CMRA datasets (0.9 mm³) resulted in DSC ≥ 0.96 and IoU ≥ 0.92. For bright-blood segmentation on 3D CMRA at 0.55T, the nnU-Net achieved DSC and IoU scores of 0.93 and 0.88 at 1.5 mm³, and 0.68 and 0.52 at 3.0 mm³, respectively. DSC and IoU scores of 0.89 and 0.82 were obtained for CMRA image sets (1 mm³) at 1.5T (Barcelona dataset). DSC and IoU scores of the BRnnUNet model were 0.90 and 0.82, respectively, for the contrast-enhanced dataset (TWIST MRA).
Lumen segmentation on black-blood 1.5T iT2PrepIR-BOOST image sets achieved DSC ≥ 0.95 and IoU ≥ 0.90, and vessel wall segmentation resulted in DSC ≥ 0.80 and IoU ≥ 0.67. Automated centreline tracking, diameter measurement, and CPR were successfully implemented in all subjects. Automated aortic lumen and wall segmentation on 3D bright- and black-blood image sets demonstrated excellent agreement with the ground truth. This technique enables fast and comprehensive assessment of aortic morphology with great potential for future clinical application in various cardiovascular diseases.
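The two overlap metrics reported throughout (DSC and IoU) are simple set statistics on binary masks, related by IoU = DSC / (2 - DSC). A small numpy example:

```python
import numpy as np

def dice_and_iou(pred, truth):
    """Dice Similarity Coefficient and Intersection over Union
    for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    iou = inter / union
    return dice, iou

# Toy 10x10 masks: two 6x6 squares offset by one pixel
truth = np.zeros((10, 10), dtype=int); truth[2:8, 2:8] = 1   # 36 px
pred = np.zeros((10, 10), dtype=int); pred[3:9, 3:9] = 1     # 36 px
dice, iou = dice_and_iou(pred, truth)   # overlap is 5x5 = 25 px
# dice = 50/72 ~ 0.694, iou = 25/47 ~ 0.532
```

Note that IoU is always the stricter of the two, which is why the paper's IoU thresholds sit below the matching DSC values.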

RCMIX model based on pre-treatment MRI imaging predicts T-downstage in MRI-cT4 stage rectal cancer.

Bai F, Liao L, Tang Y, Wu Y, Wang Z, Zhao H, Huang J, Wang X, Ding P, Wu X, Cai Z

PubMed · Jun 11, 2025
Neoadjuvant therapy (NAT) is the standard treatment strategy for MRI-defined cT4 rectal cancer. Predicting tumor regression can guide the resection plane to some extent. Here, we collected pre-treatment MRI scans of 363 cT4 rectal cancer patients who received NAT and radical surgery at three hospitals: Center 1 (n = 205), Center 2 (n = 109), and Center 3 (n = 52). We propose a machine learning model named RCMIX, which incorporates a multilayer perceptron algorithm based on 19 pre-treatment MRI radiomic features and 2 clinical features in cT4 rectal cancer patients receiving NAT. The model was trained on 205 cT4 rectal cancer patients, achieving an AUC of 0.903 (95% confidence interval, 0.861-0.944) in predicting T-downstage. It also achieved AUCs of 0.787 (0.699-0.874) and 0.773 (0.646-0.901) in two independent test cohorts, respectively. cT4 rectal cancer patients predicted as Well T-downstage by the RCMIX model had significantly better disease-free survival than those predicted as Poor T-downstage. Our study suggests that the RCMIX model demonstrates satisfactory performance in predicting T-downstage after NAT for cT4 rectal cancer patients, which may provide critical insights to improve surgical strategies.
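The AUCs quoted above can be computed without an ROC sweep: AUC equals the Mann-Whitney probability that a randomly chosen responder is scored above a randomly chosen non-responder. A minimal sketch with hypothetical predictions:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of positive/negative pairs in which the positive
    is scored higher (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted T-downstage probabilities for six patients
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
a = auc(labels, scores)   # 8 of 9 pairs correctly ordered -> 8/9
```

This pairwise reading also explains why AUC is insensitive to the choice of decision threshold.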

Evaluation of Semi-Automated versus Fully Automated Technologies for Computed Tomography Scalable Body Composition Analyses in Patients with Severe Acute Respiratory Syndrome Coronavirus-2.

Wozniak A, O'Connor P, Seigal J, Vasilopoulos V, Beg MF, Popuri K, Joyce C, Sheean P

PubMed · Jun 11, 2025
Fully automated, artificial intelligence (AI)-based software has recently become available for scalable body composition analysis. Prior to broad application in the clinical arena, validation studies are needed. Our goal was to compare the results of fully automated, AI-based software with semi-automated software in a sample of hospitalized patients. A diverse group of patients with Coronavirus-2 (COVID-19) and evaluable computed tomography (CT) images were included in this retrospective cohort, and multiple body composition measures procured by the two software tools were compared. Bland-Altman analyses and correlation coefficients were used to calculate average bias and trend of bias for skeletal muscle (SM), visceral adipose tissue (VAT), subcutaneous adipose tissue (SAT), intermuscular adipose tissue (IMAT), and total adipose tissue (TAT, the sum of SAT, VAT, and IMAT). A total of 141 patients (average (standard deviation (SD)) age of 58.2 (18.9); 61% male; 31% White Non-Hispanic, 31% Black Non-Hispanic, and 33% Hispanic) contributed to the analysis. Average bias (mean ± SD) was small (relative to the SD) and negative for SM (-3.79 cm² ± 7.56 cm²) and SAT (-7.06 cm² ± 19.77 cm²), and small and positive for VAT (2.29 cm² ± 15.54 cm²). A large negative bias was observed for IMAT (-7.77 cm² ± 5.09 cm²), where the fully automated software underestimated intermuscular tissue quantity relative to the semi-automated software. The discrepancy in IMAT calculation was not uniform across its range, given a correlation coefficient of -0.625; as average IMAT increased, the bias (underestimation by the fully automated software) grew. When compared to semi-automated software, fully automated, AI-based software provides consistent findings for key CT body composition measures (SM, SAT, VAT, TAT).
While our findings support good overall agreement as evidenced by small biases and limited outliers, additional studies are needed in other clinical populations to further support validity and advanced precision, especially in the context of body composition and malnutrition assessment.
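The average bias and its spread come from a Bland-Altman analysis: paired differences between the two tools, summarized as mean bias and 95 % limits of agreement. A sketch with hypothetical paired skeletal-muscle areas (not the study's data):

```python
import numpy as np

def bland_altman(auto, semi):
    """Bland-Altman agreement statistics: mean bias (auto - semi)
    and 95% limits of agreement (bias +/- 1.96 SD of differences)."""
    diffs = np.asarray(auto, float) - np.asarray(semi, float)
    bias = diffs.mean()
    sd = diffs.std(ddof=1)                     # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired SM areas (cm^2) from the two tools
auto = [150.2, 142.5, 160.0, 138.8, 155.1]
semi = [153.0, 146.0, 163.5, 142.0, 158.0]
bias, (lo, hi) = bland_altman(auto, semi)      # negative bias: auto reads lower
```

A bias that is small relative to its SD, with most points inside the limits, is the pattern the study reports for SM, SAT, and VAT; the IMAT trend corresponds to the differences correlating with the measurement magnitude.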

Patient perspectives on AI in radiology: Insights from the United Arab Emirates.

El-Sayed MZ, Rawashdeh M, Moossa A, Atfah M, Prajna B, Ali MA

PubMed · Jun 11, 2025
Artificial intelligence (AI) enhances diagnostic accuracy, efficiency, and patient outcomes in radiology. Patient acceptance is essential for successful integration. This study examines patient perspectives on AI in radiology within the UAE, focusing on their knowledge, attitudes, and perceived barriers. Understanding these factors can address concerns, improve trust, and guide patient-centered AI implementation. The findings aim to support effective AI adoption in healthcare. A cross-sectional study involved 205 participants undergoing radiological imaging in the UAE. Data were collected through an online questionnaire, developed based on a literature review, and pre-tested for reliability and validity. Non-probability sampling methods, including convenience and snowball sampling, were employed. The questionnaire assessed participants' knowledge, attitudes, and perceived barriers regarding AI in radiology. Data were analyzed, and categorical variables were expressed as frequencies and percentages. Most participants (89.8 %) believed AI could improve diagnostic accuracy, and 87.8 % acknowledged its role in prioritizing urgent cases. However, only 22 % had direct experience with AI in radiology. While 81 % expressed comfort with AI-based technology, concerns about data security (80.5 %), lack of empathy in AI systems (82.9 %), and insufficient information about AI (85.8 %) were significant barriers. Additionally, 87.3 % of participants were concerned about the cost of AI implementation. Despite these concerns, 86.3 % believed AI could improve the quality of radiological services, and 83.9 % were satisfied with its potential applications. UAE patients generally support AI in radiology, recognizing its potential for improved diagnostic accuracy. However, concerns about data security, empathy, and understanding of AI technologies necessitate improved patient education, transparent communication, and regulatory frameworks to foster trust and acceptance.

Automated Whole-Brain Focal Cortical Dysplasia Detection Using MR Fingerprinting With Deep Learning.

Ding Z, Morris S, Hu S, Su TY, Choi JY, Blümcke I, Wang X, Sakaie K, Murakami H, Alexopoulos AV, Jones SE, Najm IM, Ma D, Wang ZI

PubMed · Jun 10, 2025
Focal cortical dysplasia (FCD) is a common pathology in pharmacoresistant focal epilepsy, yet detection of FCD on clinical MRI is challenging. Magnetic resonance fingerprinting (MRF) is a novel quantitative imaging technique providing fast and reliable tissue property measurements. The aim of this study was to develop an MRF-based deep-learning (DL) framework for whole-brain FCD detection. We included patients with pharmacoresistant focal epilepsy and pathologically/radiologically diagnosed FCD, as well as age- and sex-matched healthy controls (HCs). All participants underwent 3D whole-brain MRF and clinical MRI scans. T1, T2, gray matter (GM), and white matter (WM) tissue fraction maps were reconstructed from a dictionary-matching algorithm based on the MRF acquisition. A 3D ROI was manually created for each lesion. All MRF maps and lesion labels were registered to the Montreal Neurological Institute space. Mean and SD T1 and T2 maps were calculated voxel-wise across the HC data. T1 and T2 <i>z</i>-score maps for each patient were generated by subtracting the mean HC map and dividing by the SD HC map. MRF-based morphometric maps were produced in the same manner as in the morphometric analysis program (MAP), based on MRF GM and WM maps. A no-new U-Net (nnU-Net) model was trained using various input combinations from clinical MRI and MRF, with performance evaluated through leave-one-patient-out cross-validation to assess the impact of different input types on model effectiveness. We included 40 patients with FCD (mean age 28.1 years, 47.5% female; 11 with FCD IIa, 14 with IIb, 12 with mMCD, 3 with MOGHE) and 67 HCs. The DL model with optimal performance used all MRF-based inputs, including MRF-synthesized T1w, T1z, and T2z maps; tissue fraction maps; and morphometric maps. The patient-level sensitivity was 80% with an average of 1.7 false positives (FPs) per patient.
Sensitivity was consistent across subtypes, lobar locations, and lesional/nonlesional clinical MRI. Models using clinical images showed lower sensitivity and higher FPs. The MRF-DL model also outperformed the established MAP18 pipeline in sensitivity, FPs, and lesion label overlap. The MRF-DL framework demonstrated efficacy for whole-brain FCD detection. Multiparametric MRF features from a single scan offer promising inputs for developing a deep-learning tool capable of detecting subtle epileptic lesions.
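The voxel-wise z-scoring step is straightforward to reproduce: subtract the healthy-control mean map and divide by the HC standard-deviation map. A numpy sketch with simulated maps (sizes and values are illustrative, not the study's data):

```python
import numpy as np

def zscore_map(patient, hc_mean, hc_sd, eps=1e-6):
    """Voxel-wise z-score map: how many HC standard deviations a
    patient's quantitative map deviates from the control mean."""
    return (patient - hc_mean) / (hc_sd + eps)

rng = np.random.default_rng(1)
hc = rng.normal(1000.0, 50.0, size=(20, 8, 8, 8))   # 20 HC T1 maps (ms)
hc_mean, hc_sd = hc.mean(axis=0), hc.std(axis=0)    # voxel-wise statistics
patient = hc_mean.copy()
patient[2, 2, 2] += 4 * hc_sd[2, 2, 2]              # simulated subtle lesion
z = zscore_map(patient, hc_mean, hc_sd)
```

The lesion voxel stands out at roughly z = 4 while normal tissue sits near zero, which is what makes such maps useful network inputs for subtle lesions.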