Page 6 of 126 (1,257 results)

Arthroscopy-validated diagnostic performance of sub-5-min deep learning super-resolution 3T knee MRI in children and adolescents.

Vosshenrich J, Breit HC, Donners R, Obmann MM, Harder D, Ahlawat S, Walter SS, Serfaty A, Cantarelli Rodrigues T, Recht M, Stern SE, Fritz J

PubMed · Jun 10, 2025
This study aims to determine the diagnostic performance of sub-5-min combined sixfold parallel imaging (PIx3)-simultaneous multislice (SMSx2)-accelerated deep learning (DL) super-resolution 3T knee MRI in children and adolescents. Children with painful knee conditions who underwent PIx3-SMSx2-accelerated DL super-resolution 3T knee MRI and arthroscopy between October 2022 and December 2023 were retrospectively included. Nine fellowship-trained musculoskeletal radiologists independently scored the MRI studies for image quality and the presence of artifacts (Likert scales, range: 1 = very bad/severe, 5 = very good/absent), as well as structural abnormalities. Interreader agreement and diagnostic performance testing were performed. Forty-four children (mean age: 15 ± 2 years; range: 9-17 years; 24 boys) who underwent knee MRI and arthroscopic surgery within 22 days (range: 2-133 days) were evaluated. Overall image quality was very good (median rating: 5 [IQR: 4-5]). Motion artifacts (5 [5-5]) and image noise (5 [4-5]) were absent. Arthroscopy-verified abnormalities were detected with good or better interreader agreement (κ ≥ 0.74). Sensitivity, specificity, accuracy, and AUC values were 100%, 84%, 93%, and 0.92, respectively, for anterior cruciate ligament tears; 71%, 97%, 93%, and 0.84 for medial meniscus tears; 65%, 100%, 86%, and 0.82 for lateral meniscus tears; 100%, 100%, 100%, and 1.00 for discoid lateral menisci; 100%, 95%, 96%, and 0.98 for medial patellofemoral ligament tears; and 55%, 100%, 98%, and 0.77 for articular cartilage defects. Clinical sub-5-min PIx3-SMSx2-accelerated DL super-resolution 3T knee MRI provides excellent image quality and high diagnostic performance for diagnosing internal derangement in children and adolescents.
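The per-abnormality sensitivity, specificity, and accuracy figures above follow directly from confusion-matrix counts against the arthroscopic reference standard. A minimal sketch (the counts below are hypothetical, not taken from the study):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)      # true-positive rate
    specificity = tn / (tn + fp)      # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for one reader grading ACL tears in 44 knees
sens, spec, acc = diagnostic_metrics(tp=18, fp=4, tn=21, fn=1)
print(round(sens, 2), round(spec, 2), round(acc, 2))  # → 0.95 0.84 0.89
```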

Evaluation of artificial-intelligence-based liver segmentation and its application for longitudinal liver volume measurement.

Kimura R, Hirata K, Tsuneta S, Takenaka J, Watanabe S, Abo D, Kudo K

PubMed · Jun 10, 2025
Accurate liver-volume measurements from CT scans are essential for treatment planning, particularly in liver resection cases, to avoid postoperative liver failure. However, manual segmentation is time-consuming and prone to variability. Advancements in artificial intelligence (AI), specifically convolutional neural networks, have enhanced liver segmentation accuracy. We aimed to identify the optimal CT phases for AI-based liver volume estimation and to apply the model to track liver volume changes over time. We also evaluated temporal changes in liver volume in participants without liver disease. In this retrospective, single-center study, we assessed the performance of a previously reported open-source AI-based liver segmentation model using non-contrast and dynamic CT phases. The accuracy of the model was compared with that of expert radiologists. The Dice similarity coefficient (DSC) was calculated across various CT phases, including arterial, portal venous, and non-contrast, to validate the model. The model was then applied to a longitudinal study involving 39 patients without liver disease (527 CT scans) to examine age-related liver volume changes over 5 to 20 years. The model demonstrated high accuracy across all phases compared with manual segmentation. Among the CT phases, the highest DSC, 0.988 ± 0.010, was obtained in the arterial phase. The intraclass correlation coefficients for liver volume were also high, exceeding 0.9 for contrast-enhanced phases and 0.8 for non-contrast CT. In the longitudinal study, the model indicated an annual decrease in liver volume of 0.95%. This model provides high accuracy in liver segmentation across various CT phases and offers insights into age-related liver volume reduction. Measuring changes in liver volume may help with the early detection of diseases and the understanding of pathophysiology.
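The Dice similarity coefficient reported above (e.g., 0.988 in the arterial phase) measures voxel overlap between two binary masks. A minimal NumPy sketch on a toy example (not the study's model or data):

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 1-D arrays standing in for a manual and an AI liver mask
manual = np.array([0, 1, 1, 1, 0])
ai     = np.array([0, 1, 1, 0, 0])
print(dice_coefficient(manual, ai))  # → 0.8
```

The same function applies unchanged to 3-D CT mask volumes, since the sums run over all voxels.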

Uncovering Image-Driven Subtypes with Distinct Pathology and Clinical Course in Autopsy-Confirmed Four Repeat Tauopathies.

Satoh R, Sekiya H, Ali F, Clark HM, Utianski RL, Duffy JR, Machulda MM, Dickson DW, Josephs KA, Whitwell JL

PubMed · Jun 10, 2025
The four-repeat (4R) tauopathies are a group of neurodegenerative diseases, including progressive supranuclear palsy (PSP), corticobasal degeneration (CBD), and globular glial tauopathy (GGT). This study aimed to characterize spatiotemporal atrophy progression using structural magnetic resonance imaging (MRI) and to examine its relationship with clinical course and neuropathology in a cohort of autopsy-confirmed 4R tauopathies. The study included 85 autopsied patients (54 with PSP, 28 with CBD, and 3 with GGT) who underwent multiple 3T MRI scans, as well as neuropsychological, neurological, and speech/language examinations, and standardized postmortem neuropathological evaluations. An unsupervised machine-learning algorithm, Subtype and Stage Inference (SuStaIn), was applied to the cross-sectional brain volumes to estimate spatiotemporal atrophy patterns and data-driven subtypes and stages in each patient. The relationships among estimated subtypes, pathological diagnoses, and longitudinal changes in clinical testing were examined. The SuStaIn algorithm identified 2 distinct subtypes: (1) the subcortical subtype, in which atrophy progresses from the midbrain to the cortex, and (2) the cortical subtype, in which atrophy progresses from the frontal cortex to the subcortical regions. The subcortical subtype was more associated with typical PSP, whereas the cortical subtype was more associated with atypical PSP with a cortical distribution of pathology and CBD (p < 0.001). The cortical subtype had a faster rate of change on the PSP Rating Scale than the subcortical subtype (p < 0.05). SuStaIn analysis revealed 2 MRI-driven subtypes with distinct spatiotemporal atrophy patterns, clinical courses, and neuropathology. Our findings contribute to a comprehensive and improved understanding of disease progression and its relationship to tau pathology in 4R tauopathies. ANN NEUROL 2025.
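SuStaIn itself jointly fits the subtypes and their event orderings from data (the pySuStaIn package). Purely as a toy illustration of the staged-ordering idea behind it, with invented regions, orderings, and thresholds rather than anything fitted in the study:

```python
import numpy as np

REGIONS = ["midbrain", "cerebellum", "temporal", "frontal"]

# Two hypothetical event orderings (indices into REGIONS become abnormal in sequence)
SUBTYPES = {
    "subcortical": [0, 1, 2, 3],   # midbrain first, cortex last
    "cortical":    [3, 2, 1, 0],   # frontal cortex first, midbrain last
}

def stage_under_ordering(z, ordering, thresh=1.0):
    """Stage = length of the longest prefix of the ordering whose regions
    all show abnormal atrophy (z-score above threshold)."""
    stage = 0
    for region in ordering:
        if z[region] > thresh:
            stage += 1
        else:
            break
    return stage

def assign_subtype_and_stage(z):
    """Assign the ordering whose prefix explains the most abnormal regions."""
    stages = {name: stage_under_ordering(z, order) for name, order in SUBTYPES.items()}
    best = max(stages, key=stages.get)
    return best, stages[best]

# Patient with atrophy confined to midbrain and cerebellum (z-scores per region)
z = np.array([2.5, 1.8, 0.3, 0.1])
print(assign_subtype_and_stage(z))  # → ('subcortical', 2)
```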

Multivariate brain morphological patterns across mood disorders: key roles of frontotemporal and cerebellar areas.

Kandilarova S, Maggioni E, Squarcina L, Najar D, Homadi M, Tassi E, Stoyanov D, Brambilla P

PubMed · Jun 10, 2025
Differentiating major depressive disorder (MDD) from bipolar disorder (BD) remains a significant clinical challenge, as both disorders exhibit overlapping symptoms but require distinct treatment approaches. Advances in voxel-based morphometry and surface-based morphometry have facilitated the identification of structural brain abnormalities that may serve as diagnostic biomarkers. This study aimed to explore the relationships between brain morphological features, such as grey matter volume (GMV) and cortical thickness (CT), and demographic and clinical variables in patients with MDD and BD and healthy controls (HC) using multivariate analysis methods. A total of 263 participants, including 120 HC, 95 patients with MDD and 48 patients with BD, underwent T1-weighted MRI. GMV and CT were computed for standardised brain regions, followed by multivariate partial least squares (PLS) regression to assess associations with demographic and diagnostic variables. Reductions in frontotemporal CT were observed in MDD and BD compared with HC, but distinct trends between BD and MDD were also detected for the CT of selective temporal, frontal and parietal regions. Differential patterns in cerebellar GMV were also identified, with lobule CI larger in MDD and lobule CII larger in BD. Additionally, BD showed the same trend as ageing, with reductions in CT and in posterior cerebellar and striatal GMV. Depression severity showed a transdiagnostic link with reduced frontotemporal CT. This study highlights shared and distinct structural brain alterations in MDD and BD, emphasising the potential of neuroimaging biomarkers to enhance diagnostic accuracy. Accelerated cortical thinning and differential cerebellar changes in BD may serve as targets for future research and clinical interventions. Our findings underscore the value of objective neuroimaging markers in increasing the precision of mood disorder diagnoses, improving treatment outcomes.
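PLS regression, as used above, extracts components of the morphometric features that covary maximally with the clinical variables. A NumPy sketch of the first NIPALS component on synthetic data (the three features and the age effect below are illustrative assumptions, not the study's data):

```python
import numpy as np

def pls_first_component(X, y):
    """First PLS component via NIPALS: unit weight vector maximizing cov(Xw, y)."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    w = Xc.T @ yc
    w /= np.linalg.norm(w)
    t = Xc @ w          # scores: projection of each subject onto the component
    return w, t

rng = np.random.default_rng(0)
n = 50
age = rng.uniform(20, 70, n)
# Hypothetical cortical-thickness features: two age-related, one pure noise
X = np.column_stack([70 - age + rng.normal(0, 2, n),
                     65 - 0.5 * age + rng.normal(0, 2, n),
                     rng.normal(0, 2, n)])
w, t = pls_first_component(X, age)
print(np.round(np.abs(w), 2))  # the two age-related features dominate the weights
```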

Preoperative prediction model for benign and malignant gallbladder polyps on the basis of machine-learning algorithms.

Zeng J, Hu W, Wang Y, Jiang Y, Peng J, Li J, Liu X, Zhang X, Tan B, Zhao D, Li K, Zhang S, Cao J, Qu C

PubMed · Jun 10, 2025
This study aimed to differentiate between benign and malignant gallbladder polyps preoperatively by developing a prediction model integrating preoperative transabdominal ultrasound and clinical features using machine-learning algorithms. A retrospective analysis was conducted on clinical and ultrasound data from 1,050 patients at 2 centers who underwent cholecystectomy for gallbladder polyps. Six machine-learning algorithms were used to develop preoperative models for predicting benign and malignant gallbladder polyps. Internal and external test cohorts were used to evaluate model performance. The Shapley Additive Explanations algorithm was used to understand feature importance. The main study cohort included 660 patients with benign polyps and 285 patients with malignant polyps, randomly divided 3:1 into stratified training and internal test cohorts. The external test cohort consisted of 73 benign and 32 malignant polyps. In the training cohort, the Shapley Additive Explanations algorithm, on the basis of variables selected by Least Absolute Shrinkage and Selection Operator regression and multivariate logistic regression, further identified 6 key predictive factors: polyp size, age, fibrinogen, carbohydrate antigen 19-9, presence of stones, and cholinesterase. Using these factors, 6 predictive models were developed. The random forest model outperformed the others, with areas under the curve of 0.963, 0.940, and 0.958 in the training, internal, and external test cohorts, respectively. Compared with previous studies, the random forest model demonstrated excellent clinical utility and predictive performance. In addition, the Shapley Additive Explanations algorithm was used to visualize feature importance, and an online calculation platform was developed. The random forest model, combining preoperative ultrasound and clinical features, accurately predicts benign and malignant gallbladder polyps, offering valuable guidance for clinical decision-making.
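The areas under the curve reported above can be computed from a model's predicted risk scores via the rank (Mann-Whitney) formulation: the AUC is the probability that a randomly chosen malignant case scores higher than a randomly chosen benign one. A NumPy sketch with hypothetical scores:

```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as P(random positive score > random negative score), ties counted half."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

# Hypothetical model risk scores for malignant vs. benign polyps
malignant = [0.9, 0.8, 0.7, 0.4]
benign = [0.3, 0.5, 0.2, 0.1]
print(auc_mann_whitney(malignant, benign))  # → 0.9375
```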

U<sub>2</sub>-Attention-Net: a deep learning automatic delineation model for parotid glands in head and neck cancer organs at risk on radiotherapy localization computed tomography images.

Wen X, Wang Y, Zhang D, Xiu Y, Sun L, Zhao B, Liu T, Zhang X, Fan J, Xu J, An T, Li W, Yang Y, Xing D

PubMed · Jun 10, 2025
This study aimed to develop a novel deep learning model, U<sub>2</sub>-Attention-Net (U<sub>2</sub>A-Net), for precise segmentation of parotid glands on radiotherapy localization CT images. CT images from 79 patients with head and neck cancer were selected, on which label maps were delineated by relevant practitioners to construct a dataset. The dataset was divided into a training set (n = 60), validation set (n = 6), and test set (n = 13), with the training set augmented. U<sub>2</sub>A-Net, divided into U<sub>2</sub>A-Net V<sub>1</sub> (sSE) and U<sub>2</sub>A-Net V<sub>2</sub> (cSE) based on different attention mechanisms, was evaluated for parotid gland segmentation using the DL loss function, with U-Net, Attention U-Net, DeepLabV3+, and TransUNet as comparison models. Segmentation was also performed using the GDL and GD-BCEL loss functions. Model performance was evaluated using the DSC, JSC, PPV, SE, HD, RVD, and VOE metrics. The quantitative results revealed that U<sub>2</sub>A-Net based on DL outperformed the comparative models. While U<sub>2</sub>A-Net V<sub>1</sub> had the highest PPV, U<sub>2</sub>A-Net V<sub>2</sub> demonstrated the best quantitative results in the other metrics. Qualitative results showed that U<sub>2</sub>A-Net's segmentations closely matched expert delineations, reducing both oversegmentation and undersegmentation, with U<sub>2</sub>A-Net V<sub>2</sub> being more effective. In the comparison of loss functions, U<sub>2</sub>A-Net V<sub>1</sub> performed best with GD-BCEL and U<sub>2</sub>A-Net V<sub>2</sub> with DL. The U<sub>2</sub>A-Net model significantly improved parotid gland segmentation on radiotherapy localization CT images. The cSE attention mechanism showed advantages with DL, while sSE performed better with GD-BCEL.
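The DL, GDL, and GD-BCEL objectives above are Dice-family segmentation losses. The abstract does not give their exact formulations, so the NumPy sketch below shows a generic soft Dice loss and one plausible Dice-plus-BCE blend; the alpha=0.5 weighting and the specific BCE combination are assumptions, not the paper's definitions:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2·Σ(p·g) / (Σp + Σg); pred in [0,1], target binary."""
    p, g = pred.ravel(), target.ravel()
    return 1.0 - (2.0 * (p * g).sum() + eps) / (p.sum() + g.sum() + eps)

def dice_bce_loss(pred, target, alpha=0.5, eps=1e-6):
    """Dice blended with binary cross-entropy: region overlap (Dice) plus
    per-voxel classification (BCE), weighted by an assumed alpha."""
    p = np.clip(pred.ravel(), eps, 1 - eps)
    g = target.ravel()
    bce = -(g * np.log(p) + (1 - g) * np.log(1 - p)).mean()
    return alpha * soft_dice_loss(pred, target, eps) + (1 - alpha) * bce

target = np.array([0.0, 1.0, 1.0, 0.0])
good = np.array([0.1, 0.9, 0.8, 0.2])   # prediction close to the label map
bad  = np.array([0.9, 0.1, 0.2, 0.8])   # prediction mostly inverted
print(soft_dice_loss(good, target), soft_dice_loss(bad, target))
```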

Empirical evaluation of artificial intelligence distillation techniques for ascertaining cancer outcomes from electronic health records.

Riaz IB, Naqvi SAA, Ashraf N, Harris GJ, Kehl KL

PubMed · Jun 10, 2025
Phenotypic information for cancer research is embedded in unstructured electronic health records (EHR), requiring effort to extract. Deep learning models can automate this but face scalability issues due to privacy concerns. We evaluated techniques for applying a teacher-student framework to extract longitudinal clinical outcomes from EHRs. We focused on the challenging task of ascertaining two cancer outcomes-overall response and progression according to Response Evaluation Criteria in Solid Tumors (RECIST)-from free-text radiology reports. Teacher models with hierarchical Transformer architecture were trained on data from Dana-Farber Cancer Institute (DFCI). These models labeled public datasets (MIMIC-IV, Wiki-text) and GPT-4-generated synthetic data. "Student" models were then trained to mimic the teachers' predictions. DFCI "teacher" models achieved high performance, and student models trained on MIMIC-IV data showed comparable results, demonstrating effective knowledge transfer. However, student models trained on Wiki-text and synthetic data performed worse, emphasizing the need for in-domain public datasets for model distillation.
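The teacher-student setup can be sketched in a few lines: a fixed "teacher" produces soft labels on a public corpus, and a "student" is trained only on those pseudo-labels, so no private training labels leave the institution. The linear models and synthetic features below are illustrative stand-ins for the hierarchical Transformers and radiology reports:

```python
import numpy as np

rng = np.random.default_rng(1)

def teacher_predict(X):
    """Stand-in for the trained 'teacher': a fixed linear scorer emitting
    soft labels (probability that a report indicates progression)."""
    w_true = np.array([2.0, -1.5])
    return 1 / (1 + np.exp(-(X @ w_true)))

def distill_student(X_public, n_steps=500, lr=0.5):
    """Train a 'student' logistic model to mimic teacher soft labels on a
    public, unlabeled corpus via cross-entropy gradient descent."""
    y_soft = teacher_predict(X_public)                  # pseudo-labels
    w = np.zeros(X_public.shape[1])
    for _ in range(n_steps):
        p = 1 / (1 + np.exp(-(X_public @ w)))
        w -= lr * X_public.T @ (p - y_soft) / len(X_public)
    return w

X_public = rng.normal(size=(200, 2))   # features of public-domain reports
w_student = distill_student(X_public)
agreement = np.mean((teacher_predict(X_public) > 0.5) ==
                    ((X_public @ w_student) > 0))
print(agreement)  # student reproduces the teacher's decisions on public data
```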

Uncertainty estimation for trust attribution to speed-of-sound reconstruction with variational networks.

Laguna S, Zhang L, Bezek CD, Farkas M, Schweizer D, Kubik-Huch RA, Goksel O

PubMed · Jun 10, 2025
Speed-of-sound (SoS) is a biomechanical characteristic of tissue, and its imaging can provide a promising biomarker for diagnosis. Reconstructing SoS images from ultrasound acquisitions can be cast as a limited-angle computed-tomography problem, with variational networks being a promising model-based deep learning solution. Some acquired data frames may, however, be corrupted by noise due to, e.g., motion, lack of contact, or acoustic shadows, which in turn degrades the resulting SoS reconstructions. We propose to use the uncertainty in SoS reconstructions to attribute trust to each individual acquired frame. Given multiple acquisitions, we then use uncertainty-based automatic selection among them, retrospectively, to improve diagnostic decisions. We investigate uncertainty estimation based on Monte Carlo Dropout and Bayesian Variational Inference. We assess our automatic frame selection method for the differential diagnosis of breast cancer, distinguishing between benign fibroadenoma and malignant carcinoma. We evaluate 21 lesions classified as BI-RADS 4, a category representing cases suspicious for malignancy. The most trustworthy frame among the four acquisitions of each lesion was identified using uncertainty-based criteria. Selecting a frame informed by uncertainty achieved an area under the curve of 76% and 80% for Monte Carlo Dropout and Bayesian Variational Inference, respectively, superior to all uncertainty-uninformed baselines, the best of which achieved 64%. A novel use of uncertainty estimation is proposed for selecting one of multiple data acquisitions for further processing and decision making.
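The frame-selection idea reduces to scoring each acquisition by the spread of its stochastic (e.g., MC-dropout) reconstructions and keeping the least uncertain one. A NumPy simulation of that selection step only, with invented toy SoS maps and noise levels rather than the study's variational-network outputs:

```python
import numpy as np

def frame_uncertainty(mc_recons):
    """Mean per-pixel standard deviation over T stochastic reconstructions
    of one frame: array of shape (T, H, W) -> scalar uncertainty."""
    return mc_recons.std(axis=0).mean()

def select_trusted_frame(frames_mc):
    """Pick the acquisition whose reconstructions vary least across passes."""
    scores = [frame_uncertainty(f) for f in frames_mc]
    return int(np.argmin(scores)), scores

rng = np.random.default_rng(2)
base = rng.normal(1500, 30, size=(8, 8))   # a toy SoS map (m/s)
# Four acquisitions of the same lesion; frame 2 is corrupted (much noisier passes)
noise_levels = [5.0, 4.0, 25.0, 6.0]
frames = [base + rng.normal(0, s, size=(16, 8, 8)) for s in noise_levels]
best, scores = select_trusted_frame(frames)
print(best)  # the corrupted frame (index 2) is never chosen
```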

Automated Diffusion Analysis for Non-Invasive Prediction of IDH Genotype in WHO Grade 2-3 Gliomas.

Wu J, Thust SC, Wastling SJ, Abdalla G, Benenati M, Maynard JA, Brandner S, Carrasco FP, Barkhof F

PubMed · Jun 10, 2025
Glioma molecular characterization is essential for risk stratification and treatment planning. Noninvasive imaging biomarkers such as apparent diffusion coefficient (ADC) values have shown potential for predicting glioma genotypes. However, manual segmentation of gliomas is time-consuming and operator-dependent. To address this limitation, we aimed to establish a single-sequence automatic ADC extraction pipeline using T2-weighted imaging to support glioma isocitrate dehydrogenase (IDH) genotyping. Glioma volumes from a hospital dataset (University College London Hospitals; n=247) were manually segmented on T2-weighted MRI scans using the ITK-Snap toolbox and co-registered to ADC maps using the FMRIB Linear Image Registration Tool in FSL, followed by ADC histogram extraction (Python). Separately, an nnUNet deep learning algorithm was trained to segment glioma volumes from T2w images only, using BraTS 2021 data (n=500; 80% training, 5% validation, and 15% test split). nnUNet was then applied to the University College London Hospitals (UCLH) data for segmentation and ADC read-outs. Univariable logistic regression was used to test the performance of manual and nnUNet-derived ADC metrics for IDH status prediction. Statistical equivalence was tested (paired two-sided t-test). nnUNet segmentation achieved a median Dice of 0.85 on BraTS data and 0.83 on UCLH data. For the best performing metric (rADCmean), the area under the receiver operating characteristic curve (AUC) for differentiating IDH-mutant from IDH-wildtype gliomas was 0.82 (95% CI: 0.78-0.88), compared with an AUC of 0.84 (95% CI: 0.77-0.89) for manual segmentation. For all ADC metrics, manually and nnUNet-extracted ADC values were statistically equivalent (p<0.01). nnUNet identified one area of glioma infiltration missed by human observers. In 0.8% of gliomas, nnUNet missed glioma components. In 6% of cases, over-segmentation of brain tissue remote from the tumor occurred (e.g., temporal poles). The T2w-trained nnUNet algorithm achieved ADC readouts for IDH genotyping with a performance statistically equivalent to that of human observers. This approach could support rapid ADC-based identification of glioblastoma at an early disease stage, even with limited input data. AUC = area under the receiver operating characteristic curve; BraTS = the brain tumor segmentation challenge held by MICCAI; Dice = Dice similarity coefficient; IDH = isocitrate dehydrogenase; mGBM = molecular glioblastoma; ADCmin = fifth ADC histogram percentile; ADCmean = mean ADC value; ADCNAWM = ADC in the contralateral centrum semiovale normal white matter; rADCmin = normalized ADCmin; rADCmean = normalized ADCmean; VOI = volume of interest.
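The histogram metrics defined in the abbreviation list (ADCmin as the fifth percentile, and rADC values normalized to contralateral normal-appearing white matter) can be sketched directly from voxelwise values; the numbers below are synthetic, not study data:

```python
import numpy as np

def adc_metrics(adc_tumor, adc_nawm):
    """Histogram metrics from voxelwise ADC values within the tumor VOI,
    normalized to the NAWM reference region."""
    adc_min = np.percentile(adc_tumor, 5)   # 5th percentile, per the paper
    adc_mean = adc_tumor.mean()
    nawm = adc_nawm.mean()
    return {"ADCmin": adc_min, "ADCmean": adc_mean,
            "rADCmin": adc_min / nawm, "rADCmean": adc_mean / nawm}

# Synthetic ADC values (×10⁻⁶ mm²/s) for a tumor VOI and a NAWM reference
rng = np.random.default_rng(3)
tumor = rng.normal(1300, 150, 5000)
nawm = rng.normal(800, 40, 500)
m = adc_metrics(tumor, nawm)
print(round(m["rADCmean"], 2))
```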

Challenges and Advances in Classifying Brain Tumors: An Overview of Machine, Deep Learning, and Hybrid Approaches with Future Perspectives in Medical Imaging.

Alshomrani F

PubMed · Jun 10, 2025
Accurate brain tumor classification is essential in neuro-oncology, as it directly informs treatment strategies and influences patient outcomes. This review comprehensively explores machine learning (ML) and deep learning (DL) models that enhance the accuracy and efficiency of brain tumor classification using medical imaging data, particularly Magnetic Resonance Imaging (MRI). As a noninvasive imaging technique, MRI plays a central role in detecting, segmenting, and characterizing brain tumors by providing detailed anatomical views that help distinguish various tumor types, including gliomas, meningiomas, and metastatic brain lesions. The review presents a detailed analysis of diverse ML approaches, from classical algorithms such as Support Vector Machines (SVM) and Decision Trees to advanced DL models, including Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and hybrid architectures that combine multiple techniques for improved performance. Through comparative analysis of recent studies across various datasets, the review evaluates these methods using metrics such as accuracy, sensitivity, specificity, and AUC-ROC, offering insights into their effectiveness and limitations. Significant challenges in the field are examined, including the scarcity of annotated datasets, computational complexity requirements, model interpretability issues, and barriers to clinical integration. The review proposes future directions to address these challenges, highlighting the potential of multi-modal imaging that combines MRI with other imaging modalities, explainable AI frameworks for enhanced model transparency, and privacy-preserving techniques for securing sensitive patient data. This comprehensive analysis demonstrates the transformative potential of ML and DL in advancing brain tumor diagnosis while emphasizing the necessity for continued research and innovation to overcome current limitations and ensure successful clinical implementation for improved patient care.