
Real-time, inline quantitative MRI enabled by scanner-integrated machine learning: a proof of principle with NODDI

Samuel Rot, Iulius Dragonu, Christina Triantafyllou, Matthew Grech-Sollars, Anastasia Papadaki, Laura Mancini, Stephen Wastling, Jennifer Steeden, John Thornton, Tarek Yousry, Claudia A. M. Gandini Wheeler-Kingshott, David L. Thomas, Daniel C. Alexander, Hui Zhang

arXiv preprint · Jul 16, 2025
Purpose: The clinical feasibility and translation of many advanced quantitative MRI (qMRI) techniques are inhibited by their restriction to 'research mode', due to resource-intensive, offline parameter estimation. This work aimed to achieve 'clinical mode' qMRI through real-time, inline parameter estimation with a trained neural network (NN) fully integrated into a vendor's image reconstruction environment, thereby facilitating and encouraging clinical adoption of advanced qMRI techniques. Methods: The Siemens Image Calculation Environment (ICE) pipeline was customised to deploy trained NNs for advanced diffusion MRI parameter estimation with Open Neural Network Exchange (ONNX) Runtime. Two fully connected NNs were trained offline with data synthesised from the neurite orientation dispersion and density imaging (NODDI) model, using either conventionally estimated (NN-MLE) or ground-truth (NN-GT) parameters as training labels. The strategy was demonstrated online with an in vivo acquisition and evaluated offline with synthetic test data. Results: NNs were successfully integrated and deployed natively in ICE, performing inline, whole-brain, in vivo NODDI parameter estimation in under 10 seconds. DICOM parametric maps were exported from the scanner for further analysis, which showed that NN-MLE estimates were generally more consistent with conventional estimates than NN-GT estimates were. Offline evaluation confirmed that NN-MLE has comparable accuracy and slightly better noise robustness than conventional fitting, whereas NN-GT trades accuracy for higher noise robustness. Conclusion: Real-time, inline parameter estimation with the proposed generalisable framework resolves a key practical barrier to clinical uptake of advanced qMRI methods and enables their efficient integration into clinical workflows.
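
As a rough illustration of the deployment pattern described above (a trained NN exported to ONNX and executed at reconstruction time), here is a minimal sketch using ONNX Runtime; the model file name, input shape, and output ordering are hypothetical assumptions, not the authors' implementation:

```python
# Minimal sketch: voxelwise NODDI parameter estimation with ONNX Runtime.
# Assumes a hypothetical model "noddi_nn.onnx" trained to map N diffusion
# measurements per voxel to three NODDI parameters (e.g., NDI, ODI, FISO).
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("noddi_nn.onnx")  # hypothetical model file
input_name = session.get_inputs()[0].name

def estimate_noddi(signals: np.ndarray) -> np.ndarray:
    """signals: (n_voxels, n_measurements) array of diffusion MRI signals."""
    # Run the trained network on all voxels in one batch; inline deployment
    # inside the scanner's reconstruction pipeline works the same way.
    outputs = session.run(None, {input_name: signals.astype(np.float32)})
    return outputs[0]  # (n_voxels, 3), ordering assumed: NDI, ODI, FISO

# Example: 1000 voxels from a 99-measurement acquisition (illustrative sizes).
params = estimate_noddi(np.random.rand(1000, 99))
print(params.shape)
```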

Specific Contribution of the Cerebellar Inferior Posterior Lobe to Motor Learning in Degenerative Cerebellar Ataxia.

Bando K, Honda T, Ishikawa K, Shirai S, Yabe I, Ishihara T, Onodera O, Higashiyama Y, Tanaka F, Kishimoto Y, Katsuno M, Shimizu T, Hanajima R, Kanata T, Takahashi Y, Mizusawa H

PubMed · Jul 16, 2025
Degenerative cerebellar ataxia, a group of progressive neurodegenerative disorders, is characterised by cerebellar atrophy and impaired motor learning. Using CerebNet, a deep learning algorithm for cerebellar segmentation, this study investigated the relationship between cerebellar subregion volumes and motor learning ability. We analysed data from 37 patients with degenerative cerebellar ataxia and 18 healthy controls. Using CerebNet, we segmented four cerebellar subregions: the anterior lobe, superior posterior lobe, inferior posterior lobe, and vermis. Regression analyses examined the associations between cerebellar volumes and motor learning performance (adaptation index [AI]) and ataxia severity (Scale for Assessment and Rating of Ataxia [SARA]). The inferior posterior lobe volume showed a significant positive association with AI in both single (B = 0.09; 95% CI: [0.03, 0.16]) and multiple linear regression analyses (B = 0.11; 95% CI: [0.008, 0.20]), an association that was particularly evident in the pure cerebellar ataxia subgroup. SARA scores correlated with anterior lobe, superior posterior lobe, and vermis volumes in single linear regression analyses, but these associations were not maintained in multiple linear regression analyses. This selective association suggests a specialised role for the inferior posterior lobe in motor learning processes. This study reveals the inferior posterior lobe's distinct role in motor learning in patients with degenerative cerebellar ataxia, advancing our understanding of cerebellar function and potentially informing targeted rehabilitation approaches. Our findings highlight the value of advanced imaging technologies in understanding structure-function relationships in cerebellar disorders.
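
The reported associations are ordinary single and multiple linear regressions of the adaptation index on subregion volumes; a minimal sketch with statsmodels follows, where the file and column names are illustrative assumptions:

```python
# Minimal sketch: single and multiple linear regression of the adaptation
# index (AI) on cerebellar subregion volumes, as in the analyses above.
import pandas as pd
import statsmodels.api as sm

# Hypothetical table with columns: anterior, sup_posterior, inf_posterior,
# vermis, adaptation_index (one row per participant).
df = pd.read_csv("cerebellar_volumes.csv")

# Single linear regression: inferior posterior lobe volume vs. AI.
X_single = sm.add_constant(df[["inf_posterior"]])
single = sm.OLS(df["adaptation_index"], X_single).fit()

# Multiple linear regression: all four subregion volumes together.
X_multi = sm.add_constant(
    df[["anterior", "sup_posterior", "inf_posterior", "vermis"]])
multi = sm.OLS(df["adaptation_index"], X_multi).fit()

# Unstandardised coefficients (B) with 95% confidence intervals.
print(single.params["inf_posterior"], single.conf_int().loc["inf_posterior"])
print(multi.params["inf_posterior"], multi.conf_int().loc["inf_posterior"])
```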

Multi-DECT image-based radiomics with interpretable machine learning for preoperative prediction of tumor budding grade and prognosis in colorectal cancer: a dual-center study.

Lin G, Chen W, Chen Y, Cao J, Mao W, Xia S, Chen M, Xu M, Lu C, Ji J

PubMed · Jul 16, 2025
This study evaluated the predictive ability of multiparametric dual-energy computed tomography (multi-DECT) radiomics for tumor budding (TB) grade and prognosis in patients with colorectal cancer (CRC). The study comprised 510 CRC patients at two institutions. Radiomics features of multi-DECT images (including polyenergetic, virtual monoenergetic, iodine concentration [IC], and effective atomic number images) were screened to build radiomics models using nine machine learning (ML) algorithms. An ML-based fusion model combining clinical-radiological variables and radiomics features was developed. Model performance was assessed with the area under the receiver operating characteristic curve (AUC), and model interpretability with Shapley additive explanations (SHAP). The prognostic significance of the fusion model was determined via survival analysis. CT-reported lymph node status and normalized IC were used to develop a clinical-radiological model. Among the nine examined ML algorithms, extreme gradient boosting (XGB) performed best. The XGB-based fusion model containing multi-DECT radiomics features outperformed the clinical-radiological model in predicting TB grade, with AUCs of 0.969 in the training cohort, 0.934 in the internal validation cohort, and 0.897 in the external validation cohort. SHAP analysis identified the variables driving model predictions. Patients with a model-predicted high TB grade had worse recurrence-free survival (RFS) in both the training (P < 0.001) and internal validation (P = 0.016) cohorts. The XGB-based fusion model using multi-DECT radiomics could serve as a non-invasive tool for preoperative prediction of TB grade and RFS in patients with CRC.
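
As a sketch of the modeling pipeline described above (an XGBoost classifier over fused clinical-radiological and radiomics features, evaluated by AUC and interpreted with SHAP), the following is a minimal illustration; file and column names are assumptions, not the study's data:

```python
# Minimal sketch: an XGBoost fusion model over combined clinical-radiological
# and radiomics features, with AUC evaluation and SHAP interpretation.
import pandas as pd
import shap
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

train = pd.read_csv("train_features.csv")  # hypothetical fused feature table
test = pd.read_csv("test_features.csv")
X_tr, y_tr = train.drop(columns="tb_grade"), train["tb_grade"]
X_te, y_te = test.drop(columns="tb_grade"), test["tb_grade"]

# Fusion model: radiomics and clinical-radiological columns in one matrix.
model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05,
                      eval_metric="logloss")
model.fit(X_tr, y_tr)

# Discrimination on a held-out cohort.
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# SHAP values show which variables drive each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te)
```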

Multimodal Large Language Model With Knowledge Retrieval Using Flowchart Embedding for Forming Follow-Up Recommendations for Pancreatic Cystic Lesions.

Zhu Z, Liu J, Hong CW, Houshmand S, Wang K, Yang Y

PubMed · Jul 16, 2025
BACKGROUND. The American College of Radiology (ACR) Incidental Findings Committee (IFC) algorithm provides guidance for pancreatic cystic lesion (PCL) management. Its implementation using plain-text large language model (LLM) solutions is challenging, given that key components include multimodal data (e.g., figures and tables). OBJECTIVE. The purpose of this study was to evaluate a multimodal LLM approach incorporating knowledge retrieval using flowchart embedding for forming follow-up recommendations for PCL management. METHODS. This retrospective study included patients who underwent abdominal CT or MRI from September 1, 2023, to September 1, 2024, and whose report mentioned a PCL. The reports' Findings sections were input to a multimodal LLM (GPT-4o). For task 1 (198 patients: mean age, 69.0 ± 13.0 [SD] years; 110 women, 88 men), the LLM assessed PCL features (presence of PCL, PCL size and location, presence of main pancreatic duct communication, presence of worrisome features or high-risk stigmata) and formed a follow-up recommendation using three knowledge retrieval methods (default knowledge; plain-text retrieval-augmented generation [RAG] from the ACR IFC algorithm PDF document; and flowchart embedding, using the LLM's image-to-text conversion for in-context integration of the document's flowcharts and tables). For task 2 (85 patients: mean initial age, 69.2 ± 10.8 years; 48 women, 37 men), an additional relevant prior report was input; the LLM assessed interval PCL change and provided an adjusted follow-up schedule accounting for prior imaging, using flowchart embedding. Three radiologists assessed LLM accuracy in task 1, in consensus for PCL findings and independently for follow-up recommendations; one radiologist assessed accuracy in task 2. RESULTS. For task 1, the LLM with flowchart embedding had an accuracy for PCL features of 98.0-99.0%. The accuracy of the LLM follow-up recommendations based on default knowledge, plain-text RAG, and flowchart embedding was 42.4%, 23.7%, and 89.9% for radiologist 1 (p < .001); 39.9%, 24.2%, and 91.9% for radiologist 2 (p < .001); and 40.9%, 25.3%, and 91.9% for radiologist 3 (p < .001). For task 2, the LLM using flowchart embedding showed an accuracy of 96.5% for interval PCL change and 81.2% for adjusted follow-up schedules. CONCLUSION. Multimodal flowchart embedding aided the LLM's automated provision of follow-up recommendations adherent to a clinical guidance document. CLINICAL IMPACT. The framework could be extended to other incidental findings through the use of other clinical guidance documents as model input.
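
A minimal sketch of the flowchart-embedding idea with the OpenAI GPT-4o API follows. The study integrated flowcharts via the LLM's image-to-text conversion; this simpler variant attaches the flowchart image directly in context, and the file name, prompt wording, and report text are hypothetical:

```python
# Minimal sketch: passing a guideline flowchart image alongside report text
# to a multimodal LLM so the recommendation is grounded in the flowchart.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

with open("acr_ifc_pcl_flowchart.png", "rb") as f:  # hypothetical export
    flowchart_b64 = base64.b64encode(f.read()).decode()

findings = "Pancreatic cystic lesion in the body, 1.8 cm, no duct communication."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text":
             "Using the attached management flowchart, give a follow-up "
             "recommendation for the pancreatic cystic lesion in this "
             f"report Findings section:\n{findings}"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{flowchart_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```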

Deep learning for appendicitis: development of a three-dimensional localization model on CT.

Takaishi T, Kawai T, Kokubo Y, Fujinaga T, Ojio Y, Yamamoto T, Hayashi K, Owatari Y, Ito H, Hiwatashi A

PubMed · Jul 16, 2025
To develop and evaluate a deep learning model for detecting appendicitis on abdominal CT. This retrospective single-center study included 567 CTs of appendicitis patients (330 males; age range, 20-96 years) obtained between 2011 and 2020, randomly split into training (n = 517) and validation (n = 50) sets. The validation set was supplemented with 50 control CTs performed for acute abdomen. For the test dataset, 100 appendicitis CTs and 100 control CTs were consecutively collected from a separate period after 2021. Exclusion criteria were age < 20 years, perforation, unclear appendix, and appendix tumors. Appendicitis CTs were annotated with three-dimensional bounding boxes encompassing the inflamed appendices. CT protocols were unenhanced, with 5-mm slice thickness and a 512 × 512 pixel matrix. The deep learning algorithm was based on the Faster Region-based Convolutional Neural Network (Faster R-CNN). Two board-certified radiologists visually graded model predictions on the test dataset using a 5-point Likert scale (0: no detection; 1: false; 2: poor; 3: fair; 4: good), with scores ≥ 3 considered true positives. Inter-rater agreement was assessed using weighted kappa statistics. The effects of intra-abdominal fat, periappendiceal fat stranding, presence of appendicolith, and appendix diameter on the model's recall were analyzed using binary logistic regression. The model showed a precision of 0.66 (87/132), a recall of 0.87 (87/100), and a false-positive rate per patient of 0.23 (45/200). Inter-rater agreement for Likert scores of 2-4 was κ = 0.76. The logistic regression analysis showed that only intra-abdominal fat had a significant effect on the model's recall (p = 0.02). We developed a model capable of detecting appendicitis on CT with a three-dimensional bounding box.
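
As a sketch of the detection setup described above, the following uses torchvision's Faster R-CNN. Note that the study predicted three-dimensional boxes while torchvision's detector is two-dimensional, so this per-slice version and its dummy data are illustrative assumptions:

```python
# Minimal sketch: a Faster R-CNN detector for appendicitis localization,
# shown per 512x512 CT slice; the study's 3D boxes would need an extension.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Two classes: background (0) and inflamed appendix (1).
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2)

# One training step on a dummy unenhanced CT slice (replicated to 3 channels).
images = [torch.rand(3, 512, 512)]
targets = [{
    "boxes": torch.tensor([[200.0, 260.0, 240.0, 300.0]]),  # x1, y1, x2, y2
    "labels": torch.tensor([1]),
}]
model.train()
losses = model(images, targets)   # dict of RPN and ROI-head losses
sum(losses.values()).backward()

# Inference returns boxes, labels, and confidence scores per image.
model.eval()
with torch.no_grad():
    preds = model([torch.rand(3, 512, 512)])
print(preds[0]["boxes"], preds[0]["scores"])
```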

Scaling Chest X-ray Foundation Models from Mixed Supervisions for Dense Prediction.

Wang F, Yu L

PubMed · Jul 16, 2025
Foundation models have revolutionized chest X-ray diagnosis with their ability to transfer across diseases and tasks. However, previous works have predominantly relied on self-supervised learning from medical image-text pairs, and this sole reliance on coarse pair-level supervision falls short on dense medical prediction tasks, limiting applicability to detailed diagnostics. In this paper, we introduce a Dense Chest X-ray Foundation Model (DCXFM), which uses mixed supervision types (i.e., text, labels, and segmentation masks) to significantly enhance the scalability of foundation models across medical tasks. Our model involves two training stages: we first employ a novel self-distilled multimodal pretraining paradigm to exploit text and label supervision, along with local-to-global self-distillation and soft cross-modal contrastive alignment strategies to enhance localization capabilities. Subsequently, we introduce an efficient cost aggregation module, comprising spatial and class aggregation mechanisms, to further advance dense prediction with densely annotated datasets. Comprehensive evaluations on three tasks (phrase grounding, zero-shot semantic segmentation, and zero-shot classification) demonstrate DCXFM's superior performance over other state-of-the-art medical image-text pretraining models. Remarkably, DCXFM exhibits powerful zero-shot capabilities across various datasets in phrase grounding and zero-shot semantic segmentation, underscoring its superior generalization in dense prediction tasks.
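
To make the "soft cross-modal contrastive alignment" idea concrete, here is a minimal PyTorch sketch in which one-hot contrastive targets are replaced by softened intra-modal similarity distributions; this is one plausible reading of the term, not DCXFM's exact loss:

```python
# Minimal sketch: soft cross-modal contrastive alignment between image and
# text embeddings. "Soft" here means similarity-weighted targets rather
# than one-hot targets, so similar reports are not treated as pure negatives.
import torch
import torch.nn.functional as F

def soft_contrastive_loss(img_emb, txt_emb, temperature=0.07, soft_temp=0.5):
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature          # (B, B) image-text logits

    # Soft targets from intra-modal similarity, one per direction.
    with torch.no_grad():
        t_targets = F.softmax(txt @ txt.t() / soft_temp, dim=-1)
        i_targets = F.softmax(img @ img.t() / soft_temp, dim=-1)

    # cross_entropy accepts probabilistic targets (PyTorch >= 1.10).
    loss_i2t = F.cross_entropy(logits, t_targets)
    loss_t2i = F.cross_entropy(logits.t(), i_targets)
    return (loss_i2t + loss_t2i) / 2

loss = soft_contrastive_loss(torch.randn(8, 512, requires_grad=True),
                             torch.randn(8, 512, requires_grad=True))
loss.backward()
```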

Evaluating Artificial Intelligence-Assisted Prostate Biparametric MRI Interpretation: An International Multireader Study.

Gelikman DG, Yilmaz EC, Harmon SA, Huang EP, An JY, Azamat S, Law YM, Margolis DJA, Marko J, Panebianco V, Esengur OT, Lin Y, Belue MJ, Gaur S, Bicchetti M, Xu Z, Tetreault J, Yang D, Xu D, Lay NS, Gurram S, Shih JH, Merino MJ, Lis R, Choyke PL, Wood BJ, Pinto PA, Turkbey B

PubMed · Jul 16, 2025
Background: Variability in prostate biparametric MRI (bpMRI) interpretation limits diagnostic reliability for prostate cancer (PCa). Artificial intelligence (AI) has the potential to reduce this variability and improve diagnostic accuracy. Objective: To evaluate the impact of a deep learning AI model on lesion- and patient-level clinically significant PCa (csPCa) and PCa detection rates and on interreader agreement in bpMRI interpretation. Methods: This retrospective, multireader, multicenter study used a balanced incomplete block design for MRI randomization. Six radiologists of varying experience interpreted bpMRI scans with and without AI assistance in alternating sessions. The reference standard for lesion-level detection was whole-mount pathology after radical prostatectomy for cases and negative 12-core systematic biopsies for control patients. In all, 180 patients (120 in the case group, 60 in the control group) who underwent mpMRI and prostate biopsy or radical prostatectomy between January 2013 and December 2022 were included. Lesion-level sensitivity, PPV, patient-level AUC for csPCa and PCa detection, and interreader agreement in lesion-level PI-RADS scores and size measurements were assessed. Results: AI assistance improved lesion-level PPV (PI-RADS ≥ 3: 77.2% [95% CI, 71.0-83.1%] vs 67.2% [61.1-72.2%] for csPCa; 80.9% [75.2-85.7%] vs 69.4% [63.4-74.1%] for PCa; both p < .001), reduced lesion-level sensitivity (PI-RADS ≥ 3: 44.4% [38.6-50.5%] vs 48.0% [42.0-54.2%] for csPCa, p = .01; 41.7% [37.0-47.4%] vs 44.9% [40.5-50.2%] for PCa, p = .01), and yielded no difference in patient-level AUC (0.822 [95% CI, 0.768-0.866] vs 0.832 [0.787-0.868] for csPCa, p = .61; 0.833 [0.782-0.874] vs 0.835 [0.792-0.871] for PCa, p = .91). AI assistance improved interreader agreement for lesion-level PI-RADS scores (κ = 0.748 [95% CI, 0.701-0.796] vs 0.336 [0.288-0.381], p < .001), lesion size measurements (coverage probability, 0.397 [0.376-0.419] vs 0.367 [0.349-0.383], p < .001), and patient-level PI-RADS scores (κ = 0.704 [0.627-0.767] vs 0.507 [0.421-0.584], p < .001). Conclusion: AI improved lesion-level PPV and interreader agreement, with slightly lower lesion-level sensitivity. Clinical Impact: AI may enhance consistency and reduce false-positives in bpMRI interpretation. Further optimization is required to improve sensitivity without compromising specificity.
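
The interreader agreement statistics above are kappa values over ordinal PI-RADS scores; a minimal sketch of a weighted Cohen's kappa with scikit-learn follows, on illustrative score vectors rather than study data (the study's exact weighting scheme is not stated here):

```python
# Minimal sketch: interreader agreement for ordinal PI-RADS scores via
# weighted Cohen's kappa. Score vectors are illustrative, not study data.
from sklearn.metrics import cohen_kappa_score

reader1 = [3, 4, 5, 2, 4, 3, 5, 4]  # PI-RADS scores for the same lesions
reader2 = [3, 4, 4, 2, 5, 3, 5, 3]

# Quadratic weights penalize large disagreements (e.g., 2 vs 5) more than
# adjacent-category disagreements (e.g., 4 vs 5).
kappa = cohen_kappa_score(reader1, reader2, weights="quadratic")
print(f"weighted kappa = {kappa:.3f}")
```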

Distinguishing symptomatic and asymptomatic trigeminal nerves through radiomics and deep learning: A microstructural study in idiopathic TN patients and asymptomatic control group.

Cüce F, Tulum G, Karadaş Ö, Işik Mİ, Dur İnce M, Nematzadeh S, Jalili M, Baş N, Özcan B, Osman O

PubMed · Jul 16, 2025
The relationship between mild neurovascular conflict (NVC) and trigeminal neuralgia (TN) remains ill-defined, especially as mild NVC is often seen in the asymptomatic population without any facial pain. We aimed to analyze trigeminal nerve microstructure using artificial intelligence (AI) to distinguish symptomatic and asymptomatic nerves between idiopathic TN (iTN) patients and an asymptomatic control group with incidental grade-1 NVC. Seventy-eight symptomatic trigeminal nerves with grade-1 NVC in iTN patients, and an asymptomatic control group consisting of Bell's palsy patients free from facial pain (91 grade-1 NVC and 91 grade-0 NVC nerves), were included in the study. Three hundred seventy-eight radiomic features were extracted from the original MRI images and from images processed with Laplacian-of-Gaussian filters. The dataset was split into 80% training/validation and 20% testing. Nested cross-validation was employed on the training/validation set for feature selection and model optimization. In addition, two customized deep learning models incorporating Atrous Spatial Pyramid Pooling (ASPP) blocks, DenseASPP-201 and MobileASPPV2, were evaluated using the same pipeline approach. Performance was assessed over ten runs for the radiomics-based models and five runs for the deep learning-based models. Subspace Discriminant Ensemble Learning (SDEL) attained an accuracy of 78.8% ± 7.13%, Support Vector Machines (SVM) reached 74.8% ± 9.2%, and K-nearest neighbors (KNN) achieved 79% ± 6.55%. Meanwhile, DenseASPP-201 recorded an accuracy of 82.0 ± 8.4%, and MobileASPPV2 achieved 73.2 ± 5.59%. The AI effectively distinguished symptomatic and asymptomatic nerves with grade-1 NVC. Further studies are required to fully elucidate the impact of vascular and nonvascular etiologies that may lead to iTN.
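
As a sketch of the Laplacian-of-Gaussian preprocessing used before radiomic feature extraction, the following filters a volume at several scales and pools simple first-order statistics over a nerve mask; sigma values, array shapes, and feature names are illustrative assumptions:

```python
# Minimal sketch: Laplacian-of-Gaussian filtering of an MRI volume, then
# simple first-order radiomic statistics pooled over a nerve ROI mask.
import numpy as np
from scipy.ndimage import gaussian_laplace

volume = np.random.rand(48, 256, 256)          # stand-in MRI volume
mask = np.zeros_like(volume, dtype=bool)
mask[20:28, 100:120, 100:120] = True           # stand-in trigeminal nerve ROI

features = {}
for sigma in (1.0, 2.0, 3.0):                  # fine-to-coarse texture scales
    filtered = gaussian_laplace(volume, sigma=sigma)
    roi = filtered[mask]
    features[f"log_sigma{sigma}_mean"] = roi.mean()
    features[f"log_sigma{sigma}_std"] = roi.std()
    features[f"log_sigma{sigma}_p90"] = np.percentile(roi, 90)

print(features)
```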

Illuminating radiogenomic signatures in pediatric-type diffuse gliomas: insights into molecular, clinical, and imaging correlations. Part II: low-grade group.

Kurokawa R, Hagiwara A, Ito R, Ueda D, Saida T, Sakata A, Nishioka K, Sugawara S, Takumi K, Watabe T, Ide S, Kawamura M, Sofue K, Hirata K, Honda M, Yanagawa M, Oda S, Iima M, Naganawa S

PubMed · Jul 16, 2025
The fifth edition of the World Health Organization classification of central nervous system tumors represents a significant advancement in the molecular-genetic classification of pediatric-type diffuse gliomas. This article comprehensively summarizes the clinical, molecular, and radiological imaging features of pediatric-type low-grade gliomas (pLGGs), including MYB- or MYBL1-altered tumors, polymorphous low-grade neuroepithelial tumor of the young (PLNTY), and diffuse low-grade glioma, MAPK pathway-altered. Most pLGGs harbor alterations in the RAS/MAPK pathway, functioning as a "one-pathway disease". Specific magnetic resonance imaging features, such as the T2-fluid-attenuated inversion recovery (FLAIR) mismatch sign in MYB- or MYBL1-altered tumors and the transmantle-like sign in PLNTYs, may serve as non-invasive biomarkers of the underlying molecular alterations. Recent advances in radiogenomics have enabled the differentiation of BRAF fusion from BRAF V600E-mutant tumors based on magnetic resonance imaging characteristics. Machine learning approaches have further enhanced our ability to predict molecular subtypes from imaging features. These radiology-molecular correlations offer potential clinical utility in treatment planning and prognostication, especially as targeted therapies against the MAPK pathway emerge. Continued research is needed to refine our understanding of genotype-phenotype correlations in less common molecular alterations and to validate these imaging biomarkers in larger cohorts.

Collaborative Integration of AI and Human Expertise to Improve Detection of Chest Radiograph Abnormalities.

Awasthi A, Le N, Deng Z, Wu CC, Nguyen HV

PubMed · Jul 16, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop a collaborative AI system that integrates eye gaze data and radiology reports to improve diagnostic accuracy in chest radiograph interpretation by identifying and correcting perceptual errors. Materials and Methods This retrospective study utilized public datasets REFLACX and EGD-CXR to develop a collaborative AI solution, named Collaborative Radiology Expert (CoRaX). It employs a large multimodal model to analyze image embeddings, eye gaze data, and radiology reports, aiming to rectify perceptual errors in chest radiology. The proposed system was evaluated using two simulated error datasets featuring random and uncertain alterations of five abnormalities. Evaluation focused on the system's referral-making process, the quality of referrals, and its performance within collaborative diagnostic settings. Results In the random masking-based error dataset, 28.0% (93/332) of abnormalities were altered. The system successfully corrected 21.3% (71/332) of these errors, with 6.6% (22/332) remaining unresolved. The accuracy of the system in identifying the correct regions of interest for missed abnormalities was 63.0% [95% CI: 59.0%, 68.0%], and 85.7% (240/280) of interactions with radiologists were deemed satisfactory, meaning that the system provided diagnostic aid to radiologists. In the uncertainty-masking-based error dataset, 43.9% (146/332) of abnormalities were altered. The system corrected 34.6% (115/332) of these errors, with 9.3% (31/332) unresolved. The accuracy of predicted regions of missed abnormalities for this dataset was 58.0% [95% CI: 55.0%, 62.0%], and 78.4% (233/297) of interactions were satisfactory. Conclusion The CoRaX system can collaborate efficiently with radiologists and address perceptual errors across various abnormalities in chest radiographs. ©RSNA, 2025.