
Deep Learning-Based Fetal Lung Segmentation from Diffusion-weighted MRI Images and Lung Maturity Evaluation for Fetal Growth Restriction

Zhennan Xiao, Katharine Brudkiewicz, Zhen Yuan, Rosalind Aughwane, Magdalena Sokolska, Joanna Chappell, Trevor Gaunt, Anna L. David, Andrew P. King, Andrew Melbourne

arXiv preprint · Jul 17 2025
Fetal lung maturity is a critical indicator for predicting neonatal outcomes and the need for post-natal intervention, especially in pregnancies affected by fetal growth restriction. Intra-voxel incoherent motion (IVIM) analysis has shown promising results for non-invasive assessment of fetal lung development, but its reliance on manual segmentation is time-consuming, limiting its clinical applicability. In this work, we present an automated lung maturity evaluation pipeline for diffusion-weighted magnetic resonance images that consists of a deep learning-based fetal lung segmentation model and a model-fitting lung maturity assessment. A 3D nnU-Net model was trained on manually segmented images selected from the baseline frames of 4D diffusion-weighted MRI scans. The segmentation model demonstrated robust performance, yielding a mean Dice coefficient of 82.14%. Next, voxel-wise model fitting was performed on both the nnU-Net-predicted and manual lung segmentations to quantify IVIM parameters reflecting tissue microstructure and perfusion. The results suggested no differences between the parameters derived from the two segmentations. Our work shows that a fully automated pipeline is feasible for supporting fetal lung maturity assessment and clinical decision-making.
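As a rough illustration of the model-fitting stage, the sketch below fits the standard bi-exponential IVIM signal model voxel-wise with SciPy. The b-values, starting values, and bounds are illustrative assumptions, not the paper's acquisition protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim_signal(b, f, d_star, d):
    """Bi-exponential IVIM model: S(b)/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D)."""
    return f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d)

def fit_voxel(bvals, signal):
    """Estimate perfusion fraction f, pseudo-diffusion D*, and diffusion D for one voxel."""
    s0 = signal[bvals == 0].mean()                    # normalise by the b=0 signal
    y = signal / s0
    p0 = (0.1, 0.05, 0.001)                           # typical starting values for (f, D*, D)
    bounds = ([0.0, 0.003, 0.0], [1.0, 0.5, 0.003])   # keep D* > D to separate compartments
    popt, _ = curve_fit(ivim_signal, bvals, y, p0=p0, bounds=bounds)
    return dict(zip(("f", "D*", "D"), popt))

# hypothetical acquisition: 8 b-values, one voxel's noisy signal
bvals = np.array([0, 10, 30, 50, 100, 300, 600, 900], dtype=float)
signal = ivim_signal(bvals, 0.15, 0.04, 0.0012) * 1000 + np.random.normal(0, 5, bvals.size)
print(fit_voxel(bvals, signal))
```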

Insights into a radiology-specialised multimodal large language model with sparse autoencoders

Kenza Bouzid, Shruthi Bannur, Daniel Coelho de Castro, Anton Schwaighofer, Javier Alvarez-Valle, Stephanie L. Hyland

arXiv preprint · Jul 17 2025
Interpretability can improve the safety, transparency and trust of AI models, which is especially important in healthcare applications where decisions often carry significant consequences. Mechanistic interpretability, particularly through the use of sparse autoencoders (SAEs), offers a promising approach for uncovering human-interpretable features within large transformer-based models. In this study, we apply Matryoshka-SAE to the radiology-specialised multimodal large language model, MAIRA-2, to interpret its internal representations. Using large-scale automated interpretability of the SAE features, we identify a range of clinically relevant concepts - including medical devices (e.g., line and tube placements, pacemaker presence), pathologies such as pleural effusion and cardiomegaly, longitudinal changes and textual features. We further examine the influence of these features on model behaviour through steering, demonstrating directional control over generations with mixed success. Our results reveal practical and methodological challenges, yet they offer initial insights into the internal concepts learned by MAIRA-2 - marking a step toward deeper mechanistic understanding and interpretability of a radiology-adapted multimodal large language model, and paving the way for improved model transparency. We release the trained SAEs and interpretations: https://huggingface.co/microsoft/maira-2-sae.
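For readers unfamiliar with SAEs, the sketch below is a minimal PyTorch sparse autoencoder of the kind used in mechanistic interpretability: it reconstructs a model's hidden activations through an overcomplete, sparsely activating dictionary. It is a generic illustration, not the Matryoshka-SAE used in the paper, and the layer sizes are made up.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder trained on hidden activations; each dictionary
    unit ideally aligns with one human-interpretable feature."""
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, h: torch.Tensor):
        z = torch.relu(self.encoder(h))   # sparse feature activations
        return self.decoder(z), z         # reconstruction and codes

def sae_loss(h, h_hat, z, l1_coeff=1e-3):
    """Reconstruction error plus an L1 penalty that encourages sparsity."""
    return (h - h_hat).pow(2).mean() + l1_coeff * z.abs().mean()

sae = SparseAutoencoder(d_model=4096, d_dict=16384)   # sizes are illustrative
h = torch.randn(8, 4096)                              # a batch of hidden activations
h_hat, z = sae(h)
loss = sae_loss(h, h_hat, z)
```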

Super-resolution deep learning in pediatric CTA for congenital heart disease: enhancing intracardiac visualization under free-breathing conditions.

Zhou X, Xiong D, Liu F, Li J, Tan N, Duan X, Du X, Ouyang Z, Bao S, Ke T, Zhao Y, Tao J, Dong X, Wang Y, Liao C

PubMed · Jul 16 2025
This study assesses the effectiveness of super-resolution deep learning reconstruction (SR-DLR), conventional deep learning reconstruction (C-DLR), and hybrid iterative reconstruction (HIR) in enhancing image quality and diagnostic performance for pediatric congenital heart disease (CHD) in cardiac CT angiography (CCTA). A total of 91 pediatric patients aged 1-10 years, suspected of having CHD, were consecutively enrolled for CCTA under free-breathing conditions. Reconstructions were performed using the SR-DLR, C-DLR, and HIR algorithms. Objective metrics, namely standard deviation (SD), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR), were quantified. Two radiologists provided blinded subjective image-quality evaluations. The full width at half maximum of lesions was significantly larger on SR-DLR (9.50 ± 6.44 mm) than on C-DLR (9.08 ± 6.23 mm; p < 0.001) and HIR (8.98 ± 6.37 mm; p < 0.001). SR-DLR exhibited superior performance with significantly reduced SD and increased SNR and CNR, particularly in the left ventricle, left atrium, and right ventricle (p < 0.05). Subjective evaluations favored SR-DLR over C-DLR and HIR (p < 0.05). The accuracy (99.12%), sensitivity (99.07%), and negative predictive value (85.71%) of SR-DLR were the highest, significantly exceeding those of C-DLR (+7.01%, +7.40%, and +45.71%) and HIR (+20.17%, +21.29%, and +65.71%) (p < 0.05 and p < 0.001). In detecting atrial septal defects (ASDs) and ventricular septal defects (VSDs), SR-DLR demonstrated significantly higher sensitivity than C-DLR (+8.96% and +9.09%) and HIR (+20.90% and +36.36%). For multi-perforated ASDs and VSDs, SR-DLR's sensitivity reached 85.71% and 100%, far surpassing C-DLR and HIR. SR-DLR significantly reduces image noise and enhances resolution, improving the diagnostic visualization of CHD structures in pediatric patients. It outperforms existing algorithms in detecting small lesions, achieving diagnostic accuracy close to that of ultrasound.
Question: Pediatric cardiac CT angiography (CCTA) often fails to adequately visualize intracardiac structures, creating diagnostic challenges for CHD, particularly complex multi-perforated atrioventricular defects.
Findings: SR-DLR markedly improves image quality and diagnostic accuracy, enabling detailed visualization and precise detection of small congenital lesions.
Clinical relevance: SR-DLR enhances the diagnostic confidence and accuracy of CCTA in pediatric CHD, reducing missed diagnoses and improving the characterization of complex intracardiac anomalies, thus supporting better clinical decision-making.
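The abstract does not give explicit formulas for its objective metrics; the snippet below shows one common ROI-based definition of SNR and CNR, offered as an assumption about what was computed rather than the study's exact method.

```python
import numpy as np

def snr(image, roi_mask):
    """Signal-to-noise ratio: ROI mean over ROI standard deviation."""
    vals = image[roi_mask]
    return vals.mean() / vals.std()

def cnr(image, roi_mask, ref_mask):
    """Contrast-to-noise ratio between a target ROI and a reference ROI."""
    roi, ref = image[roi_mask], image[ref_mask]
    return abs(roi.mean() - ref.mean()) / ref.std()

# toy example on a synthetic image with two rectangular ROIs
img = np.random.normal(100, 10, (256, 256))
roi = np.zeros_like(img, dtype=bool); roi[50:80, 50:80] = True
ref = np.zeros_like(img, dtype=bool); ref[150:180, 150:180] = True
print(snr(img, roi), cnr(img, roi, ref))
```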

Imaging analysis using Artificial Intelligence to predict outcomes after endovascular aortic aneurysm repair: protocol for a retrospective cohort study.

Lareyre F, Raffort J, Kakkos SK, D'Oria M, Nasr B, Saratzis A, Antoniou GA, Hinchliffe RJ

PubMed · Jul 16 2025
Endovascular aortic aneurysm repair (EVAR) requires long-term surveillance to detect and treat postoperative complications. However, prediction models to optimise follow-up strategies are still lacking. The primary objective of this study is to develop predictive models of postoperative outcomes following elective EVAR using Artificial Intelligence (AI)-driven analysis. The secondary objective is to investigate morphological aortic changes following EVAR. This international, multicentre, observational study will retrospectively include 500 patients who underwent elective EVAR. Primary outcomes are EVAR postoperative complications, including death, re-intervention, endoleak, limb occlusion, and stent-graft migration, occurring within 1 year and at mid-term follow-up (1 to 3 years). Secondary outcomes are aortic anatomical changes. Morphological changes following EVAR will be analysed and compared based on preoperative and postoperative CT angiography (CTA) images (within 1 to 12 months, and at the last follow-up) using the AI-based software PRAEVAorta 2 (Nurea). Deep learning algorithms will be applied to stratify the risk of postoperative outcomes into low- and high-risk categories. The training and testing datasets will comprise 70% and 30% of the cohort, respectively. The study protocol is designed to ensure that the sponsor and the investigators comply with the principles of the Declaration of Helsinki and the ICH E6 good clinical practice guideline. The study has been approved by the ethics committee of the University Hospital of Patras (Patras, Greece) under the number 492/05.12.2024. The results of the study will be presented at relevant national and international conferences and submitted for publication in peer-reviewed journals.
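As a trivial illustration of the planned 70/30 partition, a stratified split (here with scikit-learn, on placeholder data) keeps the proportion of high-risk outcomes similar in both sets; the feature dimensions and labels below are hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))       # e.g. 12 CTA-derived morphological features (assumed)
y = rng.integers(0, 2, size=500)     # hypothetical low(0)/high(1) risk label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)  # 70% training, 30% testing
print(len(X_train), len(X_test))     # 350 150
```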

CT-ScanGaze: A Dataset and Baselines for 3D Volumetric Scanpath Modeling

Trong-Thang Pham, Akash Awasthi, Saba Khan, Esteban Duran Marti, Tien-Phat Nguyen, Khoa Vo, Minh Tran, Ngoc Son Nguyen, Cuong Tran Van, Yuki Ikebe, Anh Totti Nguyen, Anh Nguyen, Zhigang Deng, Carol C. Wu, Hien Van Nguyen, Ngan Le

arXiv preprint · Jul 16 2025
Understanding radiologists' eye movement during Computed Tomography (CT) reading is crucial for developing effective interpretable computer-aided diagnosis systems. However, CT research in this area has been limited by the lack of publicly available eye-tracking datasets and the three-dimensional complexity of CT volumes. To address these challenges, we present the first publicly available eye gaze dataset on CT, called CT-ScanGaze. Then, we introduce CT-Searcher, a novel 3D scanpath predictor designed specifically to process CT volumes and generate radiologist-like 3D fixation sequences, overcoming the limitations of current scanpath predictors that only handle 2D inputs. Since deep learning models benefit from a pretraining step, we develop a pipeline that converts existing 2D gaze datasets into 3D gaze data to pretrain CT-Searcher. Through both qualitative and quantitative evaluations on CT-ScanGaze, we demonstrate the effectiveness of our approach and provide a comprehensive assessment framework for 3D scanpath prediction in medical imaging.
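As a simple illustration of what a 3D scanpath adds over a 2D one, the sketch below represents a fixation sequence with an explicit slice (z) coordinate and scores a prediction by mean Euclidean error against a reference path. The data structure and metric are our assumptions, not the paper's evaluation protocol.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Fixation3D:
    x: float           # in-plane voxel coordinates
    y: float
    z: float           # slice index: the third dimension 2D predictors lack
    duration_ms: float

def mean_fixation_error(pred, ref):
    """Mean Euclidean distance between position-aligned fixations."""
    n = min(len(pred), len(ref))
    p = np.array([(f.x, f.y, f.z) for f in pred[:n]])
    r = np.array([(f.x, f.y, f.z) for f in ref[:n]])
    return float(np.linalg.norm(p - r, axis=1).mean())

ref = [Fixation3D(120, 88, 42, 310), Fixation3D(131, 90, 45, 220)]
pred = [Fixation3D(118, 85, 41, 290), Fixation3D(140, 92, 47, 180)]
print(mean_fixation_error(pred, ref))
```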

Real-time, inline quantitative MRI enabled by scanner-integrated machine learning: a proof of principle with NODDI

Samuel Rot, Iulius Dragonu, Christina Triantafyllou, Matthew Grech-Sollars, Anastasia Papadaki, Laura Mancini, Stephen Wastling, Jennifer Steeden, John Thornton, Tarek Yousry, Claudia A. M. Gandini Wheeler-Kingshott, David L. Thomas, Daniel C. Alexander, Hui Zhang

arXiv preprint · Jul 16 2025
Purpose: The clinical feasibility and translation of many advanced quantitative MRI (qMRI) techniques are inhibited by their restriction to 'research mode', due to resource-intensive, offline parameter estimation. This work aimed to achieve 'clinical mode' qMRI, by real-time, inline parameter estimation with a trained neural network (NN) fully integrated into a vendor's image reconstruction environment, therefore facilitating and encouraging clinical adoption of advanced qMRI techniques. Methods: The Siemens Image Calculation Environment (ICE) pipeline was customised to deploy trained NNs for advanced diffusion MRI parameter estimation with Open Neural Network Exchange (ONNX) Runtime. Two fully-connected NNs were trained offline with data synthesised with the neurite orientation dispersion and density imaging (NODDI) model, using either conventionally estimated (NN_MLE) or ground truth (NN_GT) parameters as training labels. The strategy was demonstrated online with an in vivo acquisition and evaluated offline with synthetic test data. Results: NNs were successfully integrated and deployed natively in ICE, performing inline, whole-brain, in vivo NODDI parameter estimation in <10 seconds. DICOM parametric maps were exported from the scanner for further analysis, generally finding that NN_MLE estimates were more consistent than NN_GT with conventional estimates. Offline evaluation confirms that NN_MLE has comparable accuracy and slightly better noise robustness than conventional fitting, whereas NN_GT exhibits compromised accuracy at the benefit of higher noise robustness. Conclusion: Real-time, inline parameter estimation with the proposed generalisable framework resolves a key practical barrier to clinical uptake of advanced qMRI methods and enables their efficient integration into clinical workflows.
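The paper deploys its trained networks inside Siemens ICE via ONNX Runtime; the sketch below shows the equivalent batched inference call from Python. The model filename, input shape, and output ordering are hypothetical placeholders, not the authors' artifacts.

```python
import numpy as np
import onnxruntime as ort

# load an exported fully-connected NODDI estimator (filename is hypothetical)
session = ort.InferenceSession("noddi_nn.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# one row per voxel, one column per diffusion-weighted measurement (shape assumed)
signals = np.random.rand(200_000, 99).astype(np.float32)

# a single batched call estimates NODDI parameters for the whole brain
params = session.run(None, {input_name: signals})[0]
print(params.shape)   # e.g. (200000, 3) for ODI, intra-neurite and isotropic fractions
```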

Comparative study of 2D vs. 3D AI-enhanced ultrasound for fetal crown-rump length evaluation in the first trimester.

Zhang Y, Huang Y, Chen C, Hu X, Pan W, Luo H, Huang Y, Wang H, Cao Y, Yi Y, Xiong Y, Ni D

PubMed · Jul 16 2025
Accurate fetal growth evaluation is crucial for monitoring fetal health, with crown-rump length (CRL) being the gold standard for estimating gestational age and assessing growth during the first trimester. To enhance CRL evaluation accuracy and efficiency, we developed an artificial intelligence (AI)-based model (3DCRL-Net) using the 3D U-Net architecture for automatic landmark detection to achieve CRL plane localization and measurement in 3D ultrasound. We then compared its performance to that of experienced radiologists using both 2D and 3D ultrasound for fetal growth assessment. This prospective consecutive study collected fetal data from 1,326 ultrasound screenings conducted at 11-14 weeks of gestation (June 2021 to June 2023). Three experienced radiologists performed fetal screening using 2D video (2D-RAD) and 3D volume (3D-RAD) to obtain the CRL plane and measurement. The 3DCRL-Net model automatically outputs the landmark position, CRL plane localization and measurement. Three specialists audited the planes achieved by radiologists and 3DCRL-Net as standard or non-standard. The performance of CRL landmark detection, plane localization, measurement and time efficiency was evaluated in the internal testing dataset, comparing results with 3D-RAD. In the external dataset, CRL plane localization, measurement accuracy, and time efficiency were compared among the three groups. The internal dataset consisted of 126 cases in the testing set (training: validation: testing = 8:1:1), and the external dataset included 245 cases. On the internal testing set, 3DCRL-Net achieved a mean absolute distance error of 1.81 mm for the nine landmarks, higher accuracy in standard plane localization compared to 3D-RAD (91.27% vs. 80.16%), and strong consistency in CRL measurements (mean absolute error (MAE): 1.26 mm; mean difference: 0.37 mm, P = 0.70). The average time required per fetal case was 2.02 s for 3DCRL-Net versus 2 min for 3D-RAD (P < 0.001). On the external testing dataset, 3DCRL-Net demonstrated high performance in standard plane localization, achieving results comparable to 2D-RAD and 3D-RAD (accuracy: 91.43% vs. 93.06% vs. 86.12%), with strong consistency in CRL measurements, compared to 2D-RAD, which showed an MAE of 1.58 mm and a mean difference of 1.12 mm (P = 0.25). For 2D-RAD vs. 3DCRL-Net, the Pearson correlation and R² were 0.96 and 0.93, respectively, with an MAE of 0.11 ± 0.12 weeks. The average time required per fetal case was 5 s for 3DCRL-Net, compared to 2 min for 3D-RAD and 35 s for 2D-RAD (P < 0.001). The 3DCRL-Net model provides a rapid, accurate, and fully automated solution for CRL measurement in 3D ultrasound, achieving expert-level performance and significantly improving the efficiency and reliability of first-trimester fetal growth assessment.
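Once the crown and rump landmarks are detected in the 3D volume, the CRL itself reduces to a Euclidean distance scaled by the voxel spacing. The sketch below shows that step with made-up coordinates and spacing, purely to illustrate the geometry.

```python
import numpy as np

def landmark_distance_mm(p_vox, q_vox, spacing_mm):
    """Euclidean distance between two landmarks, converting voxel indices to mm."""
    d = (np.asarray(p_vox, float) - np.asarray(q_vox, float)) * np.asarray(spacing_mm)
    return float(np.linalg.norm(d))

crown = (40, 102, 66)          # hypothetical voxel coordinates of the crown landmark
rump = (180, 95, 70)           # hypothetical voxel coordinates of the rump landmark
spacing = (0.4, 0.4, 0.4)      # hypothetical isotropic voxel size in mm
print(f"CRL = {landmark_distance_mm(crown, rump, spacing):.1f} mm")
```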

Specific Contribution of the Cerebellar Inferior Posterior Lobe to Motor Learning in Degenerative Cerebellar Ataxia.

Bando K, Honda T, Ishikawa K, Shirai S, Yabe I, Ishihara T, Onodera O, Higashiyama Y, Tanaka F, Kishimoto Y, Katsuno M, Shimizu T, Hanajima R, Kanata T, Takahashi Y, Mizusawa H

PubMed · Jul 16 2025
Degenerative cerebellar ataxia, a group of progressive neurodegenerative disorders, is characterised by cerebellar atrophy and impaired motor learning. Using CerebNet, a deep learning algorithm for cerebellar segmentation, this study investigated the relationship between cerebellar subregion volumes and motor learning ability. We analysed data from 37 patients with degenerative cerebellar ataxia and 18 healthy controls. Using CerebNet, we segmented four cerebellar subregions: the anterior lobe, superior posterior lobe, inferior posterior lobe, and vermis. Regression analyses examined the associations between cerebellar volumes and motor learning performance (adaptation index [AI]) and ataxia severity (Scale for Assessment and Rating of Ataxia [SARA]). The inferior posterior lobe volume showed a significant positive association with AI in both single (B = 0.09; 95% CI: [0.03, 0.16]) and multiple linear regression analyses (B = 0.11; 95% CI: [0.008, 0.20]), an association that was particularly evident in the pure cerebellar ataxia subgroup. SARA scores correlated with anterior lobe, superior posterior lobe, and vermis volumes in single linear regression analyses, but these associations were not maintained in multiple linear regression analyses. This selective association suggests a specialised role for the inferior posterior lobe in motor learning processes. This study reveals the inferior posterior lobe's distinct role in motor learning in patients with degenerative cerebellar ataxia, advancing our understanding of cerebellar function and potentially informing targeted rehabilitation approaches. Our findings highlight the value of advanced imaging technologies in understanding structure-function relationships in cerebellar disorders.
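The volume-behaviour associations reported here are standard linear regressions; the sketch below reproduces the shape of that analysis with statsmodels on simulated data. Variable names, sample size, and effect sizes are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 37  # matches the patient count, but the data below are simulated
df = pd.DataFrame({
    "anterior": rng.normal(10, 1.5, n),        # simulated subregion volumes (cm^3)
    "sup_posterior": rng.normal(40, 4, n),
    "inf_posterior": rng.normal(25, 3, n),
    "vermis": rng.normal(8, 1, n),
})
# simulate an adaptation index driven mainly by the inferior posterior lobe
df["adaptation_index"] = 0.11 * df["inf_posterior"] + rng.normal(0, 0.5, n)

X = sm.add_constant(df[["anterior", "sup_posterior", "inf_posterior", "vermis"]])
fit = sm.OLS(df["adaptation_index"], X).fit()
print(fit.params["inf_posterior"], fit.conf_int().loc["inf_posterior"].tolist())
```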

Cross-Modal conditional latent diffusion model for Brain MRI to Ultrasound image translation.

Jiang S, Wang L, Li Y, Yang Z, Zhou Z, Li B

PubMed · Jul 16 2025
Intraoperative brain ultrasound (US) provides real-time information on lesions and tissues, making it crucial for brain tumor resection. However, due to limitations such as imaging angles and operator techniques, US data is limited in size and difficult to annotate, hindering advancements in intelligent image processing. In contrast, Magnetic Resonance Imaging (MRI) data is more abundant and easier to annotate. If MRI data and models can be effectively transferred to the US domain, generating high-quality US data would greatly enhance US image processing and improve intraoperative US readability. Approach: We propose a Cross-Modal Conditional Latent Diffusion Model (CCLD) for brain MRI-to-US image translation. We employ a noise mask restoration strategy to pretrain an efficient encoder-decoder, enhancing feature extraction, compression, and reconstruction capabilities while reducing computational costs. Furthermore, CCLD integrates the Frequency-Decomposed Feature Optimization Module (FFOM) and the Adaptive Multi-Frequency Feature Fusion Module (AMFM) to effectively leverage MRI structural information and US texture characteristics, ensuring structural accuracy while enhancing texture details in the synthetic US images. Main results: Compared with state-of-the-art methods, our approach achieves superior performance on the ReMIND dataset, obtaining the best Learned Perceptual Image Patch Similarity (LPIPS) score of 19.1% and Mean Absolute Error (MAE) of 4.21%, as well as the highest Peak Signal-to-Noise Ratio (PSNR) of 25.36 dB and Structural Similarity Index (SSIM) of 86.91%. Significance: Experimental results demonstrate that CCLD effectively improves the quality and realism of synthetic ultrasound images, offering a new research direction for the generation of high-quality US datasets and the enhancement of ultrasound image readability.
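For reference, PSNR and SSIM of a synthesised image against a real one can be computed with scikit-image as below; the toy arrays merely stand in for paired real/synthetic US slices.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

real = np.random.rand(256, 256).astype(np.float32)   # placeholder "real" US slice
fake = np.clip(real + np.random.normal(0, 0.05, real.shape), 0, 1).astype(np.float32)

print(peak_signal_noise_ratio(real, fake, data_range=1.0))  # higher is better, in dB
print(structural_similarity(real, fake, data_range=1.0))    # 1.0 means identical structure
```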

Automatic segmentation of liver structures in multi-phase MRI using variants of nnU-Net and Swin UNETR.

Raab F, Strotzer Q, Stroszczynski C, Fellner C, Einspieler I, Haimerl M, Lang EW

PubMed · Jul 16 2025
Accurate segmentation of the liver parenchyma, portal veins, hepatic veins, and lesions from MRI is important for hepatic disease monitoring and treatment. Multi-phase contrast-enhanced imaging distinguishes hepatic structures better than single-phase approaches, but automated methods for detailed segmentation of hepatic structures are lacking. This study evaluates deep learning architectures for segmenting liver structures from multi-phase Gd-EOB-DTPA-enhanced T1-weighted VIBE MRI scans. We utilized 458 T1-weighted VIBE scans of pathological livers, with 78 manually labeled for liver parenchyma, hepatic and portal veins, aorta, lesions, and ascites. An additional dataset of 47 labeled subjects was used for cross-scanner evaluation. Three models were evaluated using nested cross-validation: the conventional nnU-Net, the ResEnc nnU-Net, and the Swin UNETR. The late arterial phase was identified as the optimal fixed phase for co-registration. Both nnU-Net variants outperformed Swin UNETR across most tasks. The conventional nnU-Net achieved the highest segmentation performance for liver parenchyma (DSC: 0.97; 95% CI 0.97, 0.98), portal vein (DSC: 0.83; 95% CI 0.80, 0.87), and hepatic vein (DSC: 0.78; 95% CI 0.77, 0.80). Lesion and ascites segmentation proved challenging for all models, with the conventional nnU-Net performing best. This study demonstrates the effectiveness of deep learning, particularly nnU-Net variants, for detailed liver structure segmentation from multi-phase MRI. The developed models and preprocessing pipeline offer potential for improved liver disease assessment and surgical planning in clinical practice.
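The Dice similarity coefficient (DSC) used throughout these results is simple to compute from binary masks; a minimal NumPy version follows, with toy masks in place of real segmentations.

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0                  # both masks empty: define as perfect overlap
    return 2.0 * np.logical_and(pred, gt).sum() / denom

a = np.zeros((64, 64, 64), dtype=bool); a[20:40, 20:40, 20:40] = True
b = np.zeros_like(a); b[22:42, 20:40, 20:40] = True
print(round(dice(a, b), 3))         # overlap of two slightly shifted cubes
```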
