
Significance of Papillary and Trabecular Muscular Volume in Right Ventricular Volumetry with Cardiac MR Imaging.

Shibagaki Y, Oka H, Imanishi R, Shimada S, Nakau K, Takahashi S

PubMed | Jun 20 2025
Pulmonary valve regurgitation after repaired Tetralogy of Fallot (TOF) or double-outlet right ventricle (DORV) causes right ventricular hypertrophy and papillary muscle enlargement. Cardiac magnetic resonance imaging (CMR) can evaluate right ventricular (RV) dilatation, but the effect of excluding trabecular and papillary muscle (TPM) volume on the RV volumes used in the reoperation decision for TOF or DORV is unclear. Twenty-three patients with repaired TOF or DORV and 19 healthy controls aged ≥15 years underwent CMR between 2012 and 2022. TPM volume was measured with an artificial intelligence tool. Reoperation was considered when the RV end-diastolic volume index (RVEDVI) exceeded 150 mL/m² or the RV end-systolic volume index (RVESVI) exceeded 80 mL/m². RV volumes were higher in the disease group than in controls (P < 0.001), as were RV mass and TPM volumes (P < 0.001). The reduction in RV volumes from excluding TPM volume was 6.3% (2.1-10.5) in controls, 11.7% (6.9-13.8) in the volume-load group, and 13.9% (9.5-19.4) in the volume + pressure-load group. The TPM/RV volume ratio was highest in the volume + pressure-load group (control: 0.07 g/mL; volume: 0.14 g/mL; volume + pressure: 0.17 g/mL) and correlated with QRS duration (R = 0.77). In 3 patients in the volume + pressure-load group, RV volume including TPM met the indication for reoperation, but after TPM exclusion reoperation was no longer indicated. RV volume measurements that include TPM in patients with combined volume and pressure load may therefore help determine appropriate volume recommendations for reoperation.
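
For reference, the "reduction rate" above is presumably the relative decrease in RV volume when the TPM volume is excluded from the blood pool (the abstract does not state the formula explicitly):

```latex
\text{reduction rate} = \frac{V_{\text{RV, incl. TPM}} - V_{\text{RV, excl. TPM}}}{V_{\text{RV, incl. TPM}}} \times 100\%
```

Under that reading, an RVEDVI of 160 mL/m² that falls to 140 mL/m² after TPM exclusion corresponds to (160 - 140)/160 = 12.5%, enough to move a patient below the 150 mL/m² reoperation threshold cited above.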

Trans$^2$-CBCT: A Dual-Transformer Framework for Sparse-View CBCT Reconstruction

Minmin Yang, Huantao Ren, Senem Velipasalar

arXiv preprint | Jun 20 2025
Cone-beam computed tomography (CBCT) using only a few X-ray projection views enables faster scans with lower radiation dose, but the resulting severe under-sampling causes strong artifacts and poor spatial coverage. We address these challenges in a unified framework. First, we replace conventional UNet/ResNet encoders with TransUNet, a hybrid CNN-Transformer model. Convolutional layers capture local details, while self-attention layers enhance global context. We adapt TransUNet to CBCT by combining multi-scale features, querying view-specific features per 3D point, and adding a lightweight attenuation-prediction head. This yields Trans-CBCT, which surpasses prior baselines by 1.17 dB PSNR and 0.0163 SSIM on the LUNA16 dataset with six views. Second, we introduce a neighbor-aware Point Transformer to enforce volumetric coherence. This module uses 3D positional encoding and attention over k-nearest neighbors to improve spatial consistency. The resulting model, Trans$^2$-CBCT, provides an additional gain of 0.63 dB PSNR and 0.0117 SSIM. Experiments on LUNA16 and ToothFairy show consistent gains from six to ten views, validating the effectiveness of combining CNN-Transformer features with point-based geometry reasoning for sparse-view CBCT reconstruction.
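
As a rough illustration of the neighbor-aware Point Transformer idea described above, the sketch below shows one way a per-point attention block over k-nearest neighbors with a learned relative 3D positional encoding could be written in PyTorch. All names, shapes, and design details are our own assumptions, not the authors' implementation.

```python
# Hedged sketch: each 3D query point attends over its k nearest neighbors,
# with an MLP encoding of the relative 3D offsets added to the keys/values.
import torch
import torch.nn as nn


class NeighborPointAttention(nn.Module):
    def __init__(self, dim: int, k: int = 8):
        super().__init__()
        self.k = k
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.pos_mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.scale = dim ** -0.5

    def forward(self, xyz: torch.Tensor, feat: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) point coordinates; feat: (B, N, C) per-point features
        B, N, C = feat.shape
        dist = torch.cdist(xyz, xyz)                          # (B, N, N) pairwise distances
        knn_idx = dist.topk(self.k, largest=False).indices    # (B, N, k) neighbor indices
        idx = knn_idx.unsqueeze(-1)
        nbr_feat = torch.gather(
            feat.unsqueeze(1).expand(B, N, N, C), 2, idx.expand(B, N, self.k, C))
        nbr_xyz = torch.gather(
            xyz.unsqueeze(1).expand(B, N, N, 3), 2, idx.expand(B, N, self.k, 3))
        rel_pos = self.pos_mlp(nbr_xyz - xyz.unsqueeze(2))    # relative 3D positional encoding
        q = self.q(feat).unsqueeze(2)                         # (B, N, 1, C)
        k_, v = self.kv(nbr_feat + rel_pos).chunk(2, dim=-1)  # (B, N, k, C) each
        attn = ((q * k_).sum(-1, keepdim=True) * self.scale).softmax(dim=2)
        return (attn * v).sum(dim=2) + feat                   # residual per-point update


points, features = torch.rand(2, 1024, 3), torch.rand(2, 1024, 64)
out = NeighborPointAttention(dim=64, k=8)(points, features)   # (2, 1024, 64)
```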

TextBraTS: Text-Guided Volumetric Brain Tumor Segmentation with Innovative Dataset Development and Fusion Module Exploration

Xiaoyu Shi, Rahul Kumar Jain, Yinhao Li, Ruibo Hou, Jingliang Cheng, Jie Bai, Guohua Zhao, Lanfen Lin, Rui Xu, Yen-wei Chen

arXiv preprint | Jun 20 2025
Deep learning has demonstrated remarkable success in medical image segmentation and computer-aided diagnosis. In particular, numerous advanced methods have achieved state-of-the-art performance in brain tumor segmentation from MRI scans. While recent studies in other medical imaging domains have revealed that integrating textual reports with visual data can enhance segmentation accuracy, the field of brain tumor analysis lacks a comprehensive dataset that combines radiological images with corresponding textual annotations. This limitation has hindered the exploration of multimodal approaches that leverage both imaging and textual data. To bridge this critical gap, we introduce the TextBraTS dataset, the first publicly available volume-level multimodal dataset that contains paired MRI volumes and rich textual annotations, derived from the widely adopted BraTS2020 benchmark. Building upon this novel dataset, we propose a novel baseline framework and sequential cross-attention method for text-guided volumetric medical image segmentation. Through extensive experiments with various text-image fusion strategies and templated text formulations, our approach demonstrates significant improvements in brain tumor segmentation accuracy, offering valuable insights into effective multimodal integration techniques. Our dataset, implementation code, and pre-trained models are publicly available at https://github.com/Jupitern52/TextBraTS.
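
To make the fusion idea concrete, here is a minimal sketch of how volumetric feature tokens might cross-attend to report-token embeddings; this is a generic cross-attention layer under our own assumptions about shapes and naming, not the TextBraTS fusion module itself.

```python
# Hedged sketch: image tokens (flattened 3D feature map) query text-token embeddings.
import torch
import torch.nn as nn


class TextImageCrossAttention(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, vol_feat: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # vol_feat: (B, C, D, H, W) encoder feature map; text_emb: (B, T, C) report embeddings
        B, C, D, H, W = vol_feat.shape
        tokens = vol_feat.flatten(2).transpose(1, 2)           # (B, D*H*W, C)
        fused, _ = self.attn(query=tokens, key=text_emb, value=text_emb)
        tokens = self.norm(tokens + fused)                     # residual + layer norm
        return tokens.transpose(1, 2).reshape(B, C, D, H, W)   # back to a 3D feature map


vol, txt = torch.rand(1, 256, 8, 8, 8), torch.rand(1, 32, 256)
out = TextImageCrossAttention()(vol, txt)                      # same shape as vol
```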

Segmentation of clinical imagery for improved epidural stimulation to address spinal cord injury

Matelsky, J. K., Sharma, P., Johnson, E. C., Wang, S., Boakye, M., Angeli, C., Forrest, G. F., Harkema, S. J., Tenore, F.

medRxiv preprint | Jun 20 2025
Spinal cord injury (SCI) can severely impair motor and autonomic function, with long-term consequences for quality of life. Epidural stimulation has emerged as a promising intervention, offering partial recovery by activating neural circuits below the injury. To make this therapy effective in practice, precise placement of stimulation electrodes is essential -- and that requires accurate segmentation of spinal cord structures in MRI data. We present a protocol for manual segmentation tailored to SCI anatomy and evaluate a deep learning approach using a U-Net architecture to automate this segmentation process. Our approach yields accurate, efficient segmentations that identify potential electrode placement sites with high fidelity. Preliminary results suggest that this framework can accelerate SCI MRI analysis and improve planning for epidural stimulation, helping bridge the gap between advanced neurotechnologies and real-world clinical application with faster surgeries and more accurate electrode placement.

Evaluating ChatGPT's performance across radiology subspecialties: A meta-analysis of board-style examination accuracy and variability.

Nguyen D, Kim GHJ, Bedayat A

PubMed | Jun 20 2025
Large language models (LLMs) like ChatGPT are increasingly used in medicine due to their ability to synthesize information and support clinical decision-making. While prior research has evaluated ChatGPT's performance on medical board exams, limited data exist on radiology-specific exams, especially with respect to prompt strategies and input modalities. This meta-analysis reviews ChatGPT's performance on radiology board-style questions, assessing accuracy across radiology subspecialties, prompt engineering methods, GPT model versions, and input modalities. Searches in PubMed and SCOPUS identified 163 articles, of which 16 met inclusion criteria after excluding irrelevant topics and non-board-exam evaluations. Data extracted included subspecialty topics, accuracy, question count, GPT model, input modality, prompting strategies, and access dates. Statistical analyses included two-proportion z-tests, a binomial generalized linear model (GLM), and meta-regression with random effects (Stata v18.0, R v4.3.1). Across 7024 questions, overall accuracy was 58.83% (95% CI, 55.53-62.13). Performance varied widely by subspecialty, highest in emergency radiology (73.00%) and lowest in musculoskeletal radiology (49.24%). GPT-4 and GPT-4o significantly outperformed GPT-3.5 (p < .001), but visual inputs yielded lower accuracy (46.52%) compared to textual inputs (67.10%, p < .001). Prompting strategies showed significant improvement (p < .01) with basic prompts (66.23%) compared to no prompts (59.70%). A modest but significant decline in performance over time was also observed (p < .001). ChatGPT demonstrates promising but inconsistent performance on radiology board-style questions. Limitations in visual reasoning, heterogeneity across studies, and prompt-engineering variability highlight areas requiring targeted optimization.
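
For readers unfamiliar with the statistics mentioned, the snippet below illustrates a two-proportion z-test of the kind used to compare accuracy between two conditions; the counts are hypothetical and not taken from the meta-analysis.

```python
# Hedged illustration of a two-proportion z-test (hypothetical counts).
from statsmodels.stats.proportion import proportions_ztest

correct = [670, 465]   # e.g., questions answered correctly with textual vs. visual input
totals = [1000, 1000]  # questions attempted in each condition
z_stat, p_value = proportions_ztest(count=correct, nobs=totals)
print(f"z = {z_stat:.2f}, p = {p_value:.4g}")
```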

DSA-NRP: No-Reflow Prediction from Angiographic Perfusion Dynamics in Stroke EVT

Shreeram Athreya, Carlos Olivares, Ameera Ismail, Kambiz Nael, William Speier, Corey Arnold

arXiv preprint | Jun 20 2025
Following successful large-vessel recanalization via endovascular thrombectomy (EVT) for acute ischemic stroke (AIS), some patients experience a complication known as no-reflow, defined by persistent microvascular hypoperfusion that undermines tissue recovery and worsens clinical outcomes. Although prompt identification is crucial, standard clinical practice relies on perfusion magnetic resonance imaging (MRI) within 24 hours post-procedure, delaying intervention. In this work, we introduce the first machine learning (ML) framework to predict no-reflow immediately after EVT by leveraging previously unexplored intra-procedural digital subtraction angiography (DSA) sequences and clinical variables. Our retrospective analysis included AIS patients treated at UCLA Medical Center (2011-2024) who achieved favorable mTICI scores (2b-3) and underwent pre- and post-procedure MRI. No-reflow was defined as persistent hypoperfusion (Tmax > 6 s) on post-procedural imaging. From DSA sequences (AP and lateral views), we extracted statistical and temporal perfusion features from the target downstream territory to train ML classifiers for predicting no-reflow. Our method significantly outperformed a clinical-features baseline (AUC: 0.7703 ± 0.12 vs. 0.5728 ± 0.12; accuracy: 0.8125 ± 0.10 vs. 0.6331 ± 0.09), demonstrating that real-time DSA perfusion dynamics encode critical insights into microvascular integrity. This approach establishes a foundation for immediate, accurate no-reflow prediction, enabling clinicians to proactively manage high-risk patients without reliance on delayed imaging.
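
The following sketch shows the kind of statistical and temporal perfusion features one could derive from a DSA time-intensity curve over a downstream-territory ROI; the feature set, names, and frame timing are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch: summary features of an ROI time-intensity curve from a DSA sequence.
import numpy as np


def perfusion_features(frames: np.ndarray, roi_mask: np.ndarray, frame_interval_s: float) -> dict:
    # frames: (T, H, W) DSA view; roi_mask: (H, W) boolean mask of the target territory
    curve = frames[:, roi_mask].mean(axis=1)       # mean ROI intensity per frame
    t = np.arange(len(curve)) * frame_interval_s
    peak = int(np.argmax(curve))
    return {
        "mean_intensity": float(curve.mean()),
        "std_intensity": float(curve.std()),
        "peak_intensity": float(curve[peak]),
        "time_to_peak_s": float(t[peak]),
        "auc": float(np.trapz(curve, t)),           # area under the time-intensity curve
        "washout_slope": float(np.polyfit(t[peak:], curve[peak:], 1)[0])
        if peak < len(curve) - 1 else 0.0,
    }


feats = perfusion_features(np.random.rand(30, 128, 128), np.ones((128, 128), bool), 0.5)
```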

Deep learning NTCP model for late dysphagia after radiotherapy for head and neck cancer patients based on 3D dose, CT and segmentations

de Vette, S. P., Neh, H., van der Hoek, L., MacRae, D. C., Chu, H., Gawryszuk, A., Steenbakkers, R. J., van Ooijen, P. M., Fuller, C. D., Hutcheson, K. A., Langendijk, J. A., Sijtsema, N. M., van Dijk, L. V.

medRxiv preprint | Jun 20 2025
Background & purpose: Late radiation-associated dysphagia after head and neck cancer (HNC) significantly impacts patients' health and quality of life. Conventional normal tissue complication probability (NTCP) models use discrete dose parameters to predict toxicity risk but fail to fully capture the complexity of this side effect. Deep learning (DL) offers potential improvements by incorporating 3D dose data for all anatomical structures involved in swallowing. This study aims to enhance dysphagia prediction with 3D DL NTCP models compared to conventional NTCP models. Materials & methods: A multi-institutional cohort of 1484 HNC patients was used to train and validate a 3D DL model (Residual Network) incorporating 3D dose distributions, organ-at-risk segmentations, and CT scans, with or without patient- or treatment-related data. Predictions of grade ≥2 dysphagia (CTCAEv4) at six months post-treatment were evaluated using the area under the curve (AUC) and calibration curves. Results were compared to a conventional NTCP model based on pre-treatment dysphagia, tumour location, and mean dose to swallowing organs. Attention maps highlighting regions of interest for individual patients were assessed. Results: DL models outperformed the conventional NTCP model in both the independent test set (AUC = 0.80-0.84 versus 0.76) and the external test set (AUC = 0.73-0.74 versus 0.63) in AUC and calibration. Attention maps showed a focus on the oral cavity and superior pharyngeal constrictor muscle. Conclusion: DL NTCP models performed better than the conventional NTCP model, suggesting the benefit of using 3D input over the conventional discrete dose parameters. Attention maps highlighted relevant regions linked to dysphagia, supporting the utility of DL for improved predictions.
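
As a rough sketch of the 3D-input idea, the snippet below stacks CT, the planned dose distribution, and organ-at-risk masks as channels of one volume and feeds them to a small 3D convolutional network with a single risk output; channel counts, volume size, and the network itself are placeholders, not the paper's Residual Network.

```python
# Hedged sketch: multi-channel 3D input (CT + dose + OAR masks) -> dysphagia risk logit.
import torch
import torch.nn as nn

ct = torch.rand(1, 1, 64, 96, 96)        # normalized CT
dose = torch.rand(1, 1, 64, 96, 96)      # 3D dose distribution, scaled to [0, 1]
oars = torch.rand(1, 3, 64, 96, 96)      # masks of swallowing-related structures
x = torch.cat([ct, dose, oars], dim=1)   # (B, 5, D, H, W)

model = nn.Sequential(
    nn.Conv3d(5, 16, kernel_size=3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
    nn.MaxPool3d(2),
    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(32, 1),                    # logit for grade >=2 dysphagia at 6 months
)
risk = torch.sigmoid(model(x))           # predicted complication probability
```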

An Open-Source Generalizable Deep Learning Framework for Automated Corneal Segmentation in Anterior Segment Optical Coherence Tomography Imaging

Kandakji, L., Liu, S., Balal, S., Moghul, I., Allan, B., Tuft, S., Gore, D., Pontikos, N.

medRxiv preprint | Jun 20 2025
Purpose: To develop a deep learning model - Cornea nnU-Net Extractor (CUNEX) - for full-thickness corneal segmentation of anterior segment optical coherence tomography (AS-OCT) images and evaluate its utility in artificial intelligence (AI) research. Methods: We trained and evaluated CUNEX using nnU-Net on 600 AS-OCT images (CSO MS-39) from 300 patients: 100 normal, 100 keratoconus (KC), and 100 Fuchs endothelial corneal dystrophy (FECD) eyes. To assess generalizability, we externally validated CUNEX on 1,168 AS-OCT images from an infectious keratitis dataset acquired with a different device (Casia SS-1000). We benchmarked CUNEX against two recent models, CorneaNet and ScLNet. We then applied CUNEX to our dataset of 194,599 scans from 37,499 patients as preprocessing for a classification model, evaluating whether segmentation improves AI predictions, including age, sex, and disease staging (KC and FECD). Results: CUNEX achieved Dice similarity coefficient (DSC) and intersection over union (IoU) scores ranging from 94-95% and 90-99%, respectively, across healthy, KC, and FECD eyes. This was similar to ScLNet (within 3%) but better than CorneaNet (8-35% lower). On external validation, CUNEX maintained high performance (DSC 83%; IoU 71%), while ScLNet (DSC 14%; IoU 8%) and CorneaNet (DSC 16%; IoU 9%) failed to generalize. Unexpectedly, segmentation minimally impacted classification accuracy except for sex prediction, where accuracy dropped from 81% to 68%, suggesting sex-related features may lie outside the cornea. Conclusion: CUNEX delivers the first open-source generalizable corneal segmentation model using the latest framework, supporting its use in clinical analysis and AI workflows across diseases and imaging platforms. It is available at https://github.com/lkandakji/CUNEX.
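
For context on the reported metrics, a generic implementation of the Dice similarity coefficient and intersection over union for binary masks looks like the following; this is standard bookkeeping, not CUNEX code.

```python
# Hedged sketch: DSC and IoU between a predicted and a reference binary mask.
import numpy as np


def dice_and_iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dsc = (2 * inter + eps) / (pred.sum() + gt.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return float(dsc), float(iou)


dsc, iou = dice_and_iou(np.random.rand(512, 512) > 0.5, np.random.rand(512, 512) > 0.5)
```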

Concordance between single-slice abdominal computed tomography-based and bioelectrical impedance-based analysis of body composition in a prospective study.

Fehrenbach U, Hosse C, Wienbrandt W, Walter-Rittel T, Kolck J, Auer TA, Blüthner E, Tacke F, Beetz NL, Geisel D

PubMed | Jun 19 2025
Body composition analysis (BCA) is a recognized indicator of patient frailty. Apart from the established bioelectrical impedance analysis (BIA), computed tomography (CT)-derived BCA is being increasingly explored. The aim of this prospective study was to directly compare BCA obtained from BIA and CT. A total of 210 consecutive patients scheduled for CT, including a high proportion of cancer patients, were prospectively enrolled. Immediately prior to the CT scan, all patients underwent BIA. CT-based BCA was performed using a single-slice AI tool for automated detection and segmentation at the level of the third lumbar vertebra (L3). The BIA-based parameters body fat mass (BFM_BIA) and skeletal muscle mass (SMM_BIA) and the CT-based parameters subcutaneous and visceral adipose tissue area (SATA_CT and VATA_CT) and total abdominal muscle area (TAMA_CT) were determined. Indices were calculated by normalizing the BIA and CT parameters to patient weight (body fat percentage (BFP_BIA) and body fat index (BFI_CT)) or height (skeletal muscle index (SMI_BIA) and lumbar skeletal muscle index (LSMI_CT)). Parameters representing fat (BFM_BIA and SATA_CT + VATA_CT) and parameters representing muscle tissue (SMM_BIA and TAMA_CT) showed strong correlations in female (fat: r = 0.95; muscle: r = 0.72; p < 0.001) and male (fat: r = 0.91; muscle: r = 0.71; p < 0.001) patients. Linear regression analysis was statistically significant (fat: R² = 0.73 (female) and 0.74 (male); muscle: R² = 0.56 (female) and 0.56 (male); p < 0.001), showing that BFI_CT and LSMI_CT allowed prediction of BFP_BIA and SMI_BIA for both sexes. CT-based BCA strongly correlates with BIA results and yields quantitative results for BFP and SMI comparable to the existing gold standard. Question: CT-based body composition analysis (BCA) is moving increasingly into clinical focus, but validation against established methods is lacking. Findings: Fully automated CT-based BCA correlates very strongly with guideline-accepted bioelectrical impedance analysis (BIA). Clinical relevance: BCA is moving further into clinical focus to improve assessment of patient frailty and individualize therapies accordingly. Comparability with established BIA strengthens the value of CT-based BCA and supports its translation into clinical routine.
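
The normalization and regression described above can be sketched as follows; all numerical values and variable names are illustrative, not study data.

```python
# Hedged sketch: CT areas normalized to weight/height, then regressed against BIA parameters.
import numpy as np
from scipy import stats

weight_kg, height_m = 78.0, 1.75
sata_cm2, vata_cm2, tama_cm2 = 180.0, 140.0, 150.0

bfi_ct = (sata_cm2 + vata_cm2) / weight_kg   # body fat index (CT), normalized to weight
lsmi_ct = tama_cm2 / height_m ** 2           # lumbar skeletal muscle index (CT), per m^2

# cohort-level regression of BIA body fat percentage on the CT body fat index
bfi_ct_cohort = np.array([3.1, 4.0, 4.8, 5.5, 6.2])       # hypothetical values
bfp_bia_cohort = np.array([22.0, 27.0, 31.0, 35.0, 39.0])
res = stats.linregress(bfi_ct_cohort, bfp_bia_cohort)
print(f"R^2 = {res.rvalue ** 2:.2f}; BFP_BIA ~ {res.slope:.2f} * BFI_CT + {res.intercept:.2f}")
```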

Optimized YOLOv8 for enhanced breast tumor segmentation in ultrasound imaging.

Mostafa AM, Alaerjan AS, Aldughayfiq B, Allahem H, Mahmoud AA, Said W, Shabana H, Ezz M

PubMed | Jun 19 2025
Breast cancer significantly affects people's health globally, making early and accurate diagnosis vital. While ultrasound imaging is safe and non-invasive, its manual interpretation is subjective. This study explores machine learning (ML) techniques to improve breast ultrasound image segmentation, comparing models trained on combined versus separate classes of benign and malignant tumors. The YOLOv8 object detection algorithm is applied to the image segmentation task, aiming to capitalize on its robust feature detection capabilities. We utilized a dataset of 780 ultrasound images categorized into benign and malignant classes to train several deep learning (DL) models: UNet, UNet with DenseNet-121, VGG16, VGG19, and an adapted YOLOv8. These models were evaluated in two experimental setups: training on a combined dataset and training on separate datasets for the benign and malignant classes. Performance metrics such as the Dice Coefficient, Intersection over Union (IoU), and mean Average Precision (mAP) were used to assess model effectiveness. The study demonstrated substantial improvements in model performance when training on separate classes, with the UNet model's F1-score increasing from 77.80% to 84.09% and its Dice Coefficient from 75.58% to 81.17%, and the adapted YOLOv8 model's F1-score improving from 93.44% to 95.29% and its Dice Coefficient from 82.10% to 84.40%. These results highlight the advantage of specialized model training and the potential of using advanced object detection algorithms for segmentation tasks. This research underscores the significant potential of specialized training strategies and innovative model adaptations in medical imaging segmentation, ultimately contributing to better patient outcomes.
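
For readers who want to reproduce the general setup, the sketch below shows how a pretrained YOLOv8 segmentation model can be fine-tuned with the Ultralytics API on a class-specific ultrasound dataset; the dataset YAML, weights, and hyperparameters are placeholders, and the paper's adapted YOLOv8 is not reproduced here.

```python
# Hedged sketch: fine-tuning a YOLOv8 segmentation model on one tumor class.
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")                      # pretrained segmentation weights
model.train(data="breast_ultrasound_benign.yaml",   # hypothetical class-specific dataset config
            epochs=100, imgsz=640)
metrics = model.val()                               # mAP and related validation metrics
results = model.predict("sample_ultrasound.png")    # predicted tumor masks on a new image
```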