Page 66 of 6276269 results

Su TY, Hu S, Wang X, Adler S, Wagstyl K, Ding Z, Choi JY, Sakaie K, Blümcke I, Murakami H, Alexopoulos AV, Jones SE, Najm I, Ma D, Wang ZI

pubmed · paper · Oct 9 2025
This study was undertaken to develop a framework for focal cortical dysplasia (FCD) detection using surface-based morphometric (SBM) analysis and machine learning (ML) applied to three-dimensional (3D) magnetic resonance fingerprinting (MRF). We included 114 subjects (44 patients with medically intractable focal epilepsy and FCD, 70 healthy controls [HCs]). All subjects underwent high-resolution 3-T MRF scans generating T1 and T2 maps. All patients had clinical T1-weighted (T1w) images; 35 also had 3D fluid-attenuated inversion recovery (FLAIR). A 3D region of interest (ROI) was manually created for each lesion. All maps/images and lesion ROIs were registered to T1w images. Surface-based features were extracted following the Multi-center Epilepsy Lesion Detection pipeline. Features were normalized using intrasubject, interhemispheric, and intersubject z-scoring. A two-stage ML approach was applied: a vertexwise neural network classifier for lesional versus normal vertices using T1w/MRF/FLAIR features, followed by a clusterwise Random Undersampling Boosting classifier to suppress false positives (FPs) based on cluster size, prediction probabilities, and feature statistics. Leave-one-out cross-validation was performed at both stages. Using T1w features, sensitivity was 70.4% with 11.6 FP clusters/patient and 4.1 in HCs. Adding MRF reduced FPs to 6.6 clusters/patient and 1.5 in HCs, with 68.2% sensitivity. Combining T1w, MRF, and FLAIR achieved 71.4% sensitivity, with 4.7 FPs/patient and 1.1 in HCs. Detection probabilities were significantly higher for true positive clusters than FPs (p < .001). Type II showed higher detection rates than non-type II. Magnetic resonance imaging (MRI)-positive patients showed higher detection rates and fewer FPs than MRI-negative patients. Seizure-free patients demonstrated higher detection rates than non-seizure-free patients. 
Subtyping accuracy was 80.8% for non-type II versus type II, and 68.4% for IIa versus IIb, although limited by small sample size. The transmantle sign was present in 61.5% of IIb and 40% of IIa cases. We developed an ML framework for FCD detection integrating SBM with clinical MRI and MRF. Advances include improved FP control and enhanced subtyping; selected model outputs may provide indicators of detection confidence and seizure outcome.
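The three normalization steps named above (intrasubject, interhemispheric, and intersubject z-scoring) can be sketched roughly as follows; the array shapes, the asymmetry formula, and the small stabilizing constants are illustrative assumptions, not the exact implementation of the Multi-center Epilepsy Lesion Detection pipeline:

```python
import numpy as np

def intrasubject_z(features):
    """z-score each surface feature across all vertices of one subject.

    features: array of shape (n_vertices, n_features)."""
    mu = features.mean(axis=0, keepdims=True)
    sd = features.std(axis=0, keepdims=True) + 1e-8
    return (features - mu) / sd

def interhemispheric_asymmetry(left, right):
    # vertexwise asymmetry index between registered hemispheres
    return (left - right) / ((left + right) / 2 + 1e-8)

def intersubject_z(patient, controls):
    """z-score one patient's vertexwise features against healthy controls.

    controls: array of shape (n_controls, n_features) at the same vertex."""
    mu = controls.mean(axis=0)
    sd = controls.std(axis=0) + 1e-8
    return (patient - mu) / sd
```

Normalized features of this kind would then feed the vertexwise neural network classifier in the first stage of the framework.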

Jacquemyn X, Van den Eynde J, Rao S, Kutty S

pubmed · paper · Oct 9 2025
Explore the clinical progression, diagnostic challenges, and evolving treatments of systemic right ventricular (SRV) failure, highlighting key gaps and advances. Recent evidence highlights the distinct pathophysiology of SRV failure and limited efficacy of conventional heart failure (HF) treatments. Emerging drugs like SGLT2 inhibitors are being studied for modulating ventricular remodeling and fibrosis. Echocardiography, enhanced by speckle-tracking and 3D imaging, is first-line, while cardiac MRI remains the gold standard for volumetric, functional, and tissue characterization. SRV-specific machine learning models improve prognostication and personalized care. Advances in transcatheter tricuspid valve interventions offer less invasive options for high-risk patients. In end-stage SRV failure, ventricular assist devices effectively unload the ventricle, enhance transplant candidacy, may be combined with tricuspid procedures, and are increasingly used as long-term destination therapy. SRV failure is a unique condition requiring personalized, multidisciplinary management, with advances in risk stratification and treatments shaping future care.

Wang S, Meng T, Peng L, Zeng Q

pubmed · paper · Oct 9 2025
To investigate the potential feasibility of ultra-low-dose (ULD) liver CT with artificial intelligence iterative reconstruction (AIIR). Sixty-five patients who underwent triphasic contrast-enhanced liver CT were prospectively enrolled. Low tube voltage (80/100 kV) and tube current (35 to 78 mAs) were set in both the portal venous phase (PVP) and delayed phase (DP). For each phase, a ULD acquisition (1.11 to 2.50 mGy) was acquired, followed immediately by a routine-dose (RD) acquisition (11.71 to 19.73 mGy). RD images were reconstructed with a hybrid iterative reconstruction algorithm (RD-HIR), while ULD images were reconstructed with both HIR (ULD-HIR) and AIIR (ULD-AIIR). Noise power spectrum (NPS) noise magnitude, average NPS spatial frequency, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were calculated for quantitative assessment. Qualitative assessment was performed by 2 radiologists who independently scored the images for diagnostic acceptance. In addition, the radiologists identified focal lesions and characterized noncystic lesions as benign or malignant with both RD and ULD liver CT. Among the enrolled patients (mean age: 58.6±12.9 y, 35 men), 234 lesions with a mean size of 1.27±1.56 cm were identified. In both phases, ULD-AIIR showed NPS noise magnitude comparable to RD-HIR (all P>0.017) and lower than ULD-HIR (all P<0.001). Average NPS spatial frequency, SNR, and CNR were highest with ULD-AIIR, followed by RD-HIR and ULD-HIR (all P<0.001). ULD-AIIR showed diagnostic acceptance scores comparable to RD-HIR, while ULD-HIR failed to meet diagnostic acceptance requirements. RD-HIR and ULD-AIIR achieved comparable detection rates (99.6% vs. 99.1%) and areas under the receiver operating characteristic (ROC) curve (AUC) in classifying benign (n=46) and malignant (n=58) noncystic lesions (0.98 vs. 0.97, P=0.3).
With AIIR, it is potentially feasible to achieve ULD liver CT (60% dose reduction) while preserving the image and diagnostic quality.
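The SNR and CNR figures of merit cited above are conventionally computed from ROI statistics; a minimal sketch under the usual definitions (the study's exact ROI placement and noise estimate may differ):

```python
import numpy as np

def snr(roi):
    # signal-to-noise ratio: mean ROI attenuation over its standard deviation
    return roi.mean() / roi.std()

def cnr(lesion, background):
    # contrast-to-noise ratio: absolute contrast over background noise
    return abs(lesion.mean() - background.mean()) / background.std()
```

Higher SNR/CNR at matched dose (or matched SNR/CNR at lower dose) is the practical argument for AIIR over HIR here.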

Cheng F, Lin G, Chen W, Chen Y, Zhou R, Yang J, Zhou B, Chen M, Ji J

pubmed · paper · Oct 9 2025
This study aimed to develop and validate a machine learning-based computed tomography (CT) radiomics method to preoperatively predict the presence of central lymph node metastasis (CLNM) in patients with papillary thyroid microcarcinoma (PTMC). A total of 921 patients with histopathologically proven PTMC from three medical centers were included in this retrospective study and divided into training, internal validation, external test 1, and external test 2 sets. Radiomics features of thyroid tumors were extracted from CT images and selected for dimensionality reduction. Five machine learning classifiers were applied, and the best classifier was selected to calculate radiomics scores (rad-scores). The rad-scores and clinical factors were then combined to construct a nomogram model. Across the four sets, 35.18% (324/921) of patients were CLNM-positive. The XGBoost classifier showed the best performance, with the highest average area under the curve (AUC) of 0.756 in the validation set. The nomogram model incorporating XGBoost-based rad-scores with age and sex outperformed the clinical model in the training [AUC: 0.847 (0.809-0.879) vs. 0.706 (0.660-0.748)], internal validation [AUC: 0.773 (0.682-0.847) vs. 0.671 (0.575-0.758)], external test 1 [AUC: 0.807 (0.757-0.852) vs. 0.639 (0.580-0.695)], and external test 2 [AUC: 0.746 (0.645-0.830) vs. 0.608 (0.502-0.707)] sets. Furthermore, the nomogram showed greater clinical benefit than the clinical and radiomics models. The nomogram model based on the XGBoost classifier exhibited favorable performance and provides a potential, non-invasive, easy-to-use approach for accurate preoperative evaluation of CLNM status in patients with PTMC.
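A rad-score of this kind is typically the predicted probability produced by a classifier trained on selected radiomics features. Below is a minimal sketch on synthetic data, using logistic regression as a stand-in for the study's XGBoost classifier; the feature count, selection method, and labels are all illustrative assumptions:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))   # 50 hypothetical radiomics features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# dimensionality reduction followed by a classifier; the rad-score is the
# predicted probability of CLNM-positive status
model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=10),
                      LogisticRegression(max_iter=1000)).fit(X, y)
rad_scores = model.predict_proba(X)[:, 1]
```

In the study, the rad-score then enters a nomogram alongside age and sex as one of several additive predictors.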

Riadh Bouslimi, Houda Trabelsi, Wahiba Ben Abdssalem Karaa, Hana Hedhli

arxiv · preprint · Oct 9 2025
Traumatic brain injuries present significant diagnostic challenges in emergency medicine, where the timely interpretation of medical images is crucial for patient outcomes. In this paper, we propose a novel AI-based approach for automatic radiology report generation tailored to cranial trauma cases. Our model integrates an AC-BiFPN with a Transformer architecture to capture and process complex medical imaging data such as CT and MRI scans. The AC-BiFPN extracts multi-scale features, enabling the detection of intricate anomalies like intracranial hemorrhages, while the Transformer generates coherent, contextually relevant diagnostic reports by modeling long-range dependencies. We evaluate the performance of our model on the RSNA Intracranial Hemorrhage Detection dataset, where it outperforms traditional CNN-based models in both diagnostic accuracy and report generation. This solution not only supports radiologists in high-pressure environments but also provides a powerful educational tool for trainee physicians, offering real-time feedback and enhancing their learning experience. Our findings demonstrate the potential of combining advanced feature extraction with transformer-based text generation to improve clinical decision-making in the diagnosis of traumatic brain injuries.
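At the core of a BiFPN is fast normalized fusion of multi-scale feature maps; a minimal sketch of that single fusion step follows (the AC-BiFPN variant and its attention components are not reproduced here, and the epsilon value is an assumption):

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    # BiFPN-style fusion: clamp learnable weights to be non-negative,
    # normalize them, and take a weighted sum of same-shape feature maps
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)
    w = w / (w.sum() + eps)
    return sum(wi * f for wi, f in zip(w, features))
```

Fused multi-scale features of this kind would then be flattened into the token sequence consumed by the Transformer decoder that generates the report.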

Eirik A. Østmo, Kristoffer K. Wickstrøm, Keyur Radiya, Michael C. Kampffmeyer, Karl Øyvind Mikalsen, Robert Jenssen

arxiv · preprint · Oct 9 2025
Contrast-enhanced Computed Tomography (CT) is important for diagnosis and treatment planning for various medical conditions. Deep learning (DL) based segmentation models may enable automated medical image analysis for detecting and delineating tumors in CT images, thereby reducing clinicians' workload. Achieving generalization capabilities in limited data domains, such as radiology, requires modern DL models to be trained with image augmentation. However, naively applying augmentation methods developed for natural images to CT scans often disregards the nature of the CT modality, where the intensities measure Hounsfield Units (HU) and have important physical meaning. This paper challenges the use of such intensity augmentations for CT imaging and shows that they may lead to artifacts and poor generalization. To mitigate this, we propose a CT-specific augmentation technique, called Random windowing, that exploits the available HU distribution of intensities in CT images. Random windowing encourages robustness to contrast-enhancement and significantly increases model performance on challenging images with poor contrast or timing. We perform ablations and analysis of our method on multiple datasets, and compare to, and outperform, state-of-the-art alternatives, while focusing on the challenge of liver tumor segmentation.
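A Random windowing augmentation can be sketched as sampling a HU window (center, width) and rescaling intensities into it. The parameter ranges below are illustrative soft-tissue-like values, not the paper's, which samples windows from the HU distribution of the training data:

```python
import numpy as np

def random_window(ct_hu, center_range=(30.0, 70.0),
                  width_range=(150.0, 400.0), rng=None):
    # sample a window center c and width w in Hounsfield Units
    rng = np.random.default_rng() if rng is None else rng
    c = rng.uniform(*center_range)
    w = rng.uniform(*width_range)
    lo, hi = c - w / 2.0, c + w / 2.0
    # clip to the sampled window and rescale intensities to [0, 1]
    return np.clip((ct_hu - lo) / (hi - lo), 0.0, 1.0)
```

Because each training pass sees a different window, the model cannot overfit to one fixed contrast-enhancement appearance.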

Ming Jie Ong, Sze Yinn Ung, Sim Kuan Goh, Jimmy Y. Zhong

arxiv · preprint · Oct 9 2025
The current study investigated the use of Explainable Artificial Intelligence (XAI) to improve the accuracy of brain tumor segmentation in MRI images, with the goal of assisting physicians in clinical decision-making. The study focused on applying UNet models for brain tumor segmentation and using the XAI techniques of Gradient-weighted Class Activation Mapping (Grad-CAM) and attention-based visualization to enhance the understanding of these models. Three deep learning models - UNet, Residual UNet (ResUNet), and Attention UNet (AttUNet) - were evaluated to identify the best-performing model. XAI was employed with the aims of clarifying model decisions and increasing physicians' trust in these models. We compared the performance of two UNet variants (ResUNet and AttUNet) with the conventional UNet in segmenting brain tumors from the BraTS2020 public dataset and analyzed model predictions with Grad-CAM and attention-based visualization. Using the latest computer hardware, we trained and validated each model using the Adam optimizer and assessed their performance with respect to: (i) training, validation, and inference times, (ii) segmentation similarity coefficients and loss functions, and (iii) classification performance. Notably, during the final testing phase, ResUNet outperformed the other models with respect to Dice and Jaccard similarity scores, as well as accuracy, recall, and F1 scores. Grad-CAM provided visuospatial insights into the tumor subregions each UNet model focused on while attention-based visualization provided valuable insights into the working mechanisms of AttUNet's attention modules. These results demonstrated ResUNet as the best-performing model and we conclude by recommending its use for automated brain tumor segmentation in future clinical assessments. Our source code and checkpoint are available at https://github.com/ethanong98/MultiModel-XAI-Brats2020
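The Grad-CAM step reduces to weighting a convolutional layer's activation maps by their pooled gradients; a minimal NumPy sketch of that computation (the framework hooks needed to capture activations and gradients during a forward/backward pass are omitted):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer's activations (C, H, W) and the
    gradients of the class score w.r.t. those activations (C, H, W)."""
    weights = gradients.mean(axis=(1, 2))        # global-average-pool the gradients
    cam = np.tensordot(weights, activations, 1)  # weighted sum over channels
    cam = np.maximum(cam, 0.0)                   # ReLU: keep positive evidence only
    return cam / (cam.max() + 1e-8)              # normalize to [0, 1]
```

The resulting heatmap is upsampled to the input resolution and overlaid on the MRI slice to show which tumor subregions drove the prediction.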

Pranav Sambhu, Om Guin, Madhav Sambhu, Jinho Cha

arxiv · preprint · Oct 9 2025
This study evaluates whether integrating curriculum learning with diffusion-based synthetic augmentation can enhance the detection of difficult pulmonary nodules in chest radiographs, particularly those with low size, brightness, and contrast, which often challenge conventional AI models due to data imbalance and limited annotation. A Faster R-CNN with a Feature Pyramid Network (FPN) backbone was trained on a hybrid dataset comprising expert-labeled NODE21 (1,213 patients; 52.4 percent male; mean age 63.2 +/- 11.5 years), VinDr-CXR, CheXpert, and 11,206 DDPM-generated synthetic images. Difficulty scores based on size, brightness, and contrast guided curriculum learning. Performance was compared to a non-curriculum baseline using mean average precision (mAP), Dice score, and area under the curve (AUC). Statistical tests included bootstrapped confidence intervals, DeLong tests, and paired t-tests. The curriculum model achieved a mean AUC of 0.95 versus 0.89 for the baseline (p < 0.001), with improvements in sensitivity (70 percent vs. 48 percent) and accuracy (82 percent vs. 70 percent). Stratified analysis demonstrated consistent gains across all difficulty bins (Easy to Very Hard). Grad-CAM visualizations confirmed more anatomically focused attention under curriculum learning. These results suggest that curriculum-guided synthetic augmentation enhances model robustness and generalization for pulmonary nodule detection.
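Curriculum learning of this kind orders training samples by a difficulty score before scheduling them easy-to-hard; a minimal sketch (the weighting of size, brightness, and contrast is an illustrative assumption, not the paper's exact scoring):

```python
import numpy as np

def difficulty_score(size, brightness, contrast, weights=(0.4, 0.3, 0.3)):
    # hypothetical weighting: smaller, dimmer, lower-contrast nodules score harder;
    # each input is assumed normalized to [0, 1]
    w_s, w_b, w_c = weights
    return w_s * (1.0 - size) + w_b * (1.0 - brightness) + w_c * (1.0 - contrast)

def curriculum_order(scores):
    # present easy (low-score) samples first, hard samples later
    return np.argsort(scores)
```

Synthetic DDPM-generated nodules can then be injected preferentially into the harder bins to rebalance the curriculum.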

Alexander Herold, Daniel Sobotka, Lucian Beer, Nina Bastati, Sarah Poetter-Lang, Michael Weber, Thomas Reiberger, Mattias Mandorfer, Georg Semmler, Benedikt Simbrunner, Barbara D. Wichtmann, Sami A. Ba-Ssalamah, Michael Trauner, Ahmed Ba-Ssalamah, Georg Langs

arxiv · preprint · Oct 9 2025
Background: We aimed to quantify hepatic vessel volumes across chronic liver disease stages and healthy controls using deep learning-based magnetic resonance imaging (MRI) analysis, and to assess correlations with biomarkers of liver (dys)function and fibrosis/portal hypertension. Methods: We retrospectively assessed healthy controls, non-advanced and advanced chronic liver disease (ACLD) patients using a 3D U-Net model for hepatic vessel segmentation on portal venous phase gadoxetic acid-enhanced 3-T MRI. Total (TVVR), hepatic (HVVR), and intrahepatic portal vein-to-volume ratios (PVVR) were compared between groups and correlated with the albumin-bilirubin (ALBI) and model for end-stage liver disease-sodium (MELD-Na) scores and with fibrosis/portal hypertension markers (Fibrosis-4 [FIB-4] score, liver stiffness measurement [LSM], hepatic venous pressure gradient [HVPG], platelet count [PLT], and spleen volume). Results: We included 197 subjects, aged 54.9 ± 13.8 years (mean ± standard deviation), 111 males (56.3%): 35 healthy controls, 44 non-ACLD, and 118 ACLD patients. TVVR and HVVR were highest in controls (3.9; 2.1), intermediate in non-ACLD (2.8; 1.7), and lowest in ACLD patients (2.3; 1.0) (p ≤ 0.001). PVVR was reduced in both non-ACLD and ACLD patients (both 1.2) compared to controls (1.7) (p ≤ 0.001), but showed no difference between CLD groups (p = 0.999). HVVR correlated inversely with FIB-4, ALBI, MELD-Na, LSM, and spleen volume (ρ ranging from -0.27 to -0.40), and directly with PLT (ρ = 0.36). TVVR and PVVR showed similar but weaker correlations. Conclusions: Deep learning-based hepatic vessel volumetry demonstrated differences between healthy livers and chronic liver disease stages and correlates with established markers of disease severity.
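As a hedged illustration of the vessel-to-volume ratios and rank correlations reported above (every number below is made up for the example, not the study's data; the tie-free Spearman formula is a simplification):

```python
import numpy as np

def spearman_rho(x, y):
    # Spearman rank correlation (no tie handling; enough for this sketch)
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

# made-up volumes (mL): segmented vessel volume relative to liver volume
liver_vol = np.array([1500.0, 1600.0, 1400.0])
vessel_vol = np.array([58.5, 44.8, 32.2])
tvvr = vessel_vol / liver_vol * 100        # total vessel-to-volume ratio

fib4 = np.array([1.1, 2.4, 4.8])           # made-up FIB-4 scores
rho = spearman_rho(tvvr, fib4)             # inverse monotonic association
```

A negative ρ here mirrors the study's finding that vessel-to-volume ratios fall as fibrosis markers rise.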

Zhai Q, Cui M, Fu Y, Huang X, Wang Z, Wu Q, Cong N, Liu C

pubmed · paper · Oct 9 2025
Nasal septum deviation (NSD) is one of the contributing factors to impaired nasal function and dentofacial developmental abnormalities. Although cone-beam computed tomography (CBCT) is clinically valuable for NSD diagnosis, manual interpretation remains labor-intensive and expertise-dependent. Our study included 330 CBCT scans diagnosed with either NSD or non-NSD to develop an automated 2-stage artificial intelligence (AI) framework integrating real-time detection and classification for NSD screening. In the first stage, the YOLOv11 (You Only Look Once) object detection algorithm was employed to detect the region of interest containing the nasal septum. In the second stage, 3 convolutional neural network architectures, ResNet, EfficientNet, and MobileNet, were evaluated for classifying CBCT images into NSD and normal categories. Among the YOLOv11 variants, YOLOv11n demonstrated superior performance with a precision of 0.996, a recall of 1.000, an mAP50 of 0.995, and an mAP50-95 of 0.873. For the classification task, Mobile_small emerged as the top-performing model, achieving an area under the curve of 0.817, an area under the precision-recall curve of 0.845, and an accuracy of 0.749. An AI-assisted diagnostic tool was developed based on YOLOv11n and MobileNet models and validated on 50 internal and 50 external CBCT scans. With AI assistance, orthodontists' diagnostic accuracy increased by 20.12% and 21.49%, respectively, whereas average diagnosis time decreased by 23.75 seconds, improving efficiency by 53.92%. The proposed system enables rapid NSD screening with diagnostic-level accuracy, demonstrating the viability of lightweight AI models for clinical CBCT analysis. AI-assisted diagnosis improves orthodontists' accuracy and time efficiency in identifying NSD.
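The two-stage screening pipeline described above (detect the septum region of interest, then classify the crop) can be outlined as follows; both stages are deliberately stand-in stubs, since the trained YOLOv11n and MobileNet models are not reproduced here:

```python
import numpy as np

def detect_septum_roi(cbct_slice):
    # Stage 1 stand-in for the YOLOv11n detector: returns a box (x0, y0, x1, y1).
    # Here we fake a central crop; the real model localizes the nasal septum.
    h, w = cbct_slice.shape
    return (w // 4, h // 4, 3 * w // 4, 3 * h // 4)

def classify_nsd(roi):
    # Stage 2 stand-in for the MobileNet classifier: returns P(septum deviated).
    # A real model would run a forward pass on the cropped ROI.
    return float(roi.mean() > 0.5)

def screen(cbct_slice):
    # full pipeline: crop the detected ROI, then classify it
    x0, y0, x1, y1 = detect_septum_roi(cbct_slice)
    roi = cbct_slice[y0:y1, x0:x1]
    return classify_nsd(roi)
```

Restricting the classifier to the detected ROI is what lets a lightweight model reach diagnostic-level accuracy on full CBCT scans.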