
Gwizdala J, Salihu A, Senouf O, Meier D, Rotzinger D, Qanadli S, Muller O, Frossard P, Abbe E, Thanou D, Fournier S, Auberson D

PubMed · Oct 9, 2025
Non-ST-elevation acute coronary syndrome (NSTE-ACS) remains a diagnostic challenge, as a proportion of patients do not present with obstructive coronary lesions. Coronary computed tomography angiography (CCTA) has emerged as a non-invasive tool for coronary assessment, and integrating artificial intelligence (AI) may enhance its diagnostic accuracy. This study evaluates a machine learning (ML) model using a learned fusion approach to identify culprit lesions in high-risk NSTE-ACS patients. It is a sub-analysis of a prospective, multicenter trial including patients with high-risk NSTE-ACS who underwent CCTA, followed by invasive coronary angiography (ICA) and fractional flow reserve (FFR) assessment of every intermediate stenosis. An ML framework was developed to analyze two orthogonal CCTA views of each coronary segment and classify them as culprit or non-culprit, with ICA ± FFR as the gold standard. The model was trained using 5-fold cross-validation and compared against five baseline methods, including conventional feature extraction and FFR-CT. Among 80 patients, 514 coronary segments were analyzed, with 63 (12.3%) labeled as culprit. The learned fusion model achieved a sensitivity of 0.55 ± 0.14, a specificity of 0.93 ± 0.05, and an F1-score of 0.53 ± 0.11. The AUC was 0.84 ± 0.06, matching the performance of FFR-CT (AUC of 0.82 ± 0.08). Our findings demonstrate that the learned fusion approach, which combines two orthogonal views, achieved performance comparable to that of FFR-CT, as shown by the AUC of both techniques. These results suggest that AI-driven CCTA analysis could enhance clinical decision-making in high-risk NSTE-ACS patients, warranting further validation of this method in larger cohorts.
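
As a rough illustration of the learned fusion idea in this abstract, here is a minimal PyTorch sketch that encodes two orthogonal views with a shared CNN and classifies the fused embedding. The backbone, feature dimensions, and head are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch of learned fusion over two orthogonal views (hypothetical
# backbone and head; the paper's actual architecture is not specified here).
import torch
import torch.nn as nn

class TwoViewFusionClassifier(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        # Shared encoder applied to each 2D view.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Learned fusion: concatenate per-view embeddings, then classify.
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit for culprit vs. non-culprit
        )

    def forward(self, view_a, view_b):
        z = torch.cat([self.encoder(view_a), self.encoder(view_b)], dim=1)
        return self.head(z)

model = TwoViewFusionClassifier()
logit = model(torch.randn(4, 1, 96, 96), torch.randn(4, 1, 96, 96))
prob = torch.sigmoid(logit)  # per-segment culprit probability
```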

Iima M, Saida T, Yamada Y, Kurokawa R, Ueda D, Honda M, Nishioka K, Ito R, Sofue K, Naganawa S

PubMed · Oct 9, 2025
This review provides a comprehensive overview of recent transformative advancements in diagnostic imaging that position Japan at the forefront of radiological innovation. We highlight pivotal innovations that enhance diagnostic capabilities and redefine clinical workflows. The article begins with upright multidetector computed tomography (MDCT), a groundbreaking technology offering novel insights into posture-dependent anatomical and physiological changes. We then explore significant progress in breast and gynecologic imaging, including advancements in artificial intelligence-based computer-aided detection (AI-CAD), synthesized mammograms, automated breast ultrasound (ABUS), and abbreviated MRI protocols. These innovations address unique regional challenges in early cancer detection. Significant innovations in abdominal radiology, spanning advanced CT (including photon-counting detector CT), accelerated MRI, and AI applications, are also discussed. The review further delves into glymphatic system research, where advanced MRI techniques, particularly DTI-ALPS, are yielding new insights into brain waste clearance and neurological disorders. Finally, we examine the future of Japanese radiology through the lens of AI, with a focus on large language models (LLMs). We discuss their growing role in diagnostic support, report generation, and information extraction, as well as important societal and ethical considerations. These collective advancements underscore Japan's dynamic contributions to radiological innovation, poised to significantly impact global healthcare practices by improving disease detection, optimizing workflows, and extending healthy life expectancy in an aging society.

Su TY, Hu S, Wang X, Adler S, Wagstyl K, Ding Z, Choi JY, Sakaie K, Blümcke I, Murakami H, Alexopoulos AV, Jones SE, Najm I, Ma D, Wang ZI

PubMed · Oct 9, 2025
This study was undertaken to develop a framework for focal cortical dysplasia (FCD) detection using surface-based morphometric (SBM) analysis and machine learning (ML) applied to three-dimensional (3D) magnetic resonance fingerprinting (MRF). We included 114 subjects (44 patients with medically intractable focal epilepsy and FCD, 70 healthy controls [HCs]). All subjects underwent high-resolution 3-T MRF scans generating T1 and T2 maps. All patients had clinical T1-weighted (T1w) images; 35 also had 3D fluid-attenuated inversion recovery (FLAIR). A 3D region of interest (ROI) was manually created for each lesion. All maps/images and lesion ROIs were registered to T1w images. Surface-based features were extracted following the Multi-center Epilepsy Lesion Detection pipeline. Features were normalized using intrasubject, interhemispheric, and intersubject z-scoring. A two-stage ML approach was applied: a vertexwise neural network classifier for lesional versus normal vertices using T1w/MRF/FLAIR features, followed by a clusterwise Random Undersampling Boosting classifier to suppress false positives (FPs) based on cluster size, prediction probabilities, and feature statistics. Leave-one-out cross-validation was performed at both stages. Using T1w features, sensitivity was 70.4% with 11.6 FP clusters/patient and 4.1 in HCs. Adding MRF reduced FPs to 6.6 clusters/patient and 1.5 in HCs, with 68.2% sensitivity. Combining T1w, MRF, and FLAIR achieved 71.4% sensitivity, with 4.7 FPs/patient and 1.1 in HCs. Detection probabilities were significantly higher for true positive clusters than FPs (p < .001). Type II showed higher detection rates than non-type II. Magnetic resonance imaging (MRI)-positive patients showed higher detection rates and fewer FPs than MRI-negative patients. Seizure-free patients demonstrated higher detection rates than non-seizure-free patients. Subtyping accuracy was 80.8% for non-type II versus type II, and 68.4% for IIa versus IIb, although limited by small sample size. The transmantle sign was present in 61.5% of IIb and 40% of IIa cases. We developed an ML framework for FCD detection integrating SBM with clinical MRI and MRF. Advances include improved FP control and enhanced subtyping; selected model outputs may provide indicators of detection confidence and seizure outcome.
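
For readers unfamiliar with the two-stage design described here, the following is a minimal sketch on synthetic data: a vertexwise neural classifier, followed by a clusterwise Random Undersampling Boosting classifier (via imblearn) for false-positive suppression. Feature sets and shapes are placeholders, not the paper's pipeline.

```python
# Two-stage sketch: vertexwise neural net, then clusterwise RUSBoost.
import numpy as np
from sklearn.neural_network import MLPClassifier
from imblearn.ensemble import RUSBoostClassifier

rng = np.random.default_rng(0)

# Stage 1: vertexwise lesional vs. normal classification on z-scored features.
X_vertex = rng.normal(size=(5000, 12))      # synthetic surface features
y_vertex = rng.integers(0, 2, size=5000)    # 1 = lesional vertex
stage1 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
stage1.fit(X_vertex, y_vertex)
vertex_prob = stage1.predict_proba(X_vertex)[:, 1]

# Stage 2: clusterwise false-positive suppression with random undersampling
# boosting, using cluster size, prediction probabilities, and feature stats.
X_cluster = rng.normal(size=(200, 5))       # e.g., size, mean/max prob, stats
y_cluster = rng.integers(0, 2, size=200)    # 1 = true lesional cluster
stage2 = RUSBoostClassifier(n_estimators=100, random_state=0)
stage2.fit(X_cluster, y_cluster)
cluster_prob = stage2.predict_proba(X_cluster)[:, 1]
```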

Jacquemyn X, Van den Eynde J, Rao S, Kutty S

PubMed · Oct 9, 2025
This review explores the clinical progression, diagnostic challenges, and evolving treatments of systemic right ventricular (SRV) failure, highlighting key gaps and advances. Recent evidence highlights the distinct pathophysiology of SRV failure and the limited efficacy of conventional heart failure (HF) treatments. Emerging drugs such as SGLT2 inhibitors are being studied for modulating ventricular remodeling and fibrosis. Echocardiography, enhanced by speckle-tracking and 3D imaging, is first-line, while cardiac MRI remains the gold standard for volumetric, functional, and tissue characterization. SRV-specific machine learning models improve prognostication and personalized care. Advances in transcatheter tricuspid valve interventions offer less invasive options for high-risk patients. In end-stage SRV failure, ventricular assist devices effectively unload the ventricle, enhance transplant candidacy, may be combined with tricuspid procedures, and are increasingly used as long-term destination therapy. SRV failure is a unique condition requiring personalized, multidisciplinary management, with advances in risk stratification and treatment shaping future care.

Wang S, Meng T, Peng L, Zeng Q

PubMed · Oct 9, 2025
To investigate the feasibility of ultra-low-dose (ULD) liver CT with artificial intelligence iterative reconstruction (AIIR). Sixty-five patients who underwent triphasic contrast-enhanced liver CT were prospectively enrolled. Low tube voltage (80/100 kV) and tube current (35 to 78 mAs) were set in both the portal venous phase (PVP) and the delayed phase (DP). For each phase, a ULD acquisition (1.11 to 2.50 mGy) was taken, followed immediately by a routine-dose (RD) acquisition (11.71 to 19.73 mGy). RD images were reconstructed with a hybrid iterative reconstruction algorithm (RD-HIR), while ULD images were reconstructed with both HIR (ULD-HIR) and AIIR (ULD-AIIR). The noise power spectrum (NPS) noise magnitude, average NPS spatial frequency, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were calculated for quantitative assessment. Qualitative assessment was performed by two radiologists who independently scored the images for diagnostic acceptability. In addition, the radiologists identified focal lesions and characterized noncystic lesions as benign or malignant with both RD and ULD liver CT. Among the enrolled patients (mean age: 58.6 ± 12.9 y; 35 men), 234 lesions with a mean size of 1.27 ± 1.56 cm were identified. In both phases, ULD-AIIR showed NPS noise magnitude comparable to that of RD-HIR (all P > 0.017) and lower NPS noise than ULD-HIR (all P < 0.001). Average NPS spatial frequency, SNR, and CNR were highest with ULD-AIIR, followed by RD-HIR and ULD-HIR (all P < 0.001). ULD-AIIR showed diagnostic acceptability scores comparable to those of RD-HIR, while ULD-HIR failed to meet diagnostic acceptability requirements. RD-HIR and ULD-AIIR achieved comparable detection rates (99.6% vs. 99.1%) and areas under the receiver operating characteristic (ROC) curve in classifying benign (n=46) and malignant (n=58) noncystic lesions (AUC 0.98 vs. 0.97, P=0.3). With AIIR, it is potentially feasible to achieve ULD liver CT (60% dose reduction) while preserving image and diagnostic quality.
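
As a reference for the quantitative metrics mentioned above (SNR and CNR), here is a small sketch using one common ROI-based convention; the authors' exact ROI placement and noise definitions may differ.

```python
# ROI-based SNR and CNR on synthetic HU values (one common convention).
import numpy as np

rng = np.random.default_rng(0)
liver_roi = rng.normal(110, 12, size=(64, 64))   # hypothetical liver HU values
muscle_roi = rng.normal(55, 12, size=(64, 64))   # reference tissue ROI

snr = liver_roi.mean() / liver_roi.std()
cnr = (liver_roi.mean() - muscle_roi.mean()) / muscle_roi.std()
print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")
```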

Cheng F, Lin G, Chen W, Chen Y, Zhou R, Yang J, Zhou B, Chen M, Ji J

PubMed · Oct 9, 2025
This study aimed to develop and validate a machine learning-based computed tomography (CT) radiomics method to preoperatively predict the presence of central lymph node metastasis (CLNM) in patients with papillary thyroid microcarcinoma (PTMC). A total of 921 patients with histopathologically proven PTMC from three medical centers were included in this retrospective study and divided into training, internal validation, external test 1, and external test 2 sets. Radiomics features of thyroid tumors were extracted from CT images and selected after dimensionality reduction. Five machine learning classifiers were applied, and the best classifier was selected to calculate radiomics scores (rad-scores). The rad-scores and clinical factors were then combined to construct a nomogram model. Across the four sets, 35.18% (324/921) of patients were CLNM+. The XGBoost classifier showed the best performance, with the highest average area under the curve (AUC) of 0.756 in the validation set. The nomogram model incorporating XGBoost-based rad-scores with age and sex outperformed the clinical model in the training [AUC: 0.847 (0.809-0.879) vs. 0.706 (0.660-0.748)], internal validation [AUC: 0.773 (0.682-0.847) vs. 0.671 (0.575-0.758)], external test 1 [AUC: 0.807 (0.757-0.852) vs. 0.639 (0.580-0.695)], and external test 2 [AUC: 0.746 (0.645-0.830) vs. 0.608 (0.502-0.707)] sets. Furthermore, the nomogram showed greater clinical benefit than the clinical and radiomics models. The nomogram model based on the XGBoost classifier exhibited favorable performance, providing a potential non-invasive, easy-to-use approach for accurate preoperative evaluation of CLNM status in patients with PTMC.
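
A minimal sketch of the rad-score-plus-clinical-factors pattern described above, on synthetic data: an XGBoost rad-score feeds a logistic model together with age and sex. The paper's feature selection and calibration steps are omitted, and all names and shapes are illustrative.

```python
# XGBoost rad-score combined with clinical factors in a logistic model.
import numpy as np
from xgboost import XGBClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_radiomics = rng.normal(size=(600, 30))   # selected radiomics features
y = rng.integers(0, 2, size=600)           # 1 = CLNM+
age = rng.normal(45, 12, size=(600, 1))
sex = rng.integers(0, 2, size=(600, 1))

xgb = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
xgb.fit(X_radiomics, y)
rad_score = xgb.predict_proba(X_radiomics)[:, 1].reshape(-1, 1)

# Nomogram backbone: logistic regression on rad-score + clinical factors.
nomogram = LogisticRegression().fit(np.hstack([rad_score, age, sex]), y)
risk = nomogram.predict_proba(np.hstack([rad_score, age, sex]))[:, 1]
```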

Riadh Bouslimi, Houda Trabelsi, Wahiba Ben Abdssalem Karaa, Hana Hedhli

arXiv preprint · Oct 9, 2025
Traumatic brain injuries present significant diagnostic challenges in emergency medicine, where the timely interpretation of medical images is crucial for patient outcomes. In this paper, we propose a novel AI-based approach for automatic radiology report generation tailored to cranial trauma cases. Our model integrates an AC-BiFPN with a Transformer architecture to capture and process complex medical imaging data such as CT and MRI scans. The AC-BiFPN extracts multi-scale features, enabling the detection of intricate anomalies like intracranial hemorrhages, while the Transformer generates coherent, contextually relevant diagnostic reports by modeling long-range dependencies. We evaluate the performance of our model on the RSNA Intracranial Hemorrhage Detection dataset, where it outperforms traditional CNN-based models in both diagnostic accuracy and report generation. This solution not only supports radiologists in high-pressure environments but also provides a powerful educational tool for trainee physicians, offering real-time feedback and enhancing their learning experience. Our findings demonstrate the potential of combining advanced feature extraction with transformer-based text generation to improve clinical decision-making in the diagnosis of traumatic brain injuries.
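
To make the image-to-report pattern concrete, here is a minimal PyTorch sketch in which backbone features (a stand-in for the AC-BiFPN output) condition a Transformer decoder that predicts report tokens. Dimensions, the feature extractor, and vocabulary size are assumptions, not the authors' design.

```python
# Image features conditioning a Transformer decoder for report generation.
import torch
import torch.nn as nn

d_model, vocab = 256, 8000
feat = torch.randn(2, 49, 1024)            # assumed backbone feature tokens
memory = nn.Linear(1024, d_model)(feat)    # stand-in for AC-BiFPN output

decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True),
    num_layers=4,
)
tokens = torch.randint(0, vocab, (2, 20))  # report tokens generated so far
emb = nn.Embedding(vocab, d_model)(tokens)
causal = nn.Transformer.generate_square_subsequent_mask(20)
hidden = decoder(emb, memory, tgt_mask=causal)
logits = nn.Linear(d_model, vocab)(hidden)  # next-token distribution
```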

Eirik A. Østmo, Kristoffer K. Wickstrøm, Keyur Radiya, Michael C. Kampffmeyer, Karl Øyvind Mikalsen, Robert Jenssen

arXiv preprint · Oct 9, 2025
Contrast-enhanced Computed Tomography (CT) is important for diagnosis and treatment planning for various medical conditions. Deep learning (DL) based segmentation models may enable automated medical image analysis for detecting and delineating tumors in CT images, thereby reducing clinicians' workload. Achieving generalization capabilities in limited data domains, such as radiology, requires modern DL models to be trained with image augmentation. However, naively applying augmentation methods developed for natural images to CT scans often disregards the nature of the CT modality, where the intensities measure Hounsfield Units (HU) and have important physical meaning. This paper challenges the use of such intensity augmentations for CT imaging and shows that they may lead to artifacts and poor generalization. To mitigate this, we propose a CT-specific augmentation technique, called Random windowing, that exploits the available HU distribution of intensities in CT images. Random windowing encourages robustness to contrast-enhancement and significantly increases model performance on challenging images with poor contrast or timing. We perform ablations and analysis of our method on multiple datasets, and compare to, and outperform, state-of-the-art alternatives, while focusing on the challenge of liver tumor segmentation.
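
A minimal sketch of random windowing in the spirit described above: sample a window center and width in HU, clip, and rescale. The sampling ranges here are illustrative, not the authors' settings.

```python
# CT windowing with randomly sampled center/width (illustrative ranges).
import numpy as np

def random_window(hu: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    center = rng.uniform(30, 120)    # e.g., around soft-tissue/liver windows
    width = rng.uniform(150, 500)
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)   # rescale to [0, 1]

rng = np.random.default_rng(0)
ct_slice = rng.normal(60, 80, size=(512, 512))       # hypothetical HU image
augmented = random_window(ct_slice, rng)
```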

Ming Jie Ong, Sze Yinn Ung, Sim Kuan Goh, Jimmy Y. Zhong

arXiv preprint · Oct 9, 2025
The current study investigated the use of Explainable Artificial Intelligence (XAI) to improve the accuracy of brain tumor segmentation in MRI images, with the goal of assisting physicians in clinical decision-making. The study focused on applying UNet models for brain tumor segmentation and using the XAI techniques of Gradient-weighted Class Activation Mapping (Grad-CAM) and attention-based visualization to enhance the understanding of these models. Three deep learning models - UNet, Residual UNet (ResUNet), and Attention UNet (AttUNet) - were evaluated to identify the best-performing model. XAI was employed with the aims of clarifying model decisions and increasing physicians' trust in these models. We compared the performance of two UNet variants (ResUNet and AttUNet) with the conventional UNet in segmenting brain tumors from the BraTS2020 public dataset and analyzed model predictions with Grad-CAM and attention-based visualization. Using the latest computer hardware, we trained and validated each model with the Adam optimizer and assessed their performance with respect to: (i) training, validation, and inference times, (ii) segmentation similarity coefficients and loss functions, and (iii) classification performance. Notably, during the final testing phase, ResUNet outperformed the other models with respect to Dice and Jaccard similarity scores, as well as accuracy, recall, and F1 scores. Grad-CAM provided visuospatial insights into the tumor subregions each UNet model focused on, while attention-based visualization provided valuable insights into the working mechanisms of AttUNet's attention modules. These results demonstrated ResUNet as the best-performing model, and we conclude by recommending its use for automated brain tumor segmentation in future clinical assessments. Our source code and checkpoint are available at https://github.com/ethanong98/MultiModel-XAI-Brats2020
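
For context, here is a minimal Grad-CAM sketch of the kind applied in this study, using a toy classifier rather than the paper's UNet variants: hook a target convolutional layer, backpropagate a class score, and weight the activations by their pooled gradients.

```python
# Grad-CAM on a toy network (stand-in; not the paper's models).
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)
acts, grads = {}, {}
target = net[2]  # target conv layer to explain
target.register_forward_hook(lambda m, i, o: acts.update(a=o))
target.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 1, 64, 64)
score = net(x)[0, 1]          # class score to explain
score.backward()

weights = grads["g"].mean(dim=(2, 3), keepdim=True)   # pooled gradients
cam = F.relu((weights * acts["a"]).sum(dim=1))        # class activation map
cam = F.interpolate(cam[None], size=(64, 64), mode="bilinear")[0]
```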

Pranav Sambhu, Om Guin, Madhav Sambhu, Jinho Cha

arXiv preprint · Oct 9, 2025
This study evaluates whether integrating curriculum learning with diffusion-based synthetic augmentation can enhance the detection of difficult pulmonary nodules in chest radiographs, particularly those that are small, dim, and low in contrast, which often challenge conventional AI models due to data imbalance and limited annotation. A Faster R-CNN with a Feature Pyramid Network (FPN) backbone was trained on a hybrid dataset comprising expert-labeled NODE21 (1,213 patients; 52.4% male; mean age 63.2 ± 11.5 years), VinDr-CXR, CheXpert, and 11,206 DDPM-generated synthetic images. Difficulty scores based on size, brightness, and contrast guided the curriculum. Performance was compared to a non-curriculum baseline using mean average precision (mAP), Dice score, and area under the curve (AUC). Statistical tests included bootstrapped confidence intervals, DeLong tests, and paired t-tests. The curriculum model achieved a mean AUC of 0.95 versus 0.89 for the baseline (p < 0.001), with improvements in sensitivity (70% vs. 48%) and accuracy (82% vs. 70%). Stratified analysis demonstrated consistent gains across all difficulty bins (Easy to Very Hard). Grad-CAM visualizations confirmed more anatomically focused attention under curriculum learning. These results suggest that curriculum-guided synthetic augmentation enhances model robustness and generalization for pulmonary nodule detection.
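
A minimal sketch of difficulty-ordered curriculum scheduling as described above: score each sample by size, brightness, and contrast, then widen the training pool from easy to hard with a linear pacing function. Scores, weights, and pacing are illustrative, not the authors' settings.

```python
# Difficulty-scored curriculum: train on easy samples first, widen over epochs.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
size, brightness, contrast = rng.random(n), rng.random(n), rng.random(n)
# Smaller, dimmer, lower-contrast nodules are treated as harder.
difficulty = (1 - size) + (1 - brightness) + (1 - contrast)
order = np.argsort(difficulty)            # indices sorted easy-to-hard

def curriculum_pool(epoch: int, total_epochs: int) -> np.ndarray:
    frac = min(1.0, (epoch + 1) / total_epochs)   # linear pacing function
    return order[: max(1, int(frac * n))]

for epoch in range(3):
    pool = curriculum_pool(epoch, total_epochs=10)
    batch = rng.choice(pool, size=32)     # sample a training batch from pool
```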