Page 75 of 6346332 results

Rahi, A.

medRxiv preprint, Oct 9 2025
Accurate cardiac MRI segmentation is essential for quantitative analysis of cardiac structure and function in clinical practice. In this study, we propose an ensemble framework combining several improved UNet-based architectures to achieve robust and clinically reliable segmentation performance. The ensemble integrates multiple models, including variants of the standard UNet, Residual UNet, and Attention UNet, optimized through extensive hyperparameter tuning and data augmentation on the subject-based CAMUS dataset. Experimental results demonstrate that our approach achieves a Dice similarity coefficient of 0.91, surpassing several state-of-the-art methods reported in the recent literature. Moreover, the proposed ensemble exhibits exceptional stability across subjects and maintains high generalization performance, indicating its strong potential for real-world clinical deployment. This work highlights the effectiveness of ensemble deep learning techniques for cardiac image segmentation and represents a promising step towards clinical-grade automated analysis in cardiac imaging.
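The abstract does not describe how the UNet variants are combined; one common approach is pixel-wise averaging of each model's probability map followed by thresholding. A minimal sketch, with the function name and toy values assumed for illustration:

```python
def ensemble_masks(prob_maps, threshold=0.5):
    """Average per-model probability maps pixel-wise and threshold.

    prob_maps: list of 2D lists (one per model), values in [0, 1].
    Returns a binary mask (list of lists of 0/1).
    """
    n_models = len(prob_maps)
    rows, cols = len(prob_maps[0]), len(prob_maps[0][0])
    mask = []
    for r in range(rows):
        row = []
        for c in range(cols):
            mean_p = sum(m[r][c] for m in prob_maps) / n_models
            row.append(1 if mean_p >= threshold else 0)
        mask.append(row)
    return mask

# Three hypothetical model outputs for a 2x2 image
unet = [[0.9, 0.2], [0.8, 0.4]]
res_unet = [[0.7, 0.1], [0.9, 0.6]]
att_unet = [[0.8, 0.3], [0.7, 0.7]]
fused = ensemble_masks([unet, res_unet, att_unet])
```

Majority voting over binary masks is an equally common alternative; averaging soft probabilities preserves each model's confidence in the fused result.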

Lou H, Cui S, Dong Y, Liu S, Li S, Song H, Zhao X

PubMed paper, Oct 9 2025
This study aimed to explore the value of a machine learning model based on spectral computed tomography (CT) for predicting programmed death ligand-1 (PD-L1) expression in resectable non-small cell lung cancer (NSCLC). In this retrospective study, 131 patients with NSCLC who underwent preoperative spectral CT scanning were enrolled and divided into a training cohort (n = 92) and a test cohort (n = 39). Clinical-imaging features and quantitative parameters of spectral CT were analyzed. Variable selection was performed using univariate and multivariate logistic regression, as well as LASSO regression. We used eight machine learning algorithms to construct a PD-L1 expression predictive model. We utilized sensitivity, specificity, accuracy, calibration curves, the area under the curve (AUC), F1 score, and decision curve analysis (DCA) to evaluate the predictive value of the model. After variable selection, cavitation, ground-glass opacity, and CT40keV and CT70keV in the venous phase were selected to develop eight machine learning models. In the test cohort, the extreme gradient boosting (XGBoost) model achieved the best diagnostic performance (AUC = 0.887, sensitivity = 0.696, specificity = 0.937, accuracy = 0.795, and F1 score = 0.800). The DCA indicated favorable clinical utility, and the calibration curve demonstrated the model's high level of prediction accuracy. Our study indicated that a machine learning model based on spectral CT can effectively evaluate PD-L1 expression in resectable NSCLC. The XGBoost model, integrating spectral CT quantitative parameters and imaging features, demonstrated considerable potential in predicting PD-L1 expression.
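The reported test-cohort metrics are all derived from a 2x2 confusion matrix. As a check on their definitions, the sketch below assumes counts of TP = 16, FP = 1, TN = 15, FN = 7 (an illustrative reconstruction consistent with a 39-case cohort, not values reported in the paper):

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, accuracy, and F1 from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # recall on the positive class
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, accuracy, f1

# Hypothetical counts for a 39-case test cohort
sens, spec, acc, f1 = classification_metrics(tp=16, fp=1, tn=15, fn=7)
```

With these assumed counts, sensitivity, accuracy, and F1 round to the abstract's reported values; small discrepancies in the last digit of specificity come down to rounding convention.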

Pathmakumara, H. C., Perera, G.

medRxiv preprint, Oct 9 2025
Glaucoma, one of the leading causes of permanent blindness worldwide, frequently advances without symptoms until it reaches an advanced stage. Recent developments in artificial intelligence (AI) have demonstrated promise in automating glaucoma screening from retinal fundus imaging, which is essential for preventing vision loss. This study presents a deep learning-based methodology for glaucoma classification using the publicly accessible ACRIMA dataset. To address the inherent class imbalance in the dataset, the methodology combines data augmentation and class balancing with transfer learning on a DenseNet121 backbone. The model outperformed a number of current techniques, with a validation accuracy of 90.16% and a ROC AUC score of 0.976. Grad-CAM and Grad-CAM++ were applied to visualize decision-critical regions in fundus images and to ensure clinical interpretability. In accordance with clinical diagnostic procedures, these explainability methods confirmed that the model consistently concentrated on the optic disc and neuroretinal rim. Precision, recall, F1-score, confusion matrix, and ROC analysis were used in a thorough evaluation. The proposed system shows great promise for practical deployment in resource-limited environments and portable screening units. Future improvements, such as multi-modal imaging integration, dataset expansion, and the use of modern explainability frameworks, will further enhance generalizability and clinical reliability.
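The abstract mentions class balancing to counter ACRIMA's imbalance; one standard recipe (assumed here, not specified in the paper) is inverse-frequency class weighting of the loss:

```python
def inverse_frequency_weights(labels):
    """Weight each class by n_samples / (n_classes * class_count),
    so the minority class contributes more to the training loss."""
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

# Hypothetical imbalanced labels: 0 = normal, 1 = glaucoma
labels = [0] * 300 + [1] * 100
weights = inverse_frequency_weights(labels)
```

The resulting dictionary can be passed to most training loops (e.g. as per-class loss weights), so a 3:1 imbalance gives the minority class three times the weight of the majority class.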

Wang B, Zhao T, Ma R, Huo X, Xiong X, Wu M, Wang Y, Liu L, Zhuang Z, Wang B, Shou J

PubMed paper, Oct 8 2025
Neuroimaging is crucial in the diagnosis of Alzheimer disease (AD). In recent years, artificial intelligence (AI)-based neuroimaging technology has rapidly developed, providing new methods for accurate diagnosis of AD, but its performance differences still need to be systematically evaluated. This study aims to conduct a systematic review and meta-analysis comparing the diagnostic performance of AI-assisted fluorine-18 fluorodeoxyglucose positron emission tomography (18F-FDG PET) and structural magnetic resonance imaging (sMRI) for AD. Databases including Web of Science, PubMed, and Embase were searched from inception to January 2025 to identify original studies that developed or validated AI models for AD diagnosis using 18F-FDG PET or sMRI. Methodological quality was assessed using the TRIPOD-AI (Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis-Artificial Intelligence) checklist. A bivariate mixed-effects model was employed to calculate pooled sensitivity, specificity, and summary receiver operating characteristic curve area (SROC-AUC). A total of 38 studies were included, with 28 moderate-to-high-quality studies analyzed. Pooled SROC-AUC values were 0.94 (95% CI 0.92-0.96) for sMRI and 0.96 (95% CI 0.94-0.98) for 18F-FDG PET, demonstrating statistically significant intermodal differences (P=.02). Subgroup analyses revealed that for machine learning, pooled SROC-AUCs were 0.89 (95% CI 0.86-0.92) for sMRI and 0.95 (95% CI 0.92-0.96) for 18F-FDG PET, while for deep learning, these values were 0.96 (95% CI 0.94-0.97) and 0.97 (95% CI 0.96-0.99), respectively. Meta-regression identified heterogeneity arising from study quality stratification, algorithm types, and validation strategies. Both AI-assisted 18F-FDG PET and sMRI exhibit high diagnostic accuracy in AD, with 18F-FDG PET demonstrating superior overall diagnostic performance compared to sMRI.
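The study pooled accuracy with a bivariate mixed-effects model; as a much simpler illustration of the underlying idea, a fixed-effect inverse-variance pooling of sensitivities on the logit scale can be sketched as follows (per-study counts invented for illustration; this is not the bivariate model used in the paper):

```python
import math

def pooled_proportion(events, totals):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale.
    events: per-study true positives; totals: per-study diseased cases."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        p = e / n
        logit = math.log(p / (1 - p))
        var = 1 / e + 1 / (n - e)      # approximate variance of the logit
        logits.append(logit)
        weights.append(1 / var)
    pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1 / (1 + math.exp(-pooled_logit))   # back-transform to a proportion

# Three hypothetical studies
sens = pooled_proportion(events=[45, 80, 30], totals=[50, 90, 40])
```

A bivariate mixed-effects model additionally pools sensitivity and specificity jointly, modeling their between-study correlation and heterogeneity, which this fixed-effect sketch deliberately omits.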

Ko DH, Son MH, Kim DI, Um JY

PubMed paper, Oct 8 2025
This paper presents an A-mode ultrasound scanner application-specific integrated circuit (ASIC) for arterial distension monitoring. The ASIC operates with a single-element ultrasound probe, identifying a target artery through echo pattern recognition and reconstructing an arterial diameter waveform. A 1-D convolutional neural network (CNN) is employed to ensure accurate probe positioning by recognizing characteristic arterial wall echo patterns. Additionally, gradient-weighted class activation mapping (Grad-CAM) is utilized to adaptively localize arterial wall regions, facilitating the measurement of arterial diameter in each A-mode frame. The ASIC includes a high-voltage pulser, a transmit/receive (T/R) switch, an analog front-end, and a synthesized digital circuit for post-processing. The ASIC has been fabricated in a 180-nm BCD process, occupying an active area of 2.8 mm<sup>2</sup> with a power consumption of 1.65 mW. The fabricated ASIC was evaluated for CNN inference performance and accuracy of arterial distension estimation, achieving a CNN inference accuracy of 95% and a Pearson correlation coefficient (r) of 0.895. Compared to prior ultrasound scanners, the proposed ASIC achieves high inference accuracy in echo pattern recognition and an efficient mixed-signal implementation, demonstrating the feasibility of a small-footprint ultrasound module for physiological instrumentation.
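The core operation of the on-chip 1-D CNN is a 1-D convolution (implemented as cross-correlation in most frameworks) followed by a nonlinearity. A pure-Python sketch with an illustrative edge-detecting kernel, not the ASIC's actual filter weights:

```python
def conv1d_relu(signal, kernel, bias=0.0):
    """Valid-mode 1-D cross-correlation followed by ReLU,
    the basic building block of a 1-D CNN layer."""
    k = len(kernel)
    out = []
    for i in range(len(signal) - k + 1):
        acc = bias + sum(signal[i + j] * kernel[j] for j in range(k))
        out.append(max(0.0, acc))     # ReLU clamps negatives to zero
    return out

# A difference kernel over a toy echo trace: rising edges survive ReLU,
# falling edges are clamped to zero
trace = [0.0, 0.0, 1.0, 1.0, 0.0]
feat = conv1d_relu(trace, kernel=[-1.0, 1.0])
```

In a real network many such filters run in parallel per layer, with learned kernels and biases; the hardware version additionally quantizes weights and activations to fixed point.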

Park J, Kim NY, Bae HJ, Jeong J, Jang M, Bae SJ, Koh JM, Lee SH, Yoon JH, Lee CH, Kim N

PubMed paper, Oct 8 2025
Low bone mass (LBM), which can lead to osteoporosis, is often undetected and increases the risk of bone fractures. This study presents OsPenScreen, a deep learning model that can identify low bone mass early using standard chest X-rays (CXRs). By detecting low bone mass sooner, this tool helps prevent progression to osteoporosis, potentially reducing health complications and treatment costs. OsPenScreen was validated across four external datasets and consistently performed well, showing its potential as a reliable, cost-effective solution for opportunistic early screening with CXRs. Low bone mass, an often-undiagnosed precursor to osteoporosis, significantly increases fracture risk and poses a substantial public health challenge. This study aimed to develop and validate a deep learning model, OsPenScreen, for the opportunistic detection of low bone mass using routine chest X-rays (CXRs). OsPenScreen, a convolutional neural network-based model, was trained on 77,812 paired CXR and dual-energy X-ray absorptiometry (DXA) datasets using knowledge distillation techniques. Validation was performed across four independent datasets (5,935 images) from diverse institutions. The model's performance was assessed using area under the curve (AUC), accuracy, sensitivity, and specificity. Grad-CAM visualizations were employed to analyze model decision-making. Osteoporosis cases were pre-excluded by a separate model; OsPenScreen was applied only to non-osteoporotic cases. Our model achieved an AUC of 0.95 (95% CI: 0.94-0.97) on the external test datasets, with consistent performance across sex and age subgroups. The model demonstrated superior accuracy in detecting cases with significantly reduced bone mass and showed focused attention on weight-bearing bones in normal cases versus non-weight-bearing bones in low bone mass cases.
OsPenScreen represents a scalable and effective tool for opportunistic low bone mass screening, utilizing routine CXRs without additional healthcare burdens. Its robust performance across diverse datasets highlights its potential to enhance early detection, preventing progression to osteoporosis and reducing associated healthcare costs.
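OsPenScreen was trained with knowledge distillation; a typical ingredient is the teacher's temperature-softened softmax target, sketched generically below (the paper's exact distillation loss is not specified here):

```python
import math

def softmax_with_temperature(logits, t=1.0):
    """Softmax over logits divided by temperature t; t > 1 softens
    the distribution, the usual teacher target in knowledge distillation."""
    scaled = [z / t for z in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical teacher logits for a 3-class toy problem
teacher_logits = [4.0, 1.0, 0.5]
hard = softmax_with_temperature(teacher_logits, t=1.0)   # near one-hot
soft = softmax_with_temperature(teacher_logits, t=4.0)   # spread-out target
```

The student is then trained to match the softened distribution (e.g. via KL divergence at the same temperature) alongside the usual hard-label loss, transferring the teacher's inter-class similarity structure.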

Aisu Y, Okada T, Itatani Y, Masuo A, Tani R, Fujimoto K, Kido A, Sawada A, Sakai Y, Obama K

PubMed paper, Oct 8 2025
Pelvic anatomy is a complex network of organs that varies between individuals. Understanding the anatomy of individual patients is crucial for precise rectal cancer surgery. Therefore, technology that allows visualization of the anatomy before surgery is needed. This study aims to develop an auto-segmentation model of pelvic structures using AI technology and to evaluate the model's accuracy for preoperative anatomical understanding. Data were collected from 63 male patients who underwent 3D MRI during a preoperative examination for colorectal and urogenital diseases between November 2015 and July 2019 and from 11 healthy male volunteers. Eleven organs and tissues were segmented. The model was developed using a threefold cross-validation process with a total of 59 cases as development data. Accuracy was evaluated on separately prepared test data using the Dice similarity coefficient (DSC), true positive rate (TPR), and positive predictive value (PPV), comparing AI-segmented data with manually segmented data. The highest values of DSC, TPR, and PPV were 0.927, 0.909, and 0.948, respectively, for the internal anal sphincter (including the rectum). On the other hand, the lowest values were 0.384, 0.772, and 0.263, respectively, for the superficial transverse perineal muscle. While there were differences among organs, the overall quality of automatic segmentation was maintained in our model, suggesting that the morphological characteristics of the organs may influence accuracy. We developed an auto-segmentation model that can independently delineate soft-tissue structures in the male pelvis using 3D T2-weighted MRI, providing valuable assistance to doctors in understanding pelvic anatomy.
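The three reported metrics follow directly from voxel-level overlap counts between the AI and manual masks. A sketch over flattened binary masks with toy values:

```python
def overlap_metrics(pred, ref):
    """DSC, TPR, and PPV from two flat binary masks.
    DSC = 2TP/(2TP+FP+FN), TPR = TP/(TP+FN), PPV = TP/(TP+FP)."""
    tp = sum(1 for p, r in zip(pred, ref) if p == 1 and r == 1)
    fp = sum(1 for p, r in zip(pred, ref) if p == 1 and r == 0)
    fn = sum(1 for p, r in zip(pred, ref) if p == 0 and r == 1)
    dsc = 2 * tp / (2 * tp + fp + fn)
    tpr = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return dsc, tpr, ppv

# Toy flattened masks: tp=2, fp=1, fn=1
pred = [1, 1, 1, 0, 0, 0]
ref  = [1, 1, 0, 1, 0, 0]
dsc, tpr, ppv = overlap_metrics(pred, ref)
```

DSC is the harmonic mean of TPR and PPV, which is why an organ can have a low DSC (0.384) while one of the two component rates (TPR 0.772) remains moderate.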

Liebert A, Schreiter H, Hadler D, Kapsner LA, Ohlmeyer S, Eberle J, Erber R, Emons J, Laun FB, Uder M, Wenkel E, Bickelhaupt S

PubMed paper, Oct 8 2025
Maximum intensity projections (MIPs) facilitate rapid lesion detection for both contrast-enhanced (CE) and diffusion-weighted imaging (DWI) breast magnetic resonance imaging (MRI). We evaluated the feasibility of AI-based virtual CE subtraction MIPs as a reading approach. This Institutional Review Board-approved retrospective study includes 540 multi-parametric breast MRI examinations (performed from 2017 to 2020), including multi-b-value DWI (50, 750, and 1,500 s/mm²). A 2D U-Net was trained using unenhanced (UnE) images as inputs to generate virtual abbreviated CE (VAbCE) subtractions. Two radiologists evaluated lesion suspicion, image quality, and artifacts for UnE, VAbCE, and abbreviated CE (AbCE) images. Lesion conspicuity was compared between VAbCE and AbCE MIPs. Cancer detection rates for UnE, VAbCE, and AbCE MIPs were 90.0%, 91.4%, and 94.3%, respectively. Single-slice reading demonstrated sensitivities of 88.6% (UnE), 91.4% (VAbCE), and 94.3% (AbCE). Inter-rater agreement (Cohen κ) for lesion suspicion scores was higher for VAbCE (0.53) than for UnE alone (0.39) and comparable to AbCE (0.58). No significant difference in mean lesion conspicuity was observed for VAbCE MIPs compared to AbCE MIPs (p ≥ 0.670). No significant difference between methods was observed for image quality (p ≥ 0.108) or reading time (p = 1.000). Fewer visually significant artifacts were observed in VAbCE than in AbCE MIPs (p ≤ 0.001). VAbCE breast MRI improved inter-rater agreement and allowed slightly improved sensitivity compared to UnE images, while AbCE still provided the highest overall sensitivity. Further research is necessary to investigate the diagnostic potential of VAbCE breast MRI. VAbCE breast MRI generated by neural networks allows the derivation of MIPs for rapid visual assessment, pointing toward possible screening applications.
Virtual abbreviated contrast-enhanced (VAbCE) MIPs provided sensitivity comparable to MIPs of unenhanced high b-value DWI and slightly lower than AbCE MIPs. Adding VAbCE to unenhanced high b-value DWI significantly improved inter-rater agreement for lesion suspicion scoring. Single-slice evaluation of VAbCE MIPs provided a sensitivity comparable to unenhanced high b-value DWI MIPs.
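A maximum intensity projection collapses a volume along one axis by keeping the brightest voxel at each in-plane position. A minimal sketch over a nested-list volume (axis choice and values illustrative):

```python
def mip(volume):
    """Maximum intensity projection along the slice (first) axis:
    for each (row, col) position, keep the maximum value across all slices."""
    rows, cols = len(volume[0]), len(volume[0][0])
    return [[max(sl[r][c] for sl in volume) for c in range(cols)]
            for r in range(rows)]

# Two 2x2 slices of a toy volume
volume = [
    [[1, 5], [2, 0]],
    [[3, 4], [9, 1]],
]
proj = mip(volume)
```

With array libraries this is a one-line axis reduction (e.g. a max over the slice axis), which is why MIPs are cheap to derive from any reconstructed series, virtual or acquired.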

Kelly BS, Lee J, Antram E, Arthurs O, Shelmerdine SC

PubMed paper, Oct 8 2025
Prior publications have highlighted inconsistent labelling of intended use cases and target populations for U.S. Food and Drug Administration-approved AI medical devices, especially for children. The extent to which this issue applies to devices within Europe remains unaddressed. A comprehensive review was conducted of all regulatory-approved AI medical devices for use in radiology, drawn from a non-profit, publicly available database. Two independent reviewers assessed information about each device regarding use case, modality, and intended population; a third reviewer resolved any discrepancies. Where the intended population was unclear, a standardised review of the available evidence and marketing materials for the AI device was conducted. Only four (4/213, 2%) AI medical devices were clearly labelled for paediatric use. A further 11% were intended for all ages, including children. The largest group (88/213; 41%) of AI medical devices did not clearly state their intended population in the database. Further examination of the scientific literature and marketing of these "unclear" devices showed that 6 (6/88, 7%) included patients under 18 in their intended target population, but 47% (41/88) remained unclear even after further review. Most regulated radiology AI medical devices have missing or unclear information regarding appropriate use in children. This poses significant potential risk, including inadvertent off-label use, which could compromise patient safety. The EU AI Act emphasises the need for transparency and accountability in AI device deployment, and we therefore advocate for a standardised paediatric safety indicator to clearly communicate suitability.
Question: To what extent do European-approved radiology AI devices provide clear labelling of intended use and suitability for paediatric populations?
Findings: Many radiology AI medical devices in Europe lack explicit paediatric use information, raising concerns about unintended off-label use.
Clinical relevance: Clear labelling of AI device suitability for children is essential to ensure safe use. A standardised safety indicator could aid clinicians in appropriate device selection.

Duyan Yüksel H, Büyük B, Evlice B

PubMed paper, Oct 8 2025
This study aimed to evaluate the diagnostic performance of machine learning (ML) algorithms based on radiomic features extracted from cone-beam computed tomography (CBCT) images in differentiating the nasopalatine duct (NPD) from the nasopalatine duct cyst (NPDC), and to compare their performance with that of a dentomaxillofacial radiologist. CBCT scans from 101 histopathologically confirmed NPDC cases and 101 age- and sex-matched controls with normal NPD were retrospectively analyzed. Manual segmentation was performed to extract 1037 radiomic features (original, Laplacian of Gaussian, and wavelet-transformed). After dimensionality reduction, five ML models (support vector machine (SVM), random forest (RF), decision tree (DT), k-nearest neighbors (KNN), and logistic regression (LR)) were trained using 5-fold cross-validation. Performance was evaluated using the area under the ROC curve (AUC), sensitivity, specificity, precision, recall, and F1-score. Among the 11 optimal features identified through feature selection, large area high gray level emphasis and zone variance from the gray level size zone matrix (GLSZM) class were the most prominent. SVM achieved the highest performance in the test set (AUC and all other metrics = 1.00). The radiologist showed comparable but slightly lower overall performance than SVM (AUC = 0.94, with other metrics between 0.93 and 0.95). Machine learning algorithms based on radiomic features extracted from CBCT images can effectively differentiate NPD from NPDC. Unlike standard visual interpretation, this approach analyzes quantitative image features via mathematical models, yielding objective and reproducible results. It may serve as a non-invasive, complementary decision-support tool, particularly in diagnostically challenging cases.
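The models were trained with 5-fold cross-validation; a minimal index-based fold splitter (contiguous folds, no shuffling or stratification, unlike what the study pipeline may have used) can be sketched as:

```python
def k_fold_indices(n_samples, k):
    """Partition sample indices into k contiguous folds; each fold serves
    once as the test set while the remaining folds form the training set."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    splits = []
    for i, test in enumerate(folds):
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        splits.append((train, test))
    return splits

# 10 samples split into 5 folds of 2
splits = k_fold_indices(n_samples=10, k=5)
```

In practice a radiomics pipeline would shuffle and stratify by class label so each fold preserves the NPDC/control ratio, and fit the feature selection inside each training fold to avoid leakage.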