
Zhuang J, Chen Y, Zheng W, Lian G, Zhu Y, Zhang H, Fan X, Fu F, Ye Q

PubMed paper · Oct 11, 2025
To investigate whether ultrasound-based radiomic features can be used to predict human epidermal growth factor receptor 2 (HER2) expression. This study retrospectively analyzed the pre-operative ultrasound data of 113 patients with urothelial carcinoma of the bladder, who were split into training (n = 67) and test (n = 46) sets. Least absolute shrinkage and selection operator (LASSO) regression was applied to identify the most discriminative radiomic features for evaluating HER2 status, and seven radiomics-based machine learning models were developed. The discriminative performance of the models was evaluated using metrics including the area under the receiver operating characteristic curve (AUROC). A nomogram based on logistic regression was established to visualize the predictive model combining clinical and radiomic signatures. Ultimately, seven radiomic features for HER2 status prediction were identified, six of which were derived from the wavelet images. Shapley Additive exPlanations (SHAP) analysis revealed that wavelet_LHH_glcm_MCC carried the highest weight in predicting HER2 expression. All of the radiomics-based prediction models achieved an AUROC of more than 0.72 in the test set. The combined nomogram achieved AUROCs of 0.827 (95% CI: 0.723-0.931) in the training set and 0.784 (95% CI: 0.616-0.953) in the test set. Ultrasound-based radiomic features, especially wavelet transform-based texture features, show potential for non-invasive HER2 status classification in urothelial carcinoma of the bladder.
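A minimal sketch of the LASSO feature-selection step this abstract describes, using scikit-learn. The feature matrix, the synthetic labels, and the split sizes are illustrative assumptions, not the authors' pipeline; LASSO shrinks uninformative coefficients to exactly zero, leaving a small discriminative subset (seven features in the paper).

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(113, 851))   # hypothetical radiomic features incl. wavelet channels
# Synthetic HER2 labels driven by the first five features, so LASSO has signal to find.
y = (X[:, :5].sum(axis=1) + rng.normal(size=113) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=46, random_state=0, stratify=y)

# Cross-validated LASSO zeroes out uninformative coefficients.
lasso = LassoCV(cv=5, random_state=0).fit(X_tr, y_tr)
selected = np.flatnonzero(lasso.coef_)

# Logistic regression on the surviving features, scored by AUROC as in the study.
clf = LogisticRegression(max_iter=1000).fit(X_tr[:, selected], y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, selected])[:, 1])
print(f"{len(selected)} features selected, test AUROC = {auc:.3f}")
```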

Zhang H, Liu K, Ding Y, Li H, Liang J, Yu H, Yin K

PubMed paper · Oct 11, 2025
Accurate preoperative classification of pulmonary nodules (PNs) is critical for guiding clinical decision-making and preventing overtreatment. This study evaluates the predictive performance of artificial intelligence (AI)-based quantitative computed tomography (CT) feature analysis in differentiating four pathological types of PNs: atypical adenomatous hyperplasia and adenocarcinoma in situ (AAH + AIS), minimally invasive adenocarcinoma (MIA), invasive adenocarcinoma (IAC), and lung inflammatory nodules (IN). A total of 462 pathologically confirmed PNs were analyzed. Radiomic features, including CT attenuation metrics, 3D morphometrics, and texture parameters such as entropy and skewness, were extracted using a deep learning-based AI platform. Logistic regression models were constructed using both single- and multi-variable strategies to evaluate the classification accuracy of these features; the inclusion of IN as a separate category significantly enhanced the clinical utility of AI in differentiating benign mimickers from malignant nodules. A combined model, integrating AI-derived features with traditional CT signs, was used to assess the diagnostic performance of the radiomic features across the four pathological types. This combined model demonstrated superior diagnostic performance, with area under the curve (AUC) values of 0.936 for IAC, 0.884 for AAH + AIS, and 0.865 for IN. Although MIA showed lower classification accuracy (AUC = 0.707), key features such as entropy, solid component ratio, and total volume effectively distinguished invasive from non-invasive lesions. These findings highlight the potential of AI-enhanced radiomics for supporting non-invasive, objective, and individualized diagnosis of PNs.

Question: Can artificial intelligence (AI)-based quantitative CT analysis reliably differentiate benign inflammatory nodules from the spectrum of lung adenocarcinoma subtypes, a common diagnostic challenge?

Findings: An integrated model combining AI-driven radiomic features and traditional CT signs demonstrated high accuracy in differentiating invasive adenocarcinoma (AUC = 0.936), pre-invasive lesions (AUC = 0.884), and inflammatory nodules (AUC = 0.865).

Clinical relevance: AI-enhanced radiomics provides a non-invasive, objective tool to improve preoperative risk stratification of pulmonary nodules, potentially guiding personalized management and reducing unnecessary surgeries for benign inflammatory lesions that mimic malignancy.
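The per-class AUCs reported above come from treating a four-class problem one-vs-rest. A minimal sketch of that evaluation with scikit-learn follows; the feature matrix and labels are illustrative assumptions standing in for the AI-derived features (entropy, skewness, solid component ratio, total volume, and so on).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(1)
classes = ["AAH+AIS", "MIA", "IAC", "IN"]
X = rng.normal(size=(462, 8))            # hypothetical radiomic features per nodule
y = rng.integers(0, 4, size=462)         # pathological type labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1, stratify=y)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # multinomial by default
proba = model.predict_proba(X_te)

# One-vs-rest AUC per class mirrors the per-type AUCs quoted in the abstract.
y_bin = label_binarize(y_te, classes=[0, 1, 2, 3])
for i, name in enumerate(classes):
    print(name, round(roc_auc_score(y_bin[:, i], proba[:, i]), 3))
```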

Chaudhary R, Chaudhary P, Singh C, Kumar K, Singh S, Arora R, Kaur S, Vaarshney D, Acharya P, Mishra U

PubMed paper · Oct 11, 2025
This study investigates the efficacy of advanced deep learning techniques, specifically a convolutional neural network (CNN) for segmentation (U-Net) and the single-shot multibox detector (SSD), in enhancing the early detection of brain tumors, thereby facilitating timely medical intervention. Accurate brain tumor detection is paramount in medical image analysis, as it involves the precise identification and localization of abnormal growths within the brain. Conventional diagnostic approaches often rely on manual analysis by radiologists, which is susceptible to human error and influenced by variability in tumor size, shape, and location. In our research, we leverage U-Net, a CNN widely recognized for its effectiveness in medical image segmentation, alongside SSD, an established object detection algorithm. The results indicate that the U-Net model achieved an accuracy of 97.73%, demonstrating a high level of effectiveness in segmenting brain tumors with exceptional precision. The SSD model, by contrast, achieved an accuracy of 58%, which, while considerably lower, suggests it may still serve as a supplementary tool in specific scenarios and for broader applications in identifying tumor regions within medical scans. Our findings illuminate the potential of U-Net for high-precision brain tumor detection, reinforcing its position as a leading method in medical imaging. Overall, the study underscores the role of deep learning methods in improving early detection outcomes in neuro-oncology and highlights avenues for further work on diagnostic accuracy.
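A compact U-Net-style encoder-decoder in PyTorch, illustrating the skip-connection design that makes U-Net effective for tumor segmentation. The channel sizes and depth here are illustrative; the abstract does not specify the exact architecture used.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)                 # 32 = 16 (skip) + 16 (upsampled)
        self.head = nn.Conv2d(16, 1, 1)          # per-pixel tumor logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.up(e2)
        d = self.dec(torch.cat([e1, d], dim=1))  # skip connection preserves spatial detail
        return self.head(d)

mask_logits = TinyUNet()(torch.randn(1, 1, 128, 128))  # -> (1, 1, 128, 128)
```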

Peters B, Symons R, Oulkadi S, Van Breda A, Bataille Y, Kayaert P, Dewilde W, De Wilder K, Franssen WMA, Nchimi A, Ghekiere O

PubMed paper · Oct 11, 2025
Fractional flow reserve (FFR) and instantaneous wave-free ratio (iFR) pressure measurements during invasive coronary angiography (ICA) are the gold standard for assessing vessel-specific ischemia. Artificial intelligence methods have emerged to compute FFR from coronary computed tomography angiography (CCTA) images (CT-FFRAI). We assessed a CT-FFRAI deep learning model for the prediction of vessel-specific ischemia against invasive FFR/iFR measurements. We retrospectively selected 322 vessels from 275 patients at two centers who underwent CCTA and invasive FFR and/or iFR measurements during ICA within three months. A junior and a senior radiologist at each center supervised vessel centerline-building to generate curvilinear reformats, which were processed to yield binary CT-FFRAI outcomes (≤ 0.80 or > 0.80). Reliability of CT-FFRAI outcomes across radiologists' supervision was assessed with Cohen's κ. Diagnostic values of CT-FFRAI were calculated using invasive FFR ≤ 0.80 (n = 224) and invasive iFR ≤ 0.89 (n = 238) as the gold standard. A multinomial logistic regression model, including all false-positive and false-negative cases, assessed the impact of patient- and CCTA-related factors on the diagnostic values of CT-FFRAI. Concordance of the binary CT-FFRAI outcomes was substantial (κ = 0.725, p < 0.001). On a per-vessel analysis based on the senior radiologists' evaluations, the sensitivity, specificity, positive predictive value, negative predictive value, and diagnostic accuracy of CT-FFRAI against the FFR and iFR standards were 85% (58/68) and 91% (78/86), 82% (128/156) and 78% (119/152), 67% (58/86) and 70% (78/111), 93% (128/138) and 94% (119/127), and 83% (186/224) and 83% (197/238), respectively. Coronary calcifications significantly reduced the diagnostic accuracy of CT-FFRAI (p < 0.001; OR, 1.002; 95% CI 1.001-1.003). CT-FFRAI demonstrates high diagnostic performance in predicting vessel-specific coronary ischemia compared to invasive FFR and iFR. Coronary calcifications negatively affect specificity, suggesting that further improvements in spatial resolution could enhance accuracy.

Question: How accurately can a new deep learning model (CT-FFRAI) assess vessel-specific ischemia from CCTA non-invasively compared to two validated pressure measurements during invasive coronary angiography?

Findings: CT-FFRAI achieved high diagnostic accuracy in predicting vessel-specific ischemia, with high sensitivity and negative predictive value, independent of scanner type and radiologists' experience.

Clinical relevance: CT-FFRAI provides a non-invasive alternative to fractional flow reserve and instantaneous wave-free ratio measurements during invasive coronary angiography for detecting vessel-specific ischemia, potentially reducing the need for invasive procedures, lowering healthcare costs, and improving patient safety.
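The per-vessel metrics quoted against the FFR reference can be re-derived from the four counts they imply (TP = 58, FN = 10, FP = 28, TN = 128, i.e., 58/68, 128/156, 58/86, 128/138). A short sketch for checking such figures, not the study's own code:

```python
# 2x2 counts implied by the FFR-referenced fractions in the abstract.
tp, fn, fp, tn = 58, 10, 28, 128
n = tp + fn + fp + tn                         # 224 vessels with invasive FFR

sens = tp / (tp + fn)                         # sensitivity: 85%
spec = tn / (tn + fp)                         # specificity: 82%
ppv  = tp / (tp + fp)                         # positive predictive value: 67%
npv  = tn / (tn + fn)                         # negative predictive value: 93%
acc  = (tp + tn) / n                          # diagnostic accuracy: 83%
print(f"sens {sens:.0%}, spec {spec:.0%}, PPV {ppv:.0%}, NPV {npv:.0%}, acc {acc:.0%}")
```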

Mohamed Hamad, Muhammad Khan, Tamer Khattab, Mohamed Mabrok

arXiv preprint · Oct 11, 2025
A key challenge in ischemic stroke diagnosis using medical imaging is the accurate localization of the occluded vessel. Current machine learning methods focus primarily on lesion segmentation, with limited work on vessel localization. In this study, we introduce Stroke Locus Net, an end-to-end deep learning pipeline for detection, segmentation, and occluded-vessel localization using only MRI scans. The proposed system combines a segmentation branch using nnUNet for lesion detection with an arterial atlas for vessel mapping and identification, and a generation branch using pGAN to synthesize MRA images from MRI. Our implementation demonstrates promising results in localizing occluded vessels on stroke-affected T1 MRI scans, with potential for faster and more informed stroke diagnosis.
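A minimal sketch of the atlas-based vessel identification step: given a binary lesion mask (e.g., from the segmentation branch) and an arterial territory atlas in the same space, assign the occluded vessel by maximal overlap. The label values, territory names, and array shapes are illustrative assumptions.

```python
import numpy as np

VESSELS = {1: "left MCA", 2: "right MCA", 3: "left ACA", 4: "right ACA"}

def occluded_vessel(lesion_mask: np.ndarray, atlas: np.ndarray) -> str:
    """Return the arterial territory containing most of the lesion."""
    labels, counts = np.unique(atlas[lesion_mask > 0], return_counts=True)
    overlap = {int(l): int(c) for l, c in zip(labels, counts) if l != 0}
    if not overlap:
        return "no territory overlap"
    return VESSELS[max(overlap, key=overlap.get)]

atlas = np.zeros((64, 64, 64), dtype=np.int8)
atlas[:32], atlas[32:] = 1, 2                 # toy two-territory atlas
lesion = np.zeros_like(atlas)
lesion[10:20, 10:20, 10:20] = 1               # toy lesion inside territory 1
print(occluded_vessel(lesion, atlas))         # -> "left MCA"
```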

Jack Krolik, Jake Lynn, John Henry Rudden, Dmytro Vremenko

arXiv preprint · Oct 11, 2025
This study explores the application of deep learning techniques in the automated detection and segmentation of brain tumors from MRI scans. We employ several machine learning models, including basic logistic regression, Convolutional Neural Networks (CNNs), and Residual Networks (ResNet) to classify brain tumors effectively. Additionally, we investigate the use of U-Net for semantic segmentation and EfficientDet for anchor-based object detection to enhance the localization and identification of tumors. Our results demonstrate promising improvements in the accuracy and efficiency of brain tumor diagnostics, underscoring the potential of deep learning in medical imaging and its significance in improving clinical outcomes.
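A minimal sketch of the ResNet classification branch mentioned above: fine-tuning a torchvision ResNet for binary tumor/no-tumor prediction on MRI slices. The class count and input handling are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None)                 # or load ImageNet weights for transfer learning
model.fc = nn.Linear(model.fc.in_features, 2)  # replace head: tumor vs. no tumor

logits = model(torch.randn(4, 3, 224, 224))    # batch of MRI slices replicated to 3 channels
pred = logits.argmax(dim=1)                    # per-slice class prediction
```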

Yuxiang Lai, Jike Zhong, Ming Li, Yuheng Li, Xiaofeng Yang

arXiv preprint · Oct 11, 2025
Recent advances in large generative models have shown that simple autoregressive formulations, when scaled appropriately, can exhibit strong zero-shot generalization across domains. Motivated by this trend, we investigate whether autoregressive video modeling principles can be directly applied to medical imaging tasks, despite the model never being trained on medical data. Specifically, we evaluate a large vision model (LVM) in a zero-shot setting across four representative tasks: organ segmentation, denoising, super-resolution, and motion prediction. Remarkably, even without domain-specific fine-tuning, the LVM can delineate anatomical structures in CT scans and achieve competitive performance on segmentation, denoising, and super-resolution. Most notably, in radiotherapy motion prediction, the model forecasts future 3D CT phases directly from prior phases of a 4D CT scan, producing anatomically consistent predictions that capture patient-specific respiratory dynamics with realistic temporal coherence. We evaluate the LVM on 4D CT data from 122 patients, totaling over 1,820 3D CT volumes. Despite no prior exposure to medical data, the model achieves strong performance across all tasks and surpasses specialized DVF-based and generative baselines in motion prediction, achieving state-of-the-art spatial accuracy. These findings reveal the emergence of zero-shot capabilities in medical video modeling and highlight the potential of general-purpose video models to serve as unified learners and reasoners, laying the groundwork for future medical foundation models built on video models.
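A schematic of the zero-shot autoregressive protocol described above: condition on prior respiratory phases of a 4D CT and roll the model forward one volume at a time. Here `model` stands in for the LVM; its interface as a next-volume predictor is a hypothetical assumption for illustration.

```python
import torch

def forecast_phases(model, prior_phases: torch.Tensor, n_future: int) -> torch.Tensor:
    """prior_phases: (T, D, H, W) earlier 3D CT phases; returns n_future predicted phases."""
    context = list(prior_phases)
    preds = []
    for _ in range(n_future):
        nxt = model(torch.stack(context))     # predict the next 3D volume from the context
        preds.append(nxt)
        context = context[1:] + [nxt]         # slide the temporal window autoregressively
    return torch.stack(preds)
```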

Salma J. Ahmed, Emad A. Mohammed, Azam Asilian Bidgoli

arXiv preprint · Oct 11, 2025
Image segmentation, the process of dividing images into meaningful regions, is critical in medical applications for accurate diagnosis, treatment planning, and disease monitoring. Although manual segmentation by healthcare professionals produces precise outcomes, it is time-consuming, costly, and prone to variability due to differences in human expertise. Artificial intelligence (AI)-based methods have been developed to address these limitations by automating segmentation tasks; however, they often require large, annotated datasets that are rarely available in practice and frequently struggle to generalize across diverse imaging conditions due to inter-patient variability and rare pathological cases. In this paper, we propose Joint Retrieval Augmented Segmentation (J-RAS), a joint training method for guided image segmentation that integrates a segmentation model with a retrieval model. Both models are jointly optimized, enabling the segmentation model to leverage retrieved image-mask pairs to enrich its anatomical understanding, while the retrieval model learns segmentation-relevant features beyond simple visual similarity. This joint optimization ensures that retrieval actively contributes meaningful contextual cues to guide boundary delineation, thereby enhancing the overall segmentation performance. We validate J-RAS across multiple segmentation backbones, including U-Net, TransUNet, SAM, and SegFormer, on two benchmark datasets: ACDC and M&Ms, demonstrating consistent improvements. For example, on the ACDC dataset, SegFormer without J-RAS achieves a mean Dice score of 0.8708 ± 0.042 and a mean Hausdorff Distance (HD) of 1.8130 ± 2.49, whereas with J-RAS, the performance improves substantially to a mean Dice score of 0.9115 ± 0.031 and a mean HD of 1.1489 ± 0.30. These results highlight the method's effectiveness and its generalizability across architectures and datasets.
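The two metrics quoted above, Dice overlap and Hausdorff distance, can be computed for binary masks as sketched below. This is standard metric code using NumPy and SciPy, not the authors' evaluation harness.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the masks' foreground point sets."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True
gt   = np.zeros((64, 64), bool); gt[22:42, 22:42] = True
print(f"Dice = {dice(pred, gt):.4f}, HD = {hausdorff(pred, gt):.2f}")
```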

Cristiano Patrício, Luís F. Teixeira, João C. Neves

arXiv preprint · Oct 11, 2025
Concept-based models aim to explain model decisions with human-understandable concepts. However, most existing approaches treat concepts as numerical attributes, without providing complementary visual explanations that could localize the predicted concepts. This limits their utility in real-world applications, particularly in high-stakes scenarios such as medical use cases. This paper proposes ViConEx-Med, a novel transformer-based framework for visual concept explainability, which introduces multi-concept learnable tokens to jointly predict and localize visual concepts. By leveraging specialized attention layers for processing visual and text-based concept tokens, our method produces concept-level localization maps while maintaining high predictive accuracy. Experiments on both synthetic and real-world medical datasets demonstrate that ViConEx-Med outperforms prior concept-based models and achieves competitive performance with black-box models in terms of both concept detection and localization precision. Our results suggest a promising direction for building inherently interpretable models grounded in visual concepts. Code is publicly available at https://github.com/CristianoPatricio/viconex-med.
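A minimal sketch of the multi-concept token idea in PyTorch: learnable per-concept query tokens cross-attend to image patch tokens, yielding one logit and one attention (localization) map per concept. Dimensions and the head design are illustrative assumptions, not the ViConEx-Med architecture itself.

```python
import torch
import torch.nn as nn

class ConceptHead(nn.Module):
    def __init__(self, n_concepts=8, dim=256, n_heads=4):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(n_concepts, dim))  # learnable concept tokens
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.cls = nn.Linear(dim, 1)                               # one logit per concept

    def forward(self, patches):                        # patches: (B, N, dim) from a ViT backbone
        q = self.tokens.expand(patches.size(0), -1, -1)
        out, attn_map = self.attn(q, patches, patches)  # cross-attention over patch tokens
        return self.cls(out).squeeze(-1), attn_map      # (B, C) logits, (B, C, N) localization maps

logits, maps = ConceptHead()(torch.randn(2, 196, 256))
```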

Agampreet Aulakh, Nils D. Forkert, Matthias Wilms

arXiv preprint · Oct 11, 2025
The human brain undergoes dynamic, potentially pathology-driven, structural changes throughout a lifespan. Longitudinal Magnetic Resonance Imaging (MRI) and other neuroimaging data are valuable for characterizing trajectories of change associated with typical and atypical aging. However, the analysis of such data is highly challenging given its discrete nature, with different spatial and temporal image sampling patterns within individuals and across populations. This leads to computational problems for most traditional deep learning methods, which cannot represent the underlying continuous biological process. To address these limitations, we present a new, fully data-driven method for representing aging trajectories across the entire brain by modelling subject-specific longitudinal T1-weighted MRI data as continuous functions using Implicit Neural Representations (INRs). To this end, we introduce a novel INR architecture capable of partially disentangling spatial and temporal trajectory parameters, and we design an efficient framework that operates directly on the INRs' parameter space to classify brain aging trajectories. To evaluate our method in a controlled data environment, we develop a biologically grounded trajectory simulation and generate T1-weighted 3D MRI data for 450 healthy and dementia-like subjects at regularly and irregularly sampled timepoints. In the more realistic irregular sampling experiment, our INR-based method achieves 81.3% accuracy on the brain aging trajectory classification task, outperforming a standard deep learning baseline model (73.7%).
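A minimal implicit neural representation sketch in the spirit of this approach: an MLP mapping continuous coordinates (x, y, z, t) to T1-weighted intensity, with separate early layers for space and time to loosely echo the spatial/temporal disentanglement. The layer sizes and coordinate normalization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LongitudinalINR(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.spatial = nn.Sequential(nn.Linear(3, hidden), nn.ReLU())   # encodes (x, y, z)
        self.temporal = nn.Sequential(nn.Linear(1, hidden), nn.ReLU())  # encodes scan time t
        self.trunk = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, xyz, t):                  # xyz: (N, 3), t: (N, 1)
        h = torch.cat([self.spatial(xyz), self.temporal(t)], dim=-1)
        return self.trunk(h)                    # predicted intensity at (xyz, t)

coords = torch.rand(1024, 3) * 2 - 1            # normalized voxel coordinates in [-1, 1]
times = torch.rand(1024, 1)                     # normalized scan age
intensity = LongitudinalINR()(coords, times)    # continuous function of space and time
```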