
Jiang Z, Spielvogel C, Haberl D, Yu J, Krisch M, Szakall S, Molnar P, Fillinger J, Horvath L, Renyi-Vamos F, Aigner C, Dome B, Lang C, Megyesfalvi Z, Kenner L, Hacker M

PubMed · Aug 21, 2025
Accurate non-invasive prediction of histopathologic invasiveness and recurrence risk remains a clinical challenge in resectable non-small cell lung cancer (NSCLC). We developed and validated the Edge Proximity Score (EPS), a novel [¹⁸F]FDG PET/CT-based spatial imaging feature that quantifies the displacement of SUVmax relative to the tumor centroid and perimeter, to assess tumor aggressiveness and predict progression-free survival (PFS). This retrospective study included 244 NSCLC patients with preoperative [¹⁸F]FDG PET/CT. EPS was computed from normalized SUVmax-to-centroid and SUVmax-to-perimeter distances. A total of 115 PET radiomics features were extracted and standardized. Eight machine learning models (80:20 split) were trained to predict lymphovascular invasion (LVI), visceral pleural invasion (VPI), and spread through air spaces (STAS), with feature importance assessed using SHAP. Prognostic analysis was conducted using multivariable Cox regression. A survival prediction model incorporating EPS was externally validated in the TCIA cohort. RNA sequencing data from 76 TCIA patients were used for transcriptomic and immune profiling. EPS was significantly elevated in tumors with LVI, VPI, and STAS (P < 0.001), consistently ranked among the top SHAP features, and was an independent predictor of PFS (HR = 2.667, P = 0.015). The EPS-based nomogram achieved AUCs of 0.67, 0.70, and 0.68 for predicting 1-, 3-, and 5-year PFS in the TCIA validation cohort. High EPS was associated with proliferative and metabolic gene signatures, whereas low EPS was linked to immune activation and neutrophil infiltration. EPS is a biologically relevant, non-invasive imaging biomarker that may improve risk stratification in NSCLC.
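As a rough illustration of the idea, the sketch below computes an EPS-style feature from a SUV map and a binary tumor mask. The function name, the normalization, and the use of a distance transform for the perimeter distance are assumptions; the abstract does not give the exact formula.

```python
# Hypothetical sketch of an Edge Proximity Score (EPS)-style feature.
# The paper's exact formula is not stated in the abstract; here EPS is
# assumed to be a normalized centroid-vs-perimeter distance of SUVmax.
import numpy as np
from scipy import ndimage

def edge_proximity_score(suv: np.ndarray, mask: np.ndarray) -> float:
    """suv: 3D SUV map; mask: 3D binary tumor segmentation."""
    tumor_suv = np.where(mask > 0, suv, -np.inf)
    peak = np.unravel_index(np.argmax(tumor_suv), suv.shape)   # SUVmax voxel
    centroid = ndimage.center_of_mass(mask)                    # tumor centroid
    d_centroid = np.linalg.norm(np.array(peak) - np.array(centroid))
    # Distance from the SUVmax voxel to the tumor perimeter (mask boundary).
    d_perimeter = ndimage.distance_transform_edt(mask)[peak]
    # Normalize: ~0 when SUVmax sits at the centroid, ~1 when it sits at the edge.
    return d_centroid / (d_centroid + d_perimeter + 1e-8)
```

Under these assumptions, a higher score means the metabolic peak is displaced toward the tumor edge, which is the spatial pattern the study links to invasiveness.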

Kim HJ, Kim HH, Eom HJ, Choi WJ, Chae EY, Shin HJ, Cha JH

PubMed · Aug 21, 2025
To evaluate triaging scenarios involving two commercial AI algorithms to enhance mammography interpretation and reduce workload. A total of 3012 screening or diagnostic mammograms, including 213 cancer cases, were analyzed using two AI algorithms (AI-1, AI-2) and categorized as "high-risk" (top 10%), "minimal-risk" (bottom 20%), or "indeterminate" based on malignancy likelihood. Five triaging scenarios of combined AI use (Sensitive, Specific, Conservative, and Sequential Modes A and B) determined whether cases would be autonomously recalled, classified as negative, or referred for radiologist interpretation. Sensitivity, specificity, the number of mammograms requiring review, and the abnormal interpretation rate (AIR) were compared against single AIs and manual reading using McNemar's test. Sensitive Mode achieved 84% sensitivity, outperforming single AI (p = 0.03 [AI-1], 0.01 [AI-2]) and manual reading (p = 0.03), with an 18.3% reduction in mammograms requiring review (AIR, 23.3%). Specific Mode achieved 87.7% specificity, exceeding single AI (p < 0.001 [AI-1, AI-2]) and comparable to manual reading (p = 0.37), with a 41.7% reduction in mammograms requiring review (AIR, 17%). Conservative and Sequential Modes A and B achieved sensitivities of 82.2%, 80.8%, and 80.3%, respectively, comparable to single AI or manual reading (p > 0.05 for all), with reductions of 9.8%, 49.8%, and 49.8% in mammograms requiring review (AIRs, 18.6%, 21.6%, 21.7%). Combining two AI algorithms improved sensitivity or specificity in mammography interpretation while reducing the number of mammograms requiring radiologist review in this cancer-enriched dataset from a tertiary center. Scenario selection should consider clinical needs and requires validation in a screening population. Question: AI algorithms have the potential to improve workflow efficiency by triaging mammograms; combining algorithms trained under different conditions may offer synergistic benefits. Findings: The combined use of two commercial AI algorithms for triaging mammograms improved sensitivity or specificity, depending on the scenario, while also reducing the number of mammograms requiring radiologist review. Clinical relevance: Integrating two commercial AI algorithms could enhance mammography interpretation over using a single AI for triaging or manual reading.
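The abstract does not spell out the per-mode combination rules. The sketch below shows one plausible reading of a sensitivity-oriented mode, where either AI can trigger an autonomous recall and a case is signed out as negative only when both AIs agree it is minimal-risk; the function and rule are illustrative, not the study's protocol.

```python
# Hypothetical triage rule in the spirit of the paper's "Sensitive Mode";
# the exact per-mode combination logic is not specified in the abstract.
def triage_sensitive(ai1: str, ai2: str) -> str:
    """ai1/ai2 in {'high-risk', 'indeterminate', 'minimal-risk'}."""
    if ai1 == "high-risk" or ai2 == "high-risk":
        return "autonomous recall"            # favor sensitivity: either AI can recall
    if ai1 == "minimal-risk" and ai2 == "minimal-risk":
        return "negative without review"      # both AIs agree the case is low risk
    return "radiologist review"               # disagreement or indeterminate

print(triage_sensitive("indeterminate", "high-risk"))  # -> autonomous recall
```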

Li X, Wang S, Kahlert UD, Zhou T, Xu K, Shi W, Yan X

PubMed · Aug 21, 2025
Background: Colon cancer is a heterogeneous disease, and rare subtypes like triphasic colon cancer are difficult to detect with standard methods. Artificial intelligence (AI)-driven ultrasound combined with genomic analysis offers a promising approach to improve subtype identification and uncover molecular mechanisms. Methods: The authors used an AI-driven ultrasound model to identify rare triphasic colon cancer, characterized by a mix of epithelial, mesenchymal, and proliferative components. The molecular features were validated using immunohistochemistry, targeting classical epithelial markers, mesenchymal markers, and proliferation indices. Subsequently, ultrasound genomic techniques were applied to map transcriptomic alterations in conventional colon cancer onto ultrasound images. Differentially expressed genes were identified using the edgeR package. Pearson correlation analysis was performed to assess the relationship between imaging features and molecular markers. Results: The AI-driven ultrasound model successfully identified rare triphasic features in colon cancer. These imaging features showed significant correlations with immunohistochemical expression of epithelial markers, mesenchymal markers, and the proliferation index. Moreover, ultrasound genomic techniques revealed that multiple oncogenic transcripts could be spatially mapped to distinct patterns within the ultrasound images of conventional colon cancer and were involved in classical cancer-related pathways. Conclusions: AI-enhanced ultrasound imaging enables noninvasive identification of rare triphasic colon cancer and reveals functional molecular signatures in general colon cancer. This integrative approach may support future precision diagnostics and image-guided therapies.
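As a minimal illustration of the correlation step reported above, the snippet below computes a Pearson correlation between a hypothetical AI-derived ultrasound feature and a proliferation index across patients; all array names and values are made up for the example.

```python
# Minimal sketch: correlating an ultrasound imaging feature with an IHC
# marker across patients. Values are illustrative, not from the study.
import numpy as np
from scipy.stats import pearsonr

imaging_feature = np.array([0.42, 0.55, 0.31, 0.78, 0.66])  # hypothetical AI-derived score
ki67_index = np.array([12.0, 25.0, 8.0, 40.0, 30.0])        # proliferation index (%)

r, p_value = pearsonr(imaging_feature, ki67_index)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```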

Alexandra Bernadotte, Elfimov Nikita, Mikhail Shutov, Ivan Menshikov

arXiv preprint · Aug 21, 2025
Accurate segmentation of blood vessels in brain magnetic resonance angiography (MRA) is essential for successful surgical procedures such as aneurysm repair or bypass surgery. Currently, annotation is primarily performed through manual segmentation or classical methods such as the Frangi filter, which often lack sufficient accuracy. Neural networks have emerged as powerful tools for medical image segmentation, but their development depends on well-annotated training datasets. However, there is a notable lack of publicly available MRA datasets with detailed brain vessel annotations. To address this gap, we propose HessNet, a lightweight semi-supervised neural network that incorporates Hessian matrices for 3D segmentation of complex tubular structures. With only 6,000 parameters, HessNet can run on a CPU, significantly reducing the resources required to train neural networks, and its vessel segmentation accuracy on a minimal training dataset reaches state-of-the-art results. Using HessNet, we created a large, semi-manually annotated brain vessel dataset of MRA images based on the IXI dataset (200 annotated images). Annotation was performed by three experts under the supervision of three neurovascular surgeons after applying HessNet; its high segmentation accuracy allowed the experts to focus only on the most complex and important cases. The dataset is available at https://git.scinalytics.com/terilat/VesselDatasetPartly.
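The abstract does not describe exactly how the Hessian matrices enter the network; one plausible reading is that multi-scale second-derivative responses are supplied as auxiliary input features, as in the Frangi filter. The sketch below computes such features for a 3D volume and is illustrative only.

```python
# Sketch: computing 3D Hessian components as auxiliary input features,
# in the spirit of HessNet's Hessian-based design (the exact architecture
# is not given in the abstract; this is only an illustration).
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_features(volume: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Return the six unique second-derivative responses of a 3D volume."""
    axes = [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
    feats = []
    for i, j in axes:
        order = [0, 0, 0]
        order[i] += 1
        order[j] += 1
        # Gaussian second derivative along axes i and j at scale sigma.
        feats.append(gaussian_filter(volume, sigma=sigma, order=order))
    return np.stack(feats, axis=0)   # shape: (6, D, H, W)

mra = np.random.rand(64, 64, 64).astype(np.float32)   # placeholder MRA patch
print(hessian_features(mra, sigma=1.5).shape)          # (6, 64, 64, 64)
```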

Chengqi Dong, Fenghe Tang, Rongge Mao, Xinpei Gao, S. Kevin Zhou

arXiv preprint · Aug 21, 2025
Medical image segmentation plays a pivotal role in disease diagnosis and treatment planning, particularly in resource-constrained clinical settings where lightweight and generalizable models are urgently needed. However, existing lightweight models often compromise performance for efficiency and rarely adopt computationally expensive attention mechanisms, severely restricting their global contextual perception capabilities. Additionally, current architectures neglect the channel redundancy issue under the same convolutional kernels in medical imaging, which hinders effective feature extraction. To address these challenges, we propose LGMSNet, a novel lightweight framework based on local and global dual multiscale representations that achieves state-of-the-art performance with minimal computational overhead. LGMSNet employs heterogeneous intra-layer kernels to extract local high-frequency information while mitigating channel redundancy. In addition, the model integrates sparse transformer-convolutional hybrid branches to capture low-frequency global information. Extensive experiments across six public datasets demonstrate LGMSNet's superiority over existing state-of-the-art methods. In particular, LGMSNet maintains exceptional performance in zero-shot generalization tests on four unseen datasets, underscoring its potential for real-world deployment in resource-limited medical scenarios. The full project code is available at https://github.com/cq-dong/LGMSNet.
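The sketch below gives one possible interpretation of a heterogeneous intra-layer kernel block: channel groups are convolved with different kernel sizes so that parallel filters capture distinct local patterns rather than redundant ones. It is illustrative only and not the authors' LGMSNet code.

```python
# Illustrative "heterogeneous intra-layer kernel" block: each channel
# group gets a different depthwise kernel size, followed by pointwise mixing.
import torch
import torch.nn as nn

class HeteroKernelBlock(nn.Module):
    def __init__(self, channels: int, kernel_sizes=(1, 3, 5, 7)):
        super().__init__()
        split = channels // len(kernel_sizes)
        self.splits = [split] * (len(kernel_sizes) - 1) + [channels - split * (len(kernel_sizes) - 1)]
        self.branches = nn.ModuleList(
            nn.Conv2d(c, c, k, padding=k // 2, groups=c)  # depthwise conv per group
            for c, k in zip(self.splits, kernel_sizes)
        )
        self.mix = nn.Conv2d(channels, channels, 1)       # pointwise channel mixing

    def forward(self, x):
        parts = torch.split(x, self.splits, dim=1)
        out = torch.cat([b(p) for b, p in zip(self.branches, parts)], dim=1)
        return self.mix(out)

x = torch.randn(1, 64, 128, 128)
print(HeteroKernelBlock(64)(x).shape)  # torch.Size([1, 64, 128, 128])
```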

Malini Shivaram, Gautam Rajendrakumar Gare, Laura Hutchins, Jacob Duplantis, Thomas Deiss, Thales Nogueira Gomes, Thong Tran, Keyur H. Patel, Thomas H Fox, Amita Krishnan, Deva Ramanan, Bennett DeBoisblanc, Ricardo Rodriguez, John Galeotti

arXiv preprint · Aug 21, 2025
In medical imaging, inter-observer variability among radiologists often introduces label uncertainty, particularly in modalities where visual interpretation is subjective. Lung ultrasound (LUS) is a prime example: it frequently presents a mixture of highly ambiguous regions and clearly discernible structures, making consistent annotation challenging even for experienced clinicians. In this work, we introduce a novel approach to both labeling and training AI models using expert-supplied, per-pixel confidence values. Rather than treating annotations as absolute ground truth, we design a data annotation protocol that captures the confidence that radiologists have in each labeled region, modeling the inherent aleatoric uncertainty present in real-world clinical data. We demonstrate that incorporating these confidence values during training leads to improved segmentation performance. More importantly, we show that this enhanced segmentation quality translates into better performance on downstream, clinically critical tasks: estimating S/F oxygenation ratio values, classifying S/F ratio change, and predicting 30-day patient readmission. While we empirically evaluate many methods for exposing the uncertainty to the learning model, we find that a simple approach that trains the model on labels binarized at a 60% confidence threshold works well. Importantly, high thresholds work far better than a naive 50% threshold, indicating that training on very confident pixels is far more effective. Our study systematically investigates the impact of training with varying confidence thresholds, comparing not only segmentation metrics but also downstream clinical outcomes. These results suggest that label confidence is a valuable signal that, when properly leveraged, can significantly enhance the reliability and clinical utility of AI in medical imaging.
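A minimal sketch of the thresholding strategy, assuming per-pixel confidence maps in [0, 1]: labels are binarized at 60% and used as hard targets for a standard segmentation loss. Tensor names and the loss choice are placeholders, not the paper's exact setup.

```python
# Sketch: binarize expert per-pixel confidence maps at a 60% threshold
# and train the segmentation model on the resulting hard labels.
import torch
import torch.nn.functional as F

def confidence_threshold_loss(logits, confidence_map, threshold=0.6):
    """logits: (B, 1, H, W) model outputs; confidence_map: (B, 1, H, W) in [0, 1]."""
    hard_labels = (confidence_map >= threshold).float()   # keep only confident pixels as positives
    return F.binary_cross_entropy_with_logits(logits, hard_labels)

logits = torch.randn(2, 1, 64, 64)
conf = torch.rand(2, 1, 64, 64)
print(confidence_threshold_loss(logits, conf).item())
```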

Mengyu Wang, Zhenyu Liu, Kun Li, Yu Wang, Yuwei Wang, Yanyan Wei, Fei Wang

arXiv preprint · Aug 21, 2025
Multimodal Image Fusion (MMIF) aims to integrate complementary information from different imaging modalities to overcome the limitations of individual sensors. It enhances image quality and facilitates downstream applications such as remote sensing, medical diagnostics, and robotics. Despite significant advancements, current MMIF methods still face challenges such as modality misalignment, high-frequency detail destruction, and task-specific limitations. To address these challenges, we propose AdaSFFuse, a novel framework for task-generalized MMIF through adaptive cross-domain co-fusion learning. AdaSFFuse introduces two key innovations: the Adaptive Approximate Wavelet Transform (AdaWAT) for frequency decoupling, and the Spatial-Frequency Mamba Blocks for efficient multimodal fusion. AdaWAT adaptively separates the high- and low-frequency components of multimodal images from different scenes, enabling fine-grained extraction and alignment of distinct frequency characteristics for each modality. The Spatial-Frequency Mamba Blocks facilitate cross-domain fusion in both spatial and frequency domains, enhancing this process. These blocks dynamically adjust through learnable mappings to ensure robust fusion across diverse modalities. By combining these components, AdaSFFuse improves the alignment and integration of multimodal features, reduces frequency loss, and preserves critical details. Extensive experiments on four MMIF tasks -- Infrared-Visible Image Fusion (IVF), Multi-Focus Image Fusion (MFF), Multi-Exposure Image Fusion (MEF), and Medical Image Fusion (MIF) -- demonstrate AdaSFFuse's superior fusion performance, ensuring both low computational cost and a compact network, offering a strong balance between performance and efficiency. The code will be publicly available at https://github.com/Zhen-yu-Liu/AdaSFFuse.
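For intuition, the snippet below performs the high/low-frequency split with a fixed discrete wavelet transform (pywt) and a naive fusion rule; AdaWAT itself is adaptive and learnable, so this is only a stand-in for the decomposition it builds on.

```python
# Sketch of frequency decoupling with a fixed wavelet transform plus a
# naive fusion baseline; not the AdaSFFuse implementation.
import numpy as np
import pywt

def split_frequencies(image: np.ndarray, wavelet: str = "haar"):
    low, (lh, hl, hh) = pywt.dwt2(image, wavelet)
    high = np.stack([lh, hl, hh], axis=0)   # horizontal/vertical/diagonal detail bands
    return low, high

ir = np.random.rand(256, 256)    # e.g. infrared image
vis = np.random.rand(256, 256)   # e.g. visible image
low_ir, high_ir = split_frequencies(ir)
low_vis, high_vis = split_frequencies(vis)
# Naive baseline: average the low-frequency bands, take the max of the details.
fused_low = 0.5 * (low_ir + low_vis)
fused_high = np.maximum(high_ir, high_vis)
print(fused_low.shape, fused_high.shape)   # (128, 128) (3, 128, 128)
```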

Uğurcan Akyüz, Deniz Katircioglu-Öztürk, Emre K. Süslü, Burhan Keleş, Mete C. Kaya, Gamze Durhan, Meltem G. Akpınar, Figen B. Demirkazık, Gözde B. Akar

arXiv preprint · Aug 21, 2025
Numerous deep learning-based solutions have been developed for the automatic recognition of breast cancer using mammography images. However, their performance often declines when applied to data from different domains, primarily due to domain shift - the variation in data distributions between source and target domains. This performance drop limits the safe and equitable deployment of AI in real-world clinical settings. In this study, we present DoSReMC (Domain Shift Resilient Mammography Classification), a batch normalization (BN) adaptation framework designed to enhance cross-domain generalization without retraining the entire model. Using three large-scale full-field digital mammography (FFDM) datasets - including HCTP, a newly introduced, pathologically confirmed in-house dataset - we conduct a systematic cross-domain evaluation with convolutional neural networks (CNNs). Our results demonstrate that BN layers are a primary source of domain dependence: they perform effectively when training and testing occur within the same domain, and they significantly impair model generalization under domain shift. DoSReMC addresses this limitation by fine-tuning only the BN and fully connected (FC) layers, while preserving pretrained convolutional filters. We further integrate this targeted adaptation with an adversarial training scheme, yielding additional improvements in cross-domain generalizability. DoSReMC can be readily incorporated into existing AI pipelines and applied across diverse clinical environments, providing a practical pathway toward more robust and generalizable mammography classification systems.
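In PyTorch terms, the adaptation described above reduces to freezing the pretrained backbone and leaving only batch-normalization and fully connected parameters trainable; the sketch below shows this with a ResNet-50 chosen purely for illustration.

```python
# Sketch of BN/FC-only fine-tuning: freeze pretrained convolutional
# filters, adapt only batch-norm and fully connected layers.
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V2")
model.fc = nn.Linear(model.fc.in_features, 2)    # e.g. benign vs. malignant head

for p in model.parameters():
    p.requires_grad = False                      # freeze everything first

for m in model.modules():
    if isinstance(m, (nn.BatchNorm2d, nn.Linear)):
        for p in m.parameters():
            p.requires_grad = True               # unfreeze only BN and FC layers

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")
```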

Aqib Nazir Mir, Danish Raza Rizvi

arXiv preprint · Aug 21, 2025
This study comprehensively explores knowledge distillation frameworks for COVID-19 and lung cancer classification using chest X-ray (CXR) images. We employ high-capacity teacher models, including VGG19 and lightweight Vision Transformers (Visformer-S and AutoFormer-V2-T), to guide the training of a compact, hardware-aware student model derived from the OFA-595 supernet. Our approach leverages hybrid supervision, combining ground-truth labels with teacher models' soft targets to balance accuracy and computational efficiency. We validate our models on two benchmark datasets: COVID-QU-Ex and LCS25000, covering multiple classes, including COVID-19, healthy, non-COVID pneumonia, lung, and colon cancer. To interpret the spatial focus of the models, we employ Score-CAM-based visualizations, which provide insight into the reasoning process of both teacher and student networks. The results demonstrate that the distilled student model maintains high classification performance with significantly reduced parameters and inference time, making it an optimal choice in resource-constrained clinical environments. Our work underscores the importance of combining model efficiency with explainability for practical, trustworthy medical AI solutions.
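The hybrid supervision corresponds to the standard distillation objective: a weighted sum of cross-entropy on ground-truth labels and a temperature-scaled KL term against the teacher's soft targets. The weights and temperature below are placeholders, not the paper's reported settings.

```python
# Standard knowledge-distillation loss combining hard labels and
# temperature-softened teacher targets.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    hard = F.cross_entropy(student_logits, labels)                    # ground-truth supervision
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                                       # teacher soft targets
    return alpha * soft + (1.0 - alpha) * hard

s = torch.randn(8, 5)             # student logits, 5 classes
t = torch.randn(8, 5)             # teacher logits
y = torch.randint(0, 5, (8,))
print(distillation_loss(s, t, y).item())
```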

Jeonghyun Noh, Hyun-Jic Oh, Byungju Chae, Won-Ki Jeong

arXiv preprint · Aug 21, 2025
Computed tomography (CT) is widely used in clinical diagnosis, but acquiring high-resolution (HR) CT is limited by radiation exposure risks. Deep learning-based super-resolution (SR) methods have been studied to reconstruct HR from low-resolution (LR) inputs. While supervised SR approaches have shown promising results, they require large-scale paired LR-HR volume datasets that are often unavailable. In contrast, zero-shot methods alleviate the need for paired data by using only a single LR input, but typically struggle to recover fine anatomical details due to limited internal information. To overcome these limitations, we propose a novel zero-shot 3D CT SR framework that leverages upsampled 2D X-ray projection priors generated by a diffusion model. Exploiting the abundance of HR 2D X-ray data, we train a diffusion model on large-scale 2D X-ray projections and introduce a per-projection adaptive sampling strategy. This strategy selects the generative process for each projection, providing HR projections as strong external priors for 3D CT reconstruction. These projections serve as inputs to 3D Gaussian splatting for reconstructing a 3D CT volume. Furthermore, we propose negative alpha blending (NAB-GS), which allows negative values in the Gaussian density representation. NAB-GS enables residual learning between LR and diffusion-based projections, thereby enhancing high-frequency structure reconstruction. Experiments on two datasets show that our method achieves superior quantitative and qualitative results for 3D CT SR.
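The sketch below conveys only the residual-learning objective implied by NAB-GS, with the 3D Gaussian splatting renderer replaced by a free per-pixel weight map so that negative contributions are visible; it is a conceptual toy, not the authors' pipeline.

```python
# Toy illustration of residual learning with unconstrained (possibly
# negative) contributions, standing in for negative alpha blending.
import torch

hr_prior = torch.rand(1, 256, 256)          # diffusion-upsampled X-ray projection (prior)
lr_render = torch.rand(1, 256, 256)         # projection rendered from the LR CT volume
residual_target = hr_prior - lr_render      # residual can be negative anywhere

# Stand-in for splatted Gaussian contributions; weights are unconstrained,
# so negative densities are permitted.
weights = torch.zeros(1, 256, 256, requires_grad=True)
optimizer = torch.optim.Adam([weights], lr=1e-2)

for _ in range(100):
    optimizer.zero_grad()
    loss = torch.mean((weights - residual_target) ** 2)   # fit the residual
    loss.backward()
    optimizer.step()

reconstruction = lr_render + weights.detach()             # LR render plus learned residual
print(float(torch.mean((reconstruction - hr_prior) ** 2)))
```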