Page 124 of 6446438 results

Alec K. Peltekian, Karolina Senkow, Gorkem Durak, Kevin M. Grudzinski, Bradford C. Bemiss, Jane E. Dematte, Carrie Richardson, Nikolay S. Markov, Mary Carns, Kathleen Aren, Alexandra Soriano, Matthew Dapas, Harris Perlman, Aaron Gundersheimer, Kavitha C. Selvan, John Varga, Monique Hinchcliff, Krishnan Warrior, Catherine A. Gao, Richard G. Wunderink, GR Scott Budinger, Alok N. Choudhary, Anthony J. Esposito, Alexander V. Misharin, Ankit Agrawal, Ulas Bagci

arXiv preprint · Sep 27 2025
Interstitial lung disease (ILD) is a leading cause of morbidity and mortality in systemic sclerosis (SSc). Chest computed tomography (CT) is the primary imaging modality for diagnosing and monitoring lung complications in SSc patients. However, its role in disease progression and mortality prediction has not yet been fully clarified. This study introduces a novel, large-scale longitudinal chest CT analysis framework that utilizes radiomics and deep learning to predict mortality associated with lung complications of SSc. We collected and analyzed 2,125 CT scans from SSc patients enrolled in the Northwestern Scleroderma Registry, conducting mortality analyses at one, three, and five years using advanced imaging analysis techniques. Death labels were assigned based on recorded deaths over the one-, three-, and five-year intervals, confirmed by expert physicians. In our dataset, 181, 326, and 428 of the 2,125 CT scans were from patients who died within one, three, and five years, respectively. We fine-tuned pre-trained ResNet-18, DenseNet-121, and Swin Transformer models on the 2,125 CT scans of SSc patients. The models achieved AUCs of 0.769, 0.801, and 0.709 for predicting mortality within one, three, and five years, respectively. Our findings highlight the potential of both radiomics and deep learning computational methods to improve early detection and risk assessment of SSc-related interstitial lung disease, marking a significant advancement in the literature.
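The reported AUCs admit a simple probabilistic reading: the AUC is the probability that the model assigns a higher risk score to a patient who died within the interval than to a survivor. A minimal sketch of that Mann-Whitney formulation, using synthetic scores rather than data from the study:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC as the fraction of (positive, negative) pairs where the
    positive case scores higher, counting ties as half."""
    diffs = scores_pos[:, None] - scores_neg[None, :]
    return (diffs > 0).mean() + 0.5 * (diffs == 0).mean()

pos = np.array([0.9, 0.8, 0.6])   # synthetic scores: died within interval
neg = np.array([0.7, 0.4, 0.3])   # synthetic scores: survivors
value = auc(pos, neg)
```

This pairwise definition is equivalent to the area under the ROC curve and is how ranking quality is usually interpreted for mortality risk models.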

Md. Saiful Bari Siddiqui, Mohammed Imamul Hassan Bhuiyan

arXiv preprint · Sep 27 2025
Convolutional Neural Networks have become a cornerstone of medical image analysis due to their proficiency in learning hierarchical spatial features. However, their focus on the spatial domain alone is inefficient at capturing global, holistic patterns and fails to explicitly model an image's frequency-domain characteristics. To address these challenges, we propose the Spatial-Spectral Summarizer Fusion Network (S$^3$F-Net), a dual-branch framework that learns from both spatial and spectral representations simultaneously. The S$^3$F-Net performs a fusion of a deep spatial CNN with our proposed shallow spectral encoder, SpectraNet. SpectraNet features the proposed SpectralFilter layer, which leverages the Convolution Theorem by applying a bank of learnable filters directly to an image's full Fourier spectrum via a computationally efficient element-wise multiplication. This allows the SpectralFilter layer to attain a global receptive field instantaneously, with its output being distilled by a lightweight summarizer network. We evaluate S$^3$F-Net across four medical imaging datasets spanning different modalities to validate its efficacy and generalizability. Our framework consistently and significantly outperforms its strong spatial-only baseline in all cases, with accuracy improvements of up to 5.13%. With a powerful Bilinear Fusion, S$^3$F-Net achieves a SOTA competitive accuracy of 98.76% on the BRISC2025 dataset. Concatenation Fusion performs better on the texture-dominant Chest X-Ray Pneumonia dataset, achieving 93.11% accuracy, surpassing many top-performing, much deeper models. Our explainability analysis also reveals that the S$^3$F-Net learns to dynamically adjust its reliance on each branch based on the input pathology. These results verify that our dual-domain approach is a powerful and generalizable paradigm for medical image analysis.
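The SpectralFilter idea, as described, follows directly from the Convolution Theorem: filtering becomes an element-wise product in the Fourier domain, so every output value depends on the whole image at once. A minimal sketch with a fixed (rather than learnable) filter bank, standing in for the layer the paper describes:

```python
import numpy as np

def spectral_filter(image, filters):
    """Apply a bank of filters to the image's full 2-D Fourier spectrum
    via element-wise multiplication, then return magnitude responses
    for a downstream summarizer."""
    spectrum = np.fft.fft2(image)               # full frequency representation
    responses = spectrum[None, :, :] * filters  # one response per filter
    return np.abs(responses)                    # (n_filters, H, W) magnitudes

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
bank = rng.standard_normal((4, 8, 8))  # 4 filters; learnable in the real layer
out = spectral_filter(img, bank)
```

Because each spectral coefficient mixes information from every pixel, one element-wise product already gives the global receptive field that a spatial CNN would need many stacked layers to approximate.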

Yuyang Sun, Junchuan Yu, Cuiming Zou

arXiv preprint · Sep 27 2025
Fracture detection plays a critical role in medical imaging analysis. Traditional fracture diagnosis relies on visual assessment by experienced physicians; however, the speed and accuracy of this approach are constrained by the availability of such expertise. With the rapid advancements in artificial intelligence, deep learning models based on the YOLO framework have been widely employed for fracture detection, demonstrating significant potential in improving diagnostic efficiency and accuracy. This study proposes an improved YOLO-based model, termed Fracture-YOLO, which integrates novel Critical-Region-Selector Attention (CRSelector) and Scale-Aware (ScA) heads to further enhance detection performance. Specifically, the CRSelector module utilizes global texture information to focus on critical features of fracture regions. Meanwhile, the ScA module dynamically adjusts the weights of features at different scales, enhancing the model's capacity to identify fracture targets at multiple scales. Experimental results demonstrate that, compared to the baseline model, Fracture-YOLO achieves a significant improvement in detection precision, with mAP50 and mAP50-95 increasing by 4 and 3 points, surpassing the baseline model and achieving state-of-the-art (SOTA) performance.
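For context on the reported metrics: mAP50 counts a detection as correct when its intersection-over-union (IoU) with a ground-truth box reaches 0.5, while mAP50-95 averages over IoU thresholds from 0.5 to 0.95. A minimal IoU helper over (x1, y1, x2, y2) boxes:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

# Two boxes overlapping on half their width: IoU = 50 / 150 = 1/3,
# which would NOT count as a hit at the mAP50 threshold.
score = iou((0, 0, 10, 10), (5, 0, 15, 10))
```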

Donghao Zhang, Yimin Chen, Kauê TN Duarte, Taha Aslan, Mohamed AlShamrani, Brij Karmur, Yan Wan, Shengcai Chen, Bo Hu, Bijoy K Menon, Wu Qiu

arXiv preprint · Sep 27 2025
Non-contrast computed tomography (NCCT) is essential for rapid stroke diagnosis but is limited by low image contrast and signal-to-noise ratio. We address this challenge by leveraging DINOv3, a state-of-the-art self-supervised vision transformer, to generate powerful feature representations for a comprehensive set of stroke analysis tasks. Our evaluation encompasses infarct and hemorrhage segmentation, anomaly classification (normal vs. stroke and normal vs. infarct vs. hemorrhage), hemorrhage subtype classification (EDH, SDH, SAH, IPH, IVH), and dichotomized ASPECTS classification (<=6 vs. >6) on multiple public and private datasets. This study establishes strong benchmarks for these tasks and demonstrates the potential of advanced self-supervised models to improve automated stroke diagnosis from NCCT, providing a clear analysis of both the advantages and current constraints of the approach. The code is available at https://github.com/Zzz0251/DINOv3-stroke.
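The evaluation pattern implied here, frozen self-supervised features with a lightweight classifier on top, can be sketched with stand-in embeddings. The feature dimension and the nearest-centroid probe below are illustrative assumptions, not the paper's pipeline; in a real setup each NCCT slice would be encoded by the pretrained DINOv3 backbone:

```python
import numpy as np

# Stand-in for frozen backbone embeddings: random vectors with a class
# offset play the role of DINOv3 features (dim 384 is an assumption).
rng = np.random.default_rng(0)
n, dim = 100, 384
normal = rng.standard_normal((n, dim))
stroke = rng.standard_normal((n, dim)) + 0.5   # shifted class mean

# Simplest possible linear probe: nearest class centroid in feature space.
mu_n, mu_s = normal.mean(0), stroke.mean(0)

def predict(x):
    # assign the embedding to the closer class centroid
    return int(np.linalg.norm(x - mu_s) < np.linalg.norm(x - mu_n))

preds = [predict(x) for x in np.vstack([normal, stroke])]
acc = np.mean(np.array(preds) == np.array([0] * n + [1] * n))
```

The point of such probes is that when the frozen features are good, even a trivial classifier separates the classes; that is what makes them a fair benchmark of the representation itself.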

Lin Tian, Xiaoling Hu, Juan Eugenio Iglesias

arXiv preprint · Sep 27 2025
Accurate image registration is essential for downstream applications, yet current deep registration networks provide limited indications of whether and when their predictions are reliable. Existing uncertainty estimation strategies, such as Bayesian methods, ensembles, or MC dropout, require architectural changes or retraining, limiting their applicability to pretrained registration networks. Instead, we propose a test-time uncertainty estimation framework that is compatible with any pretrained networks. Our framework is grounded in the transformation equivariance property of registration, which states that the true mapping between two images should remain consistent under spatial perturbations of the input. By analyzing the variance of network predictions under such perturbations, we derive a theoretical decomposition of perturbation-based uncertainty in registration. This decomposition separates into two terms: (i) an intrinsic spread, reflecting epistemic noise, and (ii) a bias jitter, capturing how systematic error drifts under perturbations. Across four anatomical structures (brain, cardiac, abdominal, and lung) and multiple registration models (uniGradICON, SynthMorph), the uncertainty maps correlate consistently with registration errors and highlight regions requiring caution. Our framework turns any pretrained registration network into a risk-aware tool at test time, placing medical image registration one step closer to safe deployment in clinical and large-scale research settings.
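The equivariance idea can be made concrete in one dimension: perturb the input by a known shift, undo that shift in the prediction, and treat any residual variation as the uncertainty signal. A toy sketch with a correlation-based "registration network" (noiseless, so the residual variance collapses to zero; a real network would leave nonzero spread and jitter):

```python
import numpy as np

def register(fixed, moving):
    """Toy 1-D 'registration network': estimate the integer shift that
    aligns moving to fixed by maximizing circular cross-correlation."""
    corr = np.fft.ifft(np.conj(np.fft.fft(fixed)) * np.fft.fft(moving)).real
    shift = int(np.argmax(corr))
    m = len(fixed)
    return shift if shift <= m // 2 else shift - m   # signed shift

rng = np.random.default_rng(1)
fixed = rng.standard_normal(64)
true_shift = 5
moving = np.roll(fixed, true_shift)

# Equivariance: shifting the moving image by t should change the estimate
# by exactly t; after compensating, any leftover variation is uncertainty.
compensated = []
for t in range(-3, 4):
    est = register(fixed, np.roll(moving, t))
    compensated.append(est - t)          # undo the known perturbation
uncertainty = float(np.var(compensated))
```

Because the toy estimator is exactly equivariant and noise-free, the compensated predictions are identical and the variance is zero; deviations from this ideal are what the paper's decomposition separates into intrinsic spread and bias jitter.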

Fahad Mostafa, Kannon Hossain, Hafiz Khan

arXiv preprint · Sep 27 2025
Early and accurate diagnosis of Alzheimer Disease is critical for effective clinical intervention, particularly in distinguishing it from Mild Cognitive Impairment, a prodromal stage marked by subtle structural changes. In this study, we propose a hybrid deep learning ensemble framework for Alzheimer Disease classification using structural magnetic resonance imaging. Gray and white matter slices are used as inputs to three pretrained convolutional neural networks, namely ResNet50, NASNet, and MobileNet, each fine-tuned through an end-to-end process. To further enhance performance, we incorporate a stacked ensemble learning strategy with a meta-learner and weighted averaging to optimally combine the base models. Evaluated on the Alzheimer Disease Neuroimaging Initiative dataset, the proposed method achieves state-of-the-art accuracy of 99.21% for Alzheimer Disease vs. Mild Cognitive Impairment and 91.0% for Mild Cognitive Impairment vs. Normal Controls, outperforming conventional transfer learning and baseline ensemble methods. To improve interpretability in image-based diagnostics, we integrate Explainable AI techniques via Gradient-weighted Class Activation Mapping (Grad-CAM), which generates heatmaps and attribution maps that highlight critical regions in gray and white matter slices, revealing structural biomarkers that influence model decisions. These results highlight the framework's potential for robust and scalable clinical decision support in neurodegenerative disease diagnostics.
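The weighted-averaging half of the ensemble strategy reduces to soft voting over base-model class probabilities. A minimal sketch; the weights and probabilities below are illustrative values, not the paper's fitted ones:

```python
import numpy as np

# Predicted class probabilities from three base models (rows), two classes.
base_probs = np.array([
    [0.90, 0.10],   # ResNet50-style base model (illustrative)
    [0.80, 0.20],   # NASNet-style base model (illustrative)
    [0.60, 0.40],   # MobileNet-style base model (illustrative)
])

# Weight each model by a hypothetical validation accuracy, normalized.
val_acc = np.array([0.95, 0.90, 0.85])
weights = val_acc / val_acc.sum()

ensemble = weights @ base_probs     # weighted soft voting over classes
pred = int(np.argmax(ensemble))
```

In the stacked variant described in the abstract, a meta-learner would be trained on the base models' outputs instead of using fixed accuracy-derived weights; the combination step itself stays this simple.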

Liu, Y. E., Bortolotto Bampi, J. V., Arthur, R. F., Salindri, A. D., Busatto, C., Avedillo Jimenez, P., Pelissari, D. M., Dockhorn CostaJohansen, F., Arana-Narvaez, R., Moreno Roca, A. F., Solis Tupes, W. S., Mori Jiu, E., Moreno Roca, C. A., Abregu Contreras, E. A., Alarcon Guizado, V. A., Trujillo Trujillo, J., Marcelino, B., Gonzalez, M. A., Cordova Ayllon, M. C., Cohen, T., Huaman, M. A., Goldhaber-Fiebert, J. D., Croda, J., Andrews, J. R.

medRxiv preprint · Sep 27 2025
Background: Incarceration is a leading driver of tuberculosis in Latin America. Active case-finding in prisons may reduce population-wide tuberculosis burden, but optimal strategies and cost-effectiveness remain uncertain. Methods and findings: Using dynamic transmission models calibrated to Brazil, Colombia, and Peru, we simulated annual or biannual (twice-yearly) prison-wide screening, alone or combined with entry and exit screening from 2026-2035. We evaluated four algorithms: 1) symptom screening, 2) chest X-ray with computer-aided detection (CXR-CAD), 3) symptoms and CXR-CAD (follow-up testing if either is positive), and 4) GeneXpert Ultra with pooled sputum. Individuals screening positive then received individual Xpert. We projected impacts on within-prison and population-level tuberculosis incidence in 2035, along with discounted costs (2023 USD) and disability-adjusted life years (DALYs). Model projections showed that combined entry, exit, and biannual screening with CXR-CAD was highly impactful and cost-effective across countries, reducing tuberculosis incidence by 62-87% in prisons and 18-28% population-wide. Compared to only biannual CXR-CAD (the next best strategy), the incremental cost per DALY averted of adding entry and exit screening was $2984 (Brazil), $2925 (Colombia), and $645 (Peru). Adding symptom screening to CXR-CAD marginally increased benefit and was only cost-effective in Peru's higher-incidence prisons. Biannual screening alone remained cost-effective at prison incidence levels well below national averages. In settings without CXR-CAD, pooled Xpert was an impactful, cost-effective alternative. Conclusions: These modeling results support immediate national-level adoption of prison-wide tuberculosis screening twice-yearly and at entry and exit. Screening should begin with available methods while building capacity for CXR-CAD, the most cost-effective algorithm.
AUTHOR SUMMARY

Why was this study done?
- In Latin America, rising incarceration has fueled the tuberculosis epidemic, with extremely high infection rates among people deprived of liberty. These effects extend beyond prison walls, driving tuberculosis spread in outside communities.
- Interventions targeted to prisons may have an outsized impact on reducing tuberculosis in the broader population.
- The World Health Organization strongly recommends systematic screening for tuberculosis in prisons, but there is little evidence on how often to screen, which methods to use, and whether these approaches are cost-effective across different country contexts.

What did the researchers do and find?
- We developed mathematical models using data from Brazil, Colombia, and Peru to simulate different prison-based tuberculosis screening strategies and project their health impacts and costs.
- We compared prison-wide screening once or twice a year, screening at prison entry or exit, and combinations of these approaches. We also compared different screening methods using symptoms, chest X-ray with computer-aided detection (CXR-CAD), or pooled molecular testing (GeneXpert Ultra).
- The models projected that combining entry, exit, and twice-yearly prison-wide screening with CXR-CAD would be highly impactful and cost-effective in all three countries. This strategy could substantially reduce tuberculosis in prisons and in the general population.
- Twice-yearly prison-wide screening remained cost-effective even in prisons with much lower tuberculosis rates than national averages.
- CXR-CAD was the optimal screening method, but pooled molecular testing was also impactful and cost-effective where CXR-CAD was not available.

Implications of all the available evidence
- Systematic screening in prisons, twice-yearly and at entry and exit, is projected to be highly impactful and cost-effective across different settings in Latin America.
- These findings support urgent adoption of intensive prison-based tuberculosis screening throughout the region, starting with the best available diagnostic tools while investing in CXR-CAD.
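The cost-effectiveness figures quoted above are incremental cost-effectiveness ratios: the extra cost of a strategy divided by the extra DALYs it averts relative to the next best strategy. A sketch with made-up inputs (not the study's model outputs):

```python
def icer(cost_new, cost_old, dalys_averted_new, dalys_averted_old):
    """Incremental cost-effectiveness ratio: incremental cost per
    incremental DALY averted versus the comparator strategy."""
    return (cost_new - cost_old) / (dalys_averted_new - dalys_averted_old)

# Hypothetical: adding entry/exit screening costs $290,000 more and
# averts 450 additional DALYs versus biannual screening alone.
value = icer(cost_new=1_290_000, cost_old=1_000_000,
             dalys_averted_new=950, dalys_averted_old=500)
```

A strategy is then judged cost-effective by comparing this ratio to a country-specific willingness-to-pay threshold per DALY averted.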

Dharmik A

PubMed paper · Sep 26 2025
SARS-CoV-2, the causative agent of COVID-19, remains a global health concern due to its high transmissibility and evolving variants. Although vaccination efforts and therapeutic advancements have mitigated disease severity, emerging mutations continue to challenge diagnostics and containment strategies. As of mid-February 2025, global test positivity has risen to 11%, marking the highest level in over 6 months, despite widespread immunization efforts. Newer variants demonstrate enhanced host cell binding, increasing both infectivity and diagnostic complexity. This study aimed to evaluate the effectiveness of deep transfer learning in delivering a rapid, accurate, and mutation-resilient COVID-19 diagnosis from medical imaging, with a focus on scalability and accessibility. An automated detection system was developed using state-of-the-art convolutional neural networks, including VGG16 (Visual Geometry Group network-16 layers), ResNet50 (residual network-50 layers), ConvNeXtTiny (convolutional next-tiny), MobileNet (mobile network), NASNetMobile (neural architecture search network-mobile version), and DenseNet121 (densely connected convolutional network-121 layers), to detect COVID-19 from chest X-ray and computed tomography (CT) images. Among all the models evaluated, DenseNet121 emerged as the best-performing architecture for COVID-19 diagnosis using X-ray and CT images. It achieved an impressive accuracy of 98%, with a precision of 96.9%, a recall of 98.9%, an F1-score of 97.9%, and an area under the curve score of 99.8%, indicating a high degree of consistency and reliability in detecting both positive and negative cases. The confusion matrix showed minimal false positives and false negatives, underscoring the model's robustness in real-world diagnostic scenarios. Given its performance, DenseNet121 is a strong candidate for deployment in clinical settings and serves as a benchmark for future improvements in artificial intelligence-assisted diagnostic tools. 
The study results underscore the potential of artificial intelligence-powered diagnostics in supporting early detection and global pandemic response. With careful optimization, deep learning models can address critical gaps in testing, particularly in settings constrained by limited resources or emerging variants.
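The reported metrics can be cross-checked for internal consistency, since F1 is the harmonic mean of precision and recall; plugging in the abstract's precision and recall recovers its F1-score:

```python
# Precision and recall as reported for DenseNet121 in the abstract.
precision, recall = 0.969, 0.989

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
```

This kind of sanity check is cheap and catches transcription errors in reported results; here the numbers are consistent with the stated F1 of 97.9%.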

Gagliardo C, Feraco P, Contrino E, D'Angelo C, Geraci L, Salvaggio G, Gagliardo A, La Grutta L, Midiri M, Marrale M

PubMed paper · Sep 26 2025
Ultra-low-field magnetic resonance imaging (ULF-MRI), operating below 0.2 Tesla, is gaining renewed interest as a re-emerging diagnostic modality in a field dominated by high- and ultra-high-field systems. Recent advances in magnet design, RF coils, pulse sequences, and AI-based reconstruction have significantly enhanced image quality, mitigating traditional limitations such as low signal- and contrast-to-noise ratio and reduced spatial resolution. ULF-MRI offers distinct advantages: reduced susceptibility artifacts, safer imaging in patients with metallic implants, low power consumption, and true portability for point-of-care use. This narrative review synthesizes the physical foundations, technological advances, and emerging clinical applications of ULF-MRI. A focused literature search across PubMed, Scopus, IEEE Xplore, and Google Scholar was conducted up to August 11, 2025, using combined keywords targeting hardware, software, and clinical domains. Inclusion emphasized scientific rigor and thematic relevance. A comparative analysis with other imaging modalities highlights the specific niche ULF-MRI occupies within the broader diagnostic landscape. Future directions and challenges for clinical translation are explored. In a world increasingly polarized between the push for ultra-high-field excellence and the need for accessible imaging, ULF-MRI embodies a modern "David versus Goliath" theme, offering a sustainable, democratizing force capable of expanding MRI access to anyone, anywhere.

Ikebe Y, Fujima N, Kameda H, Harada T, Shimizu Y, Kwon J, Yoneyama M, Kudo K

PubMed paper · Sep 26 2025
To evaluate the image quality and clinical utility of ultra-fast T2-weighted imaging (UF-T2WI), which acquires all slice data in 7 s using a single-shot turbo spin-echo (SSTSE) technique combined with dual-type deep learning (DL) reconstruction, incorporating DL-based image denoising and super-resolution processing, by comparing UF-T2WI with conventional T2WI. We analyzed data from 38 patients who underwent both conventional T2WI and UF-T2WI with the dual-type DL-based image reconstruction. Two board-certified radiologists independently performed blinded qualitative assessments of the patients' images obtained with UF-T2WI with DL and conventional T2WI, evaluating the overall image quality, anatomical structure visibility, and levels of noise and artifacts. In cases that included central nervous system diseases, the lesions' delineation was also assessed. The quantitative analysis included measurements of signal-to-noise ratios (SNRs) in white and gray matter and the contrast-to-noise ratio (CNR) between gray and white matter. Compared to conventional T2WI, UF-T2WI with DL received significantly higher ratings for overall image quality and lower noise and artifact levels (p < 0.001 for both readers). The anatomical visibility was significantly better in UF-T2WI for one reader, with no significant difference for the other reader. The lesion visibility in UF-T2WI was comparable to that in conventional T2WI. Quantitatively, the SNRs and CNRs were all significantly higher in UF-T2WI than conventional T2WI (p < 0.001). The combination of SSTSE with dual-type DL reconstruction allows for the acquisition of clinically acceptable T2WI images in just 7 s. This technique shows strong potential to reduce MRI scan times and improve clinical workflow efficiency.
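The quantitative analysis described, ROI-based SNR and gray-white CNR, can be sketched with synthetic ROI intensities. The values and the pooled-ROI noise estimate below are illustrative conventions, not measurements or methods from the study:

```python
import numpy as np

# Synthetic ROI intensity samples standing in for measured voxel values.
rng = np.random.default_rng(3)
white = 100 + 5 * rng.standard_normal(500)   # white-matter ROI
gray = 80 + 5 * rng.standard_normal(500)     # gray-matter ROI

# One common convention: estimate the noise SD from mean-centered ROIs.
noise_sd = np.std(np.concatenate([white - white.mean(), gray - gray.mean()]))

snr_wm = white.mean() / noise_sd                      # white-matter SNR
snr_gm = gray.mean() / noise_sd                       # gray-matter SNR
cnr = abs(gray.mean() - white.mean()) / noise_sd      # gray-white CNR
```

CNR divides the tissue contrast by the same noise estimate, which is why a reconstruction that denoises without blurring tissue boundaries raises both SNR and CNR together, as reported for UF-T2WI.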