
Enhancing B-mode-based breast cancer diagnosis via cross-attention fusion of H-scan and Nakagami imaging with multi-CAM-QUS-driven XAI.

Mondol SS, Hasan MK

pubmed · Aug 8 2025
B-mode ultrasound is widely employed for breast lesion diagnosis due to its affordability, widespread availability, and effectiveness, particularly in cases of dense breast tissue where mammography may be less sensitive. However, it disregards critical tissue information embedded in raw radiofrequency (RF) data. While both modalities have demonstrated promise in Computer-Aided Diagnosis (CAD), their combined potential remains largely unexplored.
Approach. This paper presents an automated breast lesion classification network that utilizes H-scan and Nakagami parametric images derived from RF ultrasound signals, combined with machine-generated B-mode images, seamlessly integrated through a Multi-Modal Cross-Attention Fusion (MM-CAF) mechanism to extract complementary information. The proposed architecture also incorporates an attention-guided modified InceptionV3 for feature extraction, a Knowledge-Guided Cross-Modality Learning (KGCML) module for inter-modal knowledge sharing, and Attention-Driven Context Enhancement (ADCE) modules to improve contextual understanding and fusion with the classification network. The network employs categorical cross-entropy loss, a Multi-CAM-based loss to guide learning toward accurate lesion-specific features, and a Multi-QUS-based loss to embed clinically meaningful domain knowledge and effectively distinguish between benign and malignant lesions, all while supporting explainable AI (XAI) principles.
Main results. Experiments conducted on multi-center breast ultrasound datasets (BUET-BUSD, ATL, and OASBUD), characterized by demographic diversity, demonstrate the effectiveness of the proposed approach, achieving classification accuracies of 92.54%, 89.93%, and 90.0%, respectively, along with high interpretability and trustworthiness. These results surpass those of existing methods based on B-mode and/or RF data, highlighting the superior performance and robustness of the proposed technique. By integrating complementary RF-derived information with B-mode imaging, together with pseudo-segmentation and domain-informed loss functions, our method significantly boosts lesion classification accuracy, enabling fully automated, explainable CAD and paving the way for widespread clinical adoption of AI-driven breast screening.
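
To make the fusion mechanism concrete, here is a minimal, hypothetical PyTorch sketch of a two-modality cross-attention block in the spirit of MM-CAF; the class name, layer sizes, and token shapes are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical two-modality cross-attention fusion sketch (not the paper's code).
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        # Each modality attends to the other's feature tokens.
        self.attn_ab = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_ba = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats_a, feats_b):
        # feats_*: (batch, tokens, dim) backbone feature maps flattened to tokens.
        a2b, _ = self.attn_ab(feats_a, feats_b, feats_b)  # modality A queries B
        b2a, _ = self.attn_ba(feats_b, feats_a, feats_a)  # modality B queries A
        # Residual fusion of the two attended streams.
        return self.norm(feats_a + a2b) + self.norm(feats_b + b2a)

fusion = CrossAttentionFusion()
bmode = torch.randn(2, 196, 256)   # e.g., B-mode backbone tokens (assumed shape)
hscan = torch.randn(2, 196, 256)   # e.g., H-scan/Nakagami tokens (assumed shape)
fused = fusion(bmode, hscan)       # (2, 196, 256), passed on to the classifier
```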

A Cohort Study of Pediatric Severe Community-Acquired Pneumonia Involving AI-Based CT Image Parameters and Electronic Health Record Data.

He M, Yuan J, Liu A, Pu R, Yu W, Wang Y, Wang L, Nie X, Yi J, Xue H, Xie J

pubmed · Aug 8 2025
Community-acquired pneumonia (CAP) is a significant concern for children worldwide and is associated with high morbidity and mortality. To improve patient outcomes, early intervention and accurate diagnosis are essential. Artificial intelligence (AI) can mine and label imaging data and thus may contribute to precision research and personalized clinical management. The baseline characteristics of 230 children with severe CAP hospitalized from January 2023 to October 2024 were retrospectively analyzed. The patients were divided into two groups according to the presence of respiratory failure. The predictive ability of AI-derived chest CT (computed tomography) indices alone for respiratory failure was assessed via logistic regression analysis, and ROC (receiver operating characteristic) curves were plotted for these regression models. After adjusting for age, white blood cell count, neutrophils, lymphocytes, creatinine, wheezing, and fever > 5 days, a greater number of involved lung lobes [odds ratio 1.347, 95% confidence interval (95% CI) 1.036-1.750, P = 0.026] and bilateral lung involvement (odds ratio 2.734, 95% CI 1.084-6.893, P = 0.033) were significantly associated with respiratory failure. The discriminatory power (as measured by the area under the curve) of Model 2 and Model 3, which combined electronic health record data with the chest CT imaging features, was better than that of Model 0 and Model 1, which contained only the chest CT parameters. The sensitivity and specificity of Model 2 at the optimal critical value (0.441) were 84.3% and 59.8%, respectively. The sensitivity and specificity of Model 3 at the optimal critical value (0.446) were 68.6% and 76.0%, respectively. The use of AI-derived chest CT indices may achieve high diagnostic accuracy and guide precise interventions for patients with severe CAP. However, clinical, laboratory, and AI-derived chest CT indices should all be included to accurately predict and treat severe CAP.
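
As an illustration of this kind of analysis, the following hedged sketch runs a covariate-adjusted logistic regression and ROC evaluation with statsmodels and scikit-learn; the column names and synthetic data are placeholders, since the study cohort is not public.

```python
# Illustrative covariate-adjusted logistic regression + ROC sketch (placeholder data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 230
df = pd.DataFrame({
    "lobes_involved": rng.integers(1, 6, n),   # AI-derived CT index (assumed name)
    "bilateral": rng.integers(0, 2, n),        # AI-derived CT index (assumed name)
    "age": rng.uniform(1, 14, n),
    "wbc": rng.normal(10, 3, n),
})
df["resp_failure"] = rng.integers(0, 2, n)     # placeholder outcome

X = sm.add_constant(df[["lobes_involved", "bilateral", "age", "wbc"]])
model = sm.Logit(df["resp_failure"], X).fit(disp=0)
odds_ratios = np.exp(model.params)             # ORs, as reported in the abstract
or_ci = np.exp(model.conf_int())               # 95% CIs for the ORs

pred = model.predict(X)
auc = roc_auc_score(df["resp_failure"], pred)  # discriminatory power (AUC)
fpr, tpr, thresholds = roc_curve(df["resp_failure"], pred)
# Youden's J is one common way to pick the "optimal critical value" quoted above.
best_threshold = thresholds[np.argmax(tpr - fpr)]
```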

MRI-based radiomics for preoperative T-staging of rectal cancer: a retrospective analysis.

Patanè V, Atripaldi U, Sansone M, Marinelli L, Del Tufo S, Arrichiello G, Ciardiello D, Selvaggi F, Martinelli E, Reginelli A

pubmed · Aug 8 2025
Preoperative T-staging in rectal cancer is essential for treatment planning, yet conventional MRI shows limited accuracy (~60-78%). Our study investigates whether radiomic analysis of high-resolution T2-weighted MRI can non-invasively improve staging accuracy through a retrospective evaluation in a real-world surgical cohort. This single-center retrospective study included 200 patients (January 2024-April 2025) with pathologically confirmed rectal cancer, all undergoing preoperative high-resolution T2-weighted MRI within one week prior to curative surgery and no neoadjuvant therapy. Manual segmentation was performed using ITK-SNAP, followed by extraction of 107 radiomic features via PyRadiomics. Feature selection employed mRMR and LASSO logistic regression, culminating in a Rad-score predictive model. Statistical performance was evaluated using ROC curves (AUC), accuracy, sensitivity, specificity, and DeLong's test. Among 200 patients, 95 were pathologically staged as T2 and 105 as T3-T4 (55 T3, 50 T4). After preprocessing, 26 radiomic features were retained; key features, including ngtdm_contrast and ngtdm_coarseness, showed AUC values > 0.70. The LASSO-based model achieved an AUC of 0.82 (95% CI: 0.75-0.89), with overall accuracy of 81%, sensitivity of 78%, and specificity of 84%. Radiomic analysis of standard preoperative T2-weighted MRI provides a reliable, non-invasive method to predict rectal cancer T-stage. This approach has the potential to enhance staging accuracy and inform personalized surgical planning. Prospective multicenter validation is required for broader clinical implementation.
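
A minimal sketch of the Rad-score modeling step, assuming the 200 x 107 feature matrix has already been extracted with PyRadiomics; the synthetic data and the choice of C are placeholders, not the study's tuned values.

```python
# LASSO-penalized logistic regression over radiomic features (placeholder data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 107))      # placeholder for the PyRadiomics feature matrix
y = rng.integers(0, 2, size=200)     # 0 = T2, 1 = T3-T4 (placeholder labels)

# The L1 penalty performs LASSO-style feature selection; C controls sparsity.
rad_score_model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
auc = cross_val_score(rad_score_model, X, y, cv=5, scoring="roc_auc")
print(f"CV AUC: {auc.mean():.2f}")

rad_score_model.fit(X, y)
n_kept = np.sum(rad_score_model[-1].coef_ != 0)   # features surviving selection
```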

SamRobNODDI: q-space sampling-augmented continuous representation learning for robust and generalized NODDI.

Xiao T, Cheng J, Fan W, Dong E, Wang S

pubmed · Aug 8 2025
Neurite Orientation Dispersion and Density Imaging (NODDI) microstructure estimation from diffusion magnetic resonance imaging (dMRI) is of great significance for the discovery and treatment of various neurological diseases. Current deep learning-based methods accelerate NODDI parameter estimation and improve its accuracy. However, most methods require the number and coordinates of gradient directions during testing and training to remain strictly consistent, significantly limiting the generalization and robustness of these models in NODDI parameter estimation. Therefore, it is imperative to develop methods that can perform robustly under varying diffusion gradient directions. In this paper, we propose a q-space sampling-augmented continuous representation learning framework (SamRobNODDI) to achieve robust and generalized NODDI. Specifically, a continuous representation learning method based on q-space sampling augmentation is introduced to fully explore the information between different gradient directions in q-space. Furthermore, we design a sampling consistency loss to constrain the outputs of different sampling schemes, ensuring that the outputs remain as consistent as possible, thereby further enhancing performance and robustness to varying q-space sampling schemes. SamRobNODDI is also a flexible framework that can be applied to different backbone networks. SamRobNODDI was compared against seven state-of-the-art methods across 18 diverse q-space sampling schemes. Extensive experimental validations were conducted under both identical and diverse sampling schemes for training and testing, as well as across varying sampling rates, different loss functions, and multiple network backbones. Results demonstrate that the proposed SamRobNODDI has better performance, robustness, generalization, and flexibility in the face of varying q-space sampling schemes.
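
The sampling consistency idea can be sketched briefly: the same subject's dMRI, sub-sampled under two different q-space schemes, should yield the same NODDI maps. The toy network, tensor shapes, and weighting factor below are illustrative assumptions rather than the SamRobNODDI implementation.

```python
# Hedged sketch of a sampling-consistency loss (not the paper's code).
import torch
import torch.nn.functional as F

def sampling_consistency_loss(net, dwi, scheme_a, scheme_b):
    """dwi: (B, G, H, W) diffusion volumes over G gradient directions;
    scheme_*: index tensors selecting two different q-space samplings."""
    pred_a = net(dwi[:, scheme_a])   # NODDI maps from sampling scheme A
    pred_b = net(dwi[:, scheme_b])   # NODDI maps from sampling scheme B
    return F.mse_loss(pred_a, pred_b)

def total_loss(net, dwi, target, scheme_a, scheme_b, lam=0.1):
    # Supervised fit on one scheme plus consistency across schemes;
    # lam is an assumed weighting, not a reported hyperparameter.
    sup = F.mse_loss(net(dwi[:, scheme_a]), target)
    return sup + lam * sampling_consistency_loss(net, dwi, scheme_a, scheme_b)

net = torch.nn.Conv2d(30, 3, 1)      # toy stand-in: 30 directions -> 3 NODDI maps
dwi = torch.randn(2, 60, 32, 32)     # placeholder acquisition, 60 directions
scheme_a = torch.randperm(60)[:30]   # two random 30-direction sub-samplings
scheme_b = torch.randperm(60)[:30]
target = torch.randn(2, 3, 32, 32)   # placeholder ground-truth NODDI maps
loss = total_loss(net, dwi, target, scheme_a, scheme_b)
```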

GPT-4 vs. Radiologists: who advances mediastinal tumor classification better across report quality levels? A cohort study.

Wen R, Li X, Chen K, Sun M, Zhu C, Xu P, Chen F, Ji C, Mi P, Li X, Deng X, Yang Q, Song W, Shang Y, Huang S, Zhou M, Wang J, Zhou C, Chen W, Liu C

pubmed · Aug 8 2025
Accurate mediastinal tumor classification is crucial for treatment planning, but diagnostic performance varies with radiologists' experience and report quality. We aimed to evaluate GPT-4's diagnostic accuracy in classifying mediastinal tumors from radiological reports, compared to radiologists of different experience levels, using radiological reports of varying quality. We conducted a retrospective study of 1,494 patients from five tertiary hospitals with mediastinal tumors diagnosed via chest CT and pathology. Radiological reports were categorized into low, medium, and high quality based on predefined criteria assessed by experienced radiologists. Six radiologists (two residents, two attending radiologists, and two associate senior radiologists) and GPT-4 evaluated the chest CT reports. Diagnostic performance was analyzed overall, by report quality, and by tumor type using Wald χ2 tests and 95% CIs calculated via the Wilson method. GPT-4 achieved an overall diagnostic accuracy of 73.3% (95% CI: 71.0-75.5), comparable to associate senior radiologists (74.3%, 95% CI: 72.0-76.5; p > 0.05). For low-quality reports, GPT-4 outperformed associate senior radiologists (60.8% vs. 51.1%, p < 0.001). In high-quality reports, GPT-4 was comparable to attending radiologists (80.6% vs. 79.4%, p > 0.05). Diagnostic performance varied by tumor type: GPT-4 was comparable to radiology residents for neurogenic tumors (44.9% vs. 50.3%, p > 0.05), similar to associate senior radiologists for teratomas (68.1% vs. 65.9%, p > 0.05), and superior in diagnosing lymphoma (75.4% vs. 60.4%, p < 0.001). GPT-4 demonstrated interpretation accuracy comparable to associate senior radiologists, excelling in low-quality reports and outperforming them in diagnosing lymphoma. These findings underscore GPT-4's potential to enhance diagnostic performance in challenging diagnostic scenarios.
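
The Wilson-method intervals quoted above are easy to reproduce; the sketch below recomputes the 95% CI for GPT-4's overall accuracy (73.3% of 1,494 reports) and recovers the reported 71.0-75.5% range.

```python
# Wilson score interval for a binomial proportion.
import math

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

lo, hi = wilson_ci(round(0.733 * 1494), 1494)
print(f"73.3% accuracy, 95% CI: {lo:.1%}-{hi:.1%}")  # ~71.0%-75.5%, as reported
```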

Deep Learning Chest X-Ray Age, Epigenetic Aging Clocks and Associations with Age-Related Subclinical Disease in the Project Baseline Health Study.

Chandra J, Short S, Rodriguez F, Maron DJ, Pagidipati N, Hernandez AF, Mahaffey KW, Shah SH, Kiel DP, Lu MT, Raghu VK

pubmed · Aug 8 2025
Chronological age is an important component of medical risk scores and decision-making. However, there is considerable variability in how individuals age. We recently published an open-source deep learning model to assess biological age from chest radiographs (CXR-Age), which predicts all-cause and cardiovascular mortality better than chronological age. Here, we compare CXR-Age to two established epigenetic aging clocks (first generation: Horvath Age; second generation: DNAm PhenoAge) to test which is more strongly associated with cardiopulmonary disease and frailty. Our cohort consisted of 2,097 participants from the Project Baseline Health Study, a prospective cohort study of individuals from four US sites. We compared the association between the different aging clocks and measures of cardiopulmonary disease, frailty, and protein abundance collected at the participant's first annual visit, using linear regression models adjusted for common confounders. We found that CXR-Age was associated with coronary calcium, cardiovascular risk factors, worsening pulmonary function, increased frailty, and plasma abundance of two proteins implicated in neuroinflammation and aging. Associations with DNAm PhenoAge were weaker for pulmonary function and for all metrics in middle-aged adults. We identified thirteen proteins that were associated with DNAm PhenoAge, one of which (CDH13) was also associated with CXR-Age. No associations were found with Horvath Age. These results suggest that CXR-Age may serve as a better metric of cardiopulmonary aging than epigenetic aging clocks, especially in midlife adults.
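
A hedged sketch of the adjusted-association analysis described here, using statsmodels OLS; the variable names, confounder set, and synthetic data are placeholders, as the Project Baseline Health Study data are not reproduced here.

```python
# Confounder-adjusted linear regression sketch (placeholder data and names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2097
df = pd.DataFrame({
    "cxr_age": rng.normal(55, 10, n),     # CXR-Age prediction (assumed name)
    "chron_age": rng.normal(55, 10, n),   # chronological age
    "sex": rng.integers(0, 2, n),
    "fev1": rng.normal(3.0, 0.6, n),      # a pulmonary-function outcome
})

# Does CXR-Age explain the outcome beyond chronological age and sex?
fit = smf.ols("fev1 ~ cxr_age + chron_age + sex", data=df).fit()
print(fit.params["cxr_age"], fit.pvalues["cxr_age"])
```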

Fourier Optics and Deep Learning Methods for Fast 3D Reconstruction in Digital Holography

Justin London

arxiv preprint · Aug 8 2025
Computer-generated holography (CGH) is a promising method that modulates user-defined waveforms with digital holograms. An efficient and fast pipeline framework is proposed to synthesize CGH from initial point cloud and MRI data. This input data is reconstructed into volumetric objects that are then fed to non-convex Fourier optics optimization algorithms for phase-only hologram (POH) and complex hologram (CH) generation using alternating projection, SGD, and quasi-Newton methods. The reconstruction performance of these algorithms, as measured by MSE, RMSE, and PSNR, is analyzed and compared against HoloNet, a deep learning CGH approach. The performance metrics are shown to improve when 2D median filtering is applied to remove artifacts and speckle noise during optimization.
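
For the alternating-projection branch, a minimal Gerchberg-Saxton-style sketch for phase-only hologram synthesis is shown below; the target pattern, iteration count, and filtering step are illustrative choices, not the paper's exact pipeline.

```python
# Alternating projection (Gerchberg-Saxton style) for a phase-only hologram.
import numpy as np
from scipy.ndimage import median_filter

def poh_alternating_projection(target_amp, iters=100):
    """target_amp: desired amplitude in the image plane (2D array)."""
    phase = np.exp(1j * 2 * np.pi * np.random.rand(*target_amp.shape))
    field = target_amp * phase
    for _ in range(iters):
        holo = np.fft.ifft2(field)
        holo = np.exp(1j * np.angle(holo))                 # project: phase-only
        field = np.fft.fft2(holo)
        field = target_amp * np.exp(1j * np.angle(field))  # project: amplitude
    return np.angle(holo)

target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0                  # placeholder target pattern
phase_map = poh_alternating_projection(target)

# Reconstruction quality, in the spirit of the paper's metrics:
recon = np.abs(np.fft.fft2(np.exp(1j * phase_map)))
recon /= recon.max()
recon = median_filter(recon, size=3)          # 2D median filtering vs. speckle
mse = np.mean((recon - target) ** 2)
psnr = 10 * np.log10(1.0 / mse)
```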

Can Diffusion Models Bridge the Domain Gap in Cardiac MR Imaging?

Xin Ci Wong, Duygu Sarikaya, Kieran Zucker, Marc De Kamps, Nishant Ravikumar

arxiv preprint · Aug 8 2025
Magnetic resonance (MR) imaging, including cardiac MR, is prone to domain shift due to variations in imaging devices and acquisition protocols. This challenge limits the deployment of trained AI models in real-world scenarios, where performance degrades on unseen domains. Traditional solutions involve increasing the size of the dataset through ad-hoc image augmentation or additional online training/transfer learning, which have several limitations. Synthetic data offers a promising alternative, but anatomical/structural consistency constraints limit the effectiveness of generative models in creating image-label pairs. To address this, we propose a diffusion model (DM) trained on a source domain that generates synthetic cardiac MR images resembling a given reference. The synthetic data maintains spatial and structural fidelity, ensuring similarity to the source domain and compatibility with the segmentation mask. We assess the utility of our generative approach in multi-centre cardiac MR segmentation, using the 2D nnU-Net, 3D nnU-Net and vanilla U-Net segmentation networks. We explore domain generalisation, where domain-invariant segmentation models are trained on synthetic source-domain data, and domain adaptation, where we shift target-domain data towards the source domain using the DM. Both strategies significantly improved segmentation performance on data from an unseen target domain, in terms of surface-based metrics (Welch's t-test, p < 0.01), compared to training segmentation models on real data alone. The proposed method reduces the need for transfer learning or online training to address domain shift challenges in cardiac MR image analysis, and is especially useful in data-scarce settings.
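
The surface-metric comparison relies on Welch's t-test; a brief sketch with SciPy follows, using placeholder per-case Hausdorff distances rather than the study's actual measurements.

```python
# Welch's (unequal-variance) t-test on per-case surface metrics (placeholder data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
hd_real_only = rng.normal(8.0, 2.0, 40)    # trained on real data alone
hd_with_synth = rng.normal(6.5, 1.8, 40)   # + diffusion-model synthetic data
t, p = stats.ttest_ind(hd_with_synth, hd_real_only, equal_var=False)
print(f"Welch's t = {t:.2f}, p = {p:.4f}")
```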

Postmortem Validation of Quantitative MRI for White Matter Hyperintensities in Alzheimer's Disease

Mojtabai, M., Kumar, R., Honnorat, N., Li, K., Wang, D., Li, J., Lee, R. F., Richardson, T. E., Cavazos, J. E., Bouhrara, M., Toledo, J. B., Heckbert, S., Flanagan, M. E., Bieniek, K. F., Walker, J. M., Seshadri, S., Habes, M.

medrxiv preprint · Aug 8 2025
White matter hyperintensities (WMH) are frequently observed on MRI in aging and Alzheimer's disease (AD), yet their microstructural pathology remains poorly characterized. Conventional MRI sequences provide limited information to describe the tissue abnormalities underlying WMH, while histopathology, the gold standard, can only be applied postmortem. Quantitative MRI (qMRI) offers promising non-invasive alternatives to postmortem histopathology, but these metrics lack histological validation in AD. In this study, we examined the relationship between MRI metrics and histopathology in postmortem brain scans from eight donors with AD from the South Texas Alzheimer's Disease Research Center. Regions of interest were delineated by aligning MRI-identified WMH in the brain donor scans with postmortem histological sections. Histopathological features, including myelin integrity, tissue vacuolation, and gliosis, were quantified within these regions using machine learning. We report the correlations between these histopathological measures and two qMRI metrics, T2 and absolute myelin water signal (aMWS) maps, as well as conventional T1w/T2w MRI. The results derived from aMWS and T2 mapping indicate a strong association between WMH, myelin loss, and increased tissue vacuolation. Bland-Altman analyses indicated that T2 mapping showed more consistent agreement with histopathology, whereas the derived aMWS demonstrated signs of systematic bias. T1w/T2w values exhibited weaker associations with histological alterations. Additionally, we observed distinct patterns of gliosis in periventricular and subcortical WMH. Our study presents one of the first histopathological validations of qMRI in AD, confirming that aMWS and T2 mapping are robust, non-invasive biomarkers that offer promising ways to monitor white matter pathology in neurodegenerative disorders.
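
The Bland-Altman agreement analysis mentioned above can be sketched in a few lines; the paired measurements below are synthetic placeholders, not the donor data.

```python
# Bland-Altman bias and 95% limits of agreement (placeholder paired data).
import numpy as np

def bland_altman(a, b):
    """Return bias (mean difference) and 95% limits of agreement."""
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

rng = np.random.default_rng(3)
histo_myelin = rng.uniform(0, 1, 50)                  # histology measure
t2_estimate = histo_myelin + rng.normal(0, 0.05, 50)  # qMRI-derived estimate
bias, lo, hi = bland_altman(t2_estimate, histo_myelin)
# A bias near zero with tight limits indicates consistent agreement;
# a nonzero bias would reflect the systematic offset reported for aMWS.
```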

Transformer-Based Explainable Deep Learning for Breast Cancer Detection in Mammography: The MammoFormer Framework

Ojonugwa Oluwafemi Ejiga Peter, Daniel Emakporuena, Bamidele Dayo Tunde, Maryam Abdulkarim, Abdullahi Bn Umar

arxiv logopreprintAug 8 2025
Breast cancer detection through mammography interpretation remains difficult because of the subtle nature of the abnormalities that experts need to identify, alongside variable interpretations between readers. The potential of CNNs for medical image analysis faces two limitations: they fail to process both local information and wide contextual data adequately, and they do not provide the explainable AI (XAI) operations that clinicians need in order to accept them in clinics. The researchers developed the MammoFormer framework, which unites transformer-based architecture with multi-feature enhancement components and XAI functionalities within one framework. Seven different architectures, spanning CNNs, Vision Transformer, Swin Transformer, and ConvNeXt, were tested alongside four enhancement techniques: original images, negative transformation, adaptive histogram equalization (AHE), and histogram of oriented gradients (HOG). The MammoFormer framework addresses critical clinical adoption barriers of AI mammography systems through: (1) systematic optimization of transformer architectures via architecture-specific feature enhancement, achieving up to 13% performance improvement, (2) comprehensive explainable AI integration providing multi-perspective diagnostic interpretability, and (3) a clinically deployable ensemble system combining CNN reliability with transformer global context modeling. The combination of transformer models with suitable feature enhancements enables them to achieve equal or better results than CNN approaches: ViT achieves 98.3% accuracy with AHE, while Swin Transformer gains a 13.0% advantage through HOG enhancement.
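
A small sketch of the four enhancement inputs compared in MammoFormer, using scikit-image; the placeholder array stands in for a real mammogram, and the HOG/AHE parameters are illustrative defaults rather than the paper's settings.

```python
# Four enhancement variants fed to the backbones (placeholder image, assumed params).
import numpy as np
from skimage import exposure, feature

img = np.random.rand(224, 224)             # placeholder mammogram in [0, 1]

original = img
negative = 1.0 - img                       # negative transformation
ahe = exposure.equalize_adapthist(img)     # adaptive histogram equalization
_, hog_img = feature.hog(img, pixels_per_cell=(16, 16),
                         cells_per_block=(2, 2), visualize=True)

# Each variant is fed to a backbone (ViT, Swin, ConvNeXt, ...); per the
# abstract, the best pairing is architecture-specific (e.g., ViT + AHE).
```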
