Page 33 of 2352341 results

Exploring Women's Perceptions of Traditional Mammography and the Concept of AI-Driven Thermography to Improve the Breast Cancer Screening Journey: Mixed Methods Study.

Sirka Kacafírková K, Poll A, Jacobs A, Cardone A, Ventura JJ

PubMed · Sep 10 2025
Breast cancer is the most common cancer among women and a leading cause of mortality in Europe. Early detection through screening reduces mortality, yet participation in mammography-based programs remains suboptimal due to discomfort, radiation exposure, and accessibility issues. Thermography, particularly when driven by artificial intelligence (AI), is being explored as a noninvasive, radiation-free alternative. However, its acceptance, reliability, and impact on the screening experience remain underexplored. This study aimed to explore women's perceptions of AI-enhanced thermography (ThermoBreast) as an alternative to mammography. It aims to identify barriers and motivators related to breast cancer screening and assess how ThermoBreast might improve the screening experience. A mixed methods approach was adopted, combining an online survey with follow-up focus groups. The survey captured women's knowledge, attitudes, and experiences related to breast cancer screening and was used to recruit participants for qualitative exploration. After the focus groups, the survey was relaunched to include additional respondents. Quantitative data were analyzed using SPSS (IBM Corp), and qualitative data were analyzed in MAXQDA (VERBI software). Findings from both strands were synthesized to redesign the breast cancer screening journey. A total of 228 valid survey responses were analyzed. Of 228, 154 women (68%) had previously undergone mammography, while 74 (32%) had not. The most reported motivators were belief in prevention (69/154, 45%), invitations from screening programs (68/154, 44%), and doctor recommendations (45/154, 29%). Among nonscreeners, key barriers included no recommendation from a doctor (39/74, 53%), absence of symptoms (27/74, 36%), and perceived age ineligibility (17/74, 23%). Pain, long appointment waits, and fear of radiation were also mentioned. In total, 18 women (mean age 45.3 years, SD 13.6) participated in 6 focus groups. 
Participants emphasized the importance of respectful and empathetic interactions with medical staff, clear communication, and emotional comfort, factors they perceived as more influential than the screening technology itself. ThermoBreast was positively received for being contactless, radiation-free, and potentially more comfortable. Participants described it as "less traumatic," "easier," and "a game changer." However, concerns were raised regarding its novelty, lack of clinical validation, and data privacy. Some participants expressed the need for human oversight in AI-supported procedures and requested more information on how AI is used. Based on these insights, an updated screening journey was developed, highlighting improvements in preparation, appointment booking, privacy, and communication of results. While AI-driven thermography shows promise as a noninvasive, user-friendly alternative to mammography, its adoption depends on trust, clinical validation, and effective communication from health care professionals. It may expand screening access for populations underserved by mammography, such as younger and immobile women, but does not eliminate all participation barriers. Long-term studies and direct comparisons between mammography and thermography are needed to assess diagnostic accuracy, patient experience, and their impact on screening participation and outcomes.

Attention Gated-VGG with deep learning-based features for Alzheimer's disease classification.

Moorthy DK, Nagaraj P

PubMed · Sep 10 2025
Alzheimer's disease (AD) is a neurodegenerative disease associated with cognitive deficits and dementia, and its early detection is a high priority. Here, images undergo a pre-processing phase that integrates image resizing and median filtering, after which the processed images are subjected to data augmentation. Features extracted by a WOA-based ResNet, together with convolutional neural network (CNN) features extracted from the pre-processed images, are used to train the proposed deep learning model to classify AD. Classification is performed with the proposed Attention Gated-VGG model. In testing, the proposed method outperformed standard approaches, achieving an accuracy of 96.7%, sensitivity of 97.8%, and specificity of 96.3%. These results show that the Attention Gated-VGG model is a promising technique for classifying AD.
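The pre-processing step pairs image resizing with median filtering, which suppresses impulse noise while preserving edges. A minimal sketch of a 2D median filter (a naive NumPy implementation for illustration, not the authors' code; the kernel size `k` is an assumed parameter):

```python
import numpy as np

def median_filter(img, k=3):
    # Naive 2D median filter with edge-replication padding (k x k window).
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```

A single bright impulse in an otherwise flat region is removed entirely, since the median of the window ignores the outlier.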

3D-CNN Enhanced Multiscale Progressive Vision Transformer for AD Diagnosis.

Huang F, Chen N, Qiu A

PubMed · Sep 10 2025
Vision Transformer (ViT) applied to structural magnetic resonance images has demonstrated success in the diagnosis of Alzheimer's disease (AD) and mild cognitive impairment (MCI). However, three key challenges have yet to be well addressed: 1) ViT requires a large labeled dataset to mitigate overfitting while most of the current AD-related sMRI data fall short in the sample sizes. 2) ViT neglects the within-patch feature learning, e.g., local brain atrophy, which is crucial for AD diagnosis. 3) While ViT can enhance capturing local features by reducing the patch size and increasing the number of patches, the computational complexity of ViT quadratically increases with the number of patches with unbearable overhead. To this end, this paper proposes a 3D-convolutional neural network (CNN) Enhanced Multiscale Progressive ViT (3D-CNN-MPVT). First, a 3D-CNN is pre-trained on sMRI data to extract detailed local image features and alleviate overfitting. Second, an MPVT module is proposed with an inner CNN module to explicitly characterize the within-patch interactions that are conducive to AD diagnosis. Third, a stitch operation is proposed to merge cross-patch features and progressively reduce the number of patches. The inner CNN alongside the stitch operation in the MPVT module enhances local feature characterization while mitigating computational costs. Evaluations using the Alzheimer's Disease Neuroimaging Initiative dataset with 6610 scans and the Open Access Series of Imaging Studies-3 with 1866 scans demonstrated its superior performance. With minimal preprocessing, our approach achieved accuracies of 90% in AD classification and 80% in MCI conversion prediction, surpassing recent baselines.
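The stitch operation merges cross-patch features while progressively halving the number of patches, which keeps the quadratic attention cost in check. A minimal sketch of one way such a merge could work, concatenating adjacent patch embeddings (an illustrative assumption, not the paper's exact operator):

```python
import numpy as np

def stitch(patches):
    # patches: (n, d) patch embeddings with even n.
    # Concatenate each adjacent pair -> (n // 2, 2 * d),
    # halving the patch count while preserving all features.
    n, d = patches.shape
    assert n % 2 == 0, "expects an even number of patches"
    return patches.reshape(n // 2, 2 * d)
```

Applied repeatedly, each pass halves the token count, so attention over the merged sequence is roughly four times cheaper per pass.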

CT-Based Radiomics Models with External Validation for Prediction of Recurrence and Disease-Specific Mortality After Radical Surgery of Colorectal Liver Metastases.

Marzi S, Vidiri A, Ianiro A, Parrino C, Ruggiero S, Trobiani C, Teodoli L, Vallati G, Trillò G, Ciolina M, Sperati F, Scarinci A, Virdis M, Busset MDD, Stecca T, Massani M, Morana G, Grazi GL

PubMed · Sep 10 2025
To build computed tomography (CT)-based radiomics models, with independent external validation, to predict recurrence and disease-specific mortality in patients with colorectal liver metastases (CRLM) who underwent liver resection. 113 patients were included in this retrospective study: the internal training cohort comprised 66 patients, while the external validation cohort comprised 47. All patients underwent a CT study before surgery. Up to five visible metastases, the whole liver volume, and the surrounding disease-free parenchymal liver were separately delineated on the portal venous phase of CT. Both radiomic features and baseline clinical parameters were considered in building the models, using different families of machine learning (ML) algorithms. The Support Vector Machine and Naive Bayes ML classifiers provided the best predictive performance. A relevant role of second-order and higher-order texture features emerged from the largest lesion and the residual liver parenchyma. The prediction models for recurrence showed good accuracy, ranging from 70% to 78% and from 66% to 70% in the training and validation sets, respectively. Models for predicting disease-related mortality performed worse, with accuracies ranging from 67% to 73% and from 60% to 64% in the training and validation sets, respectively. CT-based radiomics, alone or in combination with baseline clinical data, allowed the prediction of recurrence and disease-specific mortality of patients with CRLM, with fair to good accuracy after validation in an external cohort. Further investigations with a larger patient population for training and validation are needed to corroborate our analyses.

A Lightweight CNN Approach for Hand Gesture Recognition via GAF Encoding of A-mode Ultrasound Signals.

Shangguan Q, Lian Y, Liao Z, Chen J, Song Y, Yao L, Jiang C, Lu Z, Lin Z

PubMed · Sep 10 2025
Hand gesture recognition (HGR) is a key technology in human-computer interaction and human communication. This paper presents a lightweight, parameter-free attention convolutional neural network (LPA-CNN) approach leveraging Gramian Angular Field (GAF) transformation of A-mode ultrasound signals for HGR. First, this paper maps 1-dimensional (1D) A-mode ultrasound signals, collected from the forearm muscles of 10 healthy participants, into 2-dimensional (2D) images. Second, GAF is selected owing to its higher sensitivity compared with Markov Transition Field (MTF) and Recurrence Plot (RP) in HGR. Third, a novel LPA-CNN consisting of four components, i.e., a convolution-pooling block, an attention mechanism, an inverted residual block, and a classification block, is proposed. Among them, the convolution-pooling block consists of convolutional and pooling layers, the attention mechanism is applied to generate 3D weights, the inverted residual block consists of multiple channel shuffling units, and the classification block is performed through fully connected layers. Fourth, comparative experiments were conducted on GoogLeNet, MobileNet, and LPA-CNN to validate the effectiveness of the proposed method. Experimental results show that compared to GoogLeNet and MobileNet, LPA-CNN has a smaller model size and better recognition performance, achieving a classification accuracy of 0.98 ± 0.02. This paper achieves efficient and high-accuracy HGR by encoding A-mode ultrasound signals into 2D images and integrating the LPA-CNN model, providing a new technological approach for HGR based on ultrasonic signals.
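The GAF encoding maps a 1D signal to a 2D image by rescaling it to [-1, 1], interpreting each value as the cosine of an angle, and forming pairwise trigonometric sums. A minimal NumPy sketch of the summation variant (GASF), assuming min-max normalization (the paper's exact scaling is not specified here):

```python
import numpy as np

def gramian_angular_field(x):
    # Min-max scale the signal to [-1, 1].
    x = np.asarray(x, dtype=float)
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    x = np.clip(x, -1.0, 1.0)
    # GASF(i, j) = cos(phi_i + phi_j) with phi = arccos(x), computed
    # without explicit angles via cos(a+b) = cos(a)cos(b) - sin(a)sin(b).
    s = np.sqrt(1.0 - x ** 2)
    return np.outer(x, x) - np.outer(s, s)
```

The result is a symmetric n x n image whose diagonal encodes each sample on its own (cos(2*phi) = 2x^2 - 1), ready to feed a 2D CNN.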

Non-invasive prediction of invasive lung adenocarcinoma and high-risk histopathological characteristics in resectable early-stage adenocarcinoma by [18F]FDG PET/CT radiomics-based machine learning models: a prospective cohort Study.

Cao X, Lv Z, Li Y, Li M, Hu Y, Liang M, Deng J, Tan X, Wang S, Geng W, Xu J, Luo P, Zhou M, Xiao W, Guo M, Liu J, Huang Q, Hu S, Sun Y, Lan X, Jin Y

PubMed · Sep 10 2025
Precise preoperative discrimination of invasive lung adenocarcinoma (IA) from preinvasive lesions (adenocarcinoma in situ [AIS]/minimally invasive adenocarcinoma [MIA]) and prediction of high-risk histopathological features are critical for optimizing resection strategies in early-stage lung adenocarcinoma (LUAD). In this multicenter study, 813 LUAD patients (tumors ≤3 cm) formed the training cohort. A total of 1,709 radiomic features were extracted from the PET/CT images. Feature selection was performed using the max-relevance and min-redundancy (mRMR) algorithm and least absolute shrinkage and selection operator (LASSO). Hybrid machine learning models integrating [18F]FDG PET/CT radiomics and clinical-radiological features were developed using H2O.ai AutoML. Models were validated in a prospective internal cohort (N = 256, 2021-2022) and an external multicenter cohort (N = 418). Performance was assessed via AUC, calibration, decision curve analysis (DCA), and survival assessment. The hybrid model achieved AUCs of 0.93 (95% CI: 0.90-0.96) for distinguishing IA from AIS/MIA (internal test) and 0.92 (0.90-0.95) in external testing. For predicting high-risk histopathological features (grade-III, lymphatic/pleural/vascular/nerve invasion, STAS), AUCs were 0.82 (0.77-0.88) and 0.85 (0.81-0.89) in the internal/external sets. DCA confirmed superior net benefit over the CT model. The model stratified progression-free (P = 0.002) and overall survival (P = 0.017) in the TCIA cohort. PET/CT radiomics-based models enable accurate non-invasive prediction of invasiveness and high-risk pathology in early-stage LUAD, guiding optimal surgical resection.

RepViT-CXR: A Channel Replication Strategy for Vision Transformers in Chest X-ray Tuberculosis and Pneumonia Classification

Faisal Ahmed

arXiv preprint · Sep 10 2025
Chest X-ray (CXR) imaging remains one of the most widely used diagnostic tools for detecting pulmonary diseases such as tuberculosis (TB) and pneumonia. Recent advances in deep learning, particularly Vision Transformers (ViTs), have shown strong potential for automated medical image analysis. However, most ViT architectures are pretrained on natural images and require three-channel inputs, while CXR scans are inherently grayscale. To address this gap, we propose RepViT-CXR, a channel replication strategy that adapts single-channel CXR images into a ViT-compatible format without introducing additional information loss. We evaluate RepViT-CXR on three benchmark datasets. On the TB-CXR dataset, our method achieved an accuracy of 99.9% and an AUC of 99.9%, surpassing prior state-of-the-art methods such as Topo-CXR (99.3% accuracy, 99.8% AUC). For the Pediatric Pneumonia dataset, RepViT-CXR obtained 99.0% accuracy, with 99.2% recall, 99.3% precision, and an AUC of 99.0%, outperforming strong baselines including DCNN and VGG16. On the Shenzhen TB dataset, our approach achieved 91.1% accuracy and an AUC of 91.2%, marking a performance improvement over previously reported CNN-based methods. These results demonstrate that a simple yet effective channel replication strategy allows ViTs to fully leverage their representational power on grayscale medical imaging tasks. RepViT-CXR establishes a new state of the art for TB and pneumonia detection from chest X-rays, showing strong potential for deployment in real-world clinical screening systems.
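The channel replication step itself is simple: copy the single grayscale channel three times so a ViT pretrained on RGB images accepts the input unchanged. A minimal NumPy sketch (illustrative; the paper's pipeline presumably applies this per scan before the usual ViT preprocessing):

```python
import numpy as np

def replicate_channels(gray):
    # gray: (H, W) single-channel CXR -> (H, W, 3) ViT-compatible input.
    # All three channels are identical copies, so no information is
    # added or lost relative to the grayscale original.
    gray = np.asarray(gray)
    return np.repeat(gray[..., np.newaxis], 3, axis=-1)
```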

Symmetry Interactive Transformer with CNN Framework for Diagnosis of Alzheimer's Disease Using Structural MRI

Zheng Yang, Yanteng Zhang, Xupeng Kou, Yang Liu, Chao Ren

arXiv preprint · Sep 10 2025
Structural magnetic resonance imaging (sMRI) combined with deep learning has achieved remarkable progress in the prediction and diagnosis of Alzheimer's disease (AD). Existing studies have used CNNs and transformers to build well-performing networks, but most rely on pretraining or ignore the asymmetry caused by brain disorders. We propose an end-to-end network for detecting the disease-related asymmetry induced by left and right brain atrophy, which consists of a 3D CNN Encoder and a Symmetry Interactive Transformer (SIT). Following an inter-equal grid block fetch operation, the corresponding left and right hemisphere features are aligned and subsequently fed into the SIT for diagnostic analysis. SIT helps the model focus on the regions of asymmetry caused by structural changes, thus improving diagnostic performance. We evaluated our method on the ADNI dataset, and the results show that it achieves better diagnostic accuracy (92.5%) than several CNN methods and CNNs combined with a general transformer. The visualization results show that our network pays more attention to regions of brain atrophy, especially the asymmetric pathological characteristics induced by AD, demonstrating the interpretability and effectiveness of the method.
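The left-right comparison at the heart of this approach can be illustrated by mirroring one hemisphere onto the other so corresponding voxels align. A minimal sketch, assuming the first array axis is the left-right axis (the paper's grid-block fetch and transformer interaction are more elaborate):

```python
import numpy as np

def hemisphere_pair(vol):
    # vol: (X, Y, Z) brain volume with X as the left-right axis.
    # Returns the left half and the mirrored right half so that
    # corresponding voxels are spatially aligned.
    mid = vol.shape[0] // 2
    left = vol[:mid]
    right = vol[vol.shape[0] - mid:][::-1]
    return left, right

def asymmetry_map(vol):
    # Voxel-wise absolute left-right difference; zero everywhere
    # for a perfectly symmetric volume.
    left, right = hemisphere_pair(vol)
    return np.abs(left - right)
```

A network like SIT learns which of these left-right discrepancies are diagnostically meaningful rather than using a fixed difference map.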

RetiGen: Framework leveraging domain generalization and test-time adaptation for multi-view retinal diagnostics.

Zhang G, Chen Z, Huo J, do Rio JN, Komninos C, Liu Y, Sparks R, Ourselin S, Bergeles C, Jackson TL

PubMed · Sep 10 2025
Domain generalization techniques involve training a model on one set of domains and evaluating its performance on different, unseen domains. In contrast, test-time adaptation optimizes the model specifically for the target domain during inference. Both approaches improve diagnostic accuracy in medical imaging models. However, no research to date has leveraged the advantages of both approaches in an end-to-end fashion. Our paper introduces RetiGen, a test-time optimization framework designed to be integrated with existing domain generalization approaches. With an emphasis on the ophthalmic imaging domain, RetiGen leverages unlabeled multi-view color fundus photographs, a critical optical technology in retinal diagnostics. By utilizing information from multiple viewing angles, our approach significantly enhances the robustness and accuracy of machine learning models when applied across different domains. By integrating class balancing, test-time adaptation, and a multi-view optimization strategy, RetiGen effectively addresses the persistent issue of domain shift, which often hinders the performance of imaging models. Experimental results demonstrate that our method outperforms state-of-the-art techniques in both domain generalization and test-time optimization. Specifically, RetiGen increases the generalizability of the MFIDDR dataset, improving the AUC from 0.751 to 0.872, a 0.121 improvement. Similarly, for the DRTiD dataset, the AUC increased from 0.794 to 0.879, a 0.085 improvement. The code for RetiGen is publicly available at https://github.com/RViMLab/RetiGen.
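The multi-view idea can be reduced to simple test-time ensembling: run the model on each fundus view and average the per-view class probabilities. A minimal sketch of that reduction (illustrative only; RetiGen additionally performs class balancing and test-time adaptation):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multiview_predict(view_logits):
    # view_logits: (n_views, n_classes) raw model outputs, one row per
    # viewing angle; returns class probabilities averaged over views.
    probs = softmax(np.asarray(view_logits, dtype=float))
    return probs.mean(axis=0)
```

Averaging probabilities rather than logits keeps each view's contribution bounded, so a single overconfident view cannot dominate the ensemble.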

A Fusion Model of ResNet and Vision Transformer for Efficacy Prediction of HIFU Treatment of Uterine Fibroids.

Zhou Y, Xu H, Jiang W, Zhang J, Chen S, Yang S, Xiang H, Hu W, Qiao X

PubMed · Sep 10 2025
High-intensity focused ultrasound (HIFU) is a non-invasive technique for treating uterine fibroids, and the accurate prediction of its therapeutic efficacy depends on precise quantification of intratumoral heterogeneity. However, existing methods still have limitations in characterizing intratumoral heterogeneity, which restricts the accuracy of efficacy prediction. To this end, this study proposes a deep learning model with a parallel architecture of ResNet and ViT (Res-ViT) to verify whether the synergistic characterization of local texture and global spatial features can improve the accuracy of HIFU efficacy prediction. This study enrolled patients with uterine fibroids who underwent HIFU treatment from Center A (training set: N = 272; internal validation set: N = 92) and Center B (external test set: N = 125). Preoperative T2-weighted magnetic resonance images were used to develop the Res-ViT model for predicting immediate post-treatment non-perfused volume ratio (NPVR) ≥ 80%. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC) and compared against independent Radiomics, ResNet-18, and ViT models. The Res-ViT model outperformed all standalone models across both internal (AUC = 0.895, 95% CI: 0.857-0.987) and external (AUC = 0.853, 95% CI: 0.776-0.921) test sets. SHAP analysis identified the ResNet branch as the predominant decision-making component (feature contribution: 55.4%). Gradient-weighted Class Activation Mapping (Grad-CAM) visualizations show that the key regions attended to by Res-ViT overlap closely with the postoperative non-ablated fibroid tissue. The proposed Res-ViT model demonstrates that fusing local and global features is an effective strategy for quantifying uterine fibroid heterogeneity, significantly enhancing the accuracy of HIFU efficacy prediction.