
Experimental Assessment of Conventional Features, CNN-Based Features and Ensemble Schemes for Discriminating Benign Versus Malignant Lesions on Breast Ultrasound Images.

Bianconi F, Khan MU, Du H, Jassim S

PubMed · Aug 28, 2025
Breast ultrasound images play a pivotal role in assessing the nature of suspicious breast lesions, particularly in patients with dense tissue. Computerized analysis of breast ultrasound images has the potential to assist physicians in clinical decision-making and to reduce the subjectivity of interpretation. We assess the performance of conventional features, deep learning features, and ensemble schemes for discriminating benign versus malignant breast lesions on ultrasound images. A total of 19 individual feature sets (1 morphological, 2 first-order, 10 texture-based, and 6 CNN-based) were included in the analysis. Furthermore, four combined feature sets (Best by class; Top 3, 5, and 7) and four fusion schemes (feature concatenation, majority voting, sum rule, and product rule) were considered to generate ensemble models. The experiments were carried out on three independent open-access datasets containing, respectively, 252 (154 benign, 98 malignant), 232 (109 benign, 123 malignant), and 281 (187 benign, 94 malignant) lesions. CNN-based features outperformed the other individual descriptors, achieving accuracy between 77.4% and 83.6%, followed by morphological features (71.6%-80.8%) and histograms of oriented gradients (71.4%-77.6%). Ensemble models further improved accuracy, to between 80.2% and 87.5%. Fusion schemes based on the product and sum rules were generally superior to feature concatenation and majority voting. Combining individual feature sets through ensemble schemes offers clear advantages for discriminating benign versus malignant breast lesions on ultrasound images.
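
As a concrete illustration of the sum- and product-rule fusion schemes mentioned above, here is a minimal sketch, not the authors' implementation: synthetic data and two generic scikit-learn classifiers stand in for the paper's feature-set-specific models.

```python
# Hypothetical sketch of sum- and product-rule fusion of classifier posteriors.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stand-ins for two feature-set-specific classifiers (e.g. morphological vs. CNN features).
clf_a = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
clf_b = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

proba_a = clf_a.predict_proba(X_te)
proba_b = clf_b.predict_proba(X_te)

sum_rule = (proba_a + proba_b).argmax(axis=1)      # sum rule: add class posteriors
product_rule = (proba_a * proba_b).argmax(axis=1)  # product rule: multiply class posteriors
```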

Classification of computed tomography scans: a novel approach implementing an enforced random forest algorithm.

Biondi M, Bortoli E, Marini L, Avitabile R, Bartoli A, Busatti E, Tozzi A, Cimmino MC, Piccini L, Giusti EB, Guasti A

PubMed · Aug 28, 2025
Medical imaging faces critical challenges in radiation dose management and protocol standardisation. This study introduces a machine learning approach using a random forest algorithm to classify Computed Tomography (CT) scan protocols. By leveraging dose monitoring system data, we provide a data-driven solution for establishing Diagnostic Reference Levels (DRLs) while minimising computational resources. We developed a classification workflow using a Random Forest Classifier to categorise CT scans into anatomical regions: head, thorax, abdomen, spine, and complex multi-region scans (thorax + abdomen and total body). The methodology featured an iterative "human-in-the-loop" refinement process involving data preprocessing, machine learning algorithm training, expert validation, and protocol classification. After training the initial model, we applied the methodology to a new, independent dataset. By analysing 52,982 CT scan records from 11 imaging devices across five hospitals, we trained the classifier to distinguish multiple anatomical regions, categorising scans into head, thorax, abdomen, and spine. The final validation on the new dataset confirmed the model's robustness, achieving 97% accuracy. This research shifts medical imaging protocol classification from manual, time-consuming processes to a data-driven approach built on a random forest algorithm, demonstrating the potential of such methodologies in medical imaging. We have created a framework for managing protocol classification and establishing DRLs by integrating computational intelligence with clinical expertise. Future research will explore applying this methodology to other radiological procedures.
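
A hedged sketch of the core classification step follows; the dose-monitoring feature names (ctdi_vol, scan_length, kvp) are hypothetical stand-ins, not the authors' schema.

```python
# Minimal sketch of random-forest protocol classification from
# dose-monitoring-style features; toy data, hypothetical fields.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({
    "ctdi_vol": [45.0, 8.2, 12.5, 30.1],      # volumetric CT dose index, mGy
    "scan_length": [18.0, 32.0, 45.0, 40.0],  # cm
    "kvp": [120, 120, 100, 120],              # tube voltage
    "region": ["head", "thorax", "abdomen", "spine"],
})
X, y = df.drop(columns="region"), df["region"]

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# In the paper's workflow, expert review of misclassified scans would feed
# corrected labels back into retraining ("human-in-the-loop").
```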

A simple and effective approach for body part recognition on CT scans based on projection estimation.

Hrzic F, Movahhedi M, Lavoie-Gagne O, Kiapour A

PubMed · Aug 28, 2025
It is well known that machine learning models require a large amount of annotated data to reach optimal performance. Labelling Computed Tomography (CT) data can be a particularly challenging task due to its volumetric nature and often missing or incomplete associated metadata. Even inspecting one CT scan requires dedicated computer software or, when working programmatically, additional software libraries. This study proposes a simple yet effective approach to body region identification based on 2D X-ray-like estimation of 3D CT scans. Although a body-region label is commonly associated with a CT scan, it often describes only the primary body region of interest, neglecting other anatomical regions present in the scan. In the proposed approach, the estimated 2D images were used to identify 14 distinct body regions, providing valuable information for constructing a high-quality medical dataset. To evaluate its effectiveness, the proposed method was compared against 2.5D, 3D, and foundation model (MI2) based approaches. Our approach outperformed the others with statistical significance, with the best-performing model (EffNet-B0) reaching an F1-score of 0.980 ± 0.016, compared to 0.840 ± 0.114 (2.5D DenseNet-161), 0.854 ± 0.096 (3D VoxCNN), and 0.852 ± 0.104 (MI2 foundation model). The dataset comprised 15,622 CT scans (44,135 labels) from three different clinical centers.
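
Under one plausible reading, the 2D "X-ray-like estimation" could be a simple intensity projection of the CT volume; the sketch below illustrates that idea and is an assumption, not the paper's exact projection method.

```python
# Sketch of an X-ray-like 2D estimate from a 3D CT volume via a
# mean-intensity projection along one axis (a simplifying assumption).
import numpy as np

def project(volume: np.ndarray, axis: int = 1) -> np.ndarray:
    """Collapse a (D, H, W) CT volume into a 2D X-ray-like image."""
    proj = volume.mean(axis=axis)          # average attenuation along the ray direction
    lo, hi = proj.min(), proj.max()
    return (proj - lo) / (hi - lo + 1e-8)  # normalize to [0, 1] for a 2D classifier

ct = np.random.randint(-1000, 1000, size=(128, 256, 256)).astype(np.float32)  # dummy HU volume
xray_like = project(ct)  # feed this 2D image to e.g. an EfficientNet-B0 classifier
```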

Comparison of Outcomes Between Ablation and Lobectomy in Stage IA Non-Small Cell Lung Cancer: A Retrospective Multicenter Study.

Xu B, Chen Z, Liu D, Zhu Z, Zhang F, Lin L

PubMed · Aug 28, 2025
Image-guided thermal ablation (IGTA) has been increasingly used in patients with stage IA non-small cell lung cancer (NSCLC) without surgical contraindications, but its long-term outcomes compared with lobectomy remain unknown. This study aims to evaluate the long-term outcomes of IGTA versus lobectomy and to explore which patients may benefit most from ablation. After propensity score matching, a total of 290 patients with stage IA NSCLC treated between 2015 and 2023 were included. Progression-free survival (PFS) and overall survival (OS) were estimated using the Kaplan-Meier method. A Markov model was constructed to evaluate cost-effectiveness. Finally, a radiomics model based on preoperative computed tomography (CT) was developed to perform risk stratification. After matching, the median follow-up was 34.8 months in the lobectomy group and 47.2 months in the ablation group. There were no significant differences between the groups in 5-year PFS (hazard ratio [HR], 1.83; 95% CI, 0.86-3.92; p = 0.118) or OS (HR, 2.44; 95% CI, 0.87-6.63; p = 0.092). In low-income regions, lobectomy was not cost-effective in 99% of simulations. The CT-based radiomics model outperformed the traditional TNM model (AUC, 0.759 vs. 0.650; p < 0.01). Moreover, disease-free survival was significantly lower in the high-risk group than in the low-risk group (p = 0.009). This study comprehensively evaluated IGTA versus lobectomy in terms of survival outcomes, cost-effectiveness, and prognostic prediction. The findings suggest that IGTA may be a safe and feasible alternative to conventional surgery for carefully selected patients.
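
For readers unfamiliar with the survival comparison described above, here is a hedged sketch of a Kaplan-Meier estimate and log-rank test using the lifelines package; all durations and event indicators are illustrative toy values, not study data.

```python
# Illustrative Kaplan-Meier comparison of two treatment groups (toy data).
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

durations_ablation = [12, 30, 47, 55, 60]  # follow-up in months, hypothetical
events_ablation    = [0, 1, 0, 0, 1]       # 1 = progression/death observed
durations_lobect   = [10, 25, 35, 50, 62]
events_lobect      = [1, 0, 0, 1, 0]

kmf = KaplanMeierFitter()
kmf.fit(durations_ablation, events_ablation, label="IGTA")
print(kmf.survival_function_)  # estimated survival curve for the ablation arm

res = logrank_test(durations_ablation, durations_lobect,
                   event_observed_A=events_ablation, event_observed_B=events_lobect)
print(res.p_value)  # test for a difference between the two survival curves
```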

Dual-model approach for accurate chest disease detection using GViT and swin transformer V2.

Ahmad K, Rehman HU, Shah B, Ali F, Hussain I

PubMed · Aug 28, 2025
The precise detection and localization of abnormalities in radiological images are crucial for clinical diagnosis and treatment planning. Building reliable models requires large annotated datasets that contain both disease labels and abnormality locations. Radiologists often face challenges in identifying and segmenting thoracic diseases such as COVID-19, pneumonia, tuberculosis, and lung cancer due to overlapping visual patterns in X-ray images. This study proposes a dual-model approach: Gated Vision Transformers (GViT) for classification and Swin Transformer V2 for segmentation and localization. GViT successfully identifies thoracic diseases that exhibit similar radiographic features, while Swin Transformer V2 maps lung areas and pinpoints affected regions. Classification metrics, including precision, recall, and F1-score, surpassed 0.95, while the Intersection over Union (IoU) score reached 90.98%. Performance assessment via Dice coefficient, boundary F1-score, and Hausdorff distance demonstrated the system's effectiveness. This artificial intelligence solution can help radiologists reduce their cognitive workload while improving diagnostic precision in healthcare systems facing resource constraints. The results show that transformer-based architectures hold strong promise for enhancing medical imaging procedures. Future AI tools should build on this foundation, focusing on comprehensive and precise detection of chest diseases to support effective clinical decision-making.
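
The overlap metrics reported above (IoU and Dice) can be computed directly from binary masks; the NumPy sketch below shows the standard definitions, independent of the models used in the paper.

```python
# Standard IoU and Dice coefficient for binary segmentation masks.
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / (union + 1e-8)

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    inter = np.logical_and(pred, target).sum()
    return 2 * inter / (pred.sum() + target.sum() + 1e-8)

pred = np.zeros((256, 256), dtype=bool); pred[50:150, 50:150] = True  # toy prediction
gt   = np.zeros((256, 256), dtype=bool); gt[60:160, 60:160] = True    # toy ground truth
print(f"IoU={iou(pred, gt):.3f}, Dice={dice(pred, gt):.3f}")
```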

Hybrid quantum-classical-quantum convolutional neural networks.

Long C, Huang M, Ye X, Futamura Y, Sakurai T

PubMed · Aug 28, 2025
Deep learning has achieved significant success in pattern recognition, with convolutional neural networks (CNNs) serving as a foundational architecture for extracting spatial features from images. Quantum computing provides an alternative computational framework: hybrid quantum-classical convolutional neural networks (QCCNNs) leverage high-dimensional Hilbert spaces and entanglement to surpass classical CNNs in image classification accuracy under comparable architectures. Despite these performance improvements, QCCNNs typically use fixed quantum layers without trainable quantum parameters. This limits their ability to capture non-linear quantum representations and cuts the model off from the potential advantages of expressive quantum learning. In this work, we present a hybrid quantum-classical-quantum convolutional neural network (QCQ-CNN) that combines a quantum convolutional filter, a shallow classical CNN, and a trainable variational quantum classifier. This architecture aims to enhance the expressivity of decision boundaries in image classification tasks by introducing tunable quantum parameters into the end-to-end learning process. Through a series of small-sample experiments on MNIST, F-MNIST, and MRI tumor datasets, QCQ-CNN demonstrates competitive accuracy and convergence behavior compared to classical and hybrid baselines. We further analyze the effect of ansatz depth and find that moderate-depth quantum circuits can improve learning stability without introducing excessive complexity. Additionally, simulations incorporating depolarizing noise and finite sampling shots suggest that QCQ-CNN maintains a degree of robustness under realistic quantum noise conditions. While our results are currently limited to simulations with small-scale quantum circuits, the proposed approach offers a promising direction for hybrid quantum learning in near-term applications.
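
A minimal sketch of what a "trainable variational quantum classifier" component might look like in PennyLane appears below; the embedding, ansatz, depth, and qubit count are illustrative assumptions, not the authors' QCQ-CNN architecture.

```python
# Illustrative variational quantum classifier (PennyLane simulator).
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def classifier(weights, features):
    qml.AngleEmbedding(features, wires=range(n_qubits))       # encode classical features as rotations
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))  # trainable entangling ansatz
    return qml.expval(qml.PauliZ(0))                          # single logit-like readout

weights = np.random.uniform(
    0, 2 * np.pi,
    size=qml.BasicEntanglerLayers.shape(n_layers, n_qubits),
    requires_grad=True,  # these are the tunable quantum parameters
)
features = np.array([0.1, 0.5, 0.9, 0.3])  # e.g. pooled CNN features, hypothetical
print(classifier(weights, features))  # value in [-1, 1], thresholded for binary classification
```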

Deep Learning Framework for Early Detection of Pancreatic Cancer Using Multi-Modal Medical Imaging Analysis

Dennis Slobodzian, Karissa Tilbury, Amir Kordijazi

arXiv preprint · Aug 28, 2025
Pancreatic ductal adenocarcinoma (PDAC) remains one of the most lethal forms of cancer, with a five-year survival rate below 10%, primarily due to late detection. This research develops and validates a deep learning framework for early PDAC detection through analysis of dual-modality imaging: autofluorescence and second harmonic generation (SHG). We analyzed 40 unique patient samples to create a specialized neural network capable of distinguishing between normal, fibrotic, and cancerous tissue. Our methodology evaluated six distinct deep learning architectures, comparing traditional Convolutional Neural Networks (CNNs) with modern Vision Transformers (ViTs). Through systematic experimentation, we identified and overcame significant challenges in medical image analysis, including limited dataset size and class imbalance. The final optimized framework, based on a modified ResNet architecture with frozen pre-trained layers and class-weighted training, achieved over 90% accuracy in cancer detection. This represents a significant improvement over current manual analysis methods and demonstrates potential for clinical deployment. This work establishes a robust pipeline for automated PDAC detection that can augment pathologists' capabilities while providing a foundation for future expansion to other cancer types. The developed methodology also offers valuable insights for applying deep learning to small medical imaging datasets, a common challenge in clinical applications.
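
The transfer-learning recipe described here (frozen pre-trained layers plus class-weighted training) can be sketched in PyTorch as follows; the backbone choice, class counts, and hyperparameters are assumptions, not the authors' exact configuration.

```python
# Sketch: pretrained ResNet with frozen backbone and class-weighted loss.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                    # freeze pre-trained layers
model.fc = nn.Linear(model.fc.in_features, 3)  # new head: normal / fibrotic / cancerous

# Inverse-frequency class weights to counter class imbalance (counts hypothetical).
counts = torch.tensor([120.0, 80.0, 40.0])
weights = counts.sum() / (len(counts) * counts)
criterion = nn.CrossEntropyLoss(weight=weights)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # only the new head trains
```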

Domain Adaptation Techniques for Natural and Medical Image Classification

Ahmad Chaddad, Yihang Wu, Reem Kateb, Christian Desrosiers

arXiv preprint · Aug 28, 2025
Domain adaptation (DA) techniques have the potential to alleviate distribution differences between training and test sets by leveraging information from source domains. In image classification, most advances in DA have been made using natural images rather than medical data, which are harder to work with. Moreover, even for natural images, the use of mainstream datasets can lead to performance bias. With the aim of better understanding the benefits of DA for both natural and medical images, this study performs 557 simulation studies using seven widely-used DA techniques for image classification on five natural and eight medical datasets covering various scenarios, such as out-of-distribution data, dynamic data streams, and limited training samples. Our experiments yield detailed results and insightful observations highlighting the performance and medical applicability of these techniques. Notably, our results show the outstanding performance of the Deep Subdomain Adaptation Network (DSAN) algorithm, which achieved feasible classification accuracy (91.2%) on the COVID-19 dataset using ResNet50 and an important accuracy improvement (+6.7% over the baseline) in the dynamic data stream DA scenario. Our results also demonstrate that DSAN exhibits a remarkable level of explainability when evaluated on the COVID-19 and skin cancer datasets. These results contribute to the understanding of DA techniques and offer valuable insight into the effective adaptation of models to medical data.
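
DSAN aligns source and target feature distributions with a class-conditional (local) MMD loss; the simplified sketch below shows a plain global MMD in PyTorch to convey the idea, and should not be read as DSAN's exact objective.

```python
# Simplified global MMD alignment loss (DSAN itself uses a local, per-class MMD).
import torch

def gaussian_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)            # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))  # Gaussian kernel
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

src = torch.randn(32, 256)  # source-domain features (e.g. from a ResNet50 backbone)
tgt = torch.randn(32, 256)  # target-domain features
loss = gaussian_mmd(src, tgt)  # added to the classification loss during training
```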

Mask-Guided Multi-Channel SwinUNETR Framework for Robust MRI Classification

Smriti Joshi, Lidia Garrucho, Richard Osuala, Oliver Diaz, Karim Lekadir

arXiv preprint · Aug 28, 2025
Breast cancer is one of the leading causes of cancer-related mortality in women, and early detection is essential for improving outcomes. Magnetic resonance imaging (MRI) is a highly sensitive tool for breast cancer detection, particularly in women at high risk or with dense breast tissue, where mammography is less effective. The ODELIA consortium organized a multi-center challenge to foster AI-based solutions for breast cancer diagnosis and classification. The dataset included 511 studies from six European centers, acquired on scanners from multiple vendors at both 1.5 T and 3 T. Each study was labeled for the left and right breast as no lesion, benign lesion, or malignant lesion. We developed a SwinUNETR-based deep learning framework that incorporates breast region masking, extensive data augmentation, and ensemble learning to improve robustness and generalizability. Our method achieved second place on the challenge leaderboard, highlighting its potential to support clinical breast MRI interpretation. We publicly share our codebase at https://github.com/smriti-joshi/bcnaim-odelia-challenge.git.
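
The breast-region masking step can be illustrated in a few lines of NumPy: voxels outside a precomputed breast mask are zeroed before the volume reaches the network. The mask below is a geometric stand-in, not a real segmentation.

```python
# Illustrative breast-region masking prior to classification (toy mask).
import numpy as np

mri = np.random.rand(1, 96, 96, 96).astype(np.float32)  # dummy (C, D, H, W) MRI volume
breast_mask = np.zeros_like(mri, dtype=bool)
breast_mask[:, :, :, :48] = True                         # stand-in for a real left-breast mask

masked = np.where(breast_mask, mri, 0.0)  # the network sees only the breast region
```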

Prediction of Distant Metastasis for Head and Neck Cancer Patients Using Multi-Modal Tumor and Peritumoral Feature Fusion Network

Zizhao Tang, Changhao Liu, Nuo Tong, Shuiping Gou, Mei Shi

arXiv preprint · Aug 28, 2025
Metastasis remains the major challenge in the clinical management of head and neck squamous cell carcinoma (HNSCC). Reliable pre-treatment prediction of metastatic risk is crucial for optimizing treatment strategies and prognosis. This study develops a deep learning-based multimodal framework to predict metastasis risk in HNSCC patients by integrating computed tomography (CT) images, radiomics, and clinical data. 1497 HNSCC patients were included. Tumor and organ masks were derived from pretreatment CT images. A 3D Swin Transformer extracted deep features from tumor regions. Meanwhile, 1562 radiomics features were obtained using PyRadiomics, followed by correlation filtering and random forest selection, leaving 36 features. Clinical variables including age, sex, smoking, and alcohol status were encoded and fused with imaging-derived features. Multimodal features were fed into a fully connected network to predict metastasis risk. Performance was evaluated using five-fold cross-validation with area under the curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE). The proposed fusion model outperformed single-modality models. The 3D deep learning module alone achieved an AUC of 0.715, and when combined with radiomics and clinical features, predictive performance improved (AUC = 0.803, ACC = 0.752, SEN = 0.730, SPE = 0.758). Stratified analysis showed generalizability across tumor subtypes. Ablation studies indicated complementary information from different modalities. Evaluation showed the 3D Swin Transformer provided more robust representation learning than conventional networks. This multimodal fusion model demonstrated high accuracy and robustness in predicting metastasis risk in HNSCC, offering a comprehensive representation of tumor biology. The interpretable model has potential as a clinical decision-support tool for personalized treatment planning.
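
The late-fusion head described above (deep, radiomics, and clinical feature vectors concatenated into a fully connected network) might look like the following PyTorch sketch; the 36-dimensional radiomics input follows the abstract, while the other dimensions are assumptions.

```python
# Sketch of a multimodal late-fusion classifier for metastasis risk.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, deep_dim=768, radiomics_dim=36, clinical_dim=4):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(deep_dim + radiomics_dim + clinical_dim, 128),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(128, 1),  # metastasis risk logit
        )

    def forward(self, deep, radiomics, clinical):
        # Concatenate the three modality-specific feature vectors.
        return self.head(torch.cat([deep, radiomics, clinical], dim=1))

net = FusionNet()
risk = torch.sigmoid(net(torch.randn(2, 768), torch.randn(2, 36), torch.randn(2, 4)))
```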