Page 27 of 66652 results

Enhancing InceptionResNet to Diagnose COVID-19 from Medical Images.

Aljawarneh S, Ray I

pubmed · Jul 24 2025
This investigation addresses the diagnosis of COVID-19 from X-ray images using an effective deep learning model. Current methods for assessing COVID-19 diagnosis models tend to focus on the accuracy rate alone, while neglecting several significant assessment parameters, including precision, sensitivity, specificity, F1-score, and ROC-AUC, that strongly influence the performance level of the model. In this paper, we improve InceptionResNet with restructured parameters, termed the "Enhanced InceptionResNet," which incorporates depth-wise separable convolutions to enhance the efficiency of feature extraction and minimize the consumption of computational resources. For this investigation, three residual network (ResNet) models, namely ResNet, the default InceptionResNet, and the Enhanced InceptionResNet with restructured parameters, were employed for a medical image classification task. The performance of each model was evaluated on a balanced dataset of 2600 X-ray images. The models were assessed for accuracy and loss, as well as subjected to a confusion matrix analysis. The Enhanced InceptionResNet consistently outperformed ResNet and InceptionResNet in validation and testing accuracy, recall, precision, F1-score, and ROC-AUC, demonstrating a superior capacity for identifying pertinent information in the data. In validation and testing accuracy, the Enhanced InceptionResNet repeatedly proved more reliable than ResNet (99.0% and 98.35%, respectively), suggesting enhanced feature extraction capabilities. The Enhanced InceptionResNet excelled in COVID-19 diagnosis from chest X-rays, surpassing ResNet and the default InceptionResNet in accuracy, precision, and sensitivity.
Despite computational demands, it shows promise for medical image classification. Future work should leverage larger datasets, cloud platforms, and hyperparameter optimisation to improve performance, especially for distinguishing normal and pneumonia cases.
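The depth-wise separable convolutions credited above for the efficiency gains trade one dense convolution for a per-channel spatial filter plus a 1x1 pointwise mix. A rough illustration of the parameter savings (the layer sizes below are arbitrary, not taken from the paper):

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1 x 1 pointwise conv that mixes channels."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

# Example layer: 256 -> 256 channels, 3 x 3 kernel.
c_in, c_out, k = 256, 256, 3
standard = conv_params(c_in, c_out, k)                   # 589,824 params
separable = depthwise_separable_params(c_in, c_out, k)   # 67,840 params
print(standard, separable, separable / standard)
```

For this layer the separable form uses roughly 11% of the parameters of the standard convolution, which is the kind of saving that motivates its use in efficiency-focused variants.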

Disease probability-enhanced follow-up chest X-ray radiology report summary generation.

Wang Z, Deng Q, So TY, Chiu WH, Lee K, Hui ES

pubmed · Jul 24 2025
A chest X-ray radiology report describes not only abnormal findings from the X-ray obtained at a given examination, but also findings on disease progression or changes in device placement with reference to the X-ray from a previous examination. The majority of efforts on automatic radiology report generation pertain to the former, but not the latter, type of findings. To the best of the authors' knowledge, only one prior work is dedicated to generating a summary of the latter findings, i.e., a follow-up radiology report summary. In this study, we propose a transformer-based framework to tackle this task. Motivated by our observations on the significance of the medical lexicon to the fidelity of report summary generation, we introduce two mechanisms to bestow clinical insight on our model, namely disease probability soft guidance and a masked entity modeling loss. The former mechanism employs a pretrained abnormality classifier to guide the presence level of specific abnormalities, while the latter directs the model's attention toward the medical lexicon. Extensive experiments demonstrate that the performance of our model exceeds the state-of-the-art.
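The abstract does not give the form of the masked entity modeling loss; a minimal NumPy sketch of one plausible variant, a token-level cross-entropy that upweights positions flagged as medical-lexicon entities (the weighting scheme and all names here are assumptions, not the paper's formulation):

```python
import numpy as np

def masked_entity_loss(logits, targets, entity_mask, entity_weight=2.0):
    """Token-level cross-entropy that upweights positions flagged as
    medical-lexicon entities (entity_mask == 1).
    logits: (T, V) array over a vocabulary; targets: (T,) token ids."""
    # Numerically stable log-softmax over the vocabulary axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    nll = -log_probs[np.arange(len(targets)), targets]
    weights = np.where(entity_mask == 1, entity_weight, 1.0)
    return float((weights * nll).sum() / weights.sum())

rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 10))          # 5 tokens, vocabulary of 10
targets = np.array([1, 3, 0, 7, 2])
entity_mask = np.array([0, 1, 1, 0, 0])    # tokens 1 and 2 are entities
print(masked_entity_loss(logits, targets, entity_mask))
```

The effect is simply that entity tokens contribute more to the average loss, pushing the model to get the medical terms right.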

A Lightweight Hybrid DL Model for Multi-Class Chest X-ray Classification for Pulmonary Diseases.

Precious JG, S R, B SP, R R V, M SSM, Sapthagirivasan V

pubmed · Jul 24 2025
Pulmonary diseases have become one of the main causes of declining health, impacting millions of people worldwide. The rapid advancement of deep learning has significantly impacted medical image analysis by improving diagnostic accuracy and efficiency. Timely and precise diagnosis of these diseases proves invaluable for effective treatment. Chest X-rays (CXR) play a pivotal role in diagnosing various respiratory diseases by offering valuable insights into the chest and lung regions. This study puts forth a hybrid approach for classifying CXR images into four classes, namely COVID-19, tuberculosis, pneumonia, and normal (healthy) cases. The presented method integrates a machine learning method, the Support Vector Machine (SVM), with a pre-trained deep learning model for improved classification accuracy and reduced training time. Data from a number of public sources, representing a wide range of demographics, was used in this study. Class weights were applied during training to balance the contribution of each class and thereby address class imbalance. Several pre-trained architectures, namely DenseNet, MobileNet, EfficientNetB0, and EfficientNetB3, were investigated and their performance evaluated. Since MobileNet achieved the best classification accuracy of 94%, it was selected for the hybrid model, which combines MobileNet with an SVM classifier, increasing the accuracy to 97%. The results suggest that this approach is reliable and holds great promise for clinical applications.
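The hybrid pipeline described, a pretrained CNN as a frozen feature extractor feeding an SVM, can be sketched as follows. Synthetic vectors stand in for MobileNet embeddings so the example stays self-contained; the feature dimensionality and class separation are invented for illustration, and sklearn's `class_weight="balanced"` is one way to realize the class weighting the study mentions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for MobileNet penultimate-layer embeddings:
# 4 classes (COVID-19, tuberculosis, pneumonia, normal), 64-d features.
rng = np.random.default_rng(42)
n_per_class, dim = 100, 64
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, dim))
               for c in range(4)])
y = np.repeat(np.arange(4), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# SVM head on top of the (frozen) CNN features.
clf = SVC(kernel="rbf", class_weight="balanced")
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```

In the real pipeline, `X` would be the embeddings MobileNet produces for each CXR image, computed once and cached, which is what makes the hybrid cheaper to train than fine-tuning the whole network.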

An approach for cancer outcomes modelling using a comprehensive synthetic dataset.

Tu L, Choi HHF, Clark H, Lloyd SAM

pubmed · Jul 24 2025
Limited patient data availability presents a challenge for efficient machine learning (ML) model development. Recent studies have proposed methods to generate synthetic medical images but lack the corresponding prognostic information required for predicting outcomes. We present a cancer outcomes modelling approach that involves generating a comprehensive synthetic dataset which can accurately mimic a real dataset. A real public dataset containing computed tomography-based radiomic features and clinical information for 132 non-small cell lung cancer patients was used. A synthetic dataset of virtual patients was synthesized using a conditional tabular generative adversarial network. Models to predict two-year overall survival were trained on real or synthetic data using combinations of four feature selection methods (mutual information, ANOVA F-test, recursive feature elimination, random forest (RF) importance weights) and six ML algorithms (RF, k-nearest neighbours, logistic regression, support vector machine, XGBoost, Gaussian Naïve Bayes). Models were tested on withheld real data and externally validated. Real and synthetic datasets were similar, with an average one minus Kolmogorov-Smirnov test statistic of 0.871 for continuous features. Chi-square test confirmed agreement for discrete features (p < 0.001). XGBoost using RF importance-based features performed the most consistently for both datasets, with percent differences in balanced accuracy and area under the precision-recall curve of < 1.3%. Preliminary findings demonstrate the potential application of synthetic radiomic and clinical data augmentation for cancer outcomes modelling, although further validation with larger diverse datasets is crucial. While our approach was described in a lung context, it may be applied to other sites or endpoints.
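The fidelity measure reported above, one minus the two-sample Kolmogorov-Smirnov statistic per continuous feature, can be computed with SciPy; the two samples below are synthetic stand-ins, not the study's data:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
# Stand-ins for one real and one synthetic continuous radiomic feature.
real = rng.normal(loc=0.0, scale=1.0, size=500)
synthetic = rng.normal(loc=0.05, scale=1.1, size=500)

stat, p_value = ks_2samp(real, synthetic)
fidelity = 1.0 - stat   # 1 - KS statistic, as reported in the abstract
print(f"1 - KS = {fidelity:.3f}")
```

A value near 1 means the synthetic marginal distribution closely tracks the real one; averaging this over all continuous features gives the 0.871 figure the study reports.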

Differential-UMamba: Rethinking Tumor Segmentation Under Limited Data Scenarios

Dhruv Jain, Romain Modzelewski, Romain Hérault, Clement Chatelain, Eva Torfeh, Sebastien Thureau

arxiv preprint · Jul 24 2025
In data-scarce scenarios, deep learning models often overfit to noise and irrelevant patterns, which limits their ability to generalize to unseen samples. To address these challenges in medical image segmentation, we introduce Diff-UMamba, a novel architecture that combines the UNet framework with the mamba mechanism for modeling long-range dependencies. At the heart of Diff-UMamba is a Noise Reduction Module (NRM), which employs a signal differencing strategy to suppress noisy or irrelevant activations within the encoder. This encourages the model to filter out spurious features and enhance task-relevant representations, thereby improving its focus on clinically meaningful regions. As a result, the architecture achieves improved segmentation accuracy and robustness, particularly in low-data settings. Diff-UMamba is evaluated on multiple public datasets, including MSD (lung and pancreas) and AIIB23, demonstrating consistent performance gains of 1-3% over baseline methods across diverse segmentation tasks. To further assess performance under limited-data conditions, additional experiments are conducted on the BraTS-21 dataset by varying the proportion of available training samples. The approach is also validated on a small internal non-small cell lung cancer (NSCLC) dataset for gross tumor volume (GTV) segmentation in cone beam CT (CBCT), where it achieves a 4-5% improvement over the baseline.
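The abstract describes the Noise Reduction Module only at a high level; a toy NumPy sketch of one signal-differencing idea, subtracting an auxiliary noise-capturing branch from the main encoder features and keeping the positive residual (this simplification is assumed for illustration, not Diff-UMamba's actual module):

```python
import numpy as np

def noise_reduction(main, aux):
    """Signal-differencing sketch: subtract the auxiliary (noise-capturing)
    branch's activations from the main features, keep the positive residual."""
    return np.maximum(main - aux, 0.0)

rng = np.random.default_rng(1)
signal = np.zeros((8, 8))
signal[2:5, 2:5] = 1.0                  # task-relevant region
noise = rng.normal(scale=0.2, size=(8, 8))
main = signal + noise                   # encoder features: signal + noise
aux = 0.8 * noise                       # imperfect noise estimate
filtered = noise_reduction(main, aux)
print(np.abs(filtered[signal == 0]).mean())  # residual background activation
```

Even with an imperfect noise estimate, the background (spurious) activations shrink while the task-relevant region survives, which is the intuition behind suppressing irrelevant activations in low-data settings.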

To Compare the Application Value of Different Deep Learning Models Based on CT in Predicting Visceral Pleural Invasion of Non-small Cell Lung Cancer: A Retrospective, Multicenter Study.

Zhu X, Yang Y, Yan C, Xie Z, Shi H, Ji H, He L, Yang T, Wang J

pubmed · Jul 23 2025
Visceral pleural invasion (VPI) indicates poor prognosis in non-small cell lung cancer (NSCLC), and upgrades the T classification of NSCLC from T1 to T2 when present. This study aimed to develop and validate deep learning models for the accurate prediction of VPI in patients with NSCLC, and to compare the performance of two-dimensional (2D), three-dimensional (3D), and hybrid 3D models. This retrospective study included consecutive patients with pathologically confirmed lung tumors between June 2017 and September 2022. The clinical data and preoperative imaging features of these patients were investigated and their relationships with VPI were statistically compared. Elastic fiber staining analysis served as the gold standard for the diagnosis of VPI. The data of non-VPI and VPI patients were randomly divided into training and validation cohorts at ratios of 8:2 and 6:4, respectively. The EfficientNet-B0_2D model and the Double-head Res2Net/_F6/_F24 models were constructed, optimized, and verified using two convolutional neural network architectures, EfficientNet-B0 and Res2Net, by extracting features from the original CT images and combining specific clinical-CT features. The receiver operating characteristic curve, the area under the curve (AUC), and the confusion matrix were used to assess the diagnostic efficiency of the models. The DeLong test was used to compare performance between models. A total of 1931 patients with NSCLC were evaluated. By univariate analysis, 20 clinical-CT features were identified as risk predictors of VPI. Comparison of diagnostic efficacy among the EfficientNet-B0_2D, Double-head Res2Net, Res2Net_F6, and Res2Net_F24 combined models revealed that the Double-head Res2Net_F6 model achieved the largest AUC of 0.941, followed by Double-head Res2Net (AUC=0.879), Double-head Res2Net_F24 (AUC=0.876), and EfficientNet-B0_2D (AUC=0.785).
The three 3D-based models showed comparable predictive performance in the validation cohort and all outperformed the 2D model (EfficientNet-B0_2D, all P<0.05). It is feasible to predict VPI in NSCLC with deep learning-based predictive models, and the Double-head Res2Net_F6 model, fused with six clinical-CT features, showed the greatest diagnostic efficacy.

Artificial Intelligence for Detecting Pulmonary Embolisms <i>via</i> CT: A Workflow-oriented Implementation.

Abed S, Hergan K, Dörrenberg J, Brandstetter L, Lauschmann M

pubmed · Jul 23 2025
Detecting Pulmonary Embolism (PE) is critical for effective patient care, and Artificial Intelligence (AI) has shown promise in supporting radiologists in this task. Integrating AI into radiology workflows requires not only evaluation of its diagnostic accuracy but also assessment of its acceptance among clinical staff. This study aims to evaluate the performance of an AI algorithm in detecting pulmonary embolisms (PEs) on contrast-enhanced computed tomography pulmonary angiograms (CTPAs) and to assess the level of acceptance of the algorithm among radiology department staff. This retrospective study analyzed anonymized computed tomography pulmonary angiography (CTPA) data from a university clinic. Surveys were conducted at three and nine months after the implementation of a commercially available AI algorithm designed to flag CTPA scans with suspected PE. A thoracic radiologist and a cardiac radiologist served as the reference standard for evaluating the performance of the algorithm. The AI analyzed 59 CTPA cases during the initial evaluation and 46 cases in the follow-up assessment. In the first evaluation, the AI algorithm demonstrated a sensitivity of 84.6% and a specificity of 94.3%. By the second evaluation, its performance had improved, achieving a sensitivity of 90.9% and a specificity of 96.7%. Radiologists' acceptance of the AI tool increased over time. Nevertheless, despite this growing acceptance, many radiologists expressed a preference for hiring an additional physician over adopting the AI solution if the costs were comparable. Our study demonstrated high sensitivity and specificity of the AI algorithm, with improved performance over time and a reduced rate of unanalyzed scans. These improvements likely reflect both algorithmic refinement and better data integration. Departmental feedback indicated growing user confidence and trust in the tool. However, many radiologists continued to prefer the addition of a resident over reliance on the algorithm. 
Overall, the AI showed promise as a supportive "second-look" tool in emergency radiology settings. The AI algorithm demonstrated diagnostic performance comparable to that reported in similar studies for detecting PE on CTPA, with both sensitivity and specificity showing improvement over time. Radiologists' acceptance of the algorithm increased throughout the study period, underscoring its potential as a complementary tool to physician expertise in clinical practice.
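The sensitivity and specificity figures above follow directly from confusion-matrix counts. A small sketch with hypothetical counts chosen only to reproduce the first-evaluation rates (the study's actual counts are not given in the abstract):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity (true positive rate) and specificity (true negative rate)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts that happen to yield the reported first-evaluation rates.
sens, spec = sensitivity_specificity(tp=11, fn=2, tn=33, fp=2)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```

With these counts, sensitivity is 11/13 (84.6%) and specificity 33/35 (94.3%), matching the first evaluation; the second evaluation's improved rates would correspond to a different confusion matrix.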

Interpretable AI Framework for Secure and Reliable Medical Image Analysis in IoMT Systems.

Matthew UO, Rosa RL, Saadi M, Rodriguez DZ

pubmed · Jul 23 2025
The integration of artificial intelligence (AI) into medical image analysis has transformed healthcare, offering unprecedented precision in diagnosis, treatment planning, and disease monitoring. However, its adoption within the Internet of Medical Things (IoMT) raises significant challenges related to transparency, trustworthiness, and security. This paper introduces a novel Explainable AI (XAI) framework tailored for Medical Cyber-Physical Systems (MCPS), addressing these challenges by combining deep neural networks with symbolic knowledge reasoning to deliver clinically interpretable insights. The framework incorporates an Enhanced Dynamic Confidence-Weighted Attention (Enhanced DCWA) mechanism, which improves interpretability and robustness by dynamically refining attention maps through adaptive normalization and multi-level confidence weighting. Additionally, a Resilient Observability and Detection Engine (RODE) leverages sparse observability principles to detect and mitigate adversarial threats, ensuring reliable performance in dynamic IoMT environments. Evaluations conducted on benchmark datasets, including CheXpert, RSNA Pneumonia Detection Challenge, and NIH Chest X-ray Dataset, demonstrate significant advancements in classification accuracy, adversarial robustness, and explainability. The framework achieves a 15% increase in lesion classification accuracy, a 30% reduction in robustness loss, and a 20% improvement in the Explainability Index compared to state-of-the-art methods.

Faithful, Interpretable Chest X-ray Diagnosis with Anti-Aliased B-cos Networks

Marcel Kleinmann, Shashank Agnihotri, Margret Keuper

arxiv preprint · Jul 22 2025
Faithfulness and interpretability are essential for deploying deep neural networks (DNNs) in safety-critical domains such as medical imaging. B-cos networks offer a promising solution by replacing standard linear layers with a weight-input alignment mechanism, producing inherently interpretable, class-specific explanations without post-hoc methods. While maintaining diagnostic performance competitive with state-of-the-art DNNs, standard B-cos models suffer from severe aliasing artifacts in their explanation maps, making them unsuitable for clinical use where clarity is essential. In this work, we address these limitations by introducing anti-aliasing strategies using FLCPooling (FLC) and BlurPool (BP) to significantly improve explanation quality. Our experiments on chest X-ray datasets demonstrate that the modified $\text{B-cos}_\text{FLC}$ and $\text{B-cos}_\text{BP}$ preserve strong predictive performance while providing faithful and artifact-free explanations suitable for clinical application in multi-class and multi-label settings. Code available at: https://github.com/mkleinma/B-cos-medical-paper
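The BlurPool idea the paper builds on, low-pass filtering before subsampling so that strided downsampling does not alias, can be shown in one dimension (a toy NumPy sketch with a binomial kernel, not the paper's implementation):

```python
import numpy as np

def blurpool_1d(x, stride=2):
    """BlurPool-style downsampling: low-pass with a binomial [1, 2, 1]/4
    kernel before subsampling, instead of naive strided subsampling."""
    kernel = np.array([1.0, 2.0, 1.0]) / 4.0
    blurred = np.convolve(x, kernel, mode="same")
    return blurred[::stride]

# A fast alternating signal aliases badly under naive stride-2 subsampling.
x = np.array([1.0, -1.0] * 8)
naive = x[::2]                 # keeps only the +1 samples: aliased
antialiased = blurpool_1d(x)   # near-zero after low-pass: alias suppressed
print(naive, antialiased)
```

Naive subsampling turns the oscillation into a constant +1 signal (pure aliasing), while the blurred version is nearly zero everywhere; in B-cos explanation maps the same mechanism removes the grid-like artifacts.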

Divisive Decisions: Improving Salience-Based Training for Generalization in Binary Classification Tasks

Jacob Piland, Chris Sweet, Adam Czajka

arxiv preprint · Jul 22 2025
Existing saliency-guided training approaches improve model generalization by incorporating a loss term that compares the model's class activation map (CAM) for a sample's true class (i.e., the correct-label class) against a human reference saliency map. However, prior work has ignored the false-class CAM(s), that is, the model's saliency obtained for an incorrect-label class. We hypothesize that in binary tasks the true- and false-class CAMs should diverge on the important classification features identified by humans (and reflected in human saliency maps). We use this hypothesis to motivate three new saliency-guided training methods incorporating both the true- and false-class CAMs into the training strategy, and a novel post-hoc tool for identifying important features. We evaluate all introduced methods on several diverse binary close-set and open-set classification tasks, including synthetic face detection, biometric presentation attack detection, and classification of anomalies in chest X-ray scans, and find that the proposed methods improve the generalization capabilities of deep learning models over traditional (true-class-CAM-only) saliency-guided training approaches. We offer source code and model weights (GitHub repository link removed to preserve anonymity) to support reproducible research.