Page 30 of 46453 results

Improving Robustness and Reliability in Medical Image Classification with Latent-Guided Diffusion and Nested-Ensembles.

Shen X, Huang H, Nichyporuk B, Arbel T

pubmed logopapers · Jun 30 2025
Once deployed, medical image analysis methods are often faced with unexpected image corruptions and noise perturbations. These unknown covariate shifts present significant challenges to deep learning-based methods trained on "clean" images. This often results in unreliable predictions and poorly calibrated confidence, hindering clinical applicability. While recent methods have been developed to address specific issues such as confidence calibration or adversarial robustness, no single framework effectively tackles all these challenges simultaneously. To bridge this gap, we propose LaDiNE, a novel ensemble learning method combining the robustness of Vision Transformers with diffusion-based generative models for improved reliability in medical image classification. Specifically, transformer encoder blocks are used as hierarchical feature extractors that learn invariant features from images for each ensemble member, resulting in features that are robust to input perturbations. In addition, diffusion models are used as flexible density estimators to estimate member densities conditioned on the invariant features, leading to improved modeling of complex data distributions while retaining properly calibrated confidence. Extensive experiments on tuberculosis chest X-rays and melanoma skin cancer datasets demonstrate that LaDiNE achieves superior performance compared to a wide range of state-of-the-art methods by simultaneously improving prediction accuracy and confidence calibration under unseen noise, adversarial perturbations, and resolution degradation.
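The member-density idea behind LaDiNE can be illustrated with a toy stand-in. The sketch below is an assumption for illustration only: simple Gaussian class-conditional densities replace the paper's conditional diffusion models, and a scalar linear feature replaces the transformer encodings; the ensemble averages per-member class probabilities, which tends to temper overconfident predictions.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a 1-D Gaussian; a stand-in for a learned density estimator."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

class Member:
    """One ensemble member: a feature extractor plus per-class densities.
    LaDiNE uses transformer blocks and diffusion models; here a linear
    feature and Gaussians stand in (hypothetical placeholders)."""
    def __init__(self, weight, class_params):
        self.weight = weight              # stand-in for the encoder
        self.class_params = class_params  # {label: (mu, sigma)}

    def class_probs(self, x):
        z = self.weight * x               # "invariant" feature
        dens = {c: gaussian_pdf(z, mu, sd) for c, (mu, sd) in self.class_params.items()}
        total = sum(dens.values())
        return {c: d / total for c, d in dens.items()}

def ensemble_predict(members, x):
    """Average the per-class probabilities over all members."""
    labels = members[0].class_params.keys()
    return {c: sum(m.class_probs(x)[c] for m in members) / len(members) for c in labels}

members = [
    Member(1.0, {"normal": (0.0, 1.0), "disease": (2.0, 1.0)}),
    Member(0.9, {"normal": (0.1, 1.2), "disease": (1.8, 1.1)}),
]
probs = ensemble_predict(members, 1.9)  # input near the "disease" mode
```

Averaging member probabilities rather than picking one member's output is what keeps the ensemble's confidence better calibrated under perturbation.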

Catheter detection and segmentation in X-ray images via multi-task learning.

Xi L, Ma Y, Koland E, Howell S, Rinaldi A, Rhode KS

pubmed logopapers · Jun 27 2025
Automated detection and segmentation of surgical devices, such as catheters or wires, in X-ray fluoroscopic images have the potential to enhance image guidance in minimally invasive heart surgeries. In this paper, we present a convolutional neural network model that integrates a ResNet architecture with multiple prediction heads to achieve real-time, accurate localization of electrodes on catheters and catheter segmentation in an end-to-end deep learning framework. We also propose a multi-task learning strategy in which our model is trained to perform both accurate electrode detection and catheter segmentation simultaneously. A key challenge with this approach is achieving optimal performance for both tasks. To address this, we introduce a novel multi-level dynamic resource prioritization method. This method dynamically adjusts sample and task weights during training to effectively prioritize more challenging tasks, where task difficulty is inversely proportional to performance and evolves throughout the training process. The proposed method has been validated on both public and private datasets for single-task catheter segmentation and multi-task catheter segmentation and detection. The performance of our method is also compared with existing state-of-the-art methods, demonstrating significant improvements, with a mean J of 64.37/63.97 and an average precision over all IoU thresholds of 84.15/83.13, respectively, for the multi-task detection and segmentation model on the validation and test sets of the catheter detection and segmentation dataset. Our approach achieves a good balance between accuracy and efficiency, making it well-suited for real-time surgical guidance applications.
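The inverse-performance prioritization rule can be sketched abstractly. The following is a minimal, assumed version (the paper's scheme is more elaborate, operating at both the sample and task level): weight each task inversely to its current performance so that the weaker task receives more of the loss budget, then renormalize.

```python
def prioritized_weights(perf, eps=1e-6):
    """Inverse-performance task weights, normalized to sum to 1.
    `perf` holds each task's current metric in [0, 1]; lower performance
    (a harder task) yields a larger weight. Illustrative only."""
    inv = [1.0 / (p + eps) for p in perf]
    total = sum(inv)
    return [w / total for w in inv]

# Detection is doing well (0.9); segmentation lags (0.6) and gets more weight.
w_det, w_seg = prioritized_weights([0.9, 0.6])
combined_loss = w_det * 0.35 + w_seg * 0.80  # toy per-task loss values
```

Because the weights are recomputed every training step, the prioritization evolves as task difficulty changes, which is the behavior the abstract describes.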

Advanced glaucoma disease segmentation and classification with grey wolf optimized U-Net++ and capsule networks.

Govindharaj I, Deva Priya W, Soujanya KLS, Senthilkumar KP, Shantha Shalini K, Ravichandran S

pubmed logopapers · Jun 27 2025
Early detection of glaucoma is vital for preserving vision, as the disease remains one of the leading causes of blindness worldwide. Current glaucoma screening strategies rely on expert interpretation of complex, time-consuming procedures, delaying both diagnosis and intervention. This research presents an automated glaucoma diagnostic system that combines an optimized segmentation solution with a classification platform. The proposed segmentation approach implements an enhanced version of U-Net++ with dynamic parameter control provided by Grey Wolf Optimization (GWO) to segment optic disc and cup regions in retinal fundus images. Through GWO, the algorithm uses wolf-pack hunting strategies to adjust parameters dynamically, enabling it to capture diverse textural patterns in the images. The system uses a capsule network (CapsNet) for classification because it preserves visual spatial organization, detecting glaucoma-related patterns precisely. The developed system achieves an accuracy of 95.1% in segmentation and classification tasks, exceeding typical approaches. The automated system enhances both clinical diagnostic speed and precision, and its high detection accuracy and reliability make it a strong candidate for early-stage clinical glaucoma diagnosis and a scalable healthcare deployment solution.
Objective: To develop an advanced automated glaucoma diagnostic system by integrating an optimized U-Net++ segmentation model with a Capsule Network (CapsNet) classifier, enhanced through the Grey Wolf Optimization Algorithm (GWOA), for precise segmentation of optic disc and cup regions and accurate glaucoma classification from retinal fundus images. Methods: This study proposes a two-phase computer-assisted diagnosis (CAD) framework. In the segmentation phase, an enhanced U-Net++ model, optimized by GWOA, is employed to accurately delineate the optic disc and cup regions in fundus images. The optimization dynamically tunes hyperparameters based on grey wolf hunting behavior for improved segmentation precision. In the classification phase, a CapsNet architecture is used to maintain spatial hierarchies and classify images as glaucomatous or normal based on the segmented outputs. The performance of the proposed model was validated on the ORIGA retinal fundus image dataset and evaluated against conventional approaches. Results: The proposed GWOA-UNet++ and CapsNet framework achieved a segmentation and classification accuracy of 95.1%, outperforming existing benchmark models such as MTA-CS, ResFPN-Net, DAGCN, MRSNet, and AGCT. The model demonstrated robustness against image irregularities, including variations in optic disc size and fundus image quality, and showed superior performance across accuracy, sensitivity, specificity, precision, and F1-score metrics. Conclusion: The developed automated glaucoma detection system exhibits enhanced diagnostic accuracy, efficiency, and reliability, offering significant potential for early-stage glaucoma detection and clinical decision support. Future work will involve large-scale multi-ethnic dataset validation, integration with clinical workflows, and deployment as a mobile or cloud-based screening tool.
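The hyperparameter search the paper delegates to Grey Wolf Optimization can be sketched on a toy objective. This is an illustrative, self-contained GWO loop, not the authors' implementation; the quadratic objective stands in for segmentation error as a function of two hypothetical hyperparameters.

```python
import random

def gwo_minimize(f, dim, bounds, n_wolves=15, iters=100, seed=0):
    """Canonical Grey Wolf Optimizer: the pack moves toward the three best
    wolves (alpha, beta, delta) with an exploration coefficient `a` that
    decays from 2 to 0 over the run."""
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    best = min(wolves, key=f)
    for t in range(iters):
        leaders = sorted(wolves, key=f)[:3]      # alpha, beta, delta
        a = 2.0 * (1 - t / iters)                # exploration decays 2 -> 0
        for i, wolf in enumerate(wolves):
            new = []
            for d in range(dim):
                est = 0.0
                for leader in leaders:
                    r1, r2 = rng.random(), rng.random()
                    A = 2 * a * r1 - a
                    C = 2 * r2
                    D = abs(C * leader[d] - wolf[d])
                    est += (leader[d] - A * D) / 3.0  # average of the three pulls
                new.append(min(hi, max(lo, est)))
            wolves[i] = new
        cand = min(wolves, key=f)
        if f(cand) < f(best):
            best = cand
    return best

# Toy stand-in objective; in the paper GWO tunes U-Net++ hyperparameters.
best = gwo_minimize(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))
```

In the paper's setting, `f` would wrap a (far more expensive) train-and-validate cycle of the U-Net++ model rather than a closed-form function.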

Prospective quality control in chest radiography based on the reconstructed 3D human body.

Tan Y, Ye Z, Ye J, Hou Y, Li S, Liang Z, Li H, Tang J, Xia C, Li Z

pubmed logopapers · Jun 27 2025
Chest radiography requires effective quality control (QC) to reduce high retake rates. However, existing QC measures are all retrospective, implemented after exposure; retakes are then needed when image quality fails to meet standards, increasing radiation exposure to patients. To address this issue, we proposed a 3D human body (3D-HB) reconstruction algorithm to enable prospective QC. Our objective was to investigate the feasibility of using the reconstructed 3D-HB for prospective QC in chest radiography and to evaluate its impact on retake rates.
Approach: This prospective study included patients indicated for posteroanterior (PA) and lateral (LA) chest radiography in May 2024. A 3D-HB reconstruction algorithm integrating the SMPL-X model and the HybrIK-X algorithm was proposed to convert patients' 2D images into 3D-HBs. QC metrics regarding patient positioning and collimation were assessed using chest radiographs (reference standard) and 3D-HBs, with results compared using ICCs, linear regression, and receiver operating characteristic curves. For retake-rate evaluation, a real-time 3D-HB visualization interface was developed and chest radiography was conducted in two four-week phases: the first without prospective QC and the second with it. Retake rates between the two phases were compared using chi-square tests.
Main results: 324 participants were included (mean age, 42 years ± 19 [SD]; 145 men; 324 PA and 294 LA examinations). The ICCs for the clavicle and midaxillary line angles were 0.80 and 0.78, respectively. Linear regression showed good agreement for clavicle angles (R2: 0.655) and midaxillary line angles (R2: 0.616). In PA chest radiography, the AUCs of 3D-HBs were 0.89, 0.87, 0.91, and 0.92 for assessing scapula rotation, lateral tilt, centered positioning, and central X-ray alignment, respectively, with 97% accuracy in collimation assessment. In LA chest radiography, the AUCs of 3D-HBs were 0.87, 0.84, 0.87, and 0.88 for assessing arms raised, chest rotation, centered positioning, and central X-ray alignment, respectively, with 94% accuracy in collimation assessment. In the retake-rate evaluation, 3995 PA and 3295 LA chest radiographs were recorded. Implementing prospective QC based on the 3D-HB reduced retake rates from 8.6% to 3.5% (PA) and from 19.6% to 4.9% (LA) (p < .001).
Significance: The reconstructed 3D-HB is a feasible tool for prospective QC in chest radiography, providing real-time feedback on patient positioning and collimation before exposure. Prospective QC based on the reconstructed 3D-HB has the potential to reshape radiography QC by significantly reducing retake rates and improving clinical standardization.
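The phase comparison reported here uses a chi-square test on a 2x2 table. The sketch below shows the statistic for such a table; note that the split of the 3995 PA radiographs between the two phases is not reported in the abstract, so the per-phase counts used in the example are assumed, not the study's actual numbers.

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 contingency table, no continuity
    correction. Rows: phase 1 / phase 2; columns: retaken / accepted.
    A statistic above 3.84 corresponds to p < .05 at 1 degree of freedom."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Illustrative counts shaped like the PA comparison (8.6% vs 3.5% retakes);
# the even split of 3995 radiographs between phases is an assumption.
phase1_total, phase2_total = 2000, 1995
a = round(0.086 * phase1_total)   # retakes without prospective QC
c = round(0.035 * phase2_total)   # retakes with prospective QC
stat = chi_square_2x2(a, phase1_total - a, c, phase2_total - c)
```

Even with the assumed split, the statistic lands far above the 3.84 cutoff, consistent with the reported p < .001.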

Machine Learning Models for Predicting Mortality in Pneumonia Patients.

Pavlovic V, Haque MS, Grubor N, Pavlovic A, Stanisavljevic D, Milic N

pubmed logopapers · Jun 26 2025
Pneumonia remains a significant cause of hospital mortality, prompting the need for precise mortality prediction methods. This study conducted a systematic review identifying predictors of mortality using Machine Learning (ML) and applied these methods to hospitalized pneumonia patients at the University Clinical Centre Zvezdara. The systematic review identified 16 studies (313,572 patients), revealing common mortality predictors including age, oxygen levels, and albumin. A Random Forest (RF) model was developed using local data (n=343), achieving an accuracy of 99% and an AUC of 0.99. Key predictors identified were chest X-ray worsening, ventilator use, age, and oxygen support. ML demonstrated high potential for accurately predicting pneumonia mortality, surpassing traditional severity scores and showing practical clinical utility.
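The Random Forest idea (bootstrap sampling plus majority voting) can be sketched in miniature. The sketch below is a deliberately tiny stand-in, not the study's model: one-split "stumps" replace full decision trees, and the two features and synthetic patients are hypothetical, loosely echoing the age and oxygen predictors the review identified.

```python
import random

def fit_stump(sample):
    """Exhaustively pick the (feature, threshold, direction) rule with the
    best accuracy on one bootstrap sample."""
    best_acc, best_rule = -1.0, None
    for f in range(len(sample[0][0])):
        for x, _ in sample:
            for ge in (True, False):  # ">= threshold -> died", or flipped
                rule = lambda v, f=f, t=x[f], ge=ge: int((v[f] >= t) == ge)
                acc = sum(rule(xi) == yi for xi, yi in sample) / len(sample)
                if acc > best_acc:
                    best_acc, best_rule = acc, rule
    return best_rule

def fit_forest(data, n_trees=9, seed=0):
    """Each stump is fit on its own bootstrap resample of the data."""
    rng = random.Random(seed)
    return [fit_stump([rng.choice(data) for _ in data]) for _ in range(n_trees)]

def predict(forest, x):
    """Majority vote across stumps (1 = predicted death)."""
    return int(sum(rule(x) for rule in forest) * 2 >= len(forest))

# Synthetic patients: (age, oxygen saturation); label 1 = died. Hypothetical.
data = [((82, 85), 1), ((78, 88), 1), ((85, 82), 1), ((88, 84), 1),
        ((45, 97), 0), ((52, 96), 0), ((39, 98), 0), ((47, 95), 0)]
forest = fit_forest(data)
```

A production model would use a library implementation with full trees, feature subsampling, and proper validation; the sketch only shows why bagging plus voting stabilizes predictions.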

Design and optimization of an automatic deep learning-based cerebral reperfusion scoring (TICI) using thrombus localization.

Folcher A, Piters J, Wallach D, Guillard G, Ognard J, Gentric JC

pubmed logopapers · Jun 26 2025
The Thrombolysis in Cerebral Infarction (TICI) scale is widely used to assess angiographic outcomes of mechanical thrombectomy despite significant variability. Our objective was to create and optimize an artificial intelligence (AI)-based classification model for digital subtraction angiography (DSA) TICI scoring. Using a monocentric DSA dataset of thrombectomies and a platform for medical image analysis, independent readers labeled each series according to TICI score and marked each thrombus. A convolutional neural network (CNN) classification model was created to classify TICI scores into 2 groups (TICI 0, 1, or 2a versus TICI 2b, 2c, or 3) and 3 groups (TICI 0, 1, or 2a versus TICI 2b versus TICI 2c or 3). The algorithm was first tested alone; thrombi positions were then introduced to the algorithm, first by manual placement and then using a thrombus detection module. A total of 422 patients were enrolled in the study, and 2492 thrombi were annotated on the TICI-labeled series. The model was trained on a total of 1609 DSA series. The two-class classification model had a specificity of 0.97 ± 0.01 and a sensitivity of 0.86 ± 0.01. The 3-class models showed insufficient performance, even when combined with the true thrombi positions, with F1 scores for TICI 2b classification of 0.50 and 0.55 ± 0.07, respectively. The automatic thrombus detection module did not enhance the performance of the 3-class model, with an F1 score for the TICI 2b class of 0.50 ± 0.07. The AI model provided a reproducible 2-class (TICI 0, 1, or 2a versus 2b, 2c, or 3) classification according to the TICI scale. Its performance in distinguishing three classes (TICI 0, 1, or 2a versus 2b versus 2c or 3) remains insufficient for clinical practice. Automatic thrombus detection did not improve the model's performance.
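The sensitivity and specificity figures reported for the two-class model come from the standard confusion-matrix definitions. A minimal sketch (with made-up labels, positive class = TICI 2b/2c/3):

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity (TP rate) and specificity (TN rate) from paired binary
    labels; 1 marks the positive class. Counts here are illustrative."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = binary_metrics([1, 1, 1, 0, 0, 0, 1, 0],
                            [1, 1, 0, 0, 0, 0, 1, 0])
```

The ± values in the abstract would come from repeating this computation across folds or runs, which the sketch omits.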

Semi-automatic segmentation of elongated interventional instruments for online calibration of C-arm imaging system.

Chabi N, Illanes A, Beuing O, Behme D, Preim B, Saalfeld S

pubmed logopapers · Jun 26 2025
The C-arm biplane imaging system, designed for cerebral angiography, detects pathologies like aneurysms using dual rotating detectors for high-precision, real-time vascular imaging. However, accuracy can be affected by source-detector trajectory deviations caused by gravitational artifacts and mechanical instabilities. This study addresses calibration challenges and suggests leveraging interventional devices with radio-opaque markers to optimize C-arm geometry. We propose an online calibration method using image-specific features derived from interventional devices such as guidewires and catheters (in the remainder of this paper, the term "catheter" refers to both catheters and guidewires). The process begins with gantry-recorded data, refined through iterative nonlinear optimization. A machine learning approach detects and segments elongated devices by identifying candidates via thresholding on a weighted sum of curvature, derivative, and high-frequency indicators. An ensemble classifier segments these regions, followed by post-processing to remove false positives, integrating vessel maps, manual correction, and identification markers. An interpolation step then fills gaps along the catheter. Among the optimized ensemble classifiers, the one trained on the first frames achieved the best performance, with a specificity of 99.43% and a precision of 86.41%. The calibration method was evaluated on three clinical datasets and four phantom angiogram pairs, reducing the mean backprojection error from 4.11 ± 2.61 to 0.15 ± 0.01 mm. Additionally, 3D accuracy analysis showed an average root mean square error of 3.47% relative to the true marker distance. This study explores using interventional tools with radio-opaque markers for C-arm self-calibration. The proposed method significantly reduces 2D backprojection error and 3D RMSE, enabling accurate 3D vascular reconstruction.
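The candidate-identification step (thresholding a weighted sum of indicators) can be sketched per pixel. The weights and threshold below are placeholders, not the paper's values, and the three indicator lists stand in for precomputed curvature, derivative, and high-frequency response maps.

```python
def candidate_mask(curv, deriv, high_freq, weights=(0.5, 0.3, 0.2), thresh=0.6):
    """Flag pixels whose weighted indicator sum clears the threshold.
    Inputs are per-pixel scores in [0, 1]; weights/threshold are assumed
    placeholders, not the study's tuned values."""
    w1, w2, w3 = weights
    return [w1 * c + w2 * d + w3 * h >= thresh
            for c, d, h in zip(curv, deriv, high_freq)]

# Three pixels: strong catheter-like response, background, moderate response.
mask = candidate_mask([0.9, 0.1, 0.8], [0.7, 0.2, 0.9], [0.8, 0.1, 0.6])
```

In the full pipeline these flagged candidates then feed the ensemble classifier and the false-positive post-processing described above.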

AI-assisted radiographic analysis in detecting alveolar bone-loss severity and patterns

Chathura Wimalasiri, Piumal Rathnayake, Shamod Wijerathne, Sumudu Rasnayaka, Dhanushka Leuke Bandara, Roshan Ragel, Vajira Thambawita, Isuru Nawinne

arxiv logopreprint · Jun 25 2025
Periodontitis, a chronic inflammatory disease causing alveolar bone loss, significantly affects oral health and quality of life. Accurate assessment of bone loss severity and pattern is critical for diagnosis and treatment planning. In this study, we propose a novel AI-based deep learning framework to automatically detect and quantify alveolar bone loss and its patterns using intraoral periapical (IOPA) radiographs. Our method combines YOLOv8 for tooth detection with Keypoint R-CNN models to identify anatomical landmarks, enabling precise calculation of bone loss severity. Additionally, YOLOv8x-seg models segment bone levels and tooth masks to determine bone loss patterns (horizontal vs. angular) via geometric analysis. Evaluated on a large, expertly annotated dataset of 1000 radiographs, our approach achieved high accuracy in detecting bone loss severity (intra-class correlation coefficient up to 0.80) and bone loss pattern classification (accuracy 87%). This automated system offers a rapid, objective, and reproducible tool for periodontal assessment, reducing reliance on subjective manual evaluation. By integrating AI into dental radiographic analysis, our framework has the potential to improve early diagnosis and personalized treatment planning for periodontitis, ultimately enhancing patient care and clinical outcomes.
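The severity and pattern computations can be sketched from keypoint geometry. The formula below (CEJ-to-bone distance over CEJ-to-apex length) is a common radiographic convention and an assumption here; the paper's exact landmark definitions and the horizontal-vs-angular tolerance may differ.

```python
def bone_loss_percent(cej_y, bone_y, apex_y):
    """Percent bone loss from landmark y-coordinates (image rows grow
    downward): distance from the cemento-enamel junction (CEJ) to the
    bone level, over the CEJ-to-root-apex length."""
    return 100.0 * (bone_y - cej_y) / (apex_y - cej_y)

def loss_pattern(mesial_loss, distal_loss, tol=10.0):
    """Call the defect horizontal if both sides lost a similar fraction,
    angular otherwise. The tolerance is an assumed placeholder."""
    return "horizontal" if abs(mesial_loss - distal_loss) <= tol else "angular"

# Hypothetical keypoints detected on one tooth (pixel rows).
severity = bone_loss_percent(cej_y=120, bone_y=150, apex_y=220)
```

In the paper these coordinates come from the Keypoint R-CNN landmarks, and the pattern decision uses segmented bone-level and tooth masks rather than two scalar losses.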

Framework for enhanced respiratory disease identification with clinical handcrafted features.

Khokan MIP, Tonni TJ, Rony MAH, Fatema K, Hasan MZ

pubmed logopapers · Jun 25 2025
Respiratory disorders cause approximately 4 million deaths annually worldwide, making them the third leading cause of mortality. Early detection is critical to improving survival rates and recovery outcomes. However, chest X-rays require expertise, and computational intelligence provides valuable support to improve diagnostic accuracy and support medical professionals in decision-making. This study presents an automated system to classify respiratory diseases using three diverse datasets comprising 18,000 chest X-ray images and masks, categorized into six classes. Image preprocessing techniques, such as resizing for input standardization and CLAHE for contrast enhancement, were applied to ensure uniformity and improve the visual quality of the images. Albumentations-based augmentation methods addressed class imbalances, while bitwise segmentation focused on extracting the region of interest (ROI). Furthermore, clinically handcrafted feature extraction enabled the accurate identification of 20 critical clinical features essential for disease classification. The K-nearest neighbors (KNN) graph construction technique was utilized to transform tabular data into graph structures for effective node classification. We employed feature analysis to identify critical attributes that contribute to class predictions within the graph structure. Additionally, the GNNExplainer was utilized to validate these findings by highlighting significant nodes, edges, and features that influence the model's decision-making process. The proposed model, Chest X-ray Graph Neural Network (CHXGNN), a robust Graph Neural Network (GNN) architecture, incorporates advanced layers, batch normalization, dropout regularization, and optimization strategies. Extensive testing and ablation studies demonstrated the model's exceptional performance, achieving an accuracy of 99.56%.
Our CHXGNN model shows significant potential in detecting and classifying respiratory diseases, promising to enhance diagnostic efficiency and improve patient outcomes in respiratory healthcare.
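The KNN graph-construction step (turning tabular feature rows into a graph for node classification) can be sketched directly. This is a minimal Euclidean-distance version and an assumption; the paper does not specify its distance metric or k.

```python
def knn_graph(rows, k=2):
    """Build a k-nearest-neighbour graph over tabular feature rows,
    returning an adjacency list mapping each row index to its k closest
    other rows by Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    edges = {}
    for i, row in enumerate(rows):
        others = sorted((j for j in range(len(rows)) if j != i),
                        key=lambda j: dist(row, rows[j]))
        edges[i] = others[:k]
    return edges

# Four samples with 2 features each; the last one is a clear outlier.
features = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]]
g = knn_graph(features, k=2)
```

Each sample becomes a node and its handcrafted features become the node attributes; the GNN then classifies nodes by passing messages along these edges.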

Diagnostic Performance of Universal versus Stratified Computer-Aided Detection Thresholds for Chest X-Ray-Based Tuberculosis Screening

Sung, J., Kitonsa, P. J., Nalutaaya, A., Isooba, D., Birabwa, S., Ndyabayunga, K., Okura, R., Magezi, J., Nantale, D., Mugabi, I., Nakiiza, V., Dowdy, D. W., Katamba, A., Kendall, E. A.

medrxiv logopreprint · Jun 24 2025
Background: Computer-aided detection (CAD) software analyzes chest X-rays for features suggestive of tuberculosis (TB) and provides a numeric abnormality score. However, estimates of CAD accuracy for TB screening are hindered by the lack of confirmatory data among people with lower CAD scores, including those without symptoms. Additionally, the appropriate CAD score thresholds for obtaining further testing may vary according to population and client characteristics.
Methods: We screened for TB in Ugandan individuals aged ≥15 years using portable chest X-rays with CAD (qXR v3). Participants were offered screening regardless of their symptoms. Those with X-ray scores above a threshold of 0.1 (range, 0-1) were asked to provide sputum for Xpert Ultra testing. We estimated the diagnostic accuracy of CAD for detecting Xpert-positive TB when using the same threshold for all individuals (under different assumptions about TB prevalence among people with X-ray scores <0.1), and compared this estimate to age- and/or sex-stratified approaches.
Findings: Of 52,835 participants screened for TB using CAD, 8,949 (16.9%) had X-ray scores ≥0.1. Of 7,219 participants with valid Xpert Ultra results, 382 (5.3%) were Xpert-positive, including 81 with trace results. Assuming 0.1% of participants with X-ray scores <0.1 would have been Xpert-positive if tested, qXR had an estimated AUC of 0.920 (95% confidence interval 0.898-0.941) for Xpert-positive TB. Stratifying CAD thresholds according to age and sex improved accuracy; for example, at 96.1% specificity, estimated sensitivity was 75.0% for a universal threshold (of ≥0.65) versus 76.9% for thresholds stratified by age and sex (p=0.046).
Interpretation: The accuracy of CAD for TB screening among all screening participants, including those without symptoms or abnormal chest X-rays, is higher than previously estimated. Stratifying CAD thresholds based on client characteristics such as age and sex could further improve accuracy, enabling a more effective and personalized approach to TB screening.
Funding: National Institutes of Health
Research in context. Evidence before this study: The World Health Organization (WHO) has endorsed computer-aided detection (CAD) as a screening tool for tuberculosis (TB), but the appropriate CAD score that triggers further diagnostic evaluation varies by population. The WHO recommends determining the appropriate CAD threshold for specific settings and populations and considering unique thresholds for specific populations, including older age groups, among whom CAD may perform poorly. We performed a PubMed literature search for articles published until September 9, 2024, using the search terms "tuberculosis" AND ("computer-aided detection" OR "computer aided detection" OR "CAD" OR "computer-aided reading" OR "computer aided reading" OR "artificial intelligence"), which returned 704 articles. Among them, we identified studies that evaluated the performance of CAD for tuberculosis screening and additionally reviewed relevant references. Most prior studies reported areas under the curve (AUC) ranging from 0.76 to 0.88 but limited their evaluations to individuals with symptoms or abnormal chest X-rays. Some prior studies identified subgroups (including older individuals and people with prior TB) among whom CAD had lower-than-average AUCs, and authors discussed how the prevalence of such characteristics could affect the optimal value of a population-wide CAD threshold; however, none estimated the accuracy that could be gained by adjusting CAD thresholds between individuals based on personal characteristics.
Added value of this study: In this study, all consenting individuals in a high-prevalence setting were offered chest X-ray screening, regardless of symptoms, if they were ≥15 years old, not pregnant, and not on TB treatment. A very low CAD score cutoff (qXR v3 score of 0.1 on a 0-1 scale) was used to select individuals for confirmatory sputum molecular testing, enabling the detection of radiographically mild forms of TB and facilitating comparisons of diagnostic accuracy at different CAD thresholds. With this more expansive, symptom-neutral evaluation of CAD, we estimated an AUC of 0.920, and we found that the qXR v3 threshold needed to decrease to under 0.1 to meet the WHO target product profile goal of ≥90% sensitivity and ≥70% specificity. Compared to using the same thresholds for all participants, adjusting CAD thresholds by age and sex strata resulted in a 1 to 2% increase in sensitivity without affecting specificity.
Implications of all the available evidence: To obtain high sensitivity with CAD screening in high-prevalence settings, low score thresholds may be needed. However, countries with a high burden of TB often do not have sufficient resources to test all individuals above a low threshold. In such settings, adjusting CAD thresholds based on individual characteristics associated with TB prevalence (e.g., male sex) and those associated with false-positive X-ray results (e.g., old age) can potentially improve the efficiency of TB screening programs.
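The stratified-threshold idea amounts to a lookup of a per-stratum cutoff before deciding on confirmatory testing. In the sketch below, the strata, the age boundary, and all cutoff values are illustrative placeholders, not the study's fitted thresholds.

```python
def needs_confirmatory_test(score, age, sex, thresholds):
    """Refer a client for sputum testing when their CAD score meets the
    cutoff for their age/sex stratum. The age boundary (55) and the
    cutoffs passed in are assumed values for illustration."""
    stratum = ("older" if age >= 55 else "younger", sex)
    return score >= thresholds[stratum]

# Placeholder stratified cutoffs (not from the study).
thresholds = {
    ("younger", "male"): 0.60,
    ("younger", "female"): 0.70,
    ("older", "male"): 0.75,
    ("older", "female"): 0.80,
}
```

A universal-threshold policy is the special case where every stratum shares one cutoff; the study's finding is that letting the cutoffs differ by age and sex buys sensitivity at matched specificity.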
