Performance of artificial intelligence in evaluating maxillary sinus mucosal alterations in imaging examinations: systematic review.

Moreira GC, do Carmo Ribeiro CS, Verner FS, Lemos CAA

PubMed · Jul 1, 2025
This systematic review aimed to assess the performance of artificial intelligence (AI) in the evaluation of maxillary sinus (MS) mucosal alterations in imaging examinations compared to human analysis. Studies that presented radiographic images for the diagnosis of paranasal sinus diseases, as well as control groups for AI, were included. Articles were excluded if they performed tests on animals, addressed other conditions or surgical methods, did not present data on the diagnosis of MS alterations or on the outcomes of interest (area under the curve, sensitivity, specificity, and accuracy), or compared outcomes only among different AIs. Searches were conducted in 5 electronic databases and in gray literature. The risk of bias (RB) was assessed using QUADAS-2 and the certainty of evidence using GRADE. Six studies were included. All were retrospective observational studies, with serious RB and considerable methodological heterogeneity. AI performed similarly to humans; however, imprecision was rated serious for the outcomes, and the certainty of evidence was classified as very low according to the GRADE approach. Furthermore, a dose-response effect was observed, as specialists demonstrated greater mastery of MS diagnosis than residents or general clinicians. Considering these outcomes, AI represents a complementary tool for assessing maxillary sinus mucosal alterations, especially for professionals with less experience. Finally, performance analysis and the definition of comparison parameters should be encouraged in future research. AI is a potential complementary tool for assessing maxillary sinus mucosal alterations; however, studies still lack methodological standardization.

Patient-specific deep learning tracking for real-time 2D pancreas localisation in kV-guided radiotherapy.

Ahmed AM, Madden L, Stewart M, Chow BVY, Mylonas A, Brown R, Metz G, Shepherd M, Coronel C, Ambrose L, Turk A, Crispin M, Kneebone A, Hruby G, Keall P, Booth JT

PubMed · Jul 1, 2025
In pancreatic stereotactic body radiotherapy (SBRT), accurate motion management is crucial for the safe delivery of high doses per fraction. Intra-fraction tracking with magnetic resonance imaging guidance for gated SBRT has shown potential for improved local control. Visualisation of the pancreas (and surrounding organs) remains challenging in intra-fraction kilovoltage (kV) imaging, requiring implanted fiducials. In this study, we investigate patient-specific deep-learning approaches to track the gross tumour volume (GTV), pancreas head, and whole pancreas in intra-fraction kV images. Conditional generative adversarial networks were trained and tested on data from 25 patients enrolled in an ethics-approved pancreatic SBRT trial for contour prediction on intra-fraction 2D kV images. Labelled digitally reconstructed radiographs (DRRs) were generated from contoured planning CTs (CT-DRRs) and cone-beam CTs (CBCT-DRRs). A population model was trained using CT-DRRs of 19 patients. Two patient-specific model types were created for six additional patients by fine-tuning the population model using CBCT-DRRs (CBCT-models) or CT-DRRs (CT-models) acquired in exhale breath-hold. Model predictions on unseen triggered-kV images from the corresponding six patients were evaluated against projected contours using the Dice similarity coefficient (DSC), centroid error (CE), average Hausdorff distance (AHD), and 95th-percentile Hausdorff distance (HD95). The mean ± 1 SD (standard deviation) DSCs were 0.86 ± 0.09 (CBCT-models) and 0.78 ± 0.12 (CT-models). For AHD and CE, the CBCT-models predicted contours within 2.0 mm ≥90.3% of the time, while HD95 was within 5.0 mm ≥90.0% of the time, with a prediction time of 29.2 ± 3.7 ms per contour. The patient-specific CBCT-models outperformed the CT-models and predicted the three contours with 90th-percentile error ≤2.0 mm, indicating the potential for clinical real-time application.
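
The contour-agreement metrics named above (DSC, CE, AHD, HD95) are standard; as a rough orientation, here is a minimal NumPy/SciPy sketch of how they are commonly computed from binary masks and contour point sets. This is an illustrative sketch, not the authors' implementation.

```python
# Minimal sketch of the contour metrics above (DSC, CE, AHD, HD95), computed
# from binary masks / contour point sets. Illustrative only; not the authors' code.
import numpy as np
from scipy.ndimage import center_of_mass
from scipy.spatial.distance import cdist

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def centroid_error(pred, gt):
    """Euclidean distance between binary-mask centroids, in pixels."""
    return float(np.linalg.norm(np.subtract(center_of_mass(pred), center_of_mass(gt))))

def average_hausdorff(pred_pts, gt_pts):
    """Symmetric average Hausdorff distance between contour point sets (N, 2)."""
    d = cdist(pred_pts, gt_pts)                  # all pairwise distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def hausdorff_95(pred_pts, gt_pts):
    """95th-percentile symmetric Hausdorff distance."""
    d = cdist(pred_pts, gt_pts)
    return max(np.percentile(d.min(axis=1), 95), np.percentile(d.min(axis=0), 95))
```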

An efficient attention Densenet with LSTM for lung disease detection and classification using X-ray images supported by adaptive R2-Unet-based image segmentation.

Betha SK, Dev DR, Sunkara K, Kodavanti PV, Putta A

PubMed · Jul 1, 2025
Lung diseases represent one of the most prevalent health challenges globally, necessitating accurate diagnosis to improve patient outcomes. This work presents a novel deep learning-aided lung disease classification framework comprising three key phases: image acquisition, segmentation, and classification. Initially, chest X-ray images are obtained from standard datasets. The lung regions are segmented using an Adaptive Recurrent Residual U-Net (AR2-UNet), whose parameters are optimised using the Enhanced Pufferfish Optimisation Algorithm (EPOA) to enhance segmentation accuracy. The segmented images are processed by an Attention-based DenseNet with Long Short-Term Memory (ADNet-LSTM) for robust categorisation. Experimental results demonstrate that the proposed model achieves the highest classification accuracy of 93.92%, significantly outperforming several baseline models, including ResNet with 90.77%, Inception with 89.55%, DenseNet with 89.66%, and Long Short-Term Memory (LSTM) with 91.79%. Thus, the proposed framework offers a dependable and efficient solution for lung disease detection, supporting clinicians in early and accurate diagnosis.
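
The abstract does not specify the ADNet-LSTM internals, so the PyTorch sketch below is only one plausible reading: a DenseNet feature extractor with a spatial-attention gate whose feature map is unrolled into a sequence for an LSTM classifier. All layer sizes, the class count, and the module names are assumptions.

```python
# Hedged sketch of an attention-augmented DenseNet feeding an LSTM classifier,
# in the spirit of the ADNet-LSTM described above. Sizes are assumptions.
import torch
import torch.nn as nn
from torchvision.models import densenet121

class ADNetLSTM(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.backbone = densenet121(weights=None).features        # -> (B, 1024, H', W')
        self.attn = nn.Sequential(nn.Conv2d(1024, 1, kernel_size=1), nn.Sigmoid())
        self.lstm = nn.LSTM(input_size=1024, hidden_size=256, batch_first=True)
        self.head = nn.Linear(256, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.backbone(x)
        f = f * self.attn(f)                       # spatial attention reweighting
        seq = f.flatten(2).transpose(1, 2)         # (B, H'*W', 1024): positions as a sequence
        _, (hn, _) = self.lstm(seq)                # last hidden state summarises the sequence
        return self.head(hn[-1])

logits = ADNetLSTM()(torch.randn(2, 3, 224, 224))  # e.g. shape (2, 4)
```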

Improving Tuberculosis Detection in Chest X-Ray Images Through Transfer Learning and Deep Learning: Comparative Study of Convolutional Neural Network Architectures.

Mirugwe A, Tamale L, Nyirenda J

PubMed · Jul 1, 2025
Tuberculosis (TB) remains a significant global health challenge, as current diagnostic methods are often resource-intensive, time-consuming, and inaccessible in many high-burden communities, necessitating more efficient and accurate diagnostic methods to improve early detection and treatment outcomes. This study aimed to evaluate the performance of 6 convolutional neural network architectures (Visual Geometry Group-16 [VGG16], VGG19, Residual Network-50 [ResNet50], ResNet101, ResNet152, and Inception-ResNet-V2) in classifying chest x-ray (CXR) images as either normal or TB-positive. The impact of data augmentation on model performance, training times, and parameter counts was also assessed. The dataset of 4200 CXR images, comprising 700 labeled as TB-positive and 3500 as normal cases, was used to train and test the models. Evaluation metrics included accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve. The computational efficiency of each model was analyzed by comparing training times and parameter counts. VGG16 outperformed the other architectures, achieving an accuracy of 99.4%, precision of 97.9%, recall of 98.6%, F1-score of 98.3%, and area under the receiver operating characteristic curve of 98.25%. This superior performance is significant because it demonstrates that a simpler model can deliver exceptional diagnostic accuracy while requiring fewer computational resources. Surprisingly, data augmentation did not improve performance, suggesting that the original dataset's diversity was sufficient. Models with large numbers of parameters, such as ResNet152 and Inception-ResNet-V2, required longer training times without yielding proportionally better performance. Simpler models like VGG16 offer a favorable balance between diagnostic accuracy and computational efficiency for TB detection in CXR images. These findings highlight the need to tailor model selection to task-specific requirements, providing valuable insights for future research and clinical implementations in medical image classification.
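
As a hedged sketch of the transfer-learning setup such a comparison implies (an ImageNet-pretrained VGG16 with a replaced binary head), here is one way it might look in PyTorch. The freezing policy, optimiser, and learning rate are assumptions, not the study's settings.

```python
# Minimal transfer-learning sketch: ImageNet weights, frozen convolutional
# base, new 2-way head for normal vs. TB-positive. Hyperparameters assumed.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                      # freeze the convolutional base
model.classifier[6] = nn.Linear(4096, 2)         # replace the 1000-way ImageNet head

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```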

Does alignment alone predict mechanical complications after adult spinal deformity surgery? A machine learning comparison of alignment, bone quality, and soft tissue.

Sundrani S, Doss DJ, Johnson GW, Jain H, Zakieh O, Wegner AM, Lugo-Pico JG, Abtahi AM, Stephens BF, Zuckerman SL

PubMed · Jul 1, 2025
Mechanical complications are a vexing occurrence after adult spinal deformity (ASD) surgery. While achieving ideal spinal alignment in ASD surgery is critical, alignment alone may not fully explain all mechanical complications. The authors sought to determine which combination of inputs produced the most sensitive and specific machine learning model to predict mechanical complications using postoperative alignment, bone quality, and soft tissue data. A retrospective cohort study was performed in patients undergoing ASD surgery from 2009 to 2021. Inclusion criteria were a fusion ≥ 5 levels, sagittal/coronal deformity, and at least 2 years of follow-up. The primary exposure variables were 1) alignment, evaluated in both the sagittal and coronal planes using the L1-pelvic angle ± 3°, L4-S1 lordosis, sagittal vertical axis, pelvic tilt, and coronal vertical axis; 2) bone quality, evaluated by the T-score from a dual-energy x-ray absorptiometry scan; and 3) soft tissue, evaluated by the paraspinal muscle-to-vertebral body ratio and fatty infiltration. The primary outcome was mechanical complications. With demographic data included alongside each model's inputs, 7 machine learning models, one for each combination of the three domains (alignment, bone quality, and soft tissue), were trained. The positive predictive value (PPV) was calculated for each model. Of 231 patients (24% male) undergoing ASD surgery with a mean age of 64 ± 17 years, 147 (64%) developed at least one mechanical complication. The model with alignment alone performed poorly, with a PPV of 0.85. However, the model with alignment, bone quality, and soft tissue achieved a high PPV of 0.90, sensitivity of 0.67, and specificity of 0.84. Moreover, the model with alignment alone failed to predict 15 of every 100 complications, whereas the model with all three domains failed to predict only 10 of 100. These results support the notion that not every mechanical failure is explained by alignment alone. The authors found that a combination of alignment, bone quality, and soft tissue provided the most accurate prediction of mechanical complications after ASD surgery. While achieving optimal alignment is essential, additional data including bone and soft tissue are necessary to minimize mechanical complications.
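
Three domains yield 2^3 - 1 = 7 non-empty combinations, which matches the 7 models trained. A hedged sketch of that sweep and of the reported metrics (PPV, sensitivity, specificity) follows; the feature-column names are hypothetical placeholders, and the classifier itself is left out.

```python
# Hedged sketch: sweep every non-empty combination of the three feature domains
# (7 in total) and summarise with PPV/sensitivity/specificity from confusion
# counts. Column names are hypothetical placeholders, not the study's variables.
from itertools import combinations

DOMAINS = {
    "alignment": ["l1_pelvic_angle", "l4_s1_lordosis", "sva", "pelvic_tilt", "cva"],
    "bone_quality": ["dexa_t_score"],
    "soft_tissue": ["muscle_vb_ratio", "fatty_infiltration"],
}

def metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """PPV, sensitivity, specificity from confusion-matrix counts."""
    return {"ppv": tp / (tp + fp),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp)}

for r in range(1, len(DOMAINS) + 1):
    for combo in combinations(DOMAINS, r):       # 2^3 - 1 = 7 combinations
        cols = [c for d in combo for c in DOMAINS[d]]
        # ... train a classifier on `cols` plus demographics, get held-out
        # confusion counts, then summarise with metrics(tp, fp, tn, fn)
        print(combo, "->", len(cols), "features")
```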

ResNet-Transformer deep learning model-aided detection of dens evaginatus.

Wang S, Liu J, Li S, He P, Zhou X, Zhao Z, Zheng L

PubMed · Jul 1, 2025
Dens evaginatus is a dental morphological developmental anomaly. Failing to detect it may lead to tubercle fracture and pulpal/periapical disease. Consequently, early detection and intervention for dens evaginatus are essential to preserve vital pulp. This study aimed to develop a deep learning model to assist dentists in the early diagnosis of dens evaginatus, thereby supporting early intervention and mitigating the risk of severe consequences. In this study, a deep learning model was developed utilizing panoramic radiograph images sourced from 1410 patients aged 3-16 years, with high-quality annotations to enable the automatic detection of dens evaginatus. Model performance and the model's efficacy in aiding dentists were evaluated. The findings indicated that the current deep learning model demonstrated commendable sensitivity (0.8600) and specificity (0.9200), outperforming dentists in detecting dens evaginatus with an F1-score of 0.8866 compared to their average F1-score of 0.8780, indicating that the model could detect dens evaginatus with greater precision. Furthermore, with its support, young dentists heightened their focus on dens evaginatus in tooth germs and achieved improved diagnostic accuracy. Based on these results, the integration of deep learning for dens evaginatus detection holds significance and can augment dentists' proficiency in identifying this anomaly.
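
The abstract names the architecture only as "ResNet-Transformer"; a common pattern behind that name is a ResNet backbone whose feature map is flattened into tokens for a Transformer encoder. The PyTorch sketch below illustrates that pattern under stated assumptions; every dimension and the pooling choice are guesses, not the paper's design.

```python
# Hedged sketch of a ResNet-backbone + Transformer-encoder classifier of the
# kind the title suggests. All dimensions below are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class ResNetTransformer(nn.Module):
    def __init__(self, d_model: int = 256, num_classes: int = 2):
        super().__init__()
        backbone = resnet50(weights=None)
        self.stem = nn.Sequential(*list(backbone.children())[:-2])  # (B, 2048, H', W')
        self.proj = nn.Conv2d(2048, d_model, kernel_size=1)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x):
        tokens = self.proj(self.stem(x)).flatten(2).transpose(1, 2)  # (B, H'*W', d)
        return self.head(self.encoder(tokens).mean(dim=1))  # pooled tokens -> logits

logits = ResNetTransformer()(torch.randn(1, 3, 224, 224))
```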

Deep learning algorithm enables automated Cobb angle measurements with high accuracy.

Hayashi D, Regnard NE, Ventre J, Marty V, Clovis L, Lim L, Nitche N, Zhang Z, Tournier A, Ducarouge A, Kompel AJ, Tannoury C, Guermazi A

PubMed · Jul 1, 2025
To determine the accuracy of automatic Cobb angle measurements by deep learning (DL) on full spine radiographs. Full spine radiographs of patients aged > 2 years were screened using the radiology reports to identify radiographs suitable for Cobb angle measurement. Two senior musculoskeletal radiologists and one senior orthopedic surgeon independently annotated Cobb angles exceeding 7°, indicating the angle location as either proximal thoracic (apices between T3 and T5), main thoracic (apices between T6 and T11), or thoracolumbar (apices between T12 and L4). If at least two readers agreed on the number of angles and their locations, and the difference between comparable angles was < 8°, the ground truth was defined as the mean of their measurements. Otherwise, the radiographs were reviewed by the three annotators in consensus. The DL software (BoneMetrics, Gleamer) was evaluated against the manual annotation in terms of mean absolute error (MAE). A total of 345 patients were included in the study (age 33 ± 24 years, 221 women): 179 pediatric patients (< 22 years old) and 166 adult patients (22 to 85 years old). Fifty-three cases were reviewed in consensus. The MAE of the DL algorithm for the main curvature was 2.6° (95% CI [2.0; 3.3]). For the subgroup of pediatric patients, the MAE was 1.9° (95% CI [1.6; 2.2]) versus 3.3° (95% CI [2.2; 4.8]) for adults. The DL algorithm predicted the Cobb angle of scoliotic patients with high accuracy.
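
The headline numbers are MAEs with 95% confidence intervals; a minimal sketch of how such an interval is commonly obtained by bootstrapping per-case absolute errors is shown below, using placeholder data rather than the study's measurements.

```python
# Sketch of the reported summary statistic: mean absolute error between
# predicted and reference Cobb angles with a bootstrap 95% CI. The angle
# arrays below are synthetic placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.uniform(10, 60, size=345)            # placeholder reference angles (deg)
pred = truth + rng.normal(0, 3, size=345)        # placeholder DL predictions (deg)

errors = np.abs(pred - truth)                    # per-case absolute error
mae = errors.mean()
boot = np.array([rng.choice(errors, size=errors.size).mean()
                 for _ in range(10_000)])        # resampled MAEs
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"MAE {mae:.1f} deg (95% CI [{lo:.1f}; {hi:.1f}])")
```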

CXR-LLaVA: a multimodal large language model for interpreting chest X-ray images.

Lee S, Youn J, Kim H, Kim M, Yoon SH

PubMed · Jul 1, 2025
This study aimed to develop an open-source multimodal large language model (CXR-LLaVA) for interpreting chest X-ray images (CXRs), leveraging recent advances in large language models (LLMs) to potentially replicate the image interpretation skills of human radiologists. For training, we collected 592,580 publicly available CXRs, of which 374,881 had labels for certain radiographic abnormalities (Dataset 1) and 217,699 provided free-text radiology reports (Dataset 2). After pre-training a vision transformer with Dataset 1, we integrated it with an LLM influenced by the LLaVA network. Then, the model was fine-tuned, primarily using Dataset 2. The model's diagnostic performance for major pathological findings was evaluated, along with the acceptability of radiologic reports by human radiologists, to gauge its potential for autonomous reporting. The model demonstrated impressive performance in test sets, achieving an average F1 score of 0.81 for six major pathological findings in the MIMIC internal test set and 0.56 for six major pathological findings in the external test set. The model's F1 scores surpassed those of GPT-4-vision and Gemini-Pro-Vision in both test sets. In human radiologist evaluations of the external test set, the model achieved a 72.7% success rate in autonomous reporting, slightly below the 84.0% rate of ground truth reports. This study highlights the significant potential of multimodal LLMs for CXR interpretation, while also acknowledging the performance limitations. Despite these challenges, we believe that making our model open-source will catalyze further research, expanding its effectiveness and applicability in various clinical contexts. Question: How can a multimodal large language model be adapted to interpret chest X-rays and generate radiologic reports? Findings: The developed CXR-LLaVA model effectively detects major pathological findings in chest X-rays and generates radiologic reports with higher accuracy compared to general-purpose models. Clinical relevance: This study demonstrates the potential of multimodal large language models to support radiologists by autonomously generating chest X-ray reports, potentially reducing diagnostic workloads and improving radiologist efficiency.
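
The LLaVA recipe the authors follow connects a vision encoder to an LLM through a learned projection, so that image patches become pseudo-tokens in the LLM's embedding space. A minimal sketch of that wiring is below; the dimensions, module names, and the simple linear projector are assumptions about the general recipe, not details from the paper.

```python
# Minimal sketch of LLaVA-style wiring: a vision transformer encodes the CXR,
# a linear projector maps patch embeddings into the LLM's token space, and the
# projected tokens are prepended to the text embeddings. Sizes are assumptions.
import torch
import torch.nn as nn

class VisionToLLMProjector(nn.Module):
    def __init__(self, vit_dim: int = 768, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Linear(vit_dim, llm_dim)

    def forward(self, patch_embeds: torch.Tensor, text_embeds: torch.Tensor):
        img_tokens = self.proj(patch_embeds)            # (B, N_patches, llm_dim)
        return torch.cat([img_tokens, text_embeds], 1)  # image tokens before text

fused = VisionToLLMProjector()(torch.randn(1, 196, 768), torch.randn(1, 32, 4096))
print(fused.shape)  # (1, 228, 4096): sequence fed to the LLM
```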

Automated Scoliosis Cobb Angle Classification in Biplanar Radiograph Imaging With Explainable Machine Learning Models.

Yu J, Lahoti YS, McCandless KC, Namiri NK, Miyasaka MS, Ahmed H, Song J, Corvi JJ, Berman DC, Cho SK, Kim JS

PubMed · Jul 1, 2025
Retrospective cohort study. To quantify the pathology of the spine in patients with scoliosis through one-dimensional feature analysis. Biplanar radiograph (EOS) imaging is a low-dose technology offering high-resolution spinal curvature measurement, crucial for assessing scoliosis severity and guiding treatment decisions. Machine learning (ML) algorithms, utilizing one-dimensional image features, can enable automated Cobb angle classification, improving accuracy and efficiency in scoliosis evaluation while reducing the need for manual measurements, thus supporting clinical decision-making. This study used 816 annotated AP EOS spinal images, each with a spine segmentation mask and a 10th-degree polynomial representing curvature. Engineered features included the first and second derivatives, Fourier transform, and curve energy, normalized for robustness. XGBoost selected the top 32 features. The models classified scoliosis into multiple groups based on curvature degree, measured through the Cobb angle. To address class imbalance, stratified sampling, undersampling, and oversampling techniques were used, with 10-fold stratified K-fold cross-validation for generalization. An automatic grid search was used for hyperparameter optimization, with K-fold cross-validation (K=3). The top-performing model was Random Forest, achieving an ROC AUC of 91.8%. An accuracy of 86.1%, precision of 86.0%, recall of 86.0%, and an F1 score of 85.1% were also achieved. Of the three techniques used to address class imbalance, stratified sampling produced the best out-of-sample results. SHAP values were generated for the top 20 features, including spine curve length and linear regression error, with the most predictive features ranked at the top, enhancing model explainability. Feature engineering with classical ML methods offers an effective approach for classifying scoliosis severity based on Cobb angle ranges. The high interpretability of features in representing spinal pathology, along with the ease of use of classical ML techniques, makes this an attractive solution for developing automated tools to manage complex spinal measurements.
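
A hedged sketch of the one-dimensional feature engineering described above: fit a 10th-degree polynomial to the spine midline extracted from the segmentation mask, then derive first and second derivatives, low-frequency Fourier magnitudes, and a curve-energy term. The midline here is synthetic placeholder data, and the exact feature definitions are assumptions.

```python
# Hedged sketch of the 1D curve features: a 10th-degree polynomial fit to the
# spine midline, its first/second derivatives, low-frequency Fourier
# magnitudes, and a bending-energy term. Midline data is a placeholder.
import numpy as np

y = np.linspace(0, 1, 200)                  # normalised cranio-caudal position
x = 0.05 * np.sin(3 * np.pi * y)            # placeholder lateral midline offsets

t = 2 * y - 1                               # rescale to [-1, 1] for a stable fit
coef = np.polyfit(t, x, deg=10)             # 10th-degree polynomial fit
p = np.poly1d(coef)
d1, d2 = p.deriv(1)(t), p.deriv(2)(t)       # first and second derivatives
fourier_mag = np.abs(np.fft.rfft(p(t)))[:8]             # low-frequency content
curve_energy = float(np.sum(d2 ** 2) * (t[1] - t[0]))   # integral of d2^2

features = np.concatenate([[curve_energy], d1[::25], d2[::25], fourier_mag])
print(features.shape)                       # (25,) feature vector per image
```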

The Chest X-Ray: The Ship has Sailed, But Has It?

Iacovino JR

PubMed · Jul 1, 2025
In the past, the chest X-ray (CXR) was a traditional age-and-amount requirement used to assess potential mortality risk in life insurance applicants. It fell out of favor due to inconvenience to the applicant, cost, and lack of protective value. With the advent of deep learning techniques, can the results of the CXR, as a requirement, now add value to underwriting risk analysis?