Page 134 of 3473463 results

Development of a deep learning model for predicting skeletal muscle density from ultrasound data: a proof-of-concept study.

Pistoia F, Macciò M, Picasso R, Zaottini F, Marcenaro G, Rinaldi S, Bianco D, Rossi G, Tovt L, Pansecchi M, Sanguinetti S, Hamedani M, Schenone A, Martinoli C

pubmed logopapersJul 8 2025
Reduced muscle mass and function are associated with increased morbidity and mortality. Ultrasound, despite being cost-effective and portable, is still underutilized in muscle trophism assessment because of its reliance on operator expertise and measurement variability. This proof-of-concept study aimed to overcome these limitations by developing a deep learning model that predicts muscle density, as assessed by CT, from ultrasound data, exploring the feasibility of a novel ultrasound-based parameter for muscle trophism. A sample of adult participants undergoing CT examination in our institution's emergency department between May 2022 and March 2023 was enrolled in this single-center study. Ultrasound examinations were performed with an L11-3 MHz probe. The rectus abdominis muscles, selected as target muscles, were scanned in the transverse plane, recording one ultrasound image per side. For each participant, the same operator calculated the average target muscle density in Hounsfield units from an axial CT slice closely matching the ultrasound scanning plane. The final dataset included 1090 ultrasound images from 551 participants (mean age 67 ± 17 years; 323 males). A deep learning model was developed to classify ultrasound images into three muscle-density classes based on CT values. The model achieved promising performance, with a categorical accuracy of 70% and AUC values of 0.89, 0.79, and 0.90 across the three classes. This observational study introduces an innovative approach to automated muscle trophism assessment using ultrasound imaging. Future efforts should focus on external validation in diverse populations and clinical settings, as well as on expanding the approach to other muscles.
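Per-class AUC values like the three reported above are typically computed one-vs-rest. A minimal numpy sketch of that computation, using hypothetical scores and labels rather than the study's data:

```python
import numpy as np

def auc_one_vs_rest(scores, labels, positive_class):
    """AUC of `scores` for one class against the rest, via the
    Mann-Whitney U statistic (ties count as half a win)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == positive_class]
    neg = scores[labels != positive_class]
    # fraction of (positive, negative) pairs where the positive score ranks higher
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Hypothetical predicted probabilities of class 0, and true class labels (0, 1, 2)
probs_class0 = [0.9, 0.8, 0.3, 0.2, 0.6]
labels = [0, 0, 1, 2, 1]
print(auc_one_vs_rest(probs_class0, labels, positive_class=0))  # → 1.0
```

Repeating this once per class (with that class's probability as the score) yields a vector of per-class AUCs like the 0.89/0.79/0.90 quoted in the abstract.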

MTMedFormer: multi-task vision transformer for medical imaging with federated learning.

Nath A, Shukla S, Gupta P

pubmed logopapersJul 8 2025
Deep learning has revolutionized medical imaging, improving tasks like image segmentation, detection, and classification, often surpassing human accuracy. However, the training of effective diagnostic models is hindered by two major challenges: the need for large datasets for each task and privacy laws restricting the sharing of medical data. Multi-task learning (MTL) addresses the first challenge by enabling a single model to perform multiple tasks, though convolution-based MTL models struggle with contextualizing global features. Federated learning (FL) helps overcome the second challenge by allowing models to train collaboratively without sharing data, but traditional methods struggle to aggregate stable feature maps due to the permutation-invariant nature of neural networks. To tackle these issues, we propose MTMedFormer, a transformer-based multi-task medical imaging model. We leverage the transformers' ability to learn task-agnostic features using a shared encoder and utilize task-specific decoders for robust feature extraction. By combining MTL with a hybrid loss function, MTMedFormer learns distinct diagnostic tasks in a synergistic manner. Additionally, we introduce a novel Bayesian federation method for aggregating multi-task imaging models. Our results show that MTMedFormer outperforms traditional single-task and MTL models on mammogram and pneumonia datasets, while our Bayesian federation method surpasses traditional methods in image segmentation.
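The shared-encoder / task-specific-decoder pattern described above can be sketched in a few lines of numpy. This is a toy illustration of the idea only; MTMedFormer itself uses transformer blocks, a hybrid loss, and Bayesian federated aggregation, none of which are reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared encoder: maps a flattened "image" to a task-agnostic feature vector
W_enc = rng.standard_normal((64, 16)) * 0.1

# Task-specific heads: a 2-class diagnosis head and a segmentation-like head
W_cls = rng.standard_normal((16, 2)) * 0.1   # classification logits
W_seg = rng.standard_normal((16, 64)) * 0.1  # per-pixel score map

def forward(x):
    h = np.tanh(x @ W_enc)          # shared, task-agnostic representation
    return h @ W_cls, h @ W_seg     # task-specific outputs from the same features

x = rng.standard_normal((4, 64))    # batch of 4 toy flattened images
logits, seg_map = forward(x)
print(logits.shape, seg_map.shape)  # (4, 2) (4, 64)
```

Because both heads read the same representation, gradients from every task shape the shared encoder, which is the mechanism by which MTL lets related diagnostic tasks reinforce each other.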

Deep learning 3D super-resolution radiomics model based on Gd-enhanced MRI for improving preoperative prediction of HCC pathological grading.

Jia F, Wu B, Wang Z, Jiang J, Liu J, Liu Y, Zhou Y, Zhao X, Yang W, Xiong Y, Jiang Y, Zhang J

pubmed logopapersJul 8 2025
The histological grade of hepatocellular carcinoma (HCC) is an important factor associated with early tumor recurrence and prognosis after surgery. Developing a valuable tool to assess this grade is essential for treatment planning. This study aimed to evaluate the feasibility and efficacy of a deep learning-based three-dimensional super-resolution (SR) magnetic resonance imaging radiomics model for predicting the pathological grade of HCC. A total of 197 HCC patients were included and divided into a training cohort (n = 157) and a testing cohort (n = 40). Three-dimensional SR technology based on deep learning was used to obtain SR hepatobiliary phase (HBP) images from normal-resolution (NR) HBP images. High-dimensional quantitative features were extracted from manually segmented volumes of interest in NRHBP and SRHBP images. Gradient boosting, light gradient boosting machine, and support vector machine classifiers were used to develop three-class (well- vs. moderately vs. poorly differentiated) and binary (well-differentiated vs. moderately and poorly differentiated) radiomics models, and the predictive performance of these models was evaluated using several measures. All three-class models using SRHBP images had higher area under the curve (AUC) values than those using NRHBP images. The binary classification models developed with SRHBP images also outperformed those with NRHBP images in distinguishing moderately and poorly differentiated HCC from well-differentiated HCC (AUC = 0.849, sensitivity = 77.8%, specificity = 76.9%, accuracy = 77.5% vs. AUC = 0.603, sensitivity = 48.1%, specificity = 76.9%, accuracy = 57.5%; p = 0.039). Decision curve analysis revealed the clinical value of the models. Deep learning-based three-dimensional SR technology may improve the performance of radiomics models using HBP images for predicting the preoperative pathological grade of HCC.
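The sensitivity, specificity, and accuracy figures quoted above follow from a standard binary confusion matrix. A short numpy sketch with made-up labels (1 = moderately/poorly differentiated, 0 = well differentiated), not the study's data:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy for binary labels (1 = positive)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / y_true.size
    return sensitivity, specificity, accuracy

# Hypothetical predictions on 5 toy cases
sens, spec, acc = binary_metrics([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
```

Note that the two models compared in the abstract share the same specificity (76.9%), so the reported gain comes almost entirely from sensitivity, i.e., from catching more moderately/poorly differentiated tumors.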

A fully automated deep learning framework for age estimation in adults using periapical radiographs of canine teeth.

Upalananda W, Phisutphithayakun C, Assawasuksant P, Tanwattana P, Prasatkaew P

pubmed logopapersJul 8 2025
Determining age from dental remains is vital in forensic investigations, aiding victim identification and anthropological research. Our framework uses a two-step pipeline, tooth detection followed by age estimation, based on canine tooth images alone or combined with sex information. The dataset included 2,587 radiographs from 1,004 patients (691 females, 313 males) aged 13.42-85.45 years. The YOLOv8-Nano model achieved exceptional performance in detecting canine teeth, with an F1 score of 0.994, a 98.94% detection success rate, and accurate numbering of all detected teeth. For age estimation, we implemented four convolutional neural network architectures: ResNet-18, DenseNet-121, EfficientNet-B0, and MobileNetV3. Each model was trained to estimate age from one of the four individual canine teeth (13, 23, 33, and 43). The models achieved median absolute errors ranging from 3.55 to 5.18 years. Incorporating sex as an additional input feature did not improve performance. Moreover, no significant differences in predictive accuracy were observed among the individual teeth. In conclusion, the proposed framework demonstrates potential as a robust and practical tool for age estimation across diverse forensic contexts.
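The headline metric here, median absolute error, is robust to the occasional badly mis-aged case because it ignores the tails of the error distribution. A minimal numpy sketch with hypothetical ages (not the study's data):

```python
import numpy as np

def median_absolute_error(age_true, age_pred):
    """Median of the absolute age-prediction errors, in years."""
    errors = np.abs(np.asarray(age_true, dtype=float) - np.asarray(age_pred, dtype=float))
    return float(np.median(errors))

# Hypothetical ages (years): ground truth vs. model output for one canine tooth
print(median_absolute_error([25.0, 40.0, 63.0, 71.0], [29.5, 36.0, 64.0, 75.0]))  # → 4.0
```

A value in the 3.55-5.18-year range, as reported above, means half of the test cases were aged to within roughly four to five years of the truth.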

Novel UNet-SegNet and vision transformer architectures for efficient segmentation and classification in medical imaging.

Tongbram S, Shimray BA, Singh LS

pubmed logopapersJul 8 2025
Medical imaging has become an essential tool in the diagnosis and treatment of various diseases, and provides critical insights through ultrasound, MRI, and X-ray modalities. Despite its importance, challenges remain in the accurate segmentation and classification of complex structures owing to factors such as low contrast, noise, and irregular anatomical shapes. This study addresses these challenges by proposing a novel hybrid deep learning model that integrates the strengths of Convolutional Autoencoders (CAE), UNet, and SegNet architectures. In the preprocessing phase, a Convolutional Autoencoder is used to effectively reduce noise while preserving essential image details, ensuring that the images used for segmentation and classification are of high quality. The ability of the CAE to denoise images while retaining critical features enhances the accuracy of the subsequent analysis. The developed model employs UNet for multiscale feature extraction and SegNet for precise boundary reconstruction, with Dynamic Feature Fusion integrated at each skip connection to dynamically weight and combine the feature maps from the encoder and decoder. This ensures that both global and local features are effectively captured, while emphasizing the critical regions for segmentation. To further enhance the model's performance, the Hybrid Emperor Penguin Optimizer (HEPO) was employed for feature selection, while the Hybrid Vision Transformer with Convolutional Embedding (HyViT-CE) was used for the classification task. This hybrid approach allows the model to maintain high accuracy across different medical imaging tasks. The model was evaluated using three major datasets: brain tumor MRI, breast ultrasound, and chest X-rays. The results demonstrate exceptional performance, achieving an accuracy of 99.92% for brain tumor segmentation, 99.67% for breast cancer detection, and 99.93% for chest X-ray classification. These outcomes highlight the ability of the model to deliver reliable and accurate diagnostics across various medical contexts, underscoring its potential as a valuable tool in clinical settings. The findings of this study will contribute to advancing deep learning applications in medical imaging, addressing existing research gaps, and offering a robust solution for improved patient care.
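The idea of dynamically weighting encoder and decoder feature maps at a skip connection can be illustrated with a toy gated combination. This is our simplification with a hand-written sigmoid gate; the paper's fusion weights are learned, and its gate formulation may differ:

```python
import numpy as np

def dynamic_fusion(enc_feat, dec_feat):
    """Toy gated fusion at a skip connection: a sigmoid gate computed from
    both feature maps decides, per spatial position, how much of the encoder
    vs. decoder map to keep (a sketch of dynamic weighting, not the paper's
    exact formulation)."""
    gate = 1.0 / (1.0 + np.exp(-(enc_feat + dec_feat).mean(axis=-1, keepdims=True)))
    return gate * enc_feat + (1.0 - gate) * dec_feat

enc = np.ones((2, 4))    # toy encoder features: 2 spatial positions, 4 channels
dec = np.zeros((2, 4))   # toy decoder features
fused = dynamic_fusion(enc, dec)
```

Because the gate is a convex weight in (0, 1), each fused value lies between the corresponding encoder and decoder activations, letting the network lean on whichever stream is more informative at each position.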

AI lesion tracking in PET/CT imaging: a proposal for a Siamese-based CNN pipeline applied to PSMA PET/CT scans.

Hein SP, Schultheiss M, Gafita A, Zaum R, Yagubbayli F, Tauber R, Rauscher I, Eiber M, Pfeiffer F, Weber WA

pubmed logopapersJul 8 2025
Assessing tumor response to systemic therapies is one of the main applications of PET/CT. Routinely, only a small subset of index lesions out of multiple lesions is analyzed. However, this operator-dependent selection may bias the results because of possibly significant inter-metastatic heterogeneity of response to therapy. Automated, AI-based approaches to lesion tracking hold promise for analyzing many more lesions and thus providing a better assessment of tumor response. This work introduces a Siamese CNN approach for lesion tracking between PET/CT scans. Our approach is applied to the laborious task of tracking a high number of bone lesions in full-body baseline and follow-up [68Ga]Ga- or [18F]F-PSMA PET/CT scans after two cycles of [177Lu]Lu-PSMA therapy in patients with metastatic castration-resistant prostate cancer. Data preparation includes lesion segmentation and affine registration. Our algorithm extracts suitable lesion patches and forwards them into a Siamese CNN trained to classify the lesion patch pairs as corresponding or non-corresponding lesions. Experiments were performed with different input patch types and with the Siamese network in 2D and 3D. The CNN model successfully learned to classify lesion assignments, reaching an accuracy of 83% in its best configuration, with an AUC of 0.91. For corresponding lesions, the pipeline achieved a lesion-tracking accuracy of 89%. We showed that a CNN can facilitate the tracking of multiple lesions in PSMA PET/CT scans. Future clinical studies are needed to determine whether this improves the prediction of therapy outcomes.
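The core pairing step, one shared-weight branch embedding each lesion patch so that similar patches land close together, can be sketched as follows. The weights here are random and untrained, the patches are toy arrays, and a distance threshold stands in for the trained classification head the paper actually uses:

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.standard_normal((128, 8)) * 0.1   # shared weights used by BOTH branches

def embed(patch):
    """Shared-weight branch: flatten the lesion patch and project it."""
    return np.tanh(patch.reshape(-1) @ W)

def same_lesion(patch_a, patch_b, threshold=1.0):
    """Pairs whose embeddings lie close together are called 'corresponding'."""
    return bool(np.linalg.norm(embed(patch_a) - embed(patch_b)) < threshold)

baseline = rng.standard_normal((16, 8))                    # toy baseline lesion patch
followup = baseline + 0.01 * rng.standard_normal((16, 8))  # nearly identical follow-up patch
print(same_lesion(baseline, followup))  # → True (embeddings almost coincide)
```

The Siamese constraint is that `embed` is literally the same function (same weights) for both inputs, so the comparison depends only on patch content, not on which scan a patch came from.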

Progress in fully automated abdominal CT interpretation-an update over the past decade.

Batheja V, Summers R

pubmed logopapersJul 8 2025
This article reviews advancements in fully automated abdominal CT interpretation over the past decade, with a focus on automated image analysis techniques such as quantitative analysis, computer-aided detection, and disease classification. For each abdominal organ, we review segmentation techniques, assess clinical applications and performance, and explore methods for detecting/classifying associated pathologies. We also highlight cutting-edge AI developments, including foundation models, large language models, and multimodal image analysis. While challenges remain in integrating AI into radiology practice, recent progress underscores its growing potential to streamline workflows, reduce radiologist burnout, and enhance patient care.

A Meta-Analysis of the Diagnosis of Condylar and Mandibular Fractures Based on 3-dimensional Imaging and Artificial Intelligence.

Wang F, Jia X, Meiling Z, Oscandar F, Ghani HA, Omar M, Li S, Sha L, Zhen J, Yuan Y, Zhao B, Abdullah JY

pubmed logopapersJul 8 2025
This article reviews the literature on the use of 3D imaging and artificial intelligence (AI)-assisted methods for rapid and accurate classification and diagnosis of condylar fractures, and presents a meta-analysis of mandibular fractures. Mandibular condylar fractures are a common fracture type in maxillofacial surgery. Accurate classification and diagnosis of condylar fractures are critical to developing an effective treatment plan. With the rapid development of 3-dimensional imaging technology and AI, traditional x-ray diagnosis is gradually being replaced by more accurate technologies such as 3-dimensional computed tomography (CT). These emerging technologies provide more detailed anatomic information and significantly improve the accuracy and efficiency of condylar fracture diagnosis, especially in the evaluation and surgical planning of complex fractures. The application of AI in medical imaging is further analyzed, especially successful cases of fracture detection and classification using deep learning models. Although AI has demonstrated great potential in condylar fracture diagnosis, it still faces challenges such as data quality, model interpretability, and clinical validation. This article evaluates the accuracy and practicality of AI in diagnosing mandibular fractures through a systematic review and meta-analysis of the existing literature. The results show that AI-assisted diagnosis has high predictive accuracy in detecting condylar fractures and significantly improves diagnostic efficiency. However, more multicenter studies are needed to verify the application of AI in different clinical settings and to promote its widespread adoption in maxillofacial surgery.

Foundation models for radiology: fundamentals, applications, opportunities, challenges, risks, and prospects.

Akinci D'Antonoli T, Bluethgen C, Cuocolo R, Klontzas ME, Ponsiglione A, Kocak B

pubmed logopapersJul 8 2025
Foundation models (FMs) represent a significant evolution in artificial intelligence (AI), impacting diverse fields. Within radiology, this evolution offers greater adaptability, multimodal integration, and improved generalizability compared with traditional narrow AI. Utilizing large-scale pre-training and efficient fine-tuning, FMs can support diverse applications, including image interpretation, report generation, integrative diagnostics combining imaging with clinical/laboratory data, and synthetic data creation, holding significant promise for advancements in precision medicine. However, clinical translation of FMs faces several substantial challenges. Key concerns include the inherent opacity of model decision-making processes, environmental and social sustainability issues, risks to data privacy, complex ethical considerations, such as bias and fairness, and navigating the uncertainty of regulatory frameworks. Moreover, rigorous validation is essential to address inherent stochasticity and the risk of hallucination. This international collaborative effort provides a comprehensive overview of the fundamentals, applications, opportunities, challenges, and prospects of FMs, aiming to guide their responsible and effective adoption in radiology and healthcare.

Fast MR signal simulations of microvascular and diffusion contributions using histogram-based approximation and recurrent neural networks.

Coudert T, Silva Martins Marçal M, Delphin A, Barrier A, Cunge L, Legris L, Warnking JM, Lemasson B, Barbier EL, Christen T

pubmed logopapersJul 8 2025
Accurate MR signal simulation, including microvascular structures and water diffusion, is crucial for MRI techniques such as fMRI BOLD modeling and MR vascular fingerprinting (MRF), which use susceptibility effects on MR signals for tissue characterization. However, integrating microvascular features and diffusion remains computationally challenging, limiting the accuracy of the estimates. Using advanced modeling and deep neural networks, we propose MR-WAVES, a novel simulation tool that efficiently accounts for susceptibility and diffusion effects. We used dimension reduction of magnetic field inhomogeneity matrices combined with deep learning to accelerate the simulations while maintaining their accuracy. We validated our results through an in silico study against a reference method and through in vivo MRF experiments. This approach accelerates MR signal generation by a factor of almost 13,000 compared with previously used simulation methods while preserving accuracy. MR-WAVES allows fast generation of MR signals accounting for microvascular structures and the water-diffusion contribution.
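The histogram-based reduction of intravoxel field inhomogeneities can be illustrated for a static-dephasing gradient-echo signal. This toy sketch bins the field-offset distribution and sums complex phases over the bins; it omits the diffusion contribution that the paper's recurrent network models, and all numbers are hypothetical:

```python
import numpy as np

def gre_signal(field_offsets_hz, times_s, n_bins=64):
    """Gradient-echo signal magnitude from a voxel's field-offset
    distribution, approximated by a histogram (static dephasing only)."""
    counts, edges = np.histogram(field_offsets_hz, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])   # representative offset per bin
    weights = counts / counts.sum()            # fraction of spins per bin
    # phase accrued by each bin at each echo time, weighted and summed
    phases = np.exp(2j * np.pi * np.outer(times_s, centers))
    return np.abs(phases @ weights)

rng = np.random.default_rng(1)
offsets = rng.normal(0.0, 20.0, size=10_000)               # toy field offsets (Hz)
signal = gre_signal(offsets, np.array([0.0, 0.01, 0.03]))  # echo times (s)
```

Summing over a few dozen bins instead of every spin is what makes this kind of approximation cheap; the signal starts at 1 at t = 0 and decays as the binned phases fan out.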