
Broomand Lomer N, Ahmadzadeh AM, Ashoobi MA, Abdi S, Ghasemi A, Gholamrezanezhad A

pubmed · paper · Oct 16 2025
Computed tomography (CT) can evaluate thyroid cancer invasion into adjacent structures and is useful in identifying incidental thyroid nodules. Computer-aided diagnostic approaches may provide valuable clinical advantages in this domain. Here, we aim to evaluate the diagnostic performance of radiomics and deep-learning methods using CT imaging for preoperative nodule classification. A comprehensive search of PubMed, Embase, Scopus, and Web of Science was conducted from inception to June 2, 2025. Study quality was assessed using QUADAS-2 and METRICS. A bivariate meta-analysis estimated pooled sensitivity, specificity, positive and negative likelihood ratios (PLR and NLR), diagnostic odds ratio (DOR), and area under the curve (AUC). Two supplementary analyses compared AI model performance with that of radiologists and assessed diagnostic utility across CT imaging phases (plain, venous, arterial). Subgroup and sensitivity analyses explored sources of heterogeneity. Publication bias was evaluated using Deeks' funnel plot. The meta-analysis included 12 radiomics studies (sensitivity: 0.85, specificity: 0.83, PLR: 4.60, NLR: 0.19, DOR: 30.29, AUC: 0.894) and five deep-learning studies (sensitivity: 0.87, specificity: 0.93, PLR: 14.04, NLR: 0.15, DOR: 95.76, AUC: 0.911). Radiomics models showed low heterogeneity, while deep-learning models showed substantial heterogeneity, potentially due to differences in validation, segmentation, METRICS quality, and reference standards. Overall, these models outperformed radiologists, and models using plain CT images outperformed those based on arterial or venous phases. Radiomics and deep-learning models have demonstrated promising performance in classifying thyroid nodules and may improve radiologists' accuracy in indeterminate cases while reducing unnecessary biopsies.
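For readers less familiar with these summary measures, a minimal sketch of how PLR, NLR, and DOR follow from sensitivity and specificity. Note that the paper pools each measure with a bivariate model, so the simple point calculations below will not exactly reproduce the reported values:

```python
# Diagnostic summary measures from sensitivity and specificity.
def likelihood_ratios(sensitivity: float, specificity: float):
    plr = sensitivity / (1.0 - specificity)   # positive likelihood ratio
    nlr = (1.0 - sensitivity) / specificity   # negative likelihood ratio
    dor = plr / nlr                           # diagnostic odds ratio
    return plr, nlr, dor

# Pooled radiomics estimates from the abstract (illustration only):
plr, nlr, dor = likelihood_ratios(0.85, 0.83)
print(f"PLR={plr:.2f}, NLR={nlr:.2f}, DOR={dor:.2f}")
# -> PLR=5.00, NLR=0.18, DOR=27.67; the gap to the reported PLR 4.60,
#    NLR 0.19, DOR 30.29 reflects bivariate pooling across studies.
```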

Sributsayakarn N, Intharah T, Hirunchavarod N, Pornprasertsuk-Damrongsri S, Jirarattanasopha V

pubmed · paper · Oct 16 2025
Age and sex estimation, which is crucial in forensic odontology, traditionally relies on complex, time-consuming methods prone to human error. This study proposes an AI-driven approach using deep learning to estimate age and sex from panoramic radiographs of Thai children and adolescents. This study analyzed 4627 images from 2491 panoramic radiographs of Thai individuals aged 7 to 23 years. A supervised multitask model, built upon the EfficientNetB0 architecture, was developed to simultaneously estimate age and classify sex. The model was trained using a 2-phase process of transfer learning and fine-tuning. Following the development of an initial baseline model for the entire 7 to 23-year cohort, 2 primary age-stratified models (7-14 and 15-23 years) were developed to enhance predictive accuracy. All models were validated against the subjects' chronological age and biological sex. The age estimation model for individuals aged 7 to 23 years yielded a root mean square error (RMSE) of 1.67 and mean absolute error (MAE) of 1.15, with 71.0% accuracy in predicting dental-chronological age differences within 1 year. Age-stratified analysis revealed that the model showed superior performance in the younger cohort (7-14 years), with RMSE of 0.95, MAE of 0.62, and accuracy of 90.3%. Performance declined substantially in the older age group (15-23 years), where RMSE, MAE, and accuracy values were 1.87, 1.41, and 63.8%, respectively. The sex recognition model achieved good overall performance for individuals aged 7 to 23 years (area under the curve [AUC] = 0.94, accuracy = 87.8%, sensitivity = 89%, specificity = 87%). In contrast to age estimation, sex recognition performance improved notably in the older cohort (15-23 years): AUC of 0.99, 94.7% accuracy, 92% sensitivity, and 98% specificity. This novel AI-based age and sex identification model exhibited good performance metrics, suggesting its potential to serve as an alternative to traditional methods as a diagnostic tool for characterizing both living individuals and deceased bodies.
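A minimal sketch of what such a multitask model could look like in Keras, assuming 224x224 RGB inputs (grayscale radiographs stacked to three channels) and standard loss choices; this illustrates the general two-head, two-phase pattern, not the authors' exact configuration:

```python
import tensorflow as tf

# EfficientNetB0 backbone with ImageNet weights, top removed.
backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
backbone.trainable = False  # phase 1: transfer learning with a frozen backbone

inputs = tf.keras.Input(shape=(224, 224, 3))
x = backbone(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
age = tf.keras.layers.Dense(1, name="age")(x)                        # age regression head
sex = tf.keras.layers.Dense(1, activation="sigmoid", name="sex")(x)  # sex classification head
model = tf.keras.Model(inputs, {"age": age, "sex": sex})

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss={"age": "mse", "sex": "binary_crossentropy"},
    metrics={"age": ["mae"], "sex": ["accuracy"]})

# Phase 2: fine-tuning -- unfreeze the backbone and recompile at a lower LR.
backbone.trainable = True
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-5),
    loss={"age": "mse", "sex": "binary_crossentropy"},
    metrics={"age": ["mae"], "sex": ["accuracy"]})
```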

Ma F, Yu F, Gu X, Zhang L, Lu Z, Zhang L, Mao H, Xiang N

pubmed · paper · Oct 16 2025
Thyroid nodules (TNs) represent a prevalent clinical issue in endocrinology. The diagnostic process for malignant TNs typically involves three stages: thyroid function testing, color ultrasound (CU) examination, and biopsy. Early identification is crucial for effective management of malignant TNs. This study developed a multimodal network for classifying CU images and thyroid function (TF) test data. Specifically, the PubMedCLIP model was employed to extract visual features from CU images, generating a 512-dimensional feature vector. This vector was concatenated with five TF test indicators, along with sex and age, to construct a comprehensive representation. The combined representation was then fed into a downstream machine learning (ML) classifier; seven models were evaluated, including AdaBoost, Random Forest, and Logistic Regression. Among the seven ML models evaluated, the AdaBoost classifier demonstrated the highest overall performance, surpassing other classifiers in terms of area under the curve (AUC), F1, accuracy, and CA metrics. The incorporation of visual features extracted from CU images using PubMedCLIP further enhanced the model's performance. Feature importance analysis revealed that the laboratory indicators free thyroxine (FT4) and free triiodothyronine (FT3), together with the image-derived clip_feature_184, were the most influential variables. Additionally, the integration of PubMedCLIP significantly improved the model's capacity to accurately classify cases by leveraging both clinical and imaging information. The proposed PubMedCLIP-based multimodal framework, which jointly utilizes ultrasound imaging features and clinical laboratory data, demonstrated superior diagnostic performance in differentiating benign from malignant TNs. This approach offers a promising tool for individualized risk assessment and clinical decision support, potentially facilitating more precise and personalized protocols for patients with TNs.
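A rough sketch of the fusion step, combining a CLIP-style 512-dimensional image embedding with the tabular clinical features before a scikit-learn classifier. The checkpoint name and feature layout below are assumptions for illustration, not details confirmed by the paper:

```python
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor
from sklearn.ensemble import AdaBoostClassifier

ckpt = "flaviagiammarino/pubmed-clip-vit-base-patch32"  # assumed PubMedCLIP checkpoint
model = CLIPModel.from_pretrained(ckpt)
processor = CLIPProcessor.from_pretrained(ckpt)

def embed(image: Image.Image) -> np.ndarray:
    """512-dim CLIP visual embedding for one ultrasound image."""
    pixels = processor(images=image, return_tensors="pt")["pixel_values"]
    with torch.no_grad():
        return model.get_image_features(pixel_values=pixels).squeeze(0).numpy()

def fuse(image: Image.Image, clinical: np.ndarray) -> np.ndarray:
    """Concatenate image embedding with 5 TF indicators + sex + age (7 values)."""
    return np.concatenate([embed(image), clinical])  # 512 + 7 = 519 features

# Downstream classifier on the fused representation (benign=0 / malignant=1):
# clf = AdaBoostClassifier(n_estimators=200).fit(X_train, y_train)
```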

Zuo J, Feng F, Wang Z, Ashton-Miller JA, Delancey JOL, Luo J

pubmed · paper · Oct 16 2025
Accurately outlining ("segmenting") pelvic organs from magnetic resonance imaging (MRI) scans is crucial for studying pelvic organ prolapse. The labor-intensive process of identifying which pixels correspond to a particular organ in MRI datasets imposes a significant bottleneck on training AI models for automated segmentation, underscoring a need for methods that can operate effectively with minimal pre-labeled data. The aim of this study is to introduce a novel semi-supervised learning process that uses limited data annotation in pelvic MRI to improve automated segmentation. By effectively using both labeled and unlabeled MRI data, our approach seeks to improve the accuracy and efficiency of pelvic organ segmentation, thereby reducing the reliance on extensive labeled datasets for AI model training. The study used a semi-supervised deep learning framework for uterus and bladder segmentation, in which a model is trained on both a small number of expert-outlined structures and a large number of unlabeled scans, leveraging the labeled data to guide the model and improve its predictions on the unlabeled data. It involved 4,103 MR images from 48 female subjects. The approach included self-supervised learning of image restoration tasks for feature extraction and pseudo-label generation, followed by combined supervised learning on labeled images and unsupervised training on unlabeled images. Performance was evaluated quantitatively using the Dice Similarity Coefficient (DSC), Average Surface Distance (ASD), and 95% Hausdorff Distance (HD95); two-tailed paired t-tests were used for statistical comparison. The framework achieved segmentation accuracy comparable to traditional methods while requiring only about 60% of the typically necessary labeled data. Specifically, the semi-supervised approach achieved a DSC of 0.84±0.04, ASD of 13.98±0.93, and HD95 of 2.15±0.40 for the uterus, and 0.92±0.05, 2.51±0.83, and 2.88±0.17, respectively, for the bladder (P<0.001 for all), outperforming both the baseline supervised learning and transfer learning models. Additionally, 3D reconstructions using the semi-supervised method exhibited superior detail in the visualized organs. By making full use of unlabeled data, this semi-supervised learning framework markedly reduces the need for extensive manual annotation, achieving high segmentation accuracy with substantially fewer labeled images; this can enhance clinical evaluation and advance medical image analysis by reducing the dependency on large-scale labeled pelvic MRI datasets for training.
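As a rough illustration of the pseudo-labeling mechanism described here (the thresholds, weighting, and loss are generic assumptions, not the authors' implementation), one combined training step might look like this in PyTorch:

```python
import torch

def dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss for binary segmentation logits."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(-2, -1))
    denom = prob.sum(dim=(-2, -1)) + target.sum(dim=(-2, -1))
    return 1.0 - ((2 * inter + eps) / (denom + eps)).mean()

def train_step(model, opt, x_labeled, y_labeled, x_unlabeled, thresh=0.9, w=0.5):
    opt.zero_grad()
    # Supervised term on the small expert-annotated set.
    sup = dice_loss(model(x_labeled), y_labeled)
    # Pseudo-label term: confident predictions on unlabeled scans become targets.
    with torch.no_grad():
        prob = torch.sigmoid(model(x_unlabeled))
        pseudo = (prob > thresh).float()
    unsup = dice_loss(model(x_unlabeled), pseudo)
    loss = sup + w * unsup
    loss.backward()
    opt.step()
    return loss.item()
```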

Ana Lawry Aguila, Dina Zemlyanker, You Cheng, Sudeshna Das, Daniel C. Alexander, Oula Puonti, Annabel Sorby-Adams, W. Taylor Kimberly, Juan Eugenio Iglesias

arxiv · preprint · Oct 16 2025
Diffusion models have recently emerged as powerful generative models in medical imaging. However, combining these data-driven models with domain knowledge to guide brain imaging problems remains a major challenge. In neuroimaging, Bayesian inverse problems have long provided a successful framework for inference tasks, where incorporating domain knowledge of the imaging process enables robust performance without requiring extensive training data. However, the anatomical modeling component of these approaches typically relies on classical mathematical priors that often fail to capture the complex structure of brain anatomy. In this work, we present the first general-purpose application of diffusion models as priors for solving a wide range of medical imaging inverse problems. Our approach leverages a score-based diffusion prior trained extensively on diverse brain MRI data, paired with flexible forward models that capture common image processing tasks such as super-resolution, bias field correction, inpainting, and combinations thereof. We further demonstrate how our framework can refine outputs from existing deep learning methods to improve anatomical fidelity. Experiments on heterogeneous clinical and research MRI data show that our method achieves state-of-the-art performance, producing consistent, high-quality solutions without requiring paired training datasets. These results highlight the potential of diffusion priors as versatile tools for brain MRI analysis.
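One common way to combine a diffusion prior with a known forward model is to add a data-fidelity gradient to each reverse step, in the spirit of diffusion posterior sampling; the paper's exact conditioning scheme may differ. A toy sketch of the mechanism, where `score_model` and `A` are placeholders:

```python
import torch

def guided_reverse_step(x_t, t, y, score_model, A, sigma_t, step=0.1, guide=1.0):
    """One guided reverse-diffusion step for an inverse problem y = A(x) + noise."""
    # Prior move: follow the learned score of the brain-MRI diffusion prior.
    x = x_t + (sigma_t ** 2) * score_model(x_t, t)
    # Likelihood move: pull the iterate toward the measurements y.
    x = x.detach().requires_grad_(True)
    fidelity = ((y - A(x)) ** 2).sum()
    grad = torch.autograd.grad(fidelity, x)[0]
    x = x - guide * grad
    # Stochastic exploration, annealed with the noise schedule.
    return (x + step * sigma_t * torch.randn_like(x)).detach()
```

Swapping the forward model `A` (downsampling for super-resolution, a smooth multiplicative field for bias correction, a mask for inpainting) changes the task without retraining the prior, which is the flexibility the abstract emphasizes.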

Breathnach, C. L., Harney, F., Townley, D., Hickey, R., Simpkin, A., O'Keeffe, D.

medrxiv · preprint · Oct 16 2025
Background: Diabetic macular oedema (DME) is a vision-threatening complication of diabetes mellitus that can be reliably detected using optical coherence tomography (OCT). This work evaluates a deep learning system (DLS) for the automated detection and classification of DME severity from OCT images. Methods: Anonymised OCT images were retrospectively obtained from 950 patients at University Hospital Galway, Ireland. Images were graded by a consultant ophthalmologist to classify the level of DME present (normal, non-centre-involving DME, centre-involving DME), excluding other pathologies. A DLS was trained using cross-validation, then evaluated on a test dataset and an external dataset. The test set was graded by a second ophthalmologist for comparison. Results: In detecting the presence of DME, the DLS achieved a mean area under the receiver operating characteristic curve (AUC) of 0.98 on cross-validation. AUCs of 0.94 (95% CI 0.90-0.98) and 0.94 (0.92-0.96) were achieved for DME detection on the test dataset when graded by the first and second ophthalmologist, respectively. An AUC of 0.94 (0.92-0.96) was achieved on the external dataset. For DME severity, per-class AUCs of 0.98, 0.86, and 0.99 were achieved on cross-validation. On the test dataset, AUCs of 0.99, 0.89, and 0.98 were achieved when graded by the first ophthalmologist, and 0.96, 0.89, and 0.95 when graded by the second ophthalmologist. Conclusion: This study suggests promising results for the use of deep learning in classifying DME severity, which could be used to automate screening for DME and direct appropriate referrals.
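The per-class AUCs above are one-vs-rest values. A short sketch of how they could be computed with scikit-learn, assuming the label encoding 0=normal, 1=non-centre-involving DME, 2=centre-involving DME (an assumption; the paper does not state its encoding):

```python
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

def per_class_auc(y_true, y_score):
    """y_true: (n,) integer labels; y_score: (n, 3) predicted class probabilities."""
    y_bin = label_binarize(y_true, classes=[0, 1, 2])
    return [roc_auc_score(y_bin[:, k], y_score[:, k]) for k in range(3)]
```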

Pires JG

pubmed · paper · Oct 15 2025
Artificial intelligence (AI) has evolved through various trends, with different subfields gaining prominence over time. Currently, conversational AI, particularly generative AI, is at the forefront. Conversational AI models are primarily focused on text-based tasks and are commonly deployed as chatbots. Recent advancements by OpenAI have enabled the integration of external, independently developed models, allowing chatbots to perform specialized, task-oriented functions beyond general language processing. This study aims to develop a smart chatbot that integrates large language models from OpenAI with specialized domain-specific models, such as those used in medical image diagnostics. The system leverages transfer learning via Google's Teachable Machine to construct image-based classifiers and incorporates a diabetes detection model developed in TensorFlow.js. A key innovation is the chatbot's ability to extract relevant parameters from user input, trigger the appropriate diagnostic model, interpret the output, and deliver responses in natural language. The overarching goal is to demonstrate the potential of combining large language models with external models to build multimodal, task-oriented conversational agents. Two image-based models were developed and integrated into the chatbot system. The first analyzes chest X-rays to detect viral and bacterial pneumonia. The second uses optical coherence tomography (OCT) images to identify ocular conditions such as drusen, choroidal neovascularization, and diabetic macular edema. Both models were incorporated into the chatbot to enable image-based medical query handling. In addition, a text-based model was constructed to process physiological measurements for diabetes prediction using TensorFlow.js. The architecture is modular: new diagnostic models can be added without redesigning the chatbot, enabling straightforward functional expansion. The findings demonstrate effective integration between the chatbot and the diagnostic models, with only minor deviations from expected behavior. Additionally, a stub function was implemented within the chatbot to schedule medical appointments based on the severity of a patient's condition; it was tested with the OCT and X-ray models. This study demonstrates the feasibility of developing advanced AI systems, including image-based diagnostic models and chatbot integration, by leveraging AI as a service. It also underscores the potential of AI to enhance user experiences in bioinformatics, paving the way for more intuitive and accessible interfaces in the field. Looking ahead, the modular nature of the chatbot allows for the integration of additional diagnostic models as the system evolves.
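The "extract parameters, trigger the model, interpret the output" loop maps naturally onto OpenAI tool calling. A minimal sketch of that dispatch pattern (openai>=1.0); the tool name, schema, and model id below are illustrative assumptions, not the author's actual implementation:

```python
import json
from openai import OpenAI

client = OpenAI()
tools = [{
    "type": "function",
    "function": {
        "name": "classify_oct_image",  # hypothetical wrapper for the OCT model
        "description": "Classify an OCT scan as drusen, CNV, DME, or normal.",
        "parameters": {
            "type": "object",
            "properties": {"image_path": {"type": "string"}},
            "required": ["image_path"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model id
    messages=[{"role": "user", "content": "Please check my OCT scan at scan.png"}],
    tools=tools,
)
call = resp.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)   # e.g. {"image_path": "scan.png"}
# result = run_oct_classifier(args["image_path"])  # trigger the external model,
# then return `result` as a "tool" message for a natural-language reply.
```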

Zhao X, Xiao D, Zhang T, Shao L, Ai D, Fan J, Fu T, Lin Y, Song H, Wang J, Yang J

pubmed · paper · Oct 15 2025
Pelvic fracture reduction planning is clinically critical yet technically demanding due to the complex anatomical structure of the pelvis and the topological discontinuities introduced by fractures. Existing computer-assisted planning approaches rely predominantly on shape-based models, overlooking the rich CT intensity information that is essential for accurate, patient-specific planning. To address this limitation, we propose SIRDiff, a novel framework that incorporates anatomical shape and CT intensity information to generate biomechanically plausible reference models for pelvic fracture reduction planning. SIRDiff comprises three key components: 1) a structure-aware diffusion model to reconstruct the global anatomical structure, 2) a topology-adaptive structural conditioning strategy that maps fracture landmarks into a healthy anatomical graph domain for robust structure guidance, and 3) a detail-preserving autoencoder to ensure fine-grained image reconstruction from latent representations. Additionally, SIRDiff adopts a multi-task learning approach to jointly predict the reference CT image and the corresponding bone segmentation map, which enhances its potential for clinical application and ensures better anatomical consistency. Despite being trained exclusively on synthetic fracture data, SIRDiff generalizes well to real clinical cases and consistently outperforms existing methods across multiple clinically relevant evaluation metrics, demonstrating its potential as a robust and deployable solution for pelvic fracture reduction planning.
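As a generic illustration of the multi-task objective (jointly predicting the reference CT and its bone segmentation), a minimal sketch; the loss choices and weighting are assumptions, not SIRDiff's actual formulation:

```python
import torch
import torch.nn.functional as F

def multitask_loss(pred_ct, target_ct, seg_logits, target_seg, lam=0.5):
    """Joint CT-reconstruction + bone-segmentation objective."""
    recon = F.l1_loss(pred_ct, target_ct)                            # image fidelity
    seg = F.binary_cross_entropy_with_logits(seg_logits, target_seg) # bone mask
    return recon + lam * seg
```

Sharing one backbone across both heads is what couples the predicted intensities to the bone mask, which is how multi-task training can encourage the anatomical consistency the abstract describes.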

Zhang Z, Zhang H, Zheng S, Pan L, Yang M, Huang L, Wu Q, Zhang Z, Shan F, Zhuang X, Yu M

pubmed · paper · Oct 15 2025
Low-dose computed tomography (LDCT) reduces the health risks of radiation exposure but introduces imaging noise and artifacts. While numerous studies have employed deep learning for LDCT image denoising, the field continues to face significant challenges. Recent work has applied diffusion models to overcome the over-smoothing and unstable training inherent in earlier deep learning approaches. However, diffusion models face challenges in direct practical application due to their extensive sampling steps, significant inference time, and need for hard-to-obtain paired training data. To address these difficulties, this paper introduces a self-supervised diffusion model with an edge prior for unpaired LDCT denoising. The method performs denoising in a lower-dimensional latent space, reducing computational complexity. Our approach enhances denoised image clarity by applying prior edge constraints to the compressed encodings; it employs a noise-conditioned encoding strategy to enable self-supervised training, making the method applicable to unpaired CT data; and it uses the compressed LDCT encoding as the intermediate sampling result during inference, thereby accelerating sampling and reducing inference time, bringing the method closer to real-time use. Extensive validation across multiple datasets demonstrates that our method achieves performance competitive with state-of-the-art approaches in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and perceptual quality (LPIPS), while maintaining a practically acceptable inference time.
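A sketch of how the three named metrics could be computed for 2-D CT slices normalized to [0, 1]; the `lpips` package and its AlexNet backend are assumptions about tooling, not the paper's stated setup:

```python
import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net="alex")  # perceptual distance (lower is better)

def evaluate(denoised: np.ndarray, reference: np.ndarray) -> dict:
    psnr = peak_signal_noise_ratio(reference, denoised, data_range=1.0)
    ssim = structural_similarity(reference, denoised, data_range=1.0)
    # LPIPS expects 3-channel tensors scaled to [-1, 1].
    to_t = lambda a: torch.from_numpy(a).float().mul(2).sub(1).expand(1, 3, *a.shape)
    lp = lpips_fn(to_t(denoised), to_t(reference)).item()
    return {"PSNR": psnr, "SSIM": ssim, "LPIPS": lp}
```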

Nematollahi H, Alikhani F, Shahbazi-Gahrouei D, Moslehi M, Farzadniya A, Shamsinejadbabaki P

pubmed · paper · Oct 15 2025
This study aimed to propose a deep learning-based segmentation framework to delineate prostate lesions across multiple MRI acquisitions and derived parametric maps, including the apparent diffusion coefficient (ADC) map, diffusion kurtosis imaging (DKI)-derived parameter maps (D map and K map), T2-weighted imaging (T2WI), and T2*-weighted imaging-derived parameter maps (T2* map and R2* map). The model's segmentation performance was then compared across these MRI-derived images to identify those providing the most discriminative information for accurate lesion identification. Fifty-one patients underwent multiparametric MRI, including T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and T2*-weighted imaging. Three expert radiologists conducted manual lesion annotations. All images were preprocessed, labeled, and augmented before training the U-Net++ model. Segmentation performance was evaluated using the Dice similarity coefficient, Intersection over Union (IoU), sensitivity, and specificity. The IoU values for the ADC map, D map, K map, T2WI, T2* map, and R2* map were 0.8907, 0.8559, 0.9504, 0.9250, 0.9441, and 0.8781, respectively. The corresponding Dice coefficients were 0.9416, 0.9211, 0.9744, 0.9604, 0.9709, and 0.9342. These results indicate a high degree of overlap between the predicted and ground-truth segmentation masks. These findings emphasize the complementary value of combining optimized deep learning architectures with advanced MRI-derived images, which could enhance diagnostic precision and facilitate more informed clinical decision-making.
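The reported Dice and IoU values closely track the identity Dice = 2·IoU/(1 + IoU), which holds exactly per image and only approximately for averages over cases. A quick check against the abstract's numbers:

```python
# Dice and IoU are deterministically related per image: Dice = 2*IoU/(1+IoU).
def dice_from_iou(iou: float) -> float:
    return 2 * iou / (1 + iou)

reported = {  # map name: (IoU, reported Dice)
    "ADC": (0.8907, 0.9416), "D map": (0.8559, 0.9211), "K map": (0.9504, 0.9744),
    "T2WI": (0.9250, 0.9604), "T2* map": (0.9441, 0.9709), "R2* map": (0.8781, 0.9342),
}
for name, (iou, dice) in reported.items():
    print(f"{name}: predicted Dice {dice_from_iou(iou):.4f} vs reported {dice:.4f}")
# Each predicted value matches the reported Dice to within ~0.001,
# consistent with the values being averaged over cases.
```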
