
Nagawa K, Hara Y, Kakemoto S, Shiratori T, Kaizu A, Koyama M, Tsuchihashi S, Shimizu H, Inoue K, Sugita N, Kozawa E

PubMed · Oct 16, 2025
We evaluated the effectiveness of magnetic resonance imaging (MRI)-based subregional texture analysis (TA) models for classifying knee osteoarthritis (OA) severity grades by compartment. We identified 122 MR images of 121 patients with knee OA (mild-to-severe OA equivalent to Kellgren-Lawrence grades 2-4), comprising sagittal proton density-weighted imaging and axial fat-suppressed proton density-weighted imaging. The data were divided into OA severity groups by compartment: medial, lateral, and the articulation between the patella and femoral trochlea (P-FT), with three groups for the medial compartment and two for the lateral and P-FT compartments. After extracting 93 texture features and performing dimension reduction for each compartment and imaging sequence, models were created using linear discriminant analysis, support vector machines with linear, radial basis function, and sigmoid kernels, and random forest classifiers. Models underwent 100 repeats of nested cross-validation. We also applied our classification approach to total knee OA severity. The models' performance was modest for both the compartmental and total-knee tasks. The medial compartment showed better results than the lateral and P-FT compartments. Our MRI-based compartmental TA model can potentially differentiate between subregional OA severity grades. Further studies are needed to assess the feasibility of our subregional TA method and machine learning algorithms for classifying OA severity by compartment.
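
As an illustration of the repeated nested cross-validation scheme and classifier families named above, here is a minimal Python sketch; the synthetic feature matrix, hyperparameter grids, and reduced repeat count are placeholders rather than the authors' pipeline.

```python
# Hedged sketch: repeated nested cross-validation over the classifier families
# mentioned in the abstract. Data and hyperparameter grids are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 122 cases x 93 texture features, mirroring the abstract's dimensions only in shape.
X, y = make_classification(n_samples=122, n_features=93, n_informative=10, random_state=0)

models = {
    "lda": (make_pipeline(StandardScaler(), LinearDiscriminantAnalysis()), {}),
    "svm_linear": (make_pipeline(StandardScaler(), SVC(kernel="linear")), {"svc__C": [0.1, 1, 10]}),
    "svm_rbf": (make_pipeline(StandardScaler(), SVC(kernel="rbf")), {"svc__C": [0.1, 1, 10]}),
    "svm_sigmoid": (make_pipeline(StandardScaler(), SVC(kernel="sigmoid")), {"svc__C": [0.1, 1, 10]}),
    "random_forest": (RandomForestClassifier(random_state=0), {"n_estimators": [100, 300]}),
}

n_repeats = 5  # the study reports 100 repeats; reduced here to keep the sketch fast
for name, (estimator, grid) in models.items():
    scores = []
    for rep in range(n_repeats):
        inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=rep)  # tuning folds
        outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=rep)  # evaluation folds
        tuned = GridSearchCV(estimator, grid, cv=inner) if grid else estimator
        scores.append(cross_val_score(tuned, X, y, cv=outer, scoring="roc_auc").mean())
    print(f"{name}: mean AUC {np.mean(scores):.3f} ± {np.std(scores):.3f}")
```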

Okila N, Katumba A, Nakatumba-Nabende J, Murindanyi S, Mwikirize C, Serugunda J, Bugeza S, Oriekot A, Bossa J, Nabawanuka E

PubMed · Oct 16, 2025
Lung ultrasound (LUS) vertical artifacts are critical sonographic markers commonly used in evaluating pulmonary conditions such as pulmonary edema, interstitial lung disease, pneumonia, and COVID-19. Accurate detection and localization of these artifacts are vital for informed clinical decision-making. However, interpreting LUS images remains highly operator-dependent, leading to variability in diagnosis. While deep learning (DL) models offer promising potential to automate LUS interpretation, their development is limited by the scarcity of annotated datasets specifically focused on vertical artifacts. This study introduces a curated dataset of 401 high-resolution LUS images, each annotated with polygonal bounding boxes to indicate vertical artifact locations. The images were collected from 152 patients with pulmonary conditions at Mulago and Kiruddu National Referral Hospitals in Uganda. This dataset serves as a valuable resource for training and evaluating DL models designed to accurately detect and localize LUS vertical artifacts, contributing to the advancement of AI-driven diagnostic tools for early detection and monitoring of respiratory diseases.
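
For readers building detectors on such polygon-annotated data, a small helper like the following converts polygon vertices into axis-aligned bounding boxes; the annotation schema shown (lists of (x, y) vertices) is an assumption, not the dataset's published format.

```python
# Hedged sketch: converting a polygonal artifact annotation into an axis-aligned
# bounding box for a standard object detector. The vertex-list format is assumed.
from typing import List, Tuple

def polygon_to_bbox(polygon: List[Tuple[float, float]]) -> Tuple[float, float, float, float]:
    """Return (x_min, y_min, x_max, y_max) for a polygon given as (x, y) vertices."""
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    return (min(xs), min(ys), max(xs), max(ys))

# Example: a roughly vertical artifact traced as a narrow polygon (made-up coordinates).
artifact_polygon = [(120, 80), (135, 82), (140, 400), (118, 398)]
print(polygon_to_bbox(artifact_polygon))  # -> (118, 80, 140, 400)
```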

He K, Hohenberg J, Li Y, Xiao A, Cho H, Nagel E, Ramel S, Bell KA, Wei D, Park J, Ranger BJ

PubMed · Oct 16, 2025
This study investigates the feasibility of deep learning to predict body composition with ultrasound, specifically fat mass (FM) and fat-free mass (FFM), to improve newborn health assessments. We analyzed 721 ultrasound images of the biceps, quadriceps and abdomen from 65 pre-term infants. A deep learning model incorporating a modified U-Net architecture was developed to predict FM and FFM using air displacement plethysmography as ground truth labels for training. Model performance was assessed using mean absolute error (MAE), mean squared error (MSE), root mean square error (RMSE) and mean absolute percentage error (MAPE), along with Bland-Altman plots to evaluate mean bias and limits of agreement. We tested different image combinations to determine the contribution of anatomical regions. Grad-CAM was applied to identify image regions with the strongest influence on predictions. Combining biceps, quadriceps and abdominal ultrasound images to predict whole-body composition showed strong agreement with ground truth values, with low MAE (FM: 0.0145 kg, FFM: 0.0794 kg), MSE (FM: 0.0003 kg², FFM: 0.0073 kg²), RMSE (FM: 0.0183 kg, FFM: 0.0854 kg) and MAPE (FM: 2.65%, FFM: 8.40%). Using only abdominal images for prediction improved FFM performance (MAPE = 4.62%, MSE = 0.0041 kg², RMSE = 0.0486 kg, MAE = 0.0378 kg). Grad-CAM revealed muscle regions as key contributors to FM and FFM predictions. Deep learning provides a promising approach to predicting body composition with ultrasound and could be a valuable tool for assessing nutritional status in neonatal care.
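
The agreement metrics reported here (MAE, MSE, RMSE, MAPE, and Bland-Altman bias with limits of agreement) can be reproduced with a few lines of NumPy; the sketch below uses made-up predicted and reference fat-mass values, not the study data.

```python
# Hedged sketch: regression-agreement metrics plus Bland-Altman bias and 95% limits
# of agreement, computed on placeholder predicted vs. reference fat-mass values.
import numpy as np

reference = np.array([0.50, 0.62, 0.55, 0.70, 0.48])   # e.g., ADP fat mass (kg), made up
predicted = np.array([0.52, 0.60, 0.57, 0.68, 0.49])   # model output (kg), made up

err = predicted - reference
mae = np.mean(np.abs(err))
mse = np.mean(err ** 2)
rmse = np.sqrt(mse)
mape = np.mean(np.abs(err / reference)) * 100

bias = err.mean()                    # Bland-Altman mean bias
half_width = 1.96 * err.std(ddof=1)  # half-width of the 95% limits of agreement
print(f"MAE={mae:.4f} kg  MSE={mse:.4f} kg^2  RMSE={rmse:.4f} kg  MAPE={mape:.2f}%")
print(f"Bland-Altman bias={bias:.4f} kg, limits {bias - half_width:.4f} to {bias + half_width:.4f} kg")
```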

Koehler D, Shenas F, Sauer M, Apostolova I, Budäus L, Falkenbach F, Maurer T

PubMed · Oct 16, 2025
Standardized prostate-specific membrane antigen (PSMA) PET/CT evaluation and reporting was introduced to aid interpretation, reproducibility, and communication. Artificial intelligence may enhance these efforts. This study aimed to evaluate the performance of aPROMISE, a deep learning segmentation and reporting software for PSMA PET/CT, compared with a standard image viewer (IntelliSpace Portal [ISP]) in patients undergoing PSMA-radioguided surgery. This allowed the correlation of target lesions with histopathology as a standard of truth. Methods: [⁶⁸Ga]Ga-PSMA-I&T PET/CT of 96 patients with biochemical persistence or recurrence after prostatectomy (median prostate-specific antigen, 0.56 ng/mL; interquartile range, 0.31-1.24 ng/mL), who underwent PSMA-radioguided surgery, were retrospectively analyzed (twice with ISP and twice with aPROMISE) by 2 readers. Cohen κ with 95% CI was calculated to assess intra- and interrater agreement for miTNM stages. Differences between miTNM codelines were classified as no difference, minor difference (change of lymph node region without N/M change), and major difference (miTNM change). Results: Intrarater agreement rates were high for all categories, both readers, and systems (≥91.7%) with moderate to almost perfect κ values (reader 1, ISP, ≥0.51; range, 0.21-0.9; aPROMISE, ≥0.64; range, 0.41-0.99; reader 2, ISP, ≥0.83; range, 0.69-1; aPROMISE, ≥0.78; range, 0.63-1). Major differences occurred more frequently for reader 1 than for reader 2 (ISP, 26% vs. 13.5%; aPROMISE, 22.9% vs. 12.5%). Interrater agreement rates were high with both systems (≥92.2%), demonstrating substantial κ values (ISP, ≥0.73; range, 0.47-0.99; aPROMISE, ≥0.74; range, 0.54-1) with major miTNM staging differences in 21 (21.9%) cases. Readers identified 140 lesions by consensus, of which aPROMISE automatically segmented 129 (92.1%) lesions. Unsegmented lesions either were adjacent to high urine activity or demonstrated low PSMA expression. Agreement rates between imaging and histopathology were substantial (≥86.5%), corresponding to moderate to substantial κ values (≥0.6; range, 0.45-1) with major staging differences in 33 (34.4%) patients. This included 13 (13.5%) cases with metastases distant from targets identified on imaging. One of these lesions was automatically segmented by aPROMISE. Conclusion: Intra- and interreader agreement for PSMA PET/CT evaluation were similarly high with ISP and aPROMISE. The algorithm segmented 92.1% of all identified lesions. Software applications with artificial intelligence could be applied as support tools in PSMA PET/CT evaluation of early prostate cancer.
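
As a rough sketch of the agreement analysis, the snippet below computes Cohen's κ with a bootstrap 95% CI for two hypothetical raters; the rating vectors are simulated and do not reflect the study's miTNM codelines.

```python
# Hedged sketch: inter-rater agreement via Cohen's kappa with a bootstrap 95% CI.
# The two rating vectors are simulated placeholders for 96 patients.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
reader1 = rng.integers(0, 3, size=96)                 # e.g., a 3-level stage category, made up
reader2 = np.where(rng.random(96) < 0.85, reader1,    # a mostly agreeing second read
                   rng.integers(0, 3, size=96))

kappa = cohen_kappa_score(reader1, reader2)
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(reader1), len(reader1))  # resample patients with replacement
    boot.append(cohen_kappa_score(reader1[idx], reader2[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"kappa = {kappa:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```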

Guo W, Lin L, Wu Y, Lin X, Yang G, Song Y, Chen D

PubMed · Oct 16, 2025
Our aim was to investigate the potential of using MRI-based habitat features for predicting progression-free survival (PFS) in patients with lung cancer brain metastasis (LCBM) receiving radiotherapy. One hundred and forty-six lesions from 68 patients with LCBM receiving radiotherapy were retrospectively reviewed and divided into training, random test (R-test), and time-independent test (TI-test) cohorts. Conventional radiomics and habitat features were extracted from the whole-tumor area and tumor subregions, respectively. Machine learning risk models for predicting PFS were developed based on clinical, radiomics, and habitat features, as well as their combination (clinical + habitat). The performance of the risk models was evaluated using the concordance index (C-index) and Brier scores. Kaplan-Meier curves were used to assess the prognostic value of the models. The habitat risk model achieved the best prediction ability among the 4 risk models in the TI-test cohort (C-index: 0.716; 95% CI, 0.548-0.890). Additionally, the habitat and radiomics risk models outperformed the clinical risk model in the training (C-index: 0.721-0.762 versus 0.697) and TI-test cohorts (C-index: 0.630-0.716 versus 0.377). A habitat risk model based on intratumoral heterogeneity could be a reliable biomarker for predicting PFS in patients with LCBM receiving radiotherapy.
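
A minimal example of the survival metric used here, the concordance index (C-index), computed with the lifelines package on synthetic PFS times, censoring flags, and risk scores (all placeholders):

```python
# Hedged sketch: evaluating a PFS risk model with the concordance index using lifelines.
# Times, censoring, and risk scores are synthetic, not the study data.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(1)
pfs_months = rng.exponential(scale=10.0, size=146)       # progression-free survival times
progressed = rng.random(146) < 0.7                       # True = progression observed, False = censored
risk_score = -pfs_months + rng.normal(0, 5, size=146)    # higher risk ~ shorter PFS (made up)

# lifelines expects scores concordant with *longer* survival, so a risk score
# is passed with its sign flipped.
cindex = concordance_index(pfs_months, -risk_score, event_observed=progressed)
print(f"C-index = {cindex:.3f}")
```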

Kim JG, Ha SY, Kang YR, Hong H, Kim D, Lee M, Sunwoo L, Ryu WS, Kim JT

PubMed · Oct 16, 2025
To evaluate the stand-alone efficacy of artificial intelligence (AI) software for detecting large vessel occlusion (LVO) on CT angiography (CTA), and the improvement in diagnostic accuracy it provides to early-career physicians. This multicenter study included 595 ischemic stroke patients from January 2021 to September 2023. Standard references and LVO locations were determined by consensus among three experts. The efficacy of the AI software was benchmarked against the standard references, and its impact on the diagnostic accuracy of four residents involved in stroke care was assessed. The area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity of the software and of readers with versus without AI assistance were calculated. Among the 595 patients (mean age 68.5±13.4 years, 56% male), 275 (46.2%) had LVO. The median time interval from the last known well time to the CTA was 46.0 hours (IQR 11.8-64.4). For LVO detection, the software demonstrated a sensitivity of 0.858 (95% CI 0.811 to 0.897) and a specificity of 0.969 (95% CI 0.943 to 0.985). In subjects whose symptom onset to imaging was within 24 hours (n=195), the software exhibited an AUROC of 0.973 (95% CI 0.939 to 0.991), a sensitivity of 0.890 (95% CI 0.817 to 0.936), and a specificity of 0.965 (95% CI 0.902 to 0.991). Reading with AI assistance improved sensitivity by 4.0% (2.17 to 5.84%) and AUROC by 0.024 (0.015 to 0.033) (all P<0.001) compared with reading without AI assistance. The AI software demonstrated a high detection rate for LVO. In addition, the software improved the diagnostic accuracy of early-career physicians in detecting LVO, streamlining the stroke workflow in the emergency room.
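
The stand-alone metrics above (AUROC, sensitivity, specificity) can be computed from labels and model scores as in the following sketch; the prevalence roughly matches the cohort, but the scores and decision threshold are invented.

```python
# Hedged sketch: AUROC, sensitivity, and specificity from binary LVO labels and
# model scores. Data are synthetic, not the study cohort.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(2)
has_lvo = rng.random(595) < 0.46                                    # ~46% prevalence
score = np.clip(has_lvo * 0.8 + rng.normal(0.1, 0.2, 595), 0, 1)    # AI probability (made up)
pred = score >= 0.5                                                  # arbitrary operating point

auroc = roc_auc_score(has_lvo, score)
tn, fp, fn, tp = confusion_matrix(has_lvo, pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUROC={auroc:.3f}  sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")
```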

Heo JU, Sun S, Jones RS, Gu Y, Jiang Y, Qian P, Baydoun A, Arsenault TH, Traughber M, Helo RA, Thompson C, Yao M, Dorth J, Nakayama J, Waggoner SE, Biswas T, Harris EE, Sandstrom KS, Traughber B, Muizc RJF

PubMed · Oct 16, 2025
Positron Emission Tomography/Magnetic Resonance (PET/MR) offers benefits over PET/CT, including simultaneous PET and MR acquisition, intrinsic spatial registration accuracy, MR-based functional information, and superior soft tissue contrast. However, accurate attenuation correction (AC) for PET remains challenging, as MR signals do not directly correspond to attenuation. Using deep learning algorithms that learn complex relationships, we generate synthetic CT (sCT) from MR for AC. Our novel method for AC merges deep learning with threshold-based segmentation to produce an AC map for the entire torso from Dixon MR images, which has not previously been demonstrated. Twenty-nine prospectively collected, paired FDG-PET/CT and MR datasets were used for training and validation using the U-net Residual Network conditional Generative Adversarial Network integrated with tissue segmentation (URcGANmod) from Dixon MR data. Our application focused on torso (base of the skull to mid-thigh) AC, a common but challenging field of view (FOV). Performance was compared to that of 4 previously published methods. Using 15 paired datasets for training and 14 independent datasets for testing, URcGANmod generates an accurate torso sCT with a mean absolute difference of 32±4 HU per voxel. When applied for AC of FDG images, and considering evaluable (SUV ≥ 0.1 g/mL) voxels across all regions of interest, absolute differences were within 4.4% of those determined using the measured CT for AC. Reproducibility was excellent, with less than 3.5% standard deviation. The results demonstrate the accuracy and precision of the URcGANmod method for torso sCT generation for quantitatively accurate MR-based AC (MRAC), exceeding the comparison methods. Combining deep learning and segmentation enhances MRAC accuracy in torso FDG-PET/MR, improves SUV accuracy throughout the torso, achieves less than 4.4% SUV error, and outperforms comparison methods. Given the excellent sCT and SUV accuracy and precision, our proposed method warrants further studies for quantitative longitudinal multicenter trials.
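
The two headline accuracy measures, the per-voxel mean absolute difference of the sCT in HU and the percent SUV difference on evaluable voxels, reduce to simple array operations; the sketch below runs on random volumes standing in for the measured CT, sCT, and reconstructed SUV maps.

```python
# Hedged sketch: per-voxel mean absolute difference of sCT vs. CT (HU), and percent
# SUV difference on evaluable voxels (SUV >= 0.1 g/mL). All arrays are synthetic.
import numpy as np

rng = np.random.default_rng(3)
ct_hu = rng.normal(0, 300, size=(64, 64, 64))             # measured CT (HU), made up
sct_hu = ct_hu + rng.normal(0, 40, size=ct_hu.shape)      # synthetic CT (HU), made up

mad_hu = np.mean(np.abs(sct_hu - ct_hu))
print(f"sCT mean absolute difference: {mad_hu:.1f} HU per voxel")

suv_ct = np.abs(rng.normal(1.0, 0.5, size=ct_hu.shape))            # SUV with CT-based AC
suv_sct = suv_ct * (1 + rng.normal(0, 0.02, size=ct_hu.shape))     # SUV with sCT-based AC
evaluable = suv_ct >= 0.1
pct_diff = 100 * np.abs(suv_sct[evaluable] - suv_ct[evaluable]) / suv_ct[evaluable]
print(f"mean |SUV difference| on evaluable voxels: {pct_diff.mean():.2f}%")
```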

Broomand Lomer N, Ahmadzadeh AM, Ashoobi MA, Abdi S, Ghasemi A, Gholamrezanezhad A

PubMed · Oct 16, 2025
Computed tomography (CT) can evaluate thyroid cancer invasion into adjacent structures and is useful in identifying incidental thyroid nodules. Computer-aided diagnostic approaches may provide valuable clinical advantages in this domain. Here, we aim to evaluate the diagnostic performance of radiomics and deep-learning methods using CT imaging for preoperative nodule classification. A comprehensive search of PubMed, Embase, Scopus, and Web of Science was conducted from inception to June 2, 2025. Study quality was assessed using QUADAS-2 and METRICS. A bivariate meta-analysis estimated pooled sensitivity, specificity, positive and negative likelihood ratios (PLR and NLR), diagnostic odds ratio (DOR), and area under the curve (AUC). Two supplementary analyses compared AI model performance with radiologists and assessed diagnostic utility across CT imaging phases (plain, venous, arterial). Subgroup and sensitivity analyses explored sources of heterogeneity. Publication bias was evaluated using Deeks' funnel plot. The meta-analysis included 12 radiomics studies (sensitivity: 0.85, specificity: 0.83, PLR: 4.60, NLR: 0.19, DOR: 30.29, AUC: 0.894) and five deep-learning studies (sensitivity: 0.87, specificity: 0.93, PLR: 14.04, NLR: 0.15, DOR: 95.76, AUC: 0.911). Radiomics models showed low heterogeneity, while deep-learning models showed substantial heterogeneity, potentially due to differences in validation, segmentation, METRICS quality, and reference standards. Overall, these models outperformed radiologists, and models using plain CT images outperformed those based on arterial or venous phases. Radiomics and deep-learning models have demonstrated promising performance in classifying thyroid nodules and may improve radiologists' accuracy in indeterminate cases, while reducing unnecessary biopsies.
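
The likelihood ratios and diagnostic odds ratio quoted above follow directly from pooled sensitivity and specificity; the sketch below shows the arithmetic for the radiomics point estimates (the abstract's pooled values differ slightly because they come from a bivariate model rather than this naive calculation).

```python
# Hedged sketch: deriving PLR, NLR, and DOR from a single sensitivity/specificity pair.
sens, spec = 0.85, 0.83          # pooled radiomics point estimates from the abstract

plr = sens / (1 - spec)          # positive likelihood ratio
nlr = (1 - sens) / spec          # negative likelihood ratio
dor = plr / nlr                  # diagnostic odds ratio
print(f"PLR={plr:.2f}  NLR={nlr:.2f}  DOR={dor:.2f}")
# Prints roughly PLR=5.00, NLR=0.18, DOR=27.7; the bivariate meta-analytic pooling
# in the abstract yields slightly different values (4.60, 0.19, 30.29).
```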

Sributsayakarn N, Intharah T, Hirunchavarod N, Pornprasertsuk-Damrongsri S, Jirarattanasopha V

PubMed · Oct 16, 2025
Age and sex estimation, which is crucial in forensic odontology, traditionally relies on complex, time-consuming methods prone to human error. This study proposes an AI-driven approach using deep learning to estimate age and sex from panoramic radiographs of Thai children and adolescents. This study analyzed 4627 images from 2491 panoramic radiographs of Thai individuals aged 7 to 23 years. A supervised multitask model, built upon the EfficientNetB0 architecture, was developed to simultaneously estimate age and classify sex. The model was trained using a 2-phase process of transfer learning and fine-tuning. Following the development of an initial baseline model for the entire 7 to 23-year cohort, 2 primary age-stratified models (7-14 and 15-23 years) were subsequently developed to enhance predictive accuracy. All models were validated against the subjects' chronological age and biological sex. The age estimation model for individuals aged 7 to 23 years yielded a root mean square error (RMSE) of 1.67 and mean absolute error (MAE) of 1.15, with 71.0% accuracy in predicting dental-chronological age differences within 1 year. Age-stratified analysis revealed that the model showed superior performance in the younger cohort (7-14 years), with RMSE of 0.95, MAE of 0.62, and accuracy of 90.3%. Performance declined substantially in the older age group (15-23 years), where RMSE, MAE, and accuracy values were 1.87, 1.41, and 63.8%, respectively. The sex recognition model achieved good overall performance for individuals aged 7 to 23 years (area under curve [AUC] = 0.94, accuracy = 87.8%, sensitivity = 89%, specificity = 87%). In contrast to age estimation, sex recognition performance improved notably in the older cohort (15-23 years): AUC of 0.99, 94.7% accuracy, 92% sensitivity, and 98% specificity. This novel AI-based age and sex identification model exhibited good performance metrics, suggesting the potential to serve as an alternative to traditional methods as a diagnostic tool for characterizing both living individuals, as well as deceased bodies.
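
A hedged PyTorch sketch of a multitask EfficientNetB0 that regresses age and classifies sex from a single radiograph; the head sizes, losses, and training step are assumptions, not the authors' implementation.

```python
# Hedged sketch: a multitask head on a pretrained EfficientNetB0 backbone, in the
# spirit of the model described above. Architecture details are assumptions.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights

class AgeSexNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Downloads ImageNet weights as a transfer-learning starting point.
        backbone = efficientnet_b0(weights=EfficientNet_B0_Weights.DEFAULT)
        backbone.classifier = nn.Identity()          # keep the 1280-d pooled features
        self.backbone = backbone
        self.age_head = nn.Linear(1280, 1)           # regression: age in years
        self.sex_head = nn.Linear(1280, 1)           # binary classification logit

    def forward(self, x):
        feats = self.backbone(x)
        return self.age_head(feats).squeeze(-1), self.sex_head(feats).squeeze(-1)

model = AgeSexNet()
images = torch.randn(2, 3, 224, 224)                 # placeholder panoramic crops
age_true = torch.tensor([9.5, 17.0])
sex_true = torch.tensor([0.0, 1.0])

age_pred, sex_logit = model(images)
loss = nn.functional.mse_loss(age_pred, age_true) + \
       nn.functional.binary_cross_entropy_with_logits(sex_logit, sex_true)
loss.backward()                                       # one illustrative training step
print(float(loss))
```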

Ma F, Yu F, Gu X, Zhang L, Lu Z, Zhang L, Mao H, Xiang N

PubMed · Oct 16, 2025
Thyroid nodules (TNs) represent a prevalent clinical issue in endocrinology. The diagnostic process for malignant TNs typically involves three stages: thyroid function testing, color ultrasound (CU) examination, and biopsy. Early identification is crucial for effective management of malignant TNs. This study developed a multimodal network for classifying CU images and thyroid function (TF) test data. Specifically, the PubMedCLIP model was employed to extract visual features from CU images, generating a 512-dimensional feature vector. This vector was subsequently concatenated with five indicators of TF tests, as well as gender and age information, to construct a comprehensive representation. The combined representation was then fed into a downstream ML classifier, where we evaluated seven models, including AdaBoost, Random Forest, and Logistic Regression. Among the seven ML models evaluated, the AdaBoost classifier demonstrated the highest overall performance, surpassing other classifiers in terms of area under the curve (AUC), F1, accuracy, and coordinate attention (CA) metrics. The incorporation of visual features extracted from CU images using PubMedCLIP further enhanced the model’s performance. Feature importance analysis revealed that laboratory indicators such as free thyroxine (FT4), free triiodothyronine (FT3), and clip_feature_184 were the most influential clinical variables. Additionally, the integration of PubMedCLIP significantly improved the model’s capacity to accurately classify data by leveraging both clinical and imaging information. The proposed PubMedCLIP-based multimodal framework, which jointly utilizes ultrasound imaging features and clinical laboratory data, demonstrated superior diagnostic performance in differentiating benign from malignant TNs. This approach offers a promising tool for individualized risk assessment and clinical decision support, potentially facilitating more precise and personalized protocols for patients with TNs.
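
A minimal sketch of the fusion-and-classify step: a 512-dimensional image embedding (standing in for PubMedCLIP output) is concatenated with the thyroid-function, sex, and age fields and fed to an AdaBoost classifier; all data below are random placeholders, not patient data.

```python
# Hedged sketch: concatenating an image embedding with clinical fields and training
# an AdaBoost classifier. Embeddings, clinical values, and labels are synthetic.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 300
clip_features = rng.normal(size=(n, 512))        # stand-in for PubMedCLIP image embeddings
clinical = rng.normal(size=(n, 7))               # 5 TF indicators + sex + age (made up)
X = np.hstack([clip_features, clinical])
y = (clinical[:, 0] + 0.5 * clip_features[:, 0] > 0).astype(int)   # synthetic benign/malignant label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```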
