
Assessment of quantitative staging PET/computed tomography parameters using machine learning for early detection of progression in diffuse large B-cell lymphoma.

Aksu A, Us A, Küçüker KA, Solmaz Ş, Turgut B

pubmed · Jun 30 2025
This study aimed to investigate, using machine learning algorithms, the role of volumetric and dissemination parameters obtained from pretreatment 18F-fluorodeoxyglucose PET/computed tomography (18F-FDG PET/CT) in predicting progression/relapse in patients with diffuse large B-cell lymphoma (DLBCL). Patients diagnosed with DLBCL histopathologically, treated with rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisone, and followed for at least 1 year were reviewed retrospectively. Quantitative parameters such as tumor volume [total metabolic tumor volume (tMTV)], tumor burden [total lesion glycolysis (tTLG)], and the longest distance between two tumor foci (Dmax) were obtained from PET images with a standardized uptake value threshold of 4.0. The MTV of the single largest volume of interest was recorded as the metabolic bulk volume (MBV). Machine learning models were then trained on the patients' PET parameters and clinical information to predict progression/recurrence within 1 year. Of the 90 patients included, 16 had progression within 1 year. Significant differences were found in tMTV, tTLG, MBV, and Dmax values between patients with and without progression. The area under the curve (AUC) of the model built on clinical data alone was 0.701. A random forest model using PET parameters achieved an AUC of 0.871, while a Naive Bayes model combining PET parameters with clinical data achieved an AUC of 0.838. Using quantitative parameters derived from staging PET with machine learning algorithms may enable early detection of progression in patients with DLBCL, improving early risk stratification and guiding treatment decisions.
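
A minimal sketch of the modeling step described above, assuming a binary 1-year progression label and the four PET parameters named in the abstract; all values below are synthetic stand-ins, and the preprocessing and hyperparameters are illustrative rather than the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_pet = rng.lognormal(size=(90, 4))    # stand-ins for tMTV, tTLG, MBV, Dmax
X_clin = rng.normal(size=(90, 3))      # stand-ins for clinical covariates
y = np.zeros(90, dtype=int)
y[:16] = 1                             # 16/90 progression rate, as reported
rng.shuffle(y)

# PET-only random forest (the configuration reported with AUC 0.871)
rf = RandomForestClassifier(n_estimators=200, random_state=0)
print("RF, PET only:", cross_val_score(rf, X_pet, y, scoring="roc_auc", cv=5).mean())

# Naive Bayes on PET + clinical features (reported AUC 0.838)
nb = GaussianNB()
X_all = np.hstack([X_pet, X_clin])
print("NB, PET + clinical:", cross_val_score(nb, X_all, y, scoring="roc_auc", cv=5).mean())
```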

Genetically Optimized Modular Neural Networks for Precision Lung Cancer Diagnosis

Agrawal, V. L., Agrawal, T.

medrxiv preprint · Jun 30 2025
Lung cancer remains one of the leading causes of cancer mortality, and while low-dose CT screening reduces mortality, radiological detection remains challenging given the growing shortage of radiologists. Artificial intelligence can significantly improve the procedure and decrease the overall workload of the healthcare department. Building upon existing work applying genetic algorithms, this study aims to create a novel algorithm for lung cancer diagnosis with utmost precision. We included a total of 156 CT scans of patients divided into two databases, followed by feature extraction using image statistics, histograms, and 2D transforms (FFT, DCT, WHT). Optimal feature vectors were formed and organized into Excel-based knowledge bases. Genetically trained classifiers (MLP, GFF-NN, MNN, and SVM) were then optimized through experimentation with different combinations of parameters, activation functions, and data-partitioning percentages. Evaluation metrics included classification accuracy, mean squared error (MSE), area under the receiver operating characteristic (ROC) curve, and computational efficiency. Computer simulations demonstrated that the MNN (Topology II) classifier, specifically when trained with FFT coefficients and a momentum learning rule, consistently achieved 100% average classification accuracy on the cross-validation dataset for both Database I and Database II, outperforming MLP-based classifiers. This genetically optimized and trained MNN (Topology II) classifier is therefore recommended as the optimal solution for lung cancer diagnosis from CT scan images.
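
The best-performing model above was trained on FFT coefficients. As a hypothetical illustration of one way such features can be formed, the sketch below keeps the magnitudes of the lowest-frequency 2D FFT coefficients of a slice; the slice source, block size, and magnitude-only encoding are assumptions, not details from the paper.

```python
import numpy as np

def fft_features(slice_2d: np.ndarray, k: int = 8) -> np.ndarray:
    """Return magnitudes of the k x k lowest-frequency 2D FFT coefficients."""
    spectrum = np.fft.fft2(slice_2d)
    shifted = np.fft.fftshift(spectrum)      # move the DC component to the center
    c0, c1 = shifted.shape[0] // 2, shifted.shape[1] // 2
    block = shifted[c0 - k // 2:c0 + k // 2, c1 - k // 2:c1 + k // 2]
    return np.abs(block).ravel()

demo = np.random.rand(128, 128)              # stand-in for a CT slice
print(fft_features(demo).shape)              # (64,) feature vector
```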

Thoracic staging of lung cancers by <sup>18</sup>FDG-PET/CT: impact of artificial intelligence on the detection of associated pulmonary nodules.

Trabelsi M, Romdhane H, Ben-Sellem D

pubmed · Jun 30 2025
This study focuses on automating the classification of certain thoracic lung cancer stages in 3D <sup>18</sup>FDG-PET/CT images according to the 9th Edition of the TNM Classification for Lung Cancer (2024). By leveraging advanced segmentation and classification techniques, we aim to enhance the accuracy of distinguishing between T4 (pulmonary nodules) Thoracic M0 and M1a (pulmonary nodules) stages. Precise segmentation of pulmonary lobes using the Pulmonary Toolkit enables the identification of tumor locations and additional malignant nodules, ensuring reliable differentiation between ipsilateral and contralateral spread. A modified ResNet-50 model is employed to classify the segmented regions. Performance evaluation shows that the model achieves high accuracy. The unchanged class has the best recall (93%) and an excellent F1 score (91%). The M1a (pulmonary nodules) class performs well, with an F1 score of 94%, though recall is slightly lower (91%). For T4 (pulmonary nodules) Thoracic M0, the model shows balanced performance with an F1 score of 87%. The overall accuracy is 87%, indicating a robust classification model.
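
The abstract does not detail the ResNet-50 modification, so the sketch below only adapts the output head to the three classes described; the input pipeline, weights choice, and crop size are assumptions.

```python
import torch
from torchvision.models import resnet50

model = resnet50(weights=None)                        # backbone; pretraining choice is illustrative
model.fc = torch.nn.Linear(model.fc.in_features, 3)   # unchanged / T4 Thoracic M0 / M1a

x = torch.randn(2, 3, 224, 224)   # stand-in batch of segmented-region crops
logits = model(x)
print(logits.shape)               # torch.Size([2, 3])
```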

CMT-FFNet: A CMT-based feature-fusion network for predicting TACE treatment response in hepatocellular carcinoma.

Wang S, Zhao Y, Cai X, Wang N, Zhang Q, Qi S, Yu Z, Liu A, Yao Y

pubmed · Jun 30 2025
Accurately and preoperatively predicting tumor response to transarterial chemoembolization (TACE) is crucial for individualized treatment decision-making in hepatocellular carcinoma (HCC). In this study, we propose a novel feature fusion network based on the Convolutional Neural Networks Meet Vision Transformers (CMT) architecture, termed CMT-FFNet, to predict TACE efficacy using preoperative multiphase magnetic resonance imaging (MRI) scans. CMT-FFNet combines local feature extraction with global dependency modeling through attention mechanisms, enabling the extraction of complementary information from multiphase MRI data. Additionally, we introduce an orthogonality loss to optimize the fusion of imaging and clinical features, further enhancing the complementarity of cross-modal features. Moreover, visualization techniques were employed to highlight key regions contributing to model decisions. Extensive experiments were conducted to evaluate the effectiveness of the proposed modules and network architecture. Experimental results demonstrate that our model effectively captures latent correlations among features extracted from multiphase MRI data and multimodal inputs, significantly improving the prediction of TACE treatment response in HCC patients.
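
The orthogonality loss is not defined in the abstract; the sketch below shows one common formulation, penalizing the squared cross-correlation between the two feature sets so that imaging and clinical embeddings carry complementary information. The paper's exact definition may differ.

```python
import torch
import torch.nn.functional as F

def orthogonality_loss(f_img: torch.Tensor, f_clin: torch.Tensor) -> torch.Tensor:
    """f_img: (B, D) imaging features; f_clin: (B, D) clinical features."""
    f_img = F.normalize(f_img, dim=1)
    f_clin = F.normalize(f_clin, dim=1)
    cross = f_img.T @ f_clin / f_img.shape[0]   # (D, D) cross-correlation matrix
    return (cross ** 2).sum()                   # drive the two subspaces apart

loss = orthogonality_loss(torch.randn(8, 64), torch.randn(8, 64))
print(loss.item())
```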

Multicenter Evaluation of Interpretable AI for Coronary Artery Disease Diagnosis from PET Biomarkers

Zhang, W., Kwiecinski, J., Shanbhag, A., Miller, R. J., Ramirez, G., Yi, J., Han, D., Dey, D., Grodecka, D., Grodecki, K., Lemley, M., Kavanagh, P., Liang, J. X., Zhou, J., Builoff, V., Hainer, J., Carre, S., Barrett, L., Einstein, A. J., Knight, S., Mason, S., Le, V., Acampa, W., Wopperer, S., Chareonthaitawee, P., Berman, D. S., Di Carli, M. F., Slomka, P.

medrxiv preprint · Jun 30 2025
Background: Positron emission tomography (PET)/CT for myocardial perfusion imaging (MPI) provides multiple imaging biomarkers, often evaluated separately. We developed an artificial intelligence (AI) model integrating key clinical PET MPI parameters to improve the diagnosis of obstructive coronary artery disease (CAD). Methods: From 17,348 patients undergoing cardiac PET/CT across four sites, we retrospectively enrolled 1,664 subjects who had invasive coronary angiography within 180 days and no prior CAD. Deep learning was used to derive the coronary artery calcium score (CAC) from CT attenuation correction maps. An XGBoost machine learning model was developed using data from one site to detect CAD, defined as left main stenosis ≥50% or ≥70% stenosis in other arteries. The model utilized 10 image-derived parameters from clinical practice: CAC, stress/rest left ventricular ejection fraction, stress myocardial blood flow (MBF), myocardial flow reserve (MFR), ischemic and stress total perfusion deficit (TPD), transient ischemic dilation ratio, rate-pressure product, and sex. Generalizability was evaluated in the remaining three sites, chosen to maximize testing power and capture inter-site variability, and model performance was compared with quantitative analyses using the area under the receiver operating characteristic curve (AUC). Patient-specific predictions were explained using Shapley additive explanations. Results: CAD prevalence was 61% in the training set (n=386) and 53% in the external testing set (n=1,278). In the external evaluation, the AI model achieved a higher AUC (0.83 [95% confidence interval (CI): 0.81-0.85]) than the clinical score by experienced physicians (0.80 [0.77-0.82], p=0.02), ischemic TPD (0.79 [0.77-0.82], p<0.001), MFR (0.75 [0.72-0.78], p<0.001), and CAC (0.69 [0.66-0.72], p<0.001). The model's performance was consistent across sex, body mass index, and age groups. The top features driving the prediction were stress/ischemic TPD, CAC, and MFR. Conclusion: AI integrating perfusion, flow, and CAC scoring improves PET MPI diagnostic accuracy, offering automated and interpretable predictions for CAD diagnosis.
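
A minimal sketch of the modeling and explanation steps described above, using the 10 image-derived inputs listed; all data are synthetic, and the hyperparameters are illustrative rather than the study's.

```python
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
features = ["CAC", "stress_LVEF", "rest_LVEF", "stress_MBF", "MFR",
            "ischemic_TPD", "stress_TPD", "TID_ratio", "RPP", "sex"]
X = rng.normal(size=(386, len(features)))      # stand-in for the training site
y = (rng.random(386) < 0.61).astype(int)       # ~61% CAD prevalence, as reported

model = xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X, y)

# Patient-specific explanation with SHAP values, as in the paper
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print(dict(zip(features, np.round(shap_values[0], 3))))
```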

Machine learning methods for sex estimation of sub-adults using cranial computed tomography images.

Syed Mohd Hamdan SN, Faizal Abdullah ERM, Wen KJ, Al-Adawiyah Rahmat R, Wan Ibrahim WI, Abd Kadir KA, Ibrahim N

pubmed · Jun 30 2025
This research aimed to compare the classification accuracy of three machine learning (ML) methods, random forest (RF), support vector machines (SVM), and linear discriminant analysis (LDA), for sex estimation of sub-adults using cranial computed tomography (CCT) images. A total of 521 CCT scans from sub-adult Malaysians aged 0 to 20 were analysed using Mimics software (Materialise Mimics Ver. 21). A plane-to-plane (PTP) protocol was used to measure 14 chosen craniometric parameters. The three algorithms, RF, SVM, and LDA, each tuned with GridSearchCV, were used to produce classification models for sex estimation. Performance was measured using accuracy, precision, recall, and F1-score, among other metrics. RF produced a testing accuracy of 73% with the best hyperparameters of max_depth = 6, max_samples = 40, and n_estimators = 45. SVM obtained an accuracy of 67% with the best hyperparameters of regularization parameter C = 10, gamma = 0.01, and a radial basis function (RBF) kernel. LDA obtained the lowest accuracy of 65% with a shrinkage of 0.02. Among the tested ML methods, RF showed the highest testing accuracy, ahead of SVM and LDA. This is the first AI-based classification model for estimating sex in sub-adults using CCT scans.
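
A hedged sketch of the hyperparameter search described above, placing the reported best values inside illustrative grids; the 14 craniometric measurements are replaced by synthetic data, and the grid ranges are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(521, 14))        # 521 scans x 14 craniometric parameters
y = rng.integers(0, 2, size=521)      # stand-in sex labels

rf_search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    {"max_depth": [4, 6, 8], "n_estimators": [30, 45, 60], "max_samples": [40]},
    cv=5,
)
svm_search = GridSearchCV(
    SVC(),
    {"C": [1, 10, 100], "gamma": [0.01, 0.1], "kernel": ["rbf"]},
    cv=5,
)
for name, search in [("RF", rf_search), ("SVM", svm_search)]:
    search.fit(X, y)
    print(name, search.best_params_, round(search.best_score_, 2))
```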

Prediction of Crohn's Disease Activity Using Computed Tomography Enterography-Based Radiomics and Serum Markers.

Wang P, Liu Y, Wang Y

pubmed · Jun 30 2025
Accurate stratification of the activity index of Crohn's disease (CD) using computed tomography enterography (CTE) radiomics and serum markers can aid in predicting disease progression and assist physicians in personalizing therapeutic regimens for patients with CD. This retrospective study enrolled 233 patients diagnosed with CD between January 2019 and August 2024. Patients were divided into training and testing cohorts at a ratio of 7:3 and further categorized into remission, mild active phase, and moderate-severe active phase groups based on the Simple Endoscopic Score for Crohn's Disease (SES-CD). Radiomics features were extracted from venous-phase CTE images, and the t-test and least absolute shrinkage and selection operator (LASSO) regression were applied for feature selection. Serum markers were selected based on analysis of variance. We also developed a random forest (RF) model for multi-class stratification of CD. Model performance was evaluated by the area under the receiver operating characteristic curve (AUC), and the contribution of each feature to CD activity was quantified via SHapley Additive exPlanations (SHAP) values. Finally, we combined gender, radiomics scores, and serum scores to develop a nomogram model to verify the effectiveness of feature extraction. Fourteen radiomics features with non-zero coefficients and six serum markers with significant differences (P<0.01) were ultimately selected to predict CD activity. The AUC (micro/macro) of the ensemble machine learning model combining radiomics features and serum markers was 0.931/0.928 for the three-class task. The AUCs for the remission, mild active, and moderate-severe active phases were 0.983, 0.852, and 0.917, respectively. The mean AUC of the nomogram model was 0.940. By integrating radiomics and serum markers of CD patients, the model achieved enhanced consistency with SES-CD in grading CD activity and has the potential to assist clinicians in accurate diagnosis and treatment.
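
A minimal sketch of the feature-selection and multi-class modeling pipeline described above; the radiomics matrix and the planted signal structure are synthetic stand-ins, and the model settings are illustrative.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(233, 100))               # candidate radiomics features
beta = np.zeros(100)
beta[:14] = rng.normal(size=14)               # plant signal in 14 features
score = X @ beta
y = np.digitize(score, np.quantile(score, [1/3, 2/3]))  # remission / mild / mod-severe

# LASSO keeps the features with non-zero coefficients
lasso = LassoCV(cv=5).fit(X, y.astype(float))
keep = np.flatnonzero(lasso.coef_)
print("selected features:", keep.size)

X_tr, X_te, y_tr, y_te = train_test_split(
    X[:, keep], y, test_size=0.3, random_state=0, stratify=y)   # 7:3 split
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
proba = rf.predict_proba(X_te)
print("macro AUC:", roc_auc_score(y_te, proba, multi_class="ovr", average="macro"))
```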

Three-dimensional end-to-end deep learning for brain MRI analysis

Radhika Juglan, Marta Ligero, Zunamys I. Carrero, Asier Rabasco, Tim Lenz, Leo Misera, Gregory Patrick Veldhuizen, Paul Kuntke, Hagen H. Kitzler, Sven Nebelung, Daniel Truhn, Jakob Nikolas Kather

arxiv preprint · Jun 30 2025
Deep learning (DL) methods are increasingly outperforming classical approaches in brain imaging, yet their generalizability across diverse imaging cohorts remains inadequately assessed. As age and sex are key neurobiological markers in clinical neuroscience, influencing brain structure and disease risk, this study evaluates three existing three-dimensional architectures, the Simple Fully Convolutional Network (SFCN), DenseNet, and Shifted Window (Swin) Transformers, for age and sex prediction using T1-weighted MRI from four independent cohorts: UK Biobank (UKB, n=47,390), Dallas Lifespan Brain Study (DLBS, n=132), Parkinson's Progression Markers Initiative (PPMI, n=108 healthy controls), and Information eXtraction from Images (IXI, n=319). We found that SFCN consistently outperformed the more complex architectures, with an AUC of 1.00 [1.00-1.00] in UKB (internal test set) and 0.85-0.91 in external test sets for sex classification. For the age prediction task, SFCN demonstrated a mean absolute error (MAE) of 2.66 years (r=0.89) in UKB and 4.98-5.81 years (r=0.55-0.70) across external datasets. Pairwise DeLong and Wilcoxon signed-rank tests with Bonferroni corrections confirmed SFCN's superiority over the Swin Transformer across most cohorts (p<0.017 for three comparisons). Explainability analysis further demonstrated regional consistency of model attention across cohorts, specific to each task. Our findings reveal that simpler convolutional networks can outperform denser, more complex attention-based DL architectures in brain image analysis by generalizing better across datasets.
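
A hedged sketch of an SFCN-style architecture, a stack of 3D convolutional blocks ending in global pooling and a small prediction head; the channel widths, block count, and input size here are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

def block(c_in: int, c_out: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm3d(c_out), nn.ReLU(), nn.MaxPool3d(2))

sfcn = nn.Sequential(
    block(1, 32), block(32, 64), block(64, 128),   # shrinking 3D feature maps
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(128, 1))                             # one output: age (or a sex logit)

x = torch.randn(2, 1, 96, 96, 96)   # stand-in T1-weighted volumes
print(sfcn(x).shape)                # torch.Size([2, 1])
```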

Self-Supervised Multiview Xray Matching

Mohamad Dabboussi, Malo Huard, Yann Gousseau, Pietro Gori

arxiv preprint · Jun 30 2025
Accurate interpretation of multi-view radiographs is crucial for diagnosing fractures, muscular injuries, and other anomalies. While significant advances have been made in AI-based analysis of single images, current methods often struggle to establish robust correspondences between different X-ray views, an essential capability for precise clinical evaluations. In this work, we present a novel self-supervised pipeline that eliminates the need for manual annotation by automatically generating a many-to-many correspondence matrix between synthetic X-ray views. This is achieved using digitally reconstructed radiographs (DRR), which are automatically derived from unannotated CT volumes. Our approach incorporates a transformer-based training phase to accurately predict correspondences across two or more X-ray views. Furthermore, we demonstrate that learning correspondences among synthetic X-ray views can be leveraged as a pretraining strategy to enhance automatic multi-view fracture detection on real data. Extensive evaluations on both synthetic and real X-ray datasets show that incorporating correspondences improves performance in multi-view fracture classification.
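
A hedged sketch of the core supervision signal described above: a many-to-many correspondence matrix between patch embeddings of two views, scored against a known ground-truth matrix. The transformer encoder and the DRR generation from CT volumes are omitted, and all tensors are synthetic stand-ins.

```python
import torch
import torch.nn.functional as F

def correspondence_matrix(emb_a: torch.Tensor, emb_b: torch.Tensor) -> torch.Tensor:
    """emb_a: (N, D) and emb_b: (M, D) patch embeddings -> (N, M) match scores."""
    a = F.normalize(emb_a, dim=1)
    b = F.normalize(emb_b, dim=1)
    return a @ b.T                     # cosine similarity for every patch pair

emb_a, emb_b = torch.randn(16, 64), torch.randn(24, 64)
pred = correspondence_matrix(emb_a, emb_b)
target = torch.zeros(16, 24)
target[0, 3] = 1.0                     # toy ground-truth correspondence
loss = F.binary_cross_entropy_with_logits(pred, target)
print(loss.item())
```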

Multimodal, Multi-Disease Medical Imaging Foundation Model (MerMED-FM)

Yang Zhou, Chrystie Wan Ning Quek, Jun Zhou, Yan Wang, Yang Bai, Yuhe Ke, Jie Yao, Laura Gutierrez, Zhen Ling Teo, Darren Shu Jeng Ting, Brian T. Soetikno, Christopher S. Nielsen, Tobias Elze, Zengxiang Li, Linh Le Dinh, Lionel Tim-Ee Cheng, Tran Nguyen Tuan Anh, Chee Leong Cheng, Tien Yin Wong, Nan Liu, Iain Beehuat Tan, Tony Kiat Hon Lim, Rick Siow Mong Goh, Yong Liu, Daniel Shu Wei Ting

arxiv preprint · Jun 30 2025
Current artificial intelligence models for medical imaging are predominantly single-modality and single-disease. Attempts to create multimodal and multi-disease models have resulted in inconsistent clinical accuracy. Furthermore, training these models typically requires large, labour-intensive, well-labelled datasets. We developed MerMED-FM, a state-of-the-art multimodal, multi-specialty foundation model trained using self-supervised learning and a memory module. MerMED-FM was trained on 3.3 million medical images from over ten specialties and seven modalities, including computed tomography (CT), chest X-rays (CXR), ultrasound (US), pathology patches, color fundus photography (CFP), optical coherence tomography (OCT), and dermatology images. MerMED-FM was evaluated across multiple diseases and compared against existing foundation models. Strong performance was achieved across all modalities, with AUROCs of 0.988 (OCT), 0.982 (pathology), 0.951 (US), 0.943 (CT), 0.931 (skin), 0.894 (CFP), and 0.858 (CXR). MerMED-FM has the potential to be a highly adaptable, versatile, cross-specialty foundation model that enables robust medical imaging interpretation across diverse medical disciplines.
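
The abstract does not specify the memory module; the sketch below shows one common form used in self-supervised training, a fixed-size FIFO feature bank that supplies extra negatives for contrastive objectives. MerMED-FM's actual module may differ.

```python
import torch
import torch.nn.functional as F

class FeatureQueue:
    """Fixed-size FIFO bank of L2-normalized feature vectors."""

    def __init__(self, size: int = 4096, dim: int = 128):
        self.bank = F.normalize(torch.randn(size, dim), dim=1)
        self.ptr = 0

    def enqueue(self, feats: torch.Tensor) -> None:
        n = feats.shape[0]
        idx = (self.ptr + torch.arange(n)) % self.bank.shape[0]
        self.bank[idx] = F.normalize(feats, dim=1)   # overwrite the oldest entries
        self.ptr = int((self.ptr + n) % self.bank.shape[0])

queue = FeatureQueue()
queue.enqueue(torch.randn(32, 128))   # push a batch of encoder outputs
print(queue.bank.shape, queue.ptr)    # torch.Size([4096, 128]) 32
```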