
Digitalization of Prison Records Supports Artificial Intelligence Application.

Whitford WG

PubMed | Jul 14, 2025
Artificial intelligence (AI)-empowered data processing tools improve our ability to assess, measure, and enhance medical interventions. AI-based tools automate the extraction of data from histories, test results, imaging, prescriptions, and treatment outcomes, and transform them into unified, accessible records. They are powerful in converting unstructured data such as clinical notes, magnetic resonance images, and electroencephalograms into structured, actionable formats, for example by extracting and classifying diseases, symptoms, medications, treatments, and dates from even incomplete and fragmented clinical notes, pathology reports, images, and histological markers. Because the demographics within correctional facilities diverge greatly from those of the general population, the adoption of electronic health records and AI-enabled data processing will play a crucial role in improving disease detection, treatment management, and the overall efficiency of health care within prison systems.
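In its simplest form, the kind of entity extraction described above can be approximated with pattern matching over free text; a minimal sketch, where the note text, the date formats, and the crude drug-plus-dose pattern are all hypothetical illustrations rather than anything from the article:

```python
import re

def extract_entities(note: str) -> dict:
    """Pull dates and dosed medication mentions out of a free-text clinical note."""
    # ISO-style (YYYY-MM-DD) and US-style (MM/DD/YYYY) dates
    dates = re.findall(r"\b(?:\d{4}-\d{2}-\d{2}|\d{1,2}/\d{1,2}/\d{4})\b", note)
    # crude pattern: a capitalized drug name followed by a dose in mg
    meds = re.findall(r"\b([A-Z][a-z]+)\s+(\d+\s*mg)\b", note)
    return {"dates": dates, "medications": meds}

note = "Pt seen 2025-07-14. Started Metformin 500 mg; follow-up 08/01/2025."
print(extract_entities(note))
```

Production systems use trained clinical NLP models rather than regexes, but the input/output shape (unstructured note in, structured fields out) is the same.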

Multimodal Deep Learning Model Based on Ultrasound and Cytological Images Predicts Risk Stratification of cN0 Papillary Thyroid Carcinoma.

He F, Chen S, Liu X, Yang X, Qin X

PubMed | Jul 14, 2025
Accurately assessing the risk stratification of cN0 papillary thyroid carcinoma (PTC) preoperatively aids in making treatment decisions. We integrated preoperative ultrasound and cytological images of patients to develop and validate a multimodal deep learning (DL) model for non-invasive assessment of cN0 PTC risk stratification before surgery. In this retrospective multicenter study, we developed a comprehensive DL model based on ultrasound and cytological images. The model was trained and validated on 890 PTC patients undergoing thyroidectomy and lymph node dissection across five medical centers. The testing group included 107 patients from one medical center. We analyzed the model's performance, including the area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity. The combined DL model demonstrated strong performance, with an area under the curve (AUC) of 0.922 (0.866-0.979) in the internal validation group and an AUC of 0.845 (0.794-0.895) in the testing group. The diagnostic performance of the combined DL model surpassed that of clinical models. Image region heatmaps assisted in interpreting the diagnosis of risk stratification. The multimodal DL model based on ultrasound and cytological images can accurately determine the risk stratification of cN0 PTC and guide treatment decisions.
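The AUC values reported here have a direct rank-statistic interpretation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal sketch of that calculation, using toy labels and scores rather than the study's data:

```python
def auc_score(labels, scores):
    """AUC as the probability that a random positive outranks a random
    negative (Mann-Whitney U statistic), counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc_score([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # 0.75
```

This pairwise definition is equivalent to integrating the ROC curve, which is how libraries such as scikit-learn compute it.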

STF: A Spherical Transformer for Versatile Cortical Surfaces Applications.

Cheng J, Zhao F, Wu Z, Yuan X, Wang L, Gilmore JH, Lin W, Zhang X, Li G

PubMed | Jul 14, 2025
Inspired by the remarkable success of attention mechanisms in various applications, there is a growing need to adapt the Transformer architecture from conventional Euclidean domains to non-Euclidean spaces commonly encountered in medical imaging. Structures such as brain cortical surfaces, represented by triangular meshes, exhibit spherical topology and present unique challenges. To address this, we propose the Spherical Transformer (STF), a versatile backbone that leverages self-attention for analyzing cortical surface data. Our approach involves mapping cortical surfaces onto a sphere, dividing them into overlapping patches, and tokenizing both patches and vertices. By performing self-attention at patch and vertex levels, the model simultaneously captures global dependencies and preserves fine-grained contextual information within each patch. Overlapping regions between neighboring patches naturally enable efficient cross-patch information sharing. To handle longitudinal cortical surface data, we introduce the spatiotemporal self-attention mechanism, which jointly captures spatial context and temporal developmental patterns within a single layer. This innovation enhances the representational power of the model, making it well-suited for dynamic surface data. We evaluate the Spherical Transformer on key tasks, including cognition prediction at the surface level and two vertex-level tasks: cortical surface parcellation and cortical property map prediction. Across these applications, our model consistently outperforms state-of-the-art methods, demonstrating its ability to effectively model global dependencies and preserve detailed spatial information. The results highlight its potential as a general-purpose framework for cortical surface analysis.
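The patch- and vertex-level self-attention described above rests on standard scaled dot-product attention applied to a set of tokens. A minimal single-head NumPy sketch with random toy embeddings, not the STF implementation itself:

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a token set.
    x: (n_tokens, d) -- e.g. patch or vertex embeddings on the sphere."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[1])            # pairwise similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
    return weights @ v                                # weighted mix of values

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 8))                       # 6 tokens, dim 8
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)  # (6, 8)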

Deep Learning Applications in Lymphoma Imaging.

Sorin V, Cohen I, Lekach R, Partovi S, Raskin D

PubMed | Jul 14, 2025
Lymphomas are a diverse group of disorders characterized by the clonal proliferation of lymphocytes. While definitive diagnosis of lymphoma relies on histopathology, immune-phenotyping and additional molecular analyses, imaging modalities such as PET/CT, CT, and MRI play a central role in the diagnostic process and management, from assessing disease extent, to evaluation of response to therapy and detecting recurrence. Artificial intelligence (AI), particularly deep learning models like convolutional neural networks (CNNs), is transforming lymphoma imaging by enabling automated detection, segmentation, and classification. This review elaborates on recent advancements in deep learning for lymphoma imaging and its integration into clinical practice. Challenges include obtaining high-quality, annotated datasets, addressing biases in training data, and ensuring consistent model performance. Ongoing efforts are focused on enhancing model interpretability, incorporating diverse patient populations to improve generalizability, and ensuring safe and effective integration of AI into clinical workflows, with the goal of improving patient outcomes.

ESE and Transfer Learning for Breast Tumor Classification.

He Y, Batumalay M, Thinakaran R

PubMed | Jul 14, 2025
In this study, we proposed a lightweight neural network architecture based on an inverted residual network, an efficient squeeze excitation (ESE) module, and double transfer learning, called TLese-ResNet, for breast cancer molecular subtype recognition. The inverted ResNet reduces the number of network parameters while enhancing cross-layer gradient propagation and feature expression capabilities. The introduction of the ESE module reduces network complexity while preserving channel-wise relationships. The dataset of this study comes from the mammography images of patients diagnosed with invasive breast cancer in a hospital in Jiangxi. The dataset comprises preoperative mammography images with CC and MLO views. Given that the dataset is somewhat small, in addition to the commonly used data augmentation methods, double transfer learning is also used. Double transfer learning includes the first transfer, in which the source domain is ImageNet and the target domain is the COVID-19 chest X-ray image dataset, and the second transfer, in which the source domain is the target domain of the first transfer, and the target domain is the mammography dataset we collected. Using five-fold cross-validation, the mean accuracy and area under the receiver operating characteristic curve on mammographic images of CC and MLO views were 0.818 and 0.883, respectively, outperforming other state-of-the-art deep learning-based models such as ResNet-50 and DenseNet-121. Therefore, the proposed model can provide clinicians with an effective and non-invasive auxiliary tool for molecular subtype identification of breast cancer.
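The channel gating at the heart of a squeeze-excitation block is compact enough to sketch directly. A hedged NumPy illustration of the general effective-squeeze-excitation pattern (global pool, a single channel-wise gate, sigmoid rescaling); the parameterization here is illustrative, not the paper's exact module:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ese_block(feat, w, b):
    """Effective squeeze-excitation: global-average-pool each channel,
    pass the pooled vector through one linear layer, and rescale the
    feature map by the resulting sigmoid gate.
    feat: (C, H, W); w: (C, C) gate weights; b: (C,) gate bias."""
    pooled = feat.mean(axis=(1, 2))        # squeeze: one scalar per channel
    gate = sigmoid(w @ pooled + b)         # excitation: values in (0, 1)
    return feat * gate[:, None, None]      # channel-wise rescaling

rng = np.random.default_rng(1)
feat = rng.standard_normal((4, 8, 8))      # toy feature map: 4 channels
out = ese_block(feat, np.eye(4), np.zeros(4))
print(out.shape)  # (4, 8, 8)
```

The single-layer gate is what distinguishes the "effective" variant from the original two-layer squeeze-excitation bottleneck, trading a little expressiveness for fewer parameters, which matches the lightweight design goal stated above.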

Predicting the molecular subtypes of 2021 WHO grade 4 glioma by a multiparametric MRI-based machine learning model.

Xu W, Li Y, Zhang J, Zhang Z, Shen P, Wang X, Yang G, Du J, Zhang H, Tan Y

PubMed | Jul 14, 2025
Accurately distinguishing the different molecular subtypes of 2021 World Health Organization (WHO) grade 4 central nervous system (CNS) gliomas is highly relevant for prognostic stratification and personalized treatment. We aimed to develop and validate a machine learning (ML) model using multiparametric MRI for the preoperative differentiation of astrocytoma, CNS WHO grade 4, from glioblastoma, isocitrate dehydrogenase-wild-type (GBM, IDH-wt) (WHO 2021) (Task 1: grade 4 vs. GBM), and to stratify astrocytoma, CNS WHO grade 4, by distinguishing astrocytoma, IDH-mutant (IDH-mut), CNS WHO grade 4 from astrocytoma, IDH-wild-type (IDH-wt), CNS WHO grade 4 (Task 2: IDH-mut grade 4 vs. IDH-wt grade 4). We also evaluated the model's prognostic value. We retrospectively analyzed 320 glioma patients from three hospitals (training/testing split, 7:3 ratio) and 99 patients from The Cancer Genome Atlas (TCGA) database for external validation. Radiomic features were extracted from tumor and edema regions on contrast-enhanced T1-weighted imaging (CE-T1WI) and T2 fluid-attenuated inversion recovery (T2-FLAIR). Extreme gradient boosting (XGBoost) was used to construct the ML, clinical, and combined models. Model performance was evaluated with receiver operating characteristic (ROC) curves, decision curves, and calibration curves; stability was assessed using six additional classifiers. Kaplan-Meier (KM) survival analysis and the log-rank test assessed the model's prognostic value. In both tasks, the combined model (AUC = 0.907, 0.852, and 0.830 for Task 1; AUC = 0.899, 0.895, and 0.792 for Task 2) and the optimal ML model (AUC = 0.902, 0.854, and 0.832 for Task 1; AUC = 0.904, 0.899, and 0.783 for Task 2) significantly outperformed the clinical model (AUC = 0.671, 0.656, and 0.543 for Task 1; AUC = 0.619, 0.605, and 0.400 for Task 2) in the training, testing, and validation sets. Survival analysis showed the combined model performed similarly to molecular subtype in both tasks (p = 0.964 and p = 0.746). The multiparametric MRI ML model effectively distinguished astrocytoma, CNS WHO grade 4 from GBM, IDH-wt (WHO 2021) and differentiated astrocytoma, IDH-mut from astrocytoma, IDH-wt, CNS WHO grade 4. Additionally, the model provided reliable survival stratification for glioma patients across different molecular subtypes.
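The Kaplan-Meier survival analysis used for prognostic evaluation follows the standard product-limit estimator: survival probability drops multiplicatively at each observed event time by the fraction of at-risk patients who had the event. A minimal pure-Python sketch on toy follow-up data, not patient data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.
    times: follow-up times; events: 1 = event observed, 0 = censored."""
    s, curve = 1.0, []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        r = sum(1 for ti in times if ti >= t)   # patients still at risk at t
        if d:
            s *= 1 - d / r                      # survival drops at event times
        curve.append((t, s))
    return curve

# toy follow-up data: times in months, event flag 0 marks censoring
curve = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
print(curve)
```

Censored patients (event flag 0) contribute to the at-risk counts until their last follow-up but never trigger a drop, which is the estimator's key property.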

A hybrid learning approach for MRI-based detection of Alzheimer's disease stages using dual CNNs and ensemble classifier.

Zolfaghari S, Joudaki A, Sarbaz Y

PubMed | Jul 14, 2025
Alzheimer's Disease (AD) and related dementias are significant global health issues characterized by progressive cognitive decline and memory loss. Computer-aided systems can help physicians in the early and accurate detection of AD, enabling timely intervention and effective management. This study presents a combination of two parallel Convolutional Neural Networks (CNNs) and an ensemble learning method for classifying AD stages using Magnetic Resonance Imaging (MRI) data. Initially, these images were resized and augmented before being input into Network 1 and Network 2, which have different structures and layers to extract important features. These features were then fused and fed into an ensemble learning classifier containing Support Vector Machine, Random Forest, and K-Nearest Neighbors, with hyperparameters optimized by the Grid Search Cross-Validation technique. Using the features of Network 1 and Network 2 individually with the ensemble classifier, the four classes were identified with accuracies of 95.16% and 97.97%, respectively. Using the fused features from both networks, however, yielded a classification accuracy of 99.06%. These findings imply the potential of the proposed hybrid approach in the classification of AD stages. As the evaluation was conducted at the slice level using a Kaggle dataset, additional subject-level validation and clinical testing are required to determine its real-world applicability.
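One common way to combine per-classifier outputs like the SVM, RF, and KNN predictions above is a simple majority vote. A minimal sketch with hypothetical stage labels, not the study's outputs (the study fuses features before classification, so this vote is an illustrative simplification):

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-classifier label lists by majority vote.
    predictions: one list of predicted labels per classifier."""
    n_samples = len(predictions[0])
    fused = []
    for i in range(n_samples):
        votes = Counter(p[i] for p in predictions)
        fused.append(votes.most_common(1)[0][0])   # most frequent label wins
    return fused

# hypothetical per-classifier predictions over four scans
svm = ["AD", "CN", "MCI", "AD"]
rf  = ["AD", "MCI", "MCI", "CN"]
knn = ["CN", "CN", "MCI", "AD"]
print(majority_vote([svm, rf, knn]))  # ['AD', 'CN', 'MCI', 'AD']
```

Weighted or probability-averaged voting are drop-in refinements when per-classifier confidences are available.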

Classification of Renal Lesions by Leveraging Hybrid Features from CT Images Using Machine Learning Techniques.

Kaur R, Khattar S, Singla S

PubMed | Jul 14, 2025
Renal cancer is among the leading contributors to rising mortality rates globally, and early detection and diagnosis can reduce this burden. The classification of lesions is based mostly on their characteristics, which include varied shape and texture properties. Computed tomography (CT) imaging is a regularly used modality for studying the renal soft tissues. Furthermore, a radiologist's ability to assess a large corpus of CT images is limited, which can lead to misdiagnosis of kidney lesions and, in turn, to cancer progression or unnecessary chemotherapy. To address these challenges, this study presents a machine learning technique based on a novel feature vector for the automated classification of renal lesions using multi-model texture-based feature extraction. The proposed feature vector could serve as an integral component of a computer-aided diagnosis (CAD) system for characterizing the texture of renal lesions and can assist physicians in providing more precise lesion interpretation. In this work, the authors employed different texture models for the analysis of CT scans in order to classify benign and malignant kidney lesions. Texture analysis is performed using features such as first-order statistics (FoS), spatial gray level co-occurrence matrix (SGLCM), Fourier power spectrum (FPS), statistical feature matrix (SFM), Law's texture energy measures (TEM), gray level difference statistics (GLDS), fractal features, and the neighborhood gray tone difference matrix (NGTDM). Multiple texture models were used to quantify renal texture patterns, applying image texture analysis to a selected region of interest (ROI) within the renal lesions. In addition, dimensionality reduction is employed to discover the most discriminative features for categorizing benign and malignant lesions, and a unique feature vector based on correlation-based feature selection, information gain, and gain ratio is proposed. Different machine learning-based classifiers were employed to test the performance of the proposed features, among which the random forest (RF) model outperformed all other techniques in distinguishing benign from malignant tumors across the performance evaluation metrics. The proposed system was validated on a dataset of 50 subjects, achieving a classification accuracy of 95.8% and outperforming other conventional models.
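Of the texture models listed above, SGLCM features are derived from a gray-level co-occurrence matrix: a normalized count of how often pairs of gray levels appear at a fixed pixel offset. A minimal NumPy sketch of the matrix and one derived feature (contrast), using a toy 3x3 image rather than CT data:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
    the basis of SGLCM texture features such as contrast and energy."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()                      # normalize to joint probabilities

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 3, 3]])
p = glcm(img)
# contrast weights each co-occurrence by squared gray-level difference
contrast = sum(p[i, j] * (i - j) ** 2 for i in range(4) for j in range(4))
print(round(contrast, 3))  # 0.5
```

Real pipelines compute such matrices at several offsets and angles over the lesion ROI and summarize them with a handful of scalar features per matrix.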

A radiomics-clinical predictive model for difficult laparoscopic cholecystectomy based on preoperative CT imaging: a retrospective single center study.

Sun RT, Li CL, Jiang YM, Hao AY, Liu K, Li K, Tan B, Yang XN, Cui JF, Bai WY, Hu WY, Cao JY, Qu C

PubMed | Jul 14, 2025
Accurately identifying difficult laparoscopic cholecystectomy (DLC) preoperatively remains a clinical challenge. Previous studies utilizing clinical variables or morphological imaging markers have demonstrated suboptimal predictive performance. This study aims to develop an optimal radiomics-clinical model by integrating preoperative CT-based radiomics features with clinical characteristics. A retrospective analysis was conducted on 2,055 patients who underwent laparoscopic cholecystectomy (LC) for cholecystitis at our center. Preoperative CT images were processed with super-resolution reconstruction to improve consistency, and high-throughput radiomic features were extracted from the gallbladder wall region. A combination of radiomic and clinical features was selected using the Boruta-LASSO algorithm. Predictive models were constructed using six machine learning algorithms and validated, with model performance evaluated based on the AUC, accuracy, Brier score, and decision curve analysis (DCA) to identify the optimal model. Model interpretability was further enhanced using the SHAP method. The Boruta-LASSO algorithm identified 10 key radiomic and clinical features for model construction, including the Rad-Score, gallbladder wall thickness, fibrinogen, C-reactive protein, and low-density lipoprotein cholesterol. Among the six machine learning models developed, the radiomics-clinical model based on the random forest algorithm demonstrated the best predictive performance, with an AUC of 0.938 in the training cohort and 0.874 in the validation cohort. The Brier score, calibration curve, and DCA confirmed the superior predictive capability of this model, significantly outperforming previously published models. The SHAP analysis further visualized feature importance, enhancing model interpretability. This study developed the first radiomics-clinical random forest model for the preoperative prediction of DLC using machine learning algorithms. This predictive model supports safer and individualized surgical planning and treatment strategies.
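The Brier score used above for calibration assessment is simply the mean squared difference between predicted probabilities and binary outcomes; lower is better, with 0 for a perfect predictor and 0.25 for a constant 0.5 guess. A minimal sketch with hypothetical predicted DLC probabilities, not the study's predictions:

```python
def brier_score(probs, outcomes):
    """Mean squared error of predicted probabilities against 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# hypothetical predicted probabilities vs. observed binary outcomes
print(brier_score([0.9, 0.2, 0.7, 0.1], [1, 0, 1, 0]))
```

Unlike AUC, which only ranks cases, the Brier score penalizes miscalibrated probabilities, which is why the two are reported together.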

Comparing large language models and text embedding models for automated classification of textual, semantic, and critical changes in radiology reports.

Lindholz M, Burdenski A, Ruppel R, Schulze-Weddige S, Baumgärtner GL, Schobert I, Haack AM, Eminovic S, Milnik A, Hamm CA, Frisch A, Penzkofer T

PubMed | Jul 14, 2025
Radiology reports can change during workflows, especially when residents draft preliminary versions that attending physicians finalize. We explored how large language models (LLMs) and embedding techniques can categorize these changes into textual, semantic, or clinically actionable types. We evaluated 400 adult CT reports drafted by residents against finalized versions by attending physicians. Changes were rated on a five-point scale from no changes to critical ones. We examined open-source LLMs alongside traditional metrics like normalized word differences, Levenshtein and Jaccard similarity, and text embedding similarity. Model performance was assessed using quadratic weighted Cohen's kappa (κ), (balanced) accuracy, F<sub>1</sub>, precision, and recall. Inter-rater reliability among evaluators was excellent (κ = 0.990). Of the reports analyzed, 1.3% contained critical changes. The tested methods showed significant performance differences (P < 0.001). The Qwen3-235B-A22B model, using a zero-shot prompt, most closely aligned with human assessments of changes in clinical reports, achieving a κ of 0.822 (SD 0.031). The best conventional metric, word difference, had a κ of 0.732 (SD 0.048); the difference between the two was statistically significant in unadjusted post-hoc tests (P = 0.038) but lost significance after adjusting for multiple testing (P = 0.064). Embedding models underperformed compared to LLMs and classical methods, with statistical significance in most cases. Large language models like Qwen3-235B-A22B demonstrated moderate to strong alignment with expert evaluations of the clinical significance of changes in radiology reports. LLMs outperformed embedding methods and traditional string and word approaches, achieving statistical significance in most instances. This demonstrates their potential as tools to support peer review.
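The quadratic weighted Cohen's kappa used throughout this evaluation penalizes disagreements by the squared distance between ordinal ratings, so confusing "no change" with "critical" costs far more than confusing adjacent categories. A minimal pure-Python sketch with toy ratings, not the study's annotations:

```python
def quadratic_weighted_kappa(a, b, n_classes):
    """Cohen's kappa with quadratic weights for ordinal ratings
    (e.g. a five-point change-severity scale)."""
    n = len(a)
    # disagreement weight grows with squared rating distance
    w = [[(i - j) ** 2 / (n_classes - 1) ** 2 for j in range(n_classes)]
         for i in range(n_classes)]
    obs = [[0.0] * n_classes for _ in range(n_classes)]
    for x, y in zip(a, b):
        obs[x][y] += 1 / n                       # observed joint distribution
    pa = [sum(1 for x in a if x == i) / n for i in range(n_classes)]
    pb = [sum(1 for y in b if y == i) / n for i in range(n_classes)]
    num = sum(w[i][j] * obs[i][j]
              for i in range(n_classes) for j in range(n_classes))
    den = sum(w[i][j] * pa[i] * pb[j]            # chance-expected disagreement
              for i in range(n_classes) for j in range(n_classes))
    return 1 - num / den

print(quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4], 5))  # 1.0
```

A value of 1 indicates perfect agreement, 0 indicates chance-level agreement, and the quadratic weighting is what makes the metric appropriate for ordered severity scales.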