Multi-modal convolutional neural network-based thyroid cytology classification and diagnosis.

Yang D, Li T, Li L, Chen S, Li X

PubMed · Jul 4 2025
The cytologic diagnosis of benign and malignant thyroid nodules based on cytological smears obtained through ultrasound-guided fine-needle aspiration is crucial for determining subsequent treatment plans. Artificial intelligence (AI) can assist pathologists in improving the efficiency and accuracy of cytological diagnoses. We propose a novel diagnostic model based on a network architecture that integrates cytologic images and digital ultrasound image features (CI-DUF) to solve the multi-class classification task of thyroid fine-needle aspiration cytology. We compare this model with a model relying solely on cytologic images (CI) and evaluate its performance and clinical application potential in thyroid cytology diagnosis. A retrospective analysis was conducted on 384 patients with 825 thyroid cytologic images. These images were divided into training and testing sets in an 8:2 ratio to assess the performance of both the CI and CI-DUF diagnostic models. The AUROC of the CI model for thyroid cytology diagnosis was 0.9119, while that of the CI-DUF model was 0.9326. Compared with the CI model, the CI-DUF model showed significantly higher accuracy, sensitivity, and specificity in the cytologic classification of papillary carcinoma, follicular neoplasm, medullary carcinoma, and benign lesions. The proposed CI-DUF diagnostic model, which integrates multi-modal information, shows better diagnostic performance than the CI model that relies only on cytologic images, particularly excelling in thyroid cytology classification.
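As a concrete illustration of this kind of multi-modal fusion, the sketch below combines a CNN branch over the cytologic image with an MLP over ultrasound-derived features. The backbone choice, embedding sizes, and late fusion by concatenation are illustrative assumptions; the paper does not publish its exact CI-DUF architecture.

```python
# Sketch of a two-branch fusion network in the spirit of CI-DUF; all sizes
# and the fusion strategy are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CIDUFSketch(nn.Module):
    def __init__(self, num_ultrasound_feats: int = 16, num_classes: int = 4):
        super().__init__()
        backbone = resnet18(weights=None)   # image branch (cytologic smear)
        backbone.fc = nn.Identity()         # keep the 512-d embedding
        self.image_branch = backbone
        self.us_branch = nn.Sequential(     # ultrasound feature branch
            nn.Linear(num_ultrasound_feats, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.classifier = nn.Linear(512 + 64, num_classes)

    def forward(self, image, us_feats):
        z = torch.cat([self.image_branch(image), self.us_branch(us_feats)], dim=1)
        return self.classifier(z)

model = CIDUFSketch()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 16))  # (2, 4)
```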

Radiological and Biological Dictionary of Radiomics Features: Addressing Understandable AI Issues in Personalized Prostate Cancer, Dictionary Version PM1.0.

Salmanpour MR, Amiri S, Gharibi S, Shariftabrizi A, Xu Y, Weeks WB, Rahmim A, Hacihaliloglu I

PubMed · Jul 3 2025
Artificial intelligence (AI) can advance medical diagnostics, but limited interpretability restricts its clinical use. This work links standardized quantitative radiomics features (RFs) extracted from medical images with clinical frameworks like PI-RADS, ensuring AI models are understandable and aligned with clinical practice. We investigate the connection between visual semantic features defined in PI-RADS and associated risk factors, moving beyond abnormal imaging findings, and establish a shared framework between medical and AI professionals by creating a standardized radiological/biological RF dictionary. Six interpretable and seven complex classifiers, combined with nine interpretable feature selection algorithms (FSAs), were applied to RFs extracted from segmented lesions in T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and apparent diffusion coefficient (ADC) multiparametric MRI sequences to predict TCIA-UCLA scores, grouped as low-risk (scores 1-3) and high-risk (scores 4-5). We then utilized the created dictionary to interpret the best predictive models. Combining sequences with FSAs including the ANOVA F-test, correlation coefficient, and Fisher score, and utilizing logistic regression, identified key features: the 90th percentile from T2WI (reflecting hypo-intensity related to prostate cancer risk); variance from T2WI (lesion heterogeneity); shape metrics from ADC, including least axis length and surface-area-to-volume ratio (lesion shape and compactness); and run entropy from ADC (texture consistency). This approach achieved the highest average accuracy of 0.78 ± 0.01, significantly outperforming single-sequence methods (p < 0.05). The developed dictionary for Prostate-MRI (PM1.0) serves as a common language and fosters collaboration between clinical professionals and AI developers to advance trustworthy AI solutions that support reliable, interpretable clinical decisions.
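A minimal sketch of the best-performing pairing named above, an ANOVA F-test selector feeding logistic regression, assuming placeholder data and an arbitrary number of selected features:

```python
# Interpretable FSA + interpretable classifier pipeline; the data, feature
# count, and k are placeholders, not the study's actual inputs.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X = np.random.rand(120, 200)          # radiomics features from T2WI/DWI/ADC
y = np.random.randint(0, 2, 120)      # 0 = low-risk (1-3), 1 = high-risk (4-5)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=10)),    # ANOVA F-test FSA
    ("clf", LogisticRegression(max_iter=1000)),  # interpretable classifier
])
print(cross_val_score(pipe, X, y, cv=5).mean())
```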

Development of a deep learning-based automated diagnostic system (DLADS) for classifying mammographic lesions - a first large-scale multi-institutional clinical trial in Japan.

Yamaguchi T, Koyama Y, Inoue K, Ban K, Hirokaga K, Kujiraoka Y, Okanami Y, Shinohara N, Tsunoda H, Uematsu T, Mukai H

PubMed · Jul 3 2025
Recently, Western countries have built evidence on mammographic artificial intelligence computer-aided diagnosis (AI-CADx) systems; however, their effectiveness has not yet been sufficiently validated in Japanese women. In this study, we aimed to establish the first Japanese mammographic AI-CADx system. We retrospectively collected screening or diagnostic mammograms from 63 institutions in Japan. We then randomly divided the images into training, validation, and test datasets in a balanced 8:1:1 ratio on a case-level basis. The gold standard for annotation was mammographic findings based on pathologic references. The AI-CADx system was developed using SE-ResNet modules and a sliding-window algorithm, with the heatmap cut-off concentration gradient set at 15%. The system was considered accurate if it detected the presence of a malignant lesion in a breast cancer mammogram. The primary endpoint was defined as a sensitivity and specificity of over 80% for breast cancer diagnosis in the test dataset. We collected 20,638 mammograms from 11,450 Japanese women with a median age of 55 years, comprising 5,019 breast cancer (24.3%), 5,026 benign (24.4%), and 10,593 normal (51.3%) mammograms. In the test dataset of 2,059 mammograms, the AI-CADx system achieved a sensitivity of 83.5% and a specificity of 84.7% for breast cancer diagnosis, with an AUC of 0.841 (DeLong 95% CI: 0.822-0.859). Accuracy was largely consistent regardless of breast density, mammographic findings, cancer type, and mammography vendor (AUC range: 0.639-0.906). The developed Japanese mammographic AI-CADx system thus diagnosed breast cancer with the pre-specified sensitivity and specificity. We are planning a prospective study to validate its breast cancer diagnostic performance when used by Japanese physicians as a second reader. UMIN, trial number UMIN000039009. Registered 26 December 2019, https://www.umin.ac.jp/ctr/.
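The squeeze-and-excitation (SE) module underlying SE-ResNet, the named building block of this system, can be sketched as below; the channel count and reduction ratio are illustrative, and this is not the authors' implementation:

```python
# Squeeze-and-excitation block: reweight channels by globally pooled context.
# Channel sizes and reduction ratio are illustrative assumptions.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                           # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                      # squeeze: global average pool
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)  # excitation: channel weights
        return x * w                                # rescale feature maps

out = SEBlock(64)(torch.randn(1, 64, 32, 32))       # same shape as input
```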

Deep neural hashing for content-based medical image retrieval: A survey.

Manna A, Sista R, Sheet D

PubMed · Jul 3 2025
The ever-growing digital repositories of medical data provide opportunities for advanced healthcare by forming the foundation of a digital healthcare ecosystem. Such an ecosystem facilitates digitized solutions for early diagnosis, evidence-based treatment, precision medicine, and related tasks. Content-based medical image retrieval (CBMIR) plays a pivotal role in delivering advanced diagnostic healthcare within such an ecosystem, and deep neural hashing (DNH) has been introduced into CBMIR systems to enable faster and more relevant retrieval from these large repositories. The fusion of DNH with CBMIR is a blooming area whose potential, impact, and methods have not yet been summarized. This survey addresses that gap through an in-depth exploration of DNH methods for CBMIR, portraying an end-to-end pipeline for DNH within a CBMIR system. As part of this, the design of the DNH network, diverse learning strategies, different loss functions, and evaluation metrics for retrieval performance are discussed in detail. The learning strategies for DNH are further explored by categorizing them, based on the loss function, into pointwise, pairwise, and triplet-wise approaches. Centered around this categorization, various existing methods are discussed in depth, focusing on the key contributions of each. Finally, the future vision for this field is shared by emphasizing three aspects: current and immediate areas of research, translating current and near-future research into practical applications, and unexplored research topics for the future. In summary, this survey depicts the current state of research and the future vision of CBMIR systems with DNH.
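As an example of the triplet-wise category described in the survey, the sketch below combines a standard triplet margin loss over tanh-relaxed hash codes with a quantization penalty pushing codes toward {-1, +1}; the margin, code length, and penalty weight are illustrative assumptions:

```python
# Triplet-wise deep hashing loss sketch: pull a query's relaxed hash code
# toward a same-class image, push it from a different-class one, and penalize
# codes far from {-1, +1}. Margin and weights are illustrative assumptions.
import torch
import torch.nn.functional as F

def triplet_hashing_loss(h_anchor, h_pos, h_neg, margin=2.0, quant_weight=0.1):
    # h_* are tanh-relaxed codes in (-1, 1), shape (batch, code_bits)
    triplet = F.triplet_margin_loss(h_anchor, h_pos, h_neg, margin=margin)
    quantization = sum((h.abs() - 1).pow(2).mean() for h in (h_anchor, h_pos, h_neg))
    return triplet + quant_weight * quantization

h = lambda: torch.tanh(torch.randn(8, 48))   # stand-in 48-bit codes from a CNN head
loss = triplet_hashing_loss(h(), h(), h())
```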

MRI-based habitat, intra-, and peritumoral machine learning model for perineural invasion prediction in rectal cancer.

Zhong J, Huang T, Jiang R, Zhou Q, Wu G, Zeng Y

PubMed · Jul 3 2025
This study aimed to analyze preoperative multimodal magnetic resonance images of patients with rectal cancer using habitat-based, intratumoral, peritumoral, and combined radiomics models for non-invasive prediction of perineural invasion (PNI) status. Data were collected from 385 pathologically confirmed rectal cancer cases across two centers. Patients from Center 1 were randomly assigned to training and internal validation groups at an 8:2 ratio; the external validation group comprised patients from Center 2. Tumors were divided into three subregions via K-means clustering. Radiomics features were extracted from intratumoral and peritumoral (3 mm beyond the tumor) regions, as well as from the subregions, to form a combined dataset based on T2-weighted imaging and diffusion-weighted imaging. The support vector machine algorithm was used to construct seven predictive models; intratumoral, peritumoral, and subregion features were then integrated to generate an additional model, referred to as the Total model. For each radiomics feature, its contribution to prediction outcomes was quantified using Shapley values, providing interpretable evidence to support clinical decision-making. The Total model outperformed the other predictive models in the training, internal validation, and external validation sets (area under the curve values: 0.912, 0.882, and 0.880, respectively). The integration of intratumoral, peritumoral, and subregion features represents an effective approach for predicting PNI in rectal cancer, providing valuable guidance for treatment along with enhanced precision and reliability in clinical decision-making.
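The habitat step can be illustrated with a small sketch: K-means clusters tumor voxels into three subregions from per-voxel multiparametric intensities. The two-channel T2WI/DWI stack and all array shapes are assumptions made for illustration:

```python
# Habitat clustering sketch: K-means over per-voxel intensities inside the
# tumor mask yields three subregions for per-habitat radiomics. Volumes and
# the two-channel stack are placeholder assumptions.
import numpy as np
from sklearn.cluster import KMeans

t2w, dwi = np.random.rand(64, 64, 24), np.random.rand(64, 64, 24)
mask = np.zeros((64, 64, 24), bool); mask[20:40, 20:40, 8:16] = True  # tumor

voxels = np.stack([t2w[mask], dwi[mask]], axis=1)   # (n_voxels, 2)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(voxels)

habitat_map = np.zeros(mask.shape, int)             # 0 = background
habitat_map[mask] = labels + 1                      # subregions 1..3
```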

PiCME: Pipeline for Contrastive Modality Evaluation and Encoding in the MIMIC Dataset

Michal Golovanevsky, Pranav Mahableshwarkar, Carsten Eickhoff, Ritambhara Singh

arXiv preprint · Jul 3 2025
Multimodal deep learning holds promise for improving clinical prediction by integrating diverse patient data, including text, imaging, time-series, and structured demographics. Contrastive learning facilitates this integration by producing a unified representation that can be reused across tasks, reducing the need for separate models or encoders. Although contrastive learning has seen success in vision-language domains, its use in clinical settings remains largely limited to image and text pairs. We propose the Pipeline for Contrastive Modality Evaluation and Encoding (PiCME), which systematically assesses five clinical data types from MIMIC: discharge summaries, radiology reports, chest X-rays, demographics, and time-series. We pre-train contrastive models on all 26 combinations of two to five modalities and evaluate their utility on in-hospital mortality and phenotype prediction. To address performance plateaus with more modalities, we introduce a Modality-Gated LSTM that weights each modality according to its contrastively learned importance. Our results show that contrastive models remain competitive with supervised baselines, particularly in three-modality settings. Performance declines beyond three modalities, which supervised models fail to recover. The Modality-Gated LSTM mitigates this drop, improving AUROC from 73.19% to 76.93% and AUPRC from 51.27% to 62.26% in the five-modality setting. We also compare contrastively learned modality importance scores with attribution scores and evaluate generalization across demographic subgroups, highlighting strengths in interpretability and fairness. PiCME is the first to scale contrastive learning across all modality combinations in MIMIC, offering guidance for modality selection, training strategies, and equitable clinical prediction.
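One way the modality-gating idea could look in code is sketched below: a learned importance weight scales each modality embedding before an LSTM fuses the modality sequence. The dimensions, softmax gating, and classification head are assumptions; the actual PiCME architecture may differ.

```python
# Modality-gating sketch: per-modality importance weights rescale the five
# modality embeddings before LSTM fusion. All sizes and the gating form are
# illustrative assumptions, not the published PiCME model.
import torch
import torch.nn as nn

class ModalityGatedLSTM(nn.Module):
    def __init__(self, num_modalities=5, dim=128, num_classes=2):
        super().__init__()
        self.gates = nn.Parameter(torch.zeros(num_modalities))  # importance logits
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, z):                     # z: (batch, num_modalities, dim)
        w = torch.softmax(self.gates, dim=0)  # per-modality weights
        _, (hn, _) = self.lstm(z * w[None, :, None])
        return self.head(hn[-1])

logits = ModalityGatedLSTM()(torch.randn(4, 5, 128))  # (4, 2)
```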

Multi-task machine learning reveals the functional neuroanatomy fingerprint of mental processing

Wang, Z., Chen, Y., Pan, Y., Yan, J., Mao, W., Xiao, Z., Cao, G., Toussaint, P.-J., Guo, W., Zhao, B., Sun, H., Zhang, T., Evans, A. C., Jiang, X.

bioRxiv preprint · Jul 3 2025
Mental processing delineates the functions of the human mind, encompassing a wide range of motor, sensory, emotional, and cognitive processes, each of which is underlain by neuroanatomical substrates. Identifying an accurate representation of the functional neuroanatomical substrates of mental processing could inform understanding of its neural mechanisms. The challenge is that it is unclear whether a specific mental process possesses a 'functional neuroanatomy fingerprint', i.e., a unique and reliable pattern of functional neuroanatomy that underlies the mental process. To address this question, we utilized a multi-task deep learning model to disentangle the functional neuroanatomy fingerprints of seven representative mental processes: Emotion, Gambling, Language, Motor, Relational, Social, and Working Memory. Results based on the functional magnetic resonance imaging data of two independent cohorts of 1235 subjects from the US and China consistently show that each of the seven mental processes possesses a functional neuroanatomy fingerprint, represented by a unique set of functional activity weights over whole-brain regions characterizing the degree to which each region is involved in the mental process. The fingerprint of a specific mental process exhibits high discriminability from those of the other mental processes (93% classification accuracy, AUC of 0.99) and is robust across datasets and brain atlases. This study provides a solid functional neuroanatomical foundation for investigating the neural mechanisms of mental processing.
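The fingerprint notion can be illustrated with a simple stand-in for the paper's multi-task deep model: a linear classifier over region-wise activity whose per-class weight vectors act as region-importance fingerprints. The data, parcel count, and linear model are placeholder assumptions:

```python
# Fingerprint illustration: each class's weight vector over brain regions
# serves as that mental process's region-importance profile. The linear model
# and random data are stand-ins for the paper's multi-task deep network.
import numpy as np
from sklearn.linear_model import LogisticRegression

n_subjects, n_regions, n_tasks = 300, 116, 7
X = np.random.randn(n_subjects * n_tasks, n_regions)     # region-wise activity
y = np.repeat(np.arange(n_tasks), n_subjects)            # Emotion, ..., WM

clf = LogisticRegression(max_iter=2000).fit(X, y)
fingerprints = clf.coef_                                 # (7, n_regions)
top_regions = np.argsort(-np.abs(fingerprints[0]))[:10]  # Emotion's key regions
```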

MedFormer: Hierarchical Medical Vision Transformer with Content-Aware Dual Sparse Selection Attention

Zunhui Xia, Hongxing Li, Libin Lan

arXiv preprint · Jul 3 2025
Medical image recognition serves as a key way to aid clinical diagnosis, enabling more accurate and timely identification of diseases and abnormalities. Vision transformer-based approaches have proven effective in handling various medical recognition tasks. However, these methods encounter two primary challenges. First, they are often task-specific and architecture-tailored, limiting their general applicability. Second, they usually either adopt full attention to model long-range dependencies, resulting in high computational costs, or rely on handcrafted sparse attention, potentially leading to suboptimal performance. To tackle these issues, we present MedFormer, an efficient medical vision transformer with two key ideas. First, it employs a pyramid scaling structure as a versatile backbone for various medical image recognition tasks, including image classification and dense prediction tasks such as semantic segmentation and lesion detection. This structure facilitates hierarchical feature representation while reducing the computational load of feature maps, which is highly beneficial for performance. Second, it introduces a novel Dual Sparse Selection Attention (DSSA) with content awareness to improve computational efficiency and robustness against noise while maintaining high performance. As the core building technique of MedFormer, DSSA is explicitly designed to attend to the most relevant content. In addition, a detailed theoretical analysis demonstrates that MedFormer has superior generality and efficiency compared with existing medical vision transformers. Extensive experiments on datasets spanning a variety of imaging modalities consistently show that MedFormer is highly effective across all three medical image recognition tasks mentioned above. The code is available at https://github.com/XiaZunhui/MedFormer.
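A simplified sketch of content-aware sparse attention in the spirit of DSSA is shown below: each query attends only to its top-k highest-scoring keys. This single-stage top-k selection is a deliberate simplification of the paper's dual selection scheme, and all shapes are illustrative:

```python
# Top-k sparse attention sketch: mask all but the k best-scoring keys per
# query before the softmax. A simplification of DSSA, not its implementation.
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, k_keep=8):
    # q, k, v: (batch, heads, tokens, dim)
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5     # (B, H, T, T)
    topv, topi = scores.topk(k_keep, dim=-1)                  # keep k best keys
    sparse = torch.full_like(scores, float("-inf")).scatter(-1, topi, topv)
    return F.softmax(sparse, dim=-1) @ v

out = topk_sparse_attention(*(torch.randn(1, 4, 64, 32) for _ in range(3)))
```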

MvHo-IB: Multi-View Higher-Order Information Bottleneck for Brain Disorder Diagnosis

Kunyu Zhang, Qiang Li, Shujian Yu

arXiv preprint · Jul 3 2025
Recent evidence suggests that modeling higher-order interactions (HOIs) in functional magnetic resonance imaging (fMRI) data can enhance the diagnostic accuracy of machine learning systems. However, effectively extracting and utilizing HOIs remains a significant challenge. In this work, we propose MvHo-IB, a novel multi-view learning framework that integrates both pairwise interactions and HOIs for diagnostic decision-making, while automatically compressing task-irrelevant redundant information. MvHo-IB introduces several key innovations: (1) a principled method that combines O-information from information theory with a matrix-based Renyi alpha-order entropy estimator to quantify and extract HOIs, (2) a purpose-built Brain3DCNN encoder to effectively utilize these interactions, and (3) a new multi-view learning information bottleneck objective to enhance representation learning. Experiments on three benchmark fMRI datasets demonstrate that MvHo-IB achieves state-of-the-art performance, significantly outperforming previous methods, including recent hypergraph-based techniques. The implementation of MvHo-IB is available at https://github.com/zky04/MvHo-IB.
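To make the O-information quantity concrete, the sketch below computes it under a Gaussian assumption from log-determinants of (sub)covariances; this closed-form stand-in replaces the paper's matrix-based Renyi alpha-order estimator purely for illustration:

```python
# Gaussian O-information sketch: Omega = (n-2)H(X) + sum_i [H(X_i) - H(X_{-i})],
# with Omega > 0 indicating redundancy-dominated and Omega < 0 synergy-dominated
# higher-order interactions. A stand-in for the paper's Renyi-based estimator.
import numpy as np

def gaussian_entropy(cov):
    cov = np.atleast_2d(cov)
    d = cov.shape[0]
    return 0.5 * (d * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

def o_information(X):                        # X: (samples, n variables)
    cov, n = np.cov(X, rowvar=False), X.shape[1]
    H = gaussian_entropy
    omega = (n - 2) * H(cov)
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        omega += H(cov[i, i]) - H(cov[np.ix_(rest, rest)])
    return omega

print(o_information(np.random.randn(2000, 5)))   # near 0 for independent noise
```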

Predicting Ten-Year Clinical Outcomes in Multiple Sclerosis with Radiomics-Based Machine Learning Models.

Tranfa M, Petracca M, Cuocolo R, Ugga L, Morra VB, Carotenuto A, Elefante A, Falco F, Lanzillo R, Moccia M, Scaravilli A, Brunetti A, Cocozza S, Quarantelli M, Pontillo G

PubMed · Jul 3 2025
Identifying patients with multiple sclerosis (pwMS) at higher risk of clinical progression is essential to inform clinical management. We aimed to build prognostic models using machine learning (ML) algorithms to predict long-term clinical outcomes based on a systematic mapping of volumetric, radiomic, and macrostructural disconnection features from routine brain MRI scans of pwMS. In this longitudinal monocentric study, 3T structural MRI scans of pwMS were retrospectively analyzed. Based on a ten-year clinical follow-up (average duration = 9.4±1.1 years), patients were classified according to confirmed disability progression (CDP) and cognitive impairment (CI), as assessed through the Expanded Disability Status Scale (EDSS) and the Brief International Cognitive Assessment of Multiple Sclerosis (BICAMS) battery, respectively. 3D-T1w and FLAIR images were automatically segmented to obtain volumes, disconnection scores (estimated from lesion masks and normative tractography data), and radiomic features for 116 gray matter regions defined according to the Automated Anatomical Labelling (AAL) atlas. Three ML algorithms (Extra Trees, Logistic Regression, and Support Vector Machine) were used to build models predicting long-term CDP and CI from the MRI-derived features. Feature selection was performed on the training set with a multi-step process, and models were validated with a holdout approach, randomly splitting the patients into training (75%) and test (25%) sets. We studied 177 pwMS (M/F = 51/126; mean±SD age: 35.2±8.7 years). Long-term CDP and CI were observed in 71 and 55 patients, respectively. For the CDP prediction analysis, feature selection identified subsets of 13, 12, and 10 features, yielding test-set accuracies of 0.71, 0.69, and 0.67 for the Extra Trees, Logistic Regression, and Support Vector Machine classifiers, respectively. Similarly, for the CI prediction, subsets of 16, 17, and 19 features were selected, with test-set accuracies of 0.69, 0.64, and 0.62, respectively. There were no significant differences in accuracy between ML models for CDP (p=0.65) or CI (p=0.31). Building on quantitative features derived from conventional MRI scans, we obtained long-term prognostic models that could potentially inform patient stratification and clinical decision-making. MS, multiple sclerosis; pwMS, people with MS; HC, healthy controls; ML, machine learning; DD, disease duration; EDSS, Expanded Disability Status Scale; TLV, total lesion volume; CDP, confirmed disability progression; CI, cognitive impairment; BICAMS, Brief International Cognitive Assessment of Multiple Sclerosis.
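A minimal sketch of the holdout protocol described above, comparing the three classifiers on placeholder features (feature selection omitted for brevity):

```python
# 75/25 holdout comparison of the three classifiers named in the study; the
# data are placeholders, not the study's MRI-derived features.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = np.random.rand(177, 30)          # volumetric/radiomic/disconnection features
y = np.random.randint(0, 2, 177)     # e.g., confirmed disability progression

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for clf in (ExtraTreesClassifier(), LogisticRegression(max_iter=1000), SVC()):
    acc = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
    print(type(clf).__name__, round(acc, 2))
```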