Page 88 of 2382377 results

Deep Learning-Enabled Ultrasound for Advancing Anterior Talofibular Ligament Injuries Classification: A Multicenter Model Development and Validation Study.

Shi X, Zhang H, Yuan Y, Xu Z, Meng L, Xi Z, Qiao Y, Liu S, Sun J, Cui J, Du R, Yu Q, Wang D, Shen S, Gao C, Li P, Bai L, Xu H, Wang K

pubmed · Aug 4 2025
Ultrasound (US) is the preferred modality for assessing anterior talofibular ligament (ATFL) injuries. We aimed to advance ATFL injury classification by developing a US-based deep learning (DL) model and to explore how artificial intelligence (AI) could help radiologists improve diagnostic performance. Consecutive healthy controls and patients with acute ATFL injuries (mild strain, partial tear, complete tear, and avulsion fracture) at 10 hospitals were retrospectively included. A US-based DL model (ATFLNet) was trained (n=2566), internally validated (n=642), and externally validated (n=717 and n=493). Surgical or radiological findings based on the majority consensus of three experts served as the reference standard. Prospective validation was conducted at three additional hospitals (n=472). Performance was compared with that of 12 radiologists at different experience levels (external validation sets 1 and 2); an ATFLNet-aided strategy was developed and compared with the radiologists reviewing B-mode images (external validation set 2); the strategy was then tested in a simulated scenario (reviewing images alongside dynamic clips; prospective validation set). Statistical comparisons were performed using McNemar's test, and inter-reader agreement was evaluated with the multireader Fleiss κ statistic. ATFLNet achieved a macro-average area under the curve ≥0.970 across all five classes in each dataset, indicating robust overall performance. It also consistently outperformed senior radiologists in the external validation sets (all p<.05). The ATFLNet-aided strategy improved radiologists' average accuracy (0.707 vs. 0.811, p<.001) for image review. In the simulated scenario, it led to enhanced accuracy (0.794 to 0.864, p=.003) and reduced diagnostic variability, particularly for junior radiologists. Our US-based model outperformed human experts for ATFL injury evaluation.
AI-aided strategies hold the potential to enhance diagnostic performance in real-world clinical scenarios.
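The headline metric above is a macro-average one-vs-rest AUC over five classes. As a minimal sketch of how such a metric can be computed (this is the standard rank-based definition, not code from the paper; the variable names are illustrative):

```python
def binary_auc(scores, labels):
    """Rank-based AUC (Mann-Whitney U): the probability that a random
    positive example scores higher than a random negative one.
    Ties count as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one example of each class")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_ovr_auc(probs, labels, n_classes):
    """Macro-average one-vs-rest AUC: compute a binary AUC for each
    class (that class vs. the rest) and average the results.
    probs: per-example lists of class probabilities; labels: class ids."""
    aucs = []
    for c in range(n_classes):
        scores = [p[c] for p in probs]
        binarized = [1 if y == c else 0 for y in labels]
        aucs.append(binary_auc(scores, binarized))
    return sum(aucs) / n_classes
```

The double loop in `binary_auc` is O(P·N) per class, which is fine at evaluation-set sizes like those reported here.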

A Novel Deep Learning Radiomics Nomogram Integrating B-Mode Ultrasound and Contrast-Enhanced Ultrasound for Preoperative Prediction of Lymphovascular Invasion in Invasive Breast Cancer.

Niu R, Chen Z, Li Y, Fang Y, Gao J, Li J, Li S, Huang S, Zou X, Fu N, Jin Z, Shao Y, Li M, Kang Y, Wang Z

pubmed · Aug 4 2025
This study aimed to develop a deep learning radiomics nomogram (DLRN) that integrates B-mode ultrasound (BMUS) and contrast-enhanced ultrasound (CEUS) images for preoperative lymphovascular invasion (LVI) prediction in invasive breast cancer (IBC). A total of 981 patients with IBC from three hospitals were retrospectively enrolled. Of 834 patients recruited from Hospital I, 688 were designated as the training cohort and 146 as the internal test cohort, whereas 147 patients from Hospitals II and III constituted the external test cohort. Deep learning and handcrafted radiomics features of BMUS and CEUS images were extracted from breast cancer lesions to construct a deep learning radiomics (DLR) signature. The DLRN was developed by integrating the DLR signature and independent clinicopathological parameters. The performance of the DLRN was evaluated with respect to discrimination, calibration, and clinical benefit. The DLRN exhibited good performance in predicting LVI, with areas under the receiver operating characteristic curves (AUCs) of 0.885 (95% confidence interval [CI], 0.858-0.912), 0.914 (95% CI, 0.868-0.960), and 0.914 (95% CI, 0.867-0.960) in the training, internal test, and external test cohorts, respectively. The DLRN exhibited good stability and clinical practicability, as demonstrated by calibration curve and decision curve analysis. In addition, the DLRN outperformed the traditional clinical model and the DLR signature for LVI prediction in the internal and external test cohorts (all p < 0.05). The DLRN exhibited good performance in predicting LVI, representing a non-invasive approach to preoperatively determining LVI status in IBC.
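A nomogram of this kind is typically a logistic model whose linear predictor combines the radiomics signature with clinical covariates. A minimal sketch of that combination, assuming illustrative placeholder coefficients (the paper's actual coefficients and covariates are not given in the abstract):

```python
import math

def nomogram_probability(dlr_signature, clinical, coefs, intercept):
    """Predicted LVI probability from a logistic-model nomogram:
    p = sigmoid(intercept + sum_i coef_i * feature_i).
    All coefficients here are hypothetical placeholders."""
    features = [dlr_signature] + list(clinical)
    z = intercept + sum(c * x for c, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical example: a DLR score plus two binary clinical factors
p = nomogram_probability(
    dlr_signature=1.2,       # model-derived radiomics score
    clinical=[1, 0],         # e.g. two clinicopathological indicators
    coefs=[1.5, 0.8, 0.4],
    intercept=-2.0,
)
```

The printed nomogram readers see in such papers is just a graphical lookup table for this same linear predictor.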

Analysis on artificial intelligence-based chest computed tomography in multidisciplinary treatment models for discriminating benign and malignant pulmonary nodules.

Liu XY, Shan FC, Li H, Zhu JB

pubmed · Aug 4 2025
To evaluate the effectiveness of AI-based chest Computed Tomography (CT) in a Multidisciplinary Diagnosis and Treatment (MDT) model for differentiating benign and malignant pulmonary nodules. This retrospective study screened a total of 87 patients with pulmonary nodules who were treated between January 2019 and December 2020 at Binzhou People's Hospital, Qingdao Municipal Hospital, and Laiwu People's Hospital. AI analysis, MDT consultation, and a combined diagnostic approach were assessed using postoperative pathology as the reference standard. Among 87 nodules, 69 (79.31 %) were malignant, and 18 (20.69 %) were benign. AI analysis showed moderate agreement with pathology (κ = 0.637, p < 0.05), while MDT and the combined approach demonstrated higher consistency (κ = 0.847, 0.888, p < 0.05). Sensitivity and specificity were as follows: AI (89.86 %, 77.78 %, AUC = 0.838), MDT (100 %, 77.78 %, AUC = 0.889), and the combined approach (100 %, 83.33 %, AUC = 0.917). The accuracy of the combined method (96.55 %) was superior to MDT (95.40 %) and AI alone (87.36 %) (p < 0.05). AI-based chest CT combined with MDT may improve diagnostic accuracy and shows potential for broader clinical application.
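The reported statistics for the AI arm can be reconstructed from the counts in the abstract (69 malignant, 18 benign; sensitivity 89.86% ≈ 62/69 and specificity 77.78% = 14/18), assuming the standard 2×2 definitions:

```python
def binary_agreement(tp, fn, tn, fp):
    """Sensitivity, specificity, accuracy, and Cohen's kappa from a
    2x2 confusion matrix against the reference standard (pathology)."""
    n = tp + fn + tn + fp
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / n
    # Chance agreement term for Cohen's kappa
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((fn + tn) / n) * ((fp + tn) / n)
    pe = p_yes + p_no
    kappa = (acc - pe) / (1 - pe)
    return sens, spec, acc, kappa

# AI arm: 62/69 malignant and 14/18 benign nodules called correctly
sens, spec, acc, kappa = binary_agreement(tp=62, fn=7, tn=14, fp=4)
```

These counts reproduce the abstract's κ = 0.637 and accuracy 87.36% for AI alone, which is a useful sanity check on the reported figures.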

External evaluation of an open-source deep learning model for prostate cancer detection on bi-parametric MRI.

Johnson PM, Tong A, Ginocchio L, Del Hoyo JL, Smereka P, Harmon SA, Turkbey B, Chandarana H

pubmed · Aug 3 2025
This study aims to evaluate the diagnostic accuracy of an open-source deep learning (DL) model for detecting clinically significant prostate cancer (csPCa) in biparametric MRI (bpMRI). It also aims to outline the necessary components of the model that facilitate effective sharing and external evaluation of PCa detection models. This retrospective diagnostic accuracy study evaluated a publicly available DL model trained to detect PCa on bpMRI. External validation was performed on bpMRI exams from 151 biologically male patients (mean age, 65 ± 8 years). The model's performance was evaluated using patient-level classification of PCa with both radiologist interpretation and histopathology serving as the ground truth. The model processed bpMRI inputs to generate lesion probability maps. Performance was assessed using the area under the receiver operating characteristic curve (AUC) for PI-RADS ≥ 3, PI-RADS ≥ 4, and csPCa (defined as Gleason ≥ 7) at an exam level. The model achieved AUCs of 0.86 (95% CI: 0.80-0.92) and 0.91 (95% CI: 0.85-0.96) for predicting PI-RADS ≥ 3 and ≥ 4 exams, respectively, and 0.78 (95% CI: 0.71-0.86) for csPCa. Sensitivity and specificity for csPCa were 0.87 and 0.53, respectively. Fleiss' kappa for inter-reader agreement was 0.51. The open-source DL model offers high sensitivity to clinically significant prostate cancer. The study underscores the importance of sharing model code and weights to enable effective external validation and further research. Question Inter-reader variability hinders the consistent and accurate detection of clinically significant prostate cancer in MRI. Findings An open-source deep learning model demonstrated reproducible diagnostic accuracy, achieving AUCs of 0.86 for PI-RADS ≥ 3 and 0.78 for csPCa lesions. Clinical relevance The model's high sensitivity for MRI-positive lesions (PI-RADS ≥ 3) may provide support for radiologists.
Its open-source deployment facilitates further development and evaluation across diverse clinical settings, maximizing its potential utility.
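The model emits lesion probability maps but is scored at exam level, which requires reducing each map to a single score. One common convention (an assumption here; the abstract does not state which reduction the authors used) is max-pooling the map and thresholding:

```python
def exam_level_score(prob_map):
    """Reduce a 2D lesion probability map to one exam-level score by
    taking the maximum voxel probability. Max-pooling is a common
    convention for detection maps, though other reductions exist."""
    return max(max(row) for row in prob_map)

def classify_exam(prob_map, threshold=0.5):
    """Exam-level positive/negative call from the pooled score."""
    return exam_level_score(prob_map) >= threshold

# Toy map with one focal high-probability region
prob_map = [
    [0.01, 0.02, 0.10],
    [0.03, 0.78, 0.15],
    [0.02, 0.05, 0.04],
]
```

Sweeping `threshold` over the pooled scores is what produces the exam-level ROC curves and AUCs the study reports.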

Functional immune state classification of unlabeled live human monocytes using holotomography and machine learning

Lee, M., Kim, G., Lee, M. S., Shin, J. W., Lee, J. H., Ryu, D. H., Kim, Y. S., Chung, Y., Kim, K. S., Park, Y.

biorxiv preprint · Aug 3 2025
Sepsis is a dysregulated immune response to infection in which the immune system ranges from a hyper-inflammatory phase to an immune-suppressive phase. Current assessment methods are limited owing to time-consuming and laborious sample preparation protocols. We propose a rapid, label-free, imaging-based technique to assess the immune status of individual human monocytes. High-resolution intracellular compositions of individual monocytes are quantitatively measured as three-dimensional distributions of refractive index values using holotomography and then analyzed with machine-learning algorithms to classify cells into three distinct immune states: normal, hyper-inflammation, and immune suppression. The immune-status prediction accuracy of the machine-learning holotomography classifier was 83.7% and 99.9% for one and six cell measurements, respectively. Our results suggest that this technique can provide a rapid, deterministic method for real-time evaluation of an individual's immune status.
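The jump from 83.7% (one cell) to 99.9% (six cells) comes from pooling per-cell predictions. A minimal sketch of the aggregation, plus the binary binomial intuition for why pooling helps (a simplification: the actual task is three-class, so the real gain differs from the binary formula):

```python
from collections import Counter
from math import comb

def majority_vote(cell_predictions):
    """Sample-level immune-state label from per-cell predictions."""
    return Counter(cell_predictions).most_common(1)[0][0]

def binary_majority_accuracy(p, n):
    """Probability that a strict majority of n independent cells is
    correct, given per-cell accuracy p (binary simplification; exact
    ties are split evenly). Illustrates why measuring several cells
    boosts accuracy; it does not reproduce the paper's 99.9% figure."""
    acc = sum(comb(n, k) * p**k * (1 - p)**(n - k)
              for k in range(n // 2 + 1, n + 1))
    if n % 2 == 0:  # count half of exact ties as correct
        k = n // 2
        acc += 0.5 * comb(n, k) * p**k * (1 - p)**(n - k)
    return acc
```

With p = 0.837 per cell, six votes already push the binary majority well above 96%, and a three-class classifier benefits even more because errors split across the wrong classes.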

Medical Image De-Identification Resources: Synthetic DICOM Data and Tools for Validation

Michael W. Rutherford, Tracy Nolan, Linmin Pei, Ulrike Wagner, Qinyan Pan, Phillip Farmer, Kirk Smith, Benjamin Kopchick, Laura Opsahl-Ong, Granger Sutton, David Clunie, Keyvan Farahani, Fred Prior

arxiv preprint · Aug 3 2025
Medical imaging research increasingly depends on large-scale data sharing to promote reproducibility and train Artificial Intelligence (AI) models. Ensuring patient privacy remains a significant challenge for open-access data sharing. Digital Imaging and Communications in Medicine (DICOM), the global standard data format for medical imaging, encodes both essential clinical metadata and extensive protected health information (PHI) and personally identifiable information (PII). Effective de-identification must remove identifiers, preserve scientific utility, and maintain DICOM validity. Tools exist to perform de-identification, but few assess its effectiveness, and most rely on subjective reviews, limiting reproducibility and regulatory confidence. To address this gap, we developed an openly accessible DICOM dataset infused with synthetic PHI/PII and an evaluation framework for benchmarking image de-identification workflows. The Medical Image de-identification (MIDI) dataset was built using publicly available de-identified data from The Cancer Imaging Archive (TCIA). It includes 538 subjects (216 for validation, 322 for testing), 605 studies, 708 series, and 53,581 DICOM image instances. These span multiple vendors, imaging modalities, and cancer types. Synthetic PHI and PII were embedded into structured data elements, plain text data elements, and pixel data to simulate real-world identity leaks encountered by TCIA curation teams. Accompanying evaluation tools include a Python script, answer keys (known truth), and mapping files that enable automated comparison of curated data against expected transformations. The framework is aligned with the HIPAA Privacy Rule "Safe Harbor" method, DICOM PS3.15 Confidentiality Profiles, and TCIA best practices. It supports objective, standards-driven evaluation of de-identification workflows, promoting safer and more consistent medical image sharing.
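The evaluation tools compare curated output against answer keys of known truth. As a minimal sketch of what such an automated check can look like over DICOM-style (attribute → value) mappings (the element names are real DICOM attributes, but the key format is hypothetical, not the actual MIDI schema):

```python
def check_deidentification(curated, answer_key):
    """Compare curated DICOM elements against a known-truth answer key.
    answer_key maps attribute -> expected action: ("remove",) means the
    element must be absent; ("replace", value) means it must equal
    value. Returns a list of (attribute, problem) findings; an empty
    list means the workflow passed."""
    findings = []
    for tag, action in answer_key.items():
        if action[0] == "remove":
            if tag in curated:
                findings.append((tag, "PHI element still present"))
        elif action[0] == "replace":
            if curated.get(tag) != action[1]:
                findings.append((tag, "unexpected value after curation"))
    return findings

# Hypothetical answer key and a curated exam that missed one element
answer_key = {
    "PatientName": ("replace", "ANONYMOUS"),
    "PatientBirthDate": ("remove",),
    "InstitutionName": ("remove",),
}
curated = {"PatientName": "ANONYMOUS", "InstitutionName": "General Hospital"}
```

Scaling this idea to structured elements, free text, and burned-in pixel data is what makes the MIDI framework an objective benchmark rather than a subjective review.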

M$^3$AD: Multi-task Multi-gate Mixture of Experts for Alzheimer's Disease Diagnosis with Conversion Pattern Modeling

Yufeng Jiang, Hexiao Ding, Hongzhao Chen, Jing Lan, Xinzhi Teng, Gerald W. Y. Cheng, Zongxi Li, Haoran Xie, Jung Sun Yoo, Jing Cai

arxiv preprint · Aug 3 2025
Alzheimer's disease (AD) progression follows a complex continuum from normal cognition (NC) through mild cognitive impairment (MCI) to dementia, yet most deep learning approaches oversimplify this into discrete classification tasks. This study introduces M$^3$AD, a novel multi-task multi-gate mixture of experts framework that jointly addresses diagnostic classification and cognitive transition modeling using structural MRI. We incorporate three key innovations: (1) an open-source T1-weighted sMRI preprocessing pipeline, (2) a unified learning framework capturing NC-MCI-AD transition patterns with demographic priors (age, gender, brain volume) for improved generalization, and (3) a customized multi-gate mixture of experts architecture enabling effective multi-task learning with structural MRI alone. The framework employs specialized expert networks for diagnosis-specific pathological patterns while shared experts model common structural features across the cognitive continuum. A two-stage training protocol combines SimMIM pretraining with multi-task fine-tuning for joint optimization. Comprehensive evaluation across six datasets comprising 12,037 T1-weighted sMRI scans demonstrates superior performance: 95.13% accuracy for three-class NC-MCI-AD classification and 99.15% for binary NC-AD classification, representing improvements of 4.69% and 0.55% over state-of-the-art approaches. The multi-task formulation simultaneously achieves 97.76% accuracy in predicting cognitive transition. Our framework outperforms existing methods using fewer modalities and offers a clinically practical solution for early intervention. Code: https://github.com/csyfjiang/M3AD.
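The core multi-gate mixture-of-experts idea is that every task reads the same shared expert outputs but mixes them with its own softmax gate. A toy numeric sketch of that mechanism (purely illustrative; the paper's experts and gates are learned networks, not fixed vectors):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def multi_gate_moe(expert_outputs, gate_logits_per_task):
    """Multi-gate mixture of experts: each task gets its own gated
    combination of the shared expert outputs.
    expert_outputs: E feature vectors (one per expert).
    gate_logits_per_task: T logit vectors, each of length E."""
    dim = len(expert_outputs[0])
    task_inputs = []
    for logits in gate_logits_per_task:
        weights = softmax(logits)  # per-task gate; weights sum to 1
        mixed = [sum(w * e[d] for w, e in zip(weights, expert_outputs))
                 for d in range(dim)]
        task_inputs.append(mixed)
    return task_inputs

experts = [[1.0, 0.0], [0.0, 1.0]]   # two toy experts
gates = [[10.0, -10.0], [0.0, 0.0]]  # task 0 strongly prefers expert 0
out = multi_gate_moe(experts, gates)
```

Because the gates differ per task, the diagnosis head and the transition-modeling head can specialize over the same expert pool instead of fighting over one shared representation.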

TopoImages: Incorporating Local Topology Encoding into Deep Learning Models for Medical Image Classification

Pengfei Gu, Hongxiao Wang, Yejia Zhang, Huimin Li, Chaoli Wang, Danny Chen

arxiv preprint · Aug 3 2025
Topological structures in image data, such as connected components and loops, play a crucial role in understanding image content (e.g., biomedical objects). Despite remarkable successes of numerous image processing methods that rely on appearance information, these methods often lack sensitivity to topological structures when used in general deep learning (DL) frameworks. In this paper, we introduce a new general approach, called TopoImages (for Topology Images), which computes a new representation of input images by encoding the local topology of patches. In TopoImages, we leverage persistent homology (PH) to encode geometric and topological features inherent in image patches. Our main objective is to capture topological information in local patches of an input image in a vectorized form. Specifically, we first compute persistence diagrams (PDs) of the patches, and then vectorize and arrange these PDs into long vectors for pixels of the patches. The resulting multi-channel image-form representation is called a TopoImage. TopoImages offers a new perspective for data analysis. To garner diverse and significant topological features in image data and ensure a more comprehensive and enriched representation, we further generate multiple TopoImages of the input image using various filtration functions, which we call multi-view TopoImages. The multi-view TopoImages are fused with the input image for DL-based classification, with considerable improvement. Our TopoImages approach is highly versatile and can be seamlessly integrated into common DL frameworks. Experiments on three public medical image classification datasets demonstrate noticeably improved accuracy over state-of-the-art methods.
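Full persistent homology needs a dedicated library, but the underlying idea of a filtration can be illustrated with a much simpler stand-in: count connected components of a thresholded 1D signal at each filtration level. This Betti-0 curve is a crude cousin of the persistence-diagram vectorization TopoImages uses, not the paper's method:

```python
def betti0_curve(signal, thresholds):
    """Count connected components (maximal runs of True) of the
    superlevel set {x >= t} for each threshold t. The resulting
    vector is a simple 0-dimensional topological summary of a 1D
    signal, illustrating the filtration idea behind TopoImages."""
    curve = []
    for t in thresholds:
        mask = [x >= t for x in signal]
        # A component starts wherever mask turns True after a False
        components = sum(1 for i, m in enumerate(mask)
                         if m and (i == 0 or not mask[i - 1]))
        curve.append(components)
    return curve

signal = [0.1, 0.9, 0.2, 0.8, 0.7, 0.1]
curve = betti0_curve(signal, thresholds=[0.0, 0.5, 1.0])
```

Varying the filtration function (here, raw intensity) is exactly what the paper's multi-view TopoImages generalize: different filtrations expose different topological features of the same patch.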

Transfer learning based deep architecture for lung cancer classification using CT image with pattern and entropy based feature set.

R N, C M V

pubmed · Aug 2 2025
Early detection of lung cancer, which remains one of the leading causes of death worldwide, is important for improved prognosis, and CT scanning is a key diagnostic modality. Classifying lung cancer from CT scans is challenging because the disease presents highly variable features. A hybrid deep architecture, ILN-TL-DM, is presented in this paper for precise classification of lung cancer from CT scan images. Initially, an Adaptive Gaussian filtering method is applied during pre-processing to eliminate noise and enhance the quality of the CT image. An Improved Attention-based ResU-Net (P-ResU-Net) model is then used during segmentation to accurately isolate the lung and tumor areas from the rest of the image. During feature extraction, various features are derived from the segmented images, such as Local Gabor Transitional Pattern (LGTrP), Pyramid of Histograms of Oriented Gradients (PHOG), deep features, and improved entropy-based features, all intended to improve the representation of tumor areas. Finally, classification exploits a hybrid deep learning architecture integrating an improved LeNet structure with Transfer Learning (ILN-TL) and a DeepMaxout (DM) structure. The two model outputs are merged with a soft voting strategy, producing the final classification that separates cancerous from non-cancerous tissue. The approach greatly enhances the accuracy and robustness of lung cancer detection, showcasing how combining neural network structures with feature engineering and ensemble methods can achieve better medical image classification. The ILN-TL-DM model consistently outperforms conventional methods, with greater accuracy (0.962), specificity (0.955), and NPV (0.964).
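The final merging step above is standard soft voting: average each class's predicted probability across the two heads, then take the argmax. A minimal sketch, assuming a two-class (non-cancerous, cancerous) output and made-up probabilities:

```python
def soft_vote(prob_lists):
    """Soft-voting ensemble: average each class's probability across
    models, then pick the class with the highest mean probability.
    Returns (predicted_class_index, averaged_probabilities)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models
           for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

# Hypothetical outputs from the two heads over (non-cancerous, cancerous)
iln_tl_probs = [0.30, 0.70]
deep_maxout_probs = [0.45, 0.55]
label, avg = soft_vote([iln_tl_probs, deep_maxout_probs])
```

Unlike hard voting on labels, soft voting lets a confident head outweigh an uncertain one, which is why it is the usual choice when both models expose calibrated probabilities.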

Integrating Time and Frequency Domain Features of fMRI Time Series for Alzheimer's Disease Classification Using Graph Neural Networks.

Peng W, Li C, Ma Y, Dai W, Fu D, Liu L, Liu L, Yu N, Liu J

pubmed · Aug 2 2025
Accurate and early diagnosis of Alzheimer's Disease (AD) is crucial for timely interventions and treatment advancement. Functional Magnetic Resonance Imaging (fMRI), measuring brain blood-oxygen level changes over time, is a powerful AD-diagnosis tool. However, current fMRI-based AD diagnosis methods rely on noise-susceptible time-domain features and focus only on synchronous brain-region interactions in the same time phase, neglecting asynchronous ones. To overcome these issues, we propose Frequency-Time Fusion Graph Neural Network (FTF-GNN). It integrates frequency- and time-domain features for robust AD classification, considering both asynchronous and synchronous brain-region interactions. First, we construct a fully connected hypervariate graph, where nodes represent brain regions and their Blood Oxygen Level-Dependent (BOLD) values at a time series point. A Discrete Fourier Transform (DFT) transforms these BOLD values from the spatial to the frequency domain for frequency-component analysis. Second, a Fourier-based Graph Neural Network (FourierGNN) processes the frequency features to capture asynchronous brain region connectivity patterns. Third, these features are converted back to the time domain and reshaped into a matrix where rows represent brain regions and columns represent their frequency-domain features at each time point. Each brain region then fuses its frequency-domain features with position encoding along the time series, preserving temporal and spatial information. Next, we build a brain-region network based on synchronous BOLD value associations and input the brain-region network and the fused features into a Graph Convolutional Network (GCN) to capture synchronous brain region connectivity patterns. Finally, a fully connected network classifies the brain-region features. 
Experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate the method's effectiveness: Our model achieves 91.26% accuracy and 96.79% AUC in AD versus Normal Control (NC) classification, showing promising performance. For early-stage detection, it attains state-of-the-art performance in distinguishing NC from Late Mild Cognitive Impairment (LMCI) with 87.16% accuracy and 93.22% AUC. Notably, in the challenging task of differentiating LMCI from AD, FTF-GNN achieves optimal performance (85.30% accuracy, 94.56% AUC), while also delivering competitive results (77.40% accuracy, 91.17% AUC) in distinguishing Early MCI (EMCI) from LMCI-the most clinically complex subtype classification. These results indicate that leveraging complementary frequency- and time-domain information, along with considering asynchronous and synchronous brain-region interactions, can address existing approach limitations, offering a robust neuroimaging-based diagnostic solution.
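The frequency-domain branch above starts from a Discrete Fourier Transform of the BOLD values. A stdlib sketch of the DFT magnitudes for one toy series (illustrative only; the paper applies the transform inside a FourierGNN over a hypervariate graph, not per series in isolation):

```python
import cmath

def dft_magnitudes(series):
    """Discrete Fourier Transform of a real-valued series, returning
    the magnitude of each frequency component:
    |X_k| where X_k = sum_t x_t * exp(-2*pi*i*k*t/N)."""
    n = len(series)
    mags = []
    for k in range(n):
        xk = sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                 for t, x in enumerate(series))
        mags.append(abs(xk))
    return mags

# A toy "BOLD" series: a constant baseline puts all energy at k = 0
bold = [1.0, 1.0, 1.0, 1.0]
mags = dft_magnitudes(bold)
```

In practice an FFT would replace this O(N²) loop, but the frequency features it produces, which FourierGNN then mixes across brain regions, are the same.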
