Page 6 of 49482 results

Interpretable MRI Subregional Radiomics-Deep Learning Model for Preoperative Lymphovascular Invasion Prediction in Rectal Cancer: A Dual-Center Study.

Huang T, Zeng Y, Jiang R, Zhou Q, Wu G, Zhong J

PubMed | Jul 11 2025
To develop a fusion model based on explainable machine learning, combining multiparametric MRI subregional radiomics and deep learning, for preoperative prediction of lymphovascular invasion (LVI) status in rectal cancer (RC). We collected data from RC patients with histopathological confirmation at two medical centers: 301 patients formed the training set and 75 patients the external validation set. Using K-means clustering, we divided the tumor areas into multiple subregions and extracted key radiomic features from each. In addition, we employed a Vision Transformer (ViT) deep learning model to extract deep features. These features were integrated to construct the SubViT model. To better understand the model's decision-making process, we used the Shapley Additive Explanations (SHAP) tool to evaluate its interpretability. Finally, we comprehensively assessed the performance of the SubViT model using receiver operating characteristic (ROC) curves, decision curve analysis (DCA), and the DeLong test, comparing it with other models. The SubViT model demonstrated outstanding predictive performance in the training set, achieving an area under the curve (AUC) of 0.934 (95% confidence interval: 0.9074-0.9603). It also performed well in the external validation set, with an AUC of 0.884 (95% confidence interval: 0.8055-0.9616), outperforming both the subregional radiomics and imaging-based models. Furthermore, DCA indicated that the SubViT model provides higher clinical utility than the other models. As an advanced composite model, SubViT demonstrated its efficiency in the non-invasive assessment of LVI in rectal cancer.
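The subregion step described in this abstract (K-means over voxel intensities inside the tumor mask) can be sketched as follows. This is a minimal stand-in, not the authors' code; the clustering is done on raw intensities alone and the cluster count `k` is an illustrative assumption:

```python
import numpy as np

def kmeans_subregions(intensities, k=3, n_iter=50, seed=0):
    """Partition tumor-voxel intensities into k subregions with plain k-means.

    `intensities` is a 1-D array of voxel values inside the tumor mask;
    returns (label per voxel, cluster centers). A hypothetical stand-in for
    the paper's subregion step: the authors' exact features and settings
    are not reported in the abstract.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(intensities, dtype=float)
    centers = rng.choice(x, size=k, replace=False)       # random initial centers
    for _ in range(n_iter):
        # assign each voxel to its nearest center
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        # recompute centers as cluster means (keep old center if cluster empties)
        new_centers = np.array([x[labels == j].mean() if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```

Radiomic features would then be extracted per labeled subregion rather than from the whole tumor.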

Breast lesion classification via colorized mammograms and transfer learning in a novel CAD framework.

Hussein AA, Valizadeh M, Amirani MC, Mirbolouk S

PubMed | Jul 11 2025
Medical imaging sciences and diagnostic techniques for breast cancer (BC) imaging have advanced tremendously, particularly with the use of mammography; however, radiologists may still misinterpret medical images of the breast, leaving limitations and flaws in the screening process. As a result, Computer-Aided Diagnosis (CAD) systems have become increasingly popular because they can operate independently of human analysis. Current CAD systems rely on grayscale analysis, which lacks the contrast needed to differentiate benign from malignant lesions. This study presents an innovative CAD system that transforms standard grayscale mammography images into RGB color images through a three-path preprocessing framework developed for noise reduction, lesion highlighting, and tumor-centric intensity adjustment using a data-driven transfer function. In contrast to generic colorization, this approach statistically tailors the colorization to emphasize malignant regions, enhancing the ability of both machines and humans to recognize cancerous areas. As a consequence of this conversion, breast tumors and anomalies become more visible, allowing more accurate features to be extracted from them. In a subsequent step, machine learning (ML) algorithms classify these tumors as malignant or benign. A pre-trained model is used to extract comprehensive features from the colored mammography images. A variety of techniques are implemented in the preprocessing stage to minimize noise and improve image perception; the most challenging is the adjustment of pixel intensity values in the mammography images using a data-driven transfer function derived from tumor intensity histograms. This adjustment draws attention to tumors while reducing the brightness of other areas of the breast image.
Accuracy, sensitivity, specificity, precision, F1-score, and area under the curve (AUC) were used to evaluate the efficacy of the employed methodologies. This work employed and tested a variety of pre-trained models and ML techniques; the combination of EfficientNetB0 features with a Support Vector Machine (SVM) classifier produced the best results, with accuracy, sensitivity, specificity, precision, F1-score, and AUC of 99.4%, 98.7%, 99.1%, 99%, 98.8%, and 100%, respectively. These results show that the developed method not only advances the state of the art in technical terms but also provides radiologists with a practical tool to help reduce diagnostic errors and improve early breast cancer detection.
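The histogram-derived transfer function this abstract describes can be illustrated with a minimal sketch: per gray level, scale brightness by the fraction of pixels at that level that fall inside the tumor. The `gain`/`dim` parameters and the exact weighting are assumptions for illustration, not the paper's method:

```python
import numpy as np

def tumor_weighted_transfer(image, tumor_mask, gain=1.5, dim=0.6):
    """Brighten gray levels common inside the tumor, dim the rest.

    A minimal sketch of a data-driven transfer function: the per-level
    weight is the fraction of pixels at that gray level that lie inside
    the tumor mask (an empirical P(tumor | gray level)).
    """
    img = np.asarray(image, dtype=float)
    levels = np.arange(256)
    tumor_hist, _ = np.histogram(img[tumor_mask], bins=256, range=(0, 256))
    all_hist, _ = np.histogram(img, bins=256, range=(0, 256))
    weight = tumor_hist / np.maximum(all_hist, 1)          # 0 .. 1 per gray level
    # look-up table: interpolate between dimming and amplification by weight
    lut = np.clip(levels * (dim + (gain - dim) * weight), 0, 255)
    return lut[img.astype(int)]
```

In the paper's pipeline, a map of this kind would feed one channel of the three-path RGB colorization before feature extraction.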

Performance of Radiomics and Deep Learning Models in Predicting Distant Metastases in Soft Tissue Sarcomas: A Systematic Review and Meta-analysis.

Mirghaderi P, Valizadeh P, Haseli S, Kim HS, Azhideh A, Nyflot MJ, Schaub SK, Chalian M

PubMed | Jul 11 2025
Predicting distant metastases in soft tissue sarcomas (STS) is vital for guiding clinical decision-making. Recent advancements in radiomics and deep learning (DL) models have shown promise, but their diagnostic accuracy remains unclear. This meta-analysis assesses the performance of radiomics and DL-based models in predicting metastases in STS by analyzing pooled sensitivity and specificity. Following PRISMA guidelines, a thorough search was conducted in PubMed, Web of Science, and Embase. A random-effects model was used to estimate the pooled area under the curve (AUC), sensitivity, and specificity. Subgroup analyses were performed based on imaging modality (MRI, PET, PET/CT), feature extraction method (DL radiomics [DLR] vs. handcrafted radiomics [HCR]), incorporation of clinical features, and dataset used. Heterogeneity was assessed with the I² statistic, robustness with leave-one-out sensitivity analyses, and publication bias with Egger's test. Nineteen studies involving 1712 patients were included. The pooled AUC for predicting metastasis was 0.88 (95% CI: 0.80-0.92). The pooled AUCs were 0.88 (95% CI: 0.77-0.89) for MRI-based models, 0.80 (95% CI: 0.76-0.92) for PET-based models, and 0.91 (95% CI: 0.78-0.93) for PET/CT-based models, with no significant differences (p = 0.75). DL-based models showed significantly higher sensitivity than HCR models (p < 0.01). Including clinical features did not significantly improve model performance (AUC: 0.90 vs. 0.88, p = 0.99). Significant heterogeneity was noted (I² > 25%), and Egger's test suggested potential publication bias (p < 0.001). Radiomics models showed promising potential for predicting metastases in STS, with DL approaches outperforming traditional HCR.
While integrating this approach into routine clinical practice is still evolving, it can aid physicians in identifying high-risk patients and implementing targeted monitoring strategies to reduce the risk of severe complications associated with metastasis. However, challenges such as heterogeneity, limited external validation, and potential publication bias persist. Future research should concentrate on standardizing imaging protocols and conducting multi-center validation studies to improve the clinical applicability of radiomics predictive models.
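The random-effects pooling behind these estimates can be sketched with the standard DerSimonian-Laird estimator. This is an illustration under the assumption of study-level effects with known within-study variances; the review's exact software and settings are not stated in the abstract:

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird).

    `effects` are study-level estimates (e.g. logit-transformed AUCs),
    `variances` their within-study variances. Returns the pooled effect,
    the between-study variance tau^2, and the I^2 heterogeneity statistic
    in percent. Illustrative sketch only.
    """
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and its degrees of freedom
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0    # heterogeneity, %
    # re-weight with tau^2 added to each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2, i2
```

An I² above roughly 25%, as reported here, indicates at least low-to-moderate heterogeneity, which is why the review pairs the pooled estimate with leave-one-out sensitivity analyses.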

Advancing Rare Neurological Disorder Diagnosis: Addressing Challenges with Systematic Reviews and AI-Driven MRI Meta-Trans Learning Framework for NeuroDegenerative Disorders.

Gupta A, Malhotra D

PubMed | Jul 11 2025
Neurological disorders (ND) affect a large portion of the global population, impacting the brain, spinal cord, and nerves. These disorders fall into categories such as NeuroDevelopmental (NDD), NeuroBiological (NBD), and NeuroDegenerative (NDe) disorders, which range from common to rare conditions. While artificial intelligence (AI) has advanced healthcare diagnostics, training machine learning (ML) and deep learning (DL) models for early detection of rare neurological disorders remains a challenge due to limited patient data. This data scarcity poses a significant public health issue. Meta-Trans Learning (MTAL), which integrates Meta-Learning (MtL) and Transfer Learning (TL), offers a promising solution by leveraging small datasets to extract expert patterns, generalize findings, and reduce AI bias in healthcare. This research systematically reviews studies from 2018 to 2024 to explore how ML and MTAL techniques are applied in diagnosing NDD, NBD, and NDe disorders. It also provides statistical and parametric analysis of ML and DL methods for neurological disorder diagnosis. Lastly, the study introduces an MRI-based NDe-MTAL framework to aid healthcare professionals in the early detection of rare neurological disorders, aiming to enhance diagnostic accuracy and advance healthcare practices.

GH-UNet: group-wise hybrid convolution-VIT for robust medical image segmentation.

Wang S, Li G, Gao M, Zhuo L, Liu M, Ma Z, Zhao W, Fu X

PubMed | Jul 10 2025
Medical image segmentation is vital for accurate diagnosis. While U-Net-based models are effective, they struggle to capture long-range dependencies in complex anatomy. We propose GH-UNet, a Group-wise Hybrid Convolution-ViT model within the U-Net framework, to address this limitation. GH-UNet integrates a hybrid convolution-Transformer encoder for both local detail and global context modeling, a Group-wise Dynamic Gating (GDG) module for adaptive feature weighting, and a cascaded decoder for multi-scale integration. Both the encoder and GDG are modular, enabling compatibility with various CNN or ViT backbones. Extensive experiments on five public datasets and one private dataset show that GH-UNet consistently achieves superior performance. On ISIC2016, it surpasses H2Former with gains of 1.37% in Dice and 1.94% in IoU while using only 38% of the parameters and 49.61% of the FLOPs. The code is freely available at https://github.com/xiachashuanghua/GH-UNet.

MRI sequence focused on pancreatic morphology evaluation: three-shot turbo spin-echo with deep learning-based reconstruction.

Kadoya Y, Mochizuki K, Asano A, Miyakawa K, Kanatani M, Saito J, Abo H

PubMed | Jul 10 2025
Background: Higher-resolution magnetic resonance imaging sequences are needed for the early detection of pancreatic cancer.
Purpose: To compare the quality of our novel T2-weighted, high-contrast, thin-slice imaging sequence with improved spatial resolution and deep learning-based reconstruction (three-shot turbo spin-echo with deep learning-based reconstruction [3S-TSE-DLR]) for imaging the pancreas against three conventional sequences (half-Fourier acquisition single-shot turbo spin-echo [HASTE], fat-suppressed 3D T1-weighted [FS-3D-T1W] imaging, and magnetic resonance cholangiopancreatography [MRCP]).
Material and Methods: Pancreatic images of 50 healthy volunteers acquired with 3S-TSE-DLR, HASTE, FS-3D-T1W imaging, and MRCP were compared by two diagnostic radiologists. A 5-point scale was used to assess motion artifacts, pancreatic margin sharpness, and the ability to identify the main pancreatic duct (MPD) on 3S-TSE-DLR, HASTE, and FS-3D-T1W imaging. The ability to identify the MPD via MRCP was also evaluated.
Results: Artifact scores (higher scores indicate fewer artifacts) were significantly higher for 3S-TSE-DLR than for HASTE, and significantly lower for 3S-TSE-DLR than for FS-3D-T1W imaging, for both radiologists. Sharpness scores were significantly higher for 3S-TSE-DLR than for HASTE and FS-3D-T1W imaging, for both radiologists. The MPD identification rate was significantly higher for 3S-TSE-DLR than for FS-3D-T1W imaging for both radiologists, and significantly higher for 3S-TSE-DLR than for HASTE for one radiologist. The MPD identification rate did not differ significantly between 3S-TSE-DLR and MRCP.
Conclusion: 3S-TSE-DLR provides better image sharpness than conventional sequences, identifies the MPD as well as or better than HASTE, and shows identification performance comparable to that of MRCP.

Non-invasive identification of TKI-resistant NSCLC: a multi-model AI approach for predicting EGFR/TP53 co-mutations.

Li J, Xu R, Wang D, Liang Z, Li Y, Wang Q, Bi L, Qi Y, Zhou Y, Li W

PubMed | Jul 10 2025
To investigate the value of a multi-model approach based on preoperative CT scans in predicting EGFR/TP53 co-mutation status. We retrospectively included 2171 patients with non-small cell lung cancer (NSCLC) who had pre-treatment computed tomography (CT) scans and epidermal growth factor receptor (EGFR) gene sequencing results from West China Hospital between January 2013 and April 2024. A deep-learning model was built to predict EGFR/tumor protein 53 (TP53) co-occurrence status. Model performance was evaluated by area under the curve (AUC) and Kaplan-Meier analysis. We further compared the multi-dimensional model with three one-dimensional models separately, and we explored the value of combining clinical factors with machine-learning factors. Additionally, we investigated 546 patients with 56-panel next-generation sequencing and low-dose computed tomography (LDCT) to explore the biological mechanisms underlying the radiomics. In our cohort of 2171 patients (1153 males, 1018 females; median age 60 years), single-dimensional models were developed using data from 1055 eligible patients. The multi-dimensional model using a Random Forest classifier achieved superior performance, yielding the highest AUC of 0.843 for predicting EGFR/TP53 co-mutations in the test set. The multi-dimensional model demonstrates promising potential for non-invasive prediction of EGFR and TP53 co-mutations, facilitating early and informed clinical decision-making in NSCLC patients at risk of treatment resistance.

Attention-based multimodal deep learning for interpretable and generalizable prediction of pathological complete response in breast cancer.

Nishizawa T, Maldjian T, Jiao Z, Duong TQ

PubMed | Jul 10 2025
Accurate prediction of pathological complete response (pCR) to neoadjuvant chemotherapy (NAC) has significant clinical utility in the management of breast cancer treatment. Although multimodal deep learning models have shown promise for predicting pCR from medical imaging and other clinical data, their adoption has been limited by challenges with interpretability and generalizability across institutions. We developed a multimodal deep learning model combining post-contrast enhanced whole-breast MRI at pre- and post-treatment timepoints with non-imaging clinical features. The model integrates 3D convolutional neural networks and self-attention to capture spatial and cross-modal interactions. We used two public multi-institutional datasets for internal and external validation: data from the I-SPY 2 trial (N = 660) for model training and validation, and the I-SPY 1 dataset (N = 114) for external validation. Of the 660 patients in I-SPY 2, 217 achieved pCR (32.88%); of the 114 patients in I-SPY 1, 29 achieved pCR (25.44%). The attention-based multimodal model yielded the best predictive performance, with an AUC of 0.73 ± 0.04 on the internal data and 0.71 ± 0.02 on the external dataset. The MRI-only model (internal AUC = 0.68 ± 0.03, external AUC = 0.70 ± 0.04) and the non-MRI clinical-features-only model (internal AUC = 0.66 ± 0.08, external AUC = 0.71 ± 0.03) trailed in performance, indicating that the combination of both modalities is most effective. We present a robust and interpretable deep learning framework for pCR prediction in breast cancer patients undergoing NAC. By combining imaging and clinical data with attention-based fusion, the model achieves strong predictive performance and generalizes across institutions.
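Attention-based fusion of imaging and clinical features, as described in this abstract, can be sketched at its simplest: concatenate token vectors from both modalities and let self-attention weight the cross-modal interactions before pooling. The weights below are random stand-ins, not the trained model, and the token/embedding sizes are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fusion(img_tokens, clin_tokens, d=16, seed=0):
    """Toy self-attention over concatenated image + clinical tokens.

    `img_tokens` (n_img, f) and `clin_tokens` (n_clin, f) share a feature
    width f; the attention matrix lets every token attend across both
    modalities, and mean-pooling yields one fused vector. Random
    projection weights stand in for learned parameters.
    """
    rng = np.random.default_rng(seed)
    tokens = np.concatenate([img_tokens, clin_tokens], axis=0)   # (n, f)
    Wq, Wk, Wv = (rng.normal(size=(tokens.shape[1], d)) for _ in range(3))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(d), axis=-1)    # (n, n) cross-modal weights
    fused = (attn @ v).mean(axis=0)                  # pooled fused representation
    return fused, attn
```

Inspecting the rows of `attn` is also what makes this style of fusion interpretable: they show how much each prediction leans on imaging versus clinical tokens.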

FF Swin-Unet: a strategy for automated segmentation and severity scoring of NAFLD.

Fan L, Lei Y, Song F, Sun X, Zhang Z

PubMed | Jul 10 2025
Non-alcoholic fatty liver disease (NAFLD) is a significant risk factor for liver cancer and cardiovascular diseases, imposing substantial social and economic burdens. Computed tomography (CT) scans are crucial for diagnosing NAFLD and assessing its severity. However, current manual measurement techniques require considerable human effort and resources from radiologists, and there is a lack of standardized methods for classifying the severity of NAFLD in existing research. To address these challenges, we propose a novel method for NAFLD segmentation and automated severity scoring. The method consists of three key modules: (1) The Semi-automatization nnU-Net Module (SNM) constructs a high-quality dataset by combining manual annotations with semi-automated refinement; (2) The Focal Feature Fusion Swin-Unet Module (FSM) enhances liver and spleen segmentation through multi-scale feature fusion and Swin Transformer-based architectures; (3) The Automated Severity Scoring Module (ASSM) integrates segmentation results with radiological features to classify NAFLD severity. These modules are embedded in a Flask-RESTful API-based system, enabling users to upload abdominal CT data for automated preprocessing, segmentation, and scoring. The Focal Feature Fusion Swin-Unet (FF Swin-Unet) method significantly improves segmentation accuracy, achieving a Dice similarity coefficient (DSC) of 95.64% and a 95th percentile Hausdorff distance (HD95) of 15.94. The accuracy of the automated severity scoring is 90%. With model compression and ONNX deployment, the evaluation speed for each case is approximately 5 seconds. Compared to manual diagnosis, the system can process a large volume of data simultaneously, rapidly, and efficiently while maintaining the same level of diagnostic accuracy, significantly reducing the workload of medical professionals. 
Our research demonstrates that the proposed system has high accuracy in processing large volumes of CT data and providing automated NAFLD severity scores quickly and efficiently. This method has the potential to significantly reduce the workload of medical professionals and holds immense clinical application potential.
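The two segmentation metrics reported above, the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95), can be computed as below. The HD95 sketch brute-forces distances over all foreground voxels (true HD95 is usually taken over surface points with a distance transform), a simplification adequate for illustration:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance, in voxel units.

    Brute-force over all foreground coordinates; fine for small masks,
    a sketch of the metric's definition rather than a production kernel.
    """
    pa = np.argwhere(np.asarray(a, bool))
    pb = np.argwhere(np.asarray(b, bool))
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)   # all pairs
    # 95th percentile of nearest-neighbor distances, in both directions
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))
```

DSC rewards overlap (the paper's 95.64%), while HD95 penalizes boundary outliers (the paper's 15.94), which is why the two are reported together.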

Objective assessment of diagnostic image quality in CT scans: what radiologists and researchers need to know.

Hoeijmakers EJI, Martens B, Wildberger JE, Flohr TG, Jeukens CRLPN

PubMed | Jul 10 2025
Quantifying diagnostic image quality (IQ) is not straightforward but essential for optimizing the balance between IQ and radiation dose, and for ensuring consistent high-quality images in CT imaging. This review provides a comprehensive overview of advanced objective reference-free IQ assessment methods for CT scans, beyond standard approaches. A literature search was performed in PubMed and Web of Science up to June 2024 to identify studies using advanced objective image quality methods on clinical CT scans. Only reference-free methods, which do not require a predefined reference image, were included. Traditional methods relying on the standard deviation of the Hounsfield units, the signal-to-noise ratio or contrast-to-noise ratio, all within a manually selected region-of-interest, were excluded. Eligible results were categorized by IQ metric (i.e., noise, contrast, spatial resolution and other) and assessment method (manual, automated, and artificial intelligence (AI)-based). Thirty-five studies were included that proposed or employed reference-free IQ methods, identifying 12 noise assessment methods, 4 contrast assessment methods, 14 spatial resolution assessment methods and 7 others, based on manual, automated or AI-based approaches. This review emphasizes the transition from manual to fully automated approaches for IQ assessment, including the potential of AI-based methods, and it provides a reference tool for researchers and radiologists who need to make a well-considered choice in how to evaluate IQ in CT imaging. This review examines the challenge of quantifying diagnostic CT image quality, essential for optimization studies and ensuring consistent high-quality images, by providing an overview of objective reference-free diagnostic image quality assessment methods beyond standard methods. Quantifying diagnostic CT image quality remains a key challenge. This review summarizes objective diagnostic image quality assessment techniques beyond standard metrics. 
A decision tree is provided to help select optimal image quality assessment techniques.
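A simple automated, reference-free noise metric of the kind this review surveys can be sketched as: tile the image into patches, take each patch's standard deviation, and report a low percentile, which is dominated by patches over uniform tissue. This is a generic illustration, not a specific method from the included studies, and `patch`/`pct` are illustrative parameters:

```python
import numpy as np

def patch_noise_estimate(image, patch=7, pct=10):
    """Reference-free global noise estimate for a 2-D image.

    Splits the image into non-overlapping patch x patch blocks, computes
    each block's standard deviation, and returns a low percentile of those
    values; uniform-region patches carry mostly noise, so their std
    approximates the noise level without any reference image or manual ROI.
    """
    img = np.asarray(image, dtype=float)
    h, w = (s - s % patch for s in img.shape)            # crop to patch multiples
    blocks = img[:h, :w].reshape(h // patch, patch, w // patch, patch)
    stds = blocks.std(axis=(1, 3)).ravel()               # one std per block
    return float(np.percentile(stds, pct))
```

Fully automated variants of this idea, including AI-based ones that first segment homogeneous tissue, are among the twelve noise assessment methods the review categorizes.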