
MRI sequence focused on pancreatic morphology evaluation: three-shot turbo spin-echo with deep learning-based reconstruction.

Kadoya Y, Mochizuki K, Asano A, Miyakawa K, Kanatani M, Saito J, Abo H

PubMed · Jul 10 2025
Background: Higher-resolution magnetic resonance imaging sequences are needed for the early detection of pancreatic cancer. Purpose: To compare the quality of our novel T2-weighted, high-contrast, thin-slice imaging sequence with improved spatial resolution and deep learning-based reconstruction (three-shot turbo spin-echo with deep learning-based reconstruction [3S-TSE-DLR]) for imaging the pancreas against three conventional sequences (half-Fourier acquisition single-shot turbo spin-echo [HASTE], fat-suppressed 3D T1-weighted [FS-3D-T1W] imaging, and magnetic resonance cholangiopancreatography [MRCP]). Material and Methods: Pancreatic images of 50 healthy volunteers acquired with 3S-TSE-DLR, HASTE, FS-3D-T1W imaging, and MRCP were compared by two diagnostic radiologists. A 5-point scale was used to assess motion artifacts, pancreatic margin sharpness, and the ability to identify the main pancreatic duct (MPD) on 3S-TSE-DLR, HASTE, and FS-3D-T1W imaging. The ability to identify the MPD on MRCP was also evaluated. Results: Artifact scores (higher scores indicate fewer artifacts) were significantly higher for 3S-TSE-DLR than for HASTE, and significantly lower for 3S-TSE-DLR than for FS-3D-T1W imaging, for both radiologists. Sharpness scores were significantly higher for 3S-TSE-DLR than for HASTE and FS-3D-T1W imaging for both radiologists. The rate of identification of the MPD was significantly higher for 3S-TSE-DLR than for FS-3D-T1W imaging for both radiologists, and significantly higher for 3S-TSE-DLR than for HASTE for one radiologist. The rate of identification of the MPD did not differ significantly between 3S-TSE-DLR and MRCP. Conclusion: 3S-TSE-DLR provides better image sharpness than conventional sequences, identifies the MPD as well as or better than HASTE, and shows identification performance comparable to that of MRCP.
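The per-reader, per-sequence 5-point scores in such studies lend themselves to paired nonparametric comparison. Below is a minimal sketch of how scores like these might be compared, using synthetic placeholder arrays; the Wilcoxon signed-rank test here is an assumption standing in for whatever test the authors actually used.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical 5-point sharpness scores from one radiologist for the same
# 50 volunteers under two sequences (higher = sharper margins).
rng = np.random.default_rng(0)
scores_3s_tse_dlr = rng.integers(3, 6, size=50)  # placeholder data
scores_haste = rng.integers(2, 5, size=50)       # placeholder data

# Paired, nonparametric comparison of ordinal scores per volunteer.
# zero_method="wilcox" drops tied pairs (same score on both sequences).
stat, p_value = wilcoxon(scores_3s_tse_dlr, scores_haste, zero_method="wilcox")
print(f"Wilcoxon statistic={stat:.1f}, p={p_value:.4f}")
```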

Non-invasive identification of TKI-resistant NSCLC: a multi-model AI approach for predicting EGFR/TP53 co-mutations.

Li J, Xu R, Wang D, Liang Z, Li Y, Wang Q, Bi L, Qi Y, Zhou Y, Li W

PubMed · Jul 10 2025
To investigate the value of a multi-model approach based on preoperative CT scans in predicting EGFR/TP53 co-mutation status, we retrospectively included 2171 patients with non-small cell lung cancer (NSCLC) who had pre-treatment computed tomography (CT) scans and epidermal growth factor receptor (EGFR) gene sequencing results from West China Hospital between January 2013 and April 2024. A deep learning model was built to predict EGFR/tumor protein 53 (TP53) co-mutation status. Model performance was evaluated by area under the curve (AUC) and Kaplan-Meier analysis. We further compared the multi-dimensional model with three single-dimensional models, and explored the value of combining clinical factors with machine learning-derived features. Additionally, we investigated 546 patients with 56-panel next-generation sequencing and low-dose computed tomography (LDCT) to explore the biological mechanisms underlying the radiomics features. In our cohort of 2171 patients (1153 males, 1018 females; median age 60 years), single-dimensional models were developed using data from 1055 eligible patients. The multi-dimensional model using a Random Forest classifier achieved superior performance, yielding the highest AUC of 0.843 for predicting EGFR/TP53 co-mutations in the test set. The multi-dimensional model demonstrates promising potential for non-invasive prediction of EGFR and TP53 co-mutations, facilitating early and informed clinical decision-making in NSCLC patients at risk of treatment resistance.
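As a rough illustration of the kind of multi-dimensional fusion the abstract describes, the sketch below concatenates hypothetical clinical, radiomics, and deep-feature blocks and trains a Random Forest on the combined matrix. All variable names, dimensions, and data here are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1055  # eligible patients in the single-dimensional analyses

# Hypothetical feature blocks: clinical factors, radiomics, deep features.
X_clinical = rng.normal(size=(n, 8))
X_radiomics = rng.normal(size=(n, 50))
X_deep = rng.normal(size=(n, 128))
y = rng.integers(0, 2, size=n)  # EGFR/TP53 co-mutation label (placeholder)

# Multi-dimensional model: concatenate all blocks into one feature matrix.
X_multi = np.hstack([X_clinical, X_radiomics, X_deep])
X_tr, X_te, y_tr, y_te = train_test_split(X_multi, y, test_size=0.3,
                                          random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"multi-dimensional AUC: {auc:.3f}")
```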

Attention-based multimodal deep learning for interpretable and generalizable prediction of pathological complete response in breast cancer.

Nishizawa T, Maldjian T, Jiao Z, Duong TQ

PubMed · Jul 10 2025
Accurate prediction of pathological complete response (pCR) to neoadjuvant chemotherapy (NAC) has significant clinical utility in the management of breast cancer treatment. Although multimodal deep learning models have shown promise for predicting pCR from medical imaging and other clinical data, their adoption has been limited by challenges with interpretability and generalizability across institutions. We developed a multimodal deep learning model combining post-contrast-enhanced whole-breast MRI at pre- and post-treatment timepoints with non-imaging clinical features. The model integrates 3D convolutional neural networks and self-attention to capture spatial and cross-modal interactions. We utilized two public multi-institutional datasets to perform internal and external validation: data from the I-SPY 2 trial (N = 660) for model training and validation, and the I-SPY 1 dataset (N = 114) for external validation. Of the 660 patients in I-SPY 2, 217 achieved pCR (32.88%); of the 114 patients in I-SPY 1, 29 achieved pCR (25.44%). The attention-based multimodal model yielded the best predictive performance, with an AUC of 0.73 ± 0.04 on the internal data and 0.71 ± 0.02 on the external dataset. The MRI-only model (internal AUC = 0.68 ± 0.03, external AUC = 0.70 ± 0.04) and the clinical-features-only model (internal AUC = 0.66 ± 0.08, external AUC = 0.71 ± 0.03) trailed in performance, indicating that the combination of both modalities is most effective. We present a robust and interpretable deep learning framework for pCR prediction in breast cancer patients undergoing NAC. By combining imaging and clinical data with attention-based fusion, the model achieves strong predictive performance and generalizes across institutions.
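To make the fusion idea concrete, here is a minimal PyTorch sketch of attention-based fusion between an imaging embedding (e.g., pooled from a 3D CNN) and a clinical-feature embedding. The layer sizes, head count, and two-token design are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse an image embedding and a clinical embedding via attention."""
    def __init__(self, img_dim=256, clin_dim=32, fused_dim=128):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, fused_dim)
        self.clin_proj = nn.Linear(clin_dim, fused_dim)
        self.attn = nn.MultiheadAttention(embed_dim=fused_dim, num_heads=4,
                                          batch_first=True)
        self.head = nn.Linear(fused_dim, 1)  # pCR logit

    def forward(self, img_feat, clin_feat):
        # Treat each modality as a one-token sequence; self-attention over
        # the two tokens lets each modality attend to the other.
        tokens = torch.stack([self.img_proj(img_feat),
                              self.clin_proj(clin_feat)], dim=1)  # (B, 2, D)
        fused, _ = self.attn(tokens, tokens, tokens)
        pooled = fused.mean(dim=1)                                # (B, D)
        return self.head(pooled).squeeze(-1)                      # logits

# Hypothetical batch: 4 patients, 256-d MRI features, 32-d clinical features.
model = AttentionFusion()
logits = model(torch.randn(4, 256), torch.randn(4, 32))
print(logits.shape)  # torch.Size([4])
```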

A two-stage dual-task learning strategy for early prediction of pathological complete response to neoadjuvant chemotherapy for breast cancer using dynamic contrast-enhanced magnetic resonance images.

Jing B, Wang J

PubMed · Jul 10 2025
Early prediction of treatment response can facilitate personalized treatment for breast cancer patients. Studies on the I-SPY 2 clinical trial demonstrate that multi-time point dynamic contrast-enhanced magnetic resonance (DCEMR) imaging improves the accuracy of predicting pathological complete response (pCR) to chemotherapy. However, previous image-based prediction models usually rely on mid- or post-treatment images to ensure prediction accuracy, which may offset the benefit of a response-based adaptive treatment strategy. Accurately predicting pCR at an early time point is desirable yet remains challenging. To improve prediction accuracy at the early time point of treatment, we proposed a two-stage dual-task learning strategy to train a deep neural network for early prediction using only early-treatment data. We developed and evaluated our proposed method using the I-SPY 2 dataset, which included DCEMR images acquired at three time points: pretreatment (T0), after 3 weeks of treatment (T1), and after 12 weeks of treatment (T2). In the first stage, we trained a convolutional long short-term memory (LSTM) model using all the data to predict pCR and extract the latent space image representation at T2. In the second stage, we trained a dual-task model to simultaneously predict pCR and the image representation at T2 using images from T0 and T1 only, allowing pCR to be predicted earlier without using images from T2. With the conventional single-stage single-task strategy, the area under the receiver operating characteristic curve (AUROC) was 0.799; with the proposed two-stage dual-task learning strategy, the AUROC improved to 0.820. Our proposed two-stage dual-task learning strategy significantly improved model performance (p = 0.0025) for predicting pCR at the early time point (week 3) of neoadjuvant chemotherapy for high-risk breast cancer patients. The early prediction model can potentially help physicians intervene early and develop personalized plans at the early stage of chemotherapy.
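The two-stage idea resembles a teacher-student setup: the first-stage model, trained on all time points, supplies the T2 latent representation that the early-time-point model learns to reproduce alongside the pCR label. Here is a minimal sketch of the second-stage dual-task loss; all dimensions, the architecture, and the loss weighting are hypothetical.

```python
import torch
import torch.nn as nn

# Stage-2 student: sees only T0/T1 image features, predicts both the pCR
# label and the stage-1 teacher's latent representation at T2.
class DualTaskHead(nn.Module):
    def __init__(self, in_dim=512, latent_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.pcr_head = nn.Linear(256, 1)              # task 1: pCR logit
        self.latent_head = nn.Linear(256, latent_dim)  # task 2: T2 latent

    def forward(self, x_early):
        h = self.backbone(x_early)
        return self.pcr_head(h).squeeze(-1), self.latent_head(h)

model = DualTaskHead()
bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()

# Hypothetical batch: early-treatment features, labels, frozen teacher latents.
x_early = torch.randn(8, 512)
y_pcr = torch.randint(0, 2, (8,)).float()
teacher_latent_t2 = torch.randn(8, 128)  # from the stage-1 model, detached

pcr_logit, latent_pred = model(x_early)
lam = 0.5  # assumed weighting between the two tasks
loss = bce(pcr_logit, y_pcr) + lam * mse(latent_pred, teacher_latent_t2)
loss.backward()
```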

Intelligent quality assessment of ultrasound images for fetal nuchal translucency measurement during the first trimester of pregnancy based on deep learning models.

Liu L, Wang T, Zhu W, Zhang H, Tian H, Li Y, Cai W, Yang P

PubMed · Jul 10 2025
As increased nuchal translucency (NT) thickness is notably associated with fetal chromosomal abnormalities, structural defects, and genetic syndromes, accurate measurement of NT thickness is crucial for the screening of fetal abnormalities during the first trimester. We aimed to develop a model for quality assessment of ultrasound images to enable precise measurement of fetal NT thickness. We collected 2140 ultrasound images of midsagittal sections of the fetal face between 11 and 14 weeks of gestation. Several image segmentation models were trained, and the one exhibiting the highest DSC and lowest HD95 was chosen to automatically segment the region of interest (ROI). Radiomics features and deep transfer learning (DTL) features were extracted and selected to construct radiomics and DTL models. Feature screening was conducted using the t-test, Mann-Whitney U-test, Spearman’s rank correlation analysis, and LASSO. We also developed early fusion and late fusion models to integrate the advantages of the radiomics and DTL models. The optimal model was compared with junior radiologists. We used SHapley Additive exPlanations (SHAP) to investigate the model’s interpretability. DeepLabV3-ResNet achieved the best segmentation performance (DSC: 98.07 ± 0.02%, HD95: 0.75 ± 0.15 mm). The feature fusion model demonstrated the best classification performance (AUC: 0.978, 95% CI: 0.965–0.990, accuracy: 93.2%, sensitivity: 93.1%, specificity: 93.4%, PPV: 93.5%, NPV: 93.0%, precision: 93.5%). This model performed more reliably than junior radiologists and significantly improved their capabilities. The SHAP summary plot showed that DTL features were the most important features for the feature fusion model. The proposed models bridge gaps in previous studies, achieving intelligent quality assessment of ultrasound images for NT measurement and highly accurate automatic segmentation of ROIs. They are potential tools to enhance quality control for fetal ultrasound examinations, streamline clinical workflows, and improve the professional skills of less-experienced radiologists. Supplementary material is available at 10.1186/s12884-025-07863-y.
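Of the screening steps listed, the LASSO stage is the most mechanical: an L1-penalized model drives uninformative radiomics/DTL coefficients to zero and keeps the rest. A hedged sketch with purely synthetic features (all names, shapes, and the regularization strength are assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 300))   # hypothetical radiomics + DTL features
y = rng.integers(0, 2, size=400)  # image-quality label (placeholder)

# L1 penalty zeroes out weak features; C controls selection strictness.
X_std = StandardScaler().fit_transform(X)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(X_std, y)

selected = np.flatnonzero(lasso.coef_.ravel())
print(f"{selected.size} features retained out of {X.shape[1]}")
```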

Predicting Thoracolumbar Vertebral Osteoporotic Fractures: Value Assessment of Chest CT-Based Machine Learning.

Chen Y, Che M, Yang H, Yu M, Yang Z, Qin J

PubMed · Jul 10 2025
To assess the value of a chest CT-based machine learning model in predicting osteoporotic vertebral fractures (OVFs) of the thoracolumbar vertebral bodies, we monitored 8910 patients aged ≥50 years who underwent chest CT (2021-2024), identifying 54 incident OVF cases. Using propensity score matching, 108 controls were selected, and the resulting 162 patients were randomly assigned to training (n=113) and testing (n=49) cohorts. Clinical models were developed through logistic regression. Radiomics features were extracted from the thoracolumbar vertebral bodies (T11-L2), and the top 10 features, selected via minimum-redundancy maximum-relevance (mRMR) and the least absolute shrinkage and selection operator (LASSO), were used to construct a Radscore model. A nomogram model combining clinical and radiomics features was then established and evaluated using receiver operating characteristic curves, decision curve analysis (DCA), and calibration plots. Volumetric bone mineral density (vBMD) (OR=0.95, 95%CI=0.93-0.97) and hemoglobin (HGB) (OR=0.96, 95%CI=0.94-0.98) were selected as independent risk factors for the clinical model. From 2288 radiomics features, 10 were selected for Radscore calculation. The nomogram model (Radscore + vBMD + HGB) achieved AUCs of 0.938/0.906 in the training/testing cohorts, outperforming both the Radscore (AUC=0.902/0.871) and clinical (AUC=0.802/0.820) models. DCA and calibration plots confirmed the nomogram model's superior prediction capability. The nomogram model combining radiomics and clinical features has high predictive performance, and its predictions of thoracolumbar OVFs can inform clinical decision making.
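A nomogram of this kind is, underneath, a logistic regression whose linear predictor combines the Radscore with the clinical variables; per-unit odds ratios below 1 (as reported for vBMD and HGB) mean higher values lower the predicted fracture risk. A sketch under assumed data, with variable names borrowed from the abstract:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 162  # matched cohort size from the abstract

# Hypothetical predictors named after the abstract's variables.
radscore = rng.normal(size=n)
vbmd = rng.normal(110, 25, size=n)  # mg/cm^3, assumed units
hgb = rng.normal(130, 15, size=n)   # g/L, assumed units
y = rng.integers(0, 2, size=n)      # OVF label (placeholder)

X = np.column_stack([radscore, vbmd, hgb])
nomogram = LogisticRegression().fit(X, y)

# Odds ratios per unit increase, analogous to the reported OR=0.95 (vBMD)
# and OR=0.96 (HGB); values here are meaningless for the random data.
print(dict(zip(["Radscore", "vBMD", "HGB"], np.exp(nomogram.coef_.ravel()))))
print("apparent AUC:", roc_auc_score(y, nomogram.predict_proba(X)[:, 1]))
```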

Hierarchical deep learning system for orbital fracture detection and trap-door classification on CT images.

Oku H, Nakamura Y, Kanematsu Y, Akagi A, Kinoshita S, Sotozono C, Koizumi N, Watanabe A, Okumura N

PubMed · Jul 10 2025
To develop and evaluate a hierarchical deep learning system that detects orbital fractures on computed tomography (CT) images and classifies them as depressed or trap-door types, we conducted a retrospective diagnostic accuracy study of CT images from patients with confirmed orbital fractures. We collected CT images from 686 patients with orbital fractures treated at a single institution (2010-2025), yielding 46,013 orbital CT slices. After preprocessing, 7809 slices were selected as regions of interest and partitioned into training (6508 slices) and test (1301 slices) datasets. Our hierarchical approach consisted of a first-stage classifier (YOLOv8) for fracture detection and a second-stage classifier (Vision Transformer) for distinguishing depressed from trap-door fractures. Performance was evaluated at both the slice and patient levels, focusing on accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC-ROC). For fracture detection, YOLOv8 achieved a slice-level sensitivity of 80.4% and specificity of 79.2%, with patient-level performance improving to 94.7% sensitivity and 90.0% specificity. For fracture classification, the Vision Transformer demonstrated a slice-level sensitivity of 91.5% and specificity of 83.5% for trap-door versus depressed fractures, with patient-level metrics of 100% sensitivity and 88.9% specificity. The complete system correctly identified 18/20 no-fracture cases, 35/40 depressed fracture cases, and 15/17 trap-door fracture cases. Our hierarchical deep learning system effectively detects orbital fractures and distinguishes between depressed and trap-door types with high accuracy. This approach could aid in the timely identification of trap-door fractures requiring urgent surgical intervention, particularly in settings lacking specialized expertise.
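The slice-to-patient improvement (80.4% to 94.7% detection sensitivity) comes from aggregating per-slice outputs into one patient-level call. Below is a minimal sketch of one plausible aggregation rule, a minimum count of positive slices; the rule and threshold are assumptions, since the abstract does not state how aggregation was performed.

```python
from collections import defaultdict

def patient_level_calls(slice_preds, min_positive_slices=2):
    """Aggregate (patient_id, slice_is_fracture) pairs into patient labels.

    A patient is called positive when at least `min_positive_slices` of
    their slices are flagged. The rule and threshold are illustrative.
    """
    counts = defaultdict(int)
    for patient_id, is_fracture in slice_preds:
        counts[patient_id] += int(is_fracture)
    return {pid: c >= min_positive_slices for pid, c in counts.items()}

# Hypothetical slice-level detector outputs for three patients.
preds = [("p1", True), ("p1", True), ("p1", False),
         ("p2", False), ("p2", False),
         ("p3", True), ("p3", False)]
print(patient_level_calls(preds))  # {'p1': True, 'p2': False, 'p3': False}
```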

Population-scale cross-sectional observational study for AI-powered TB screening on one million CXRs.

Munjal P, Mahrooqi AA, Rajan R, Jeremijenko A, Ahmad I, Akhtar MI, Pimentel MAF, Khan S

PubMed · Jul 9 2025
Traditional tuberculosis (TB) screening involves radiologists manually reviewing chest X-rays (CXR), which is time-consuming, error-prone, and limited by workforce shortages. Our AI model, AIRIS-TB (AI Radiology In Screening TB), aims to address these challenges by automating the reporting of all X-rays without any findings. AIRIS-TB was evaluated on over one million CXRs, achieving an AUC of 98.51% and an overall false negative rate (FNR) of 1.57%, outperforming radiologists (1.85%) while maintaining a 0% TB-FNR. By deferring only cases with findings to radiologists, the model has the potential to automate up to 80% of routine CXR reporting. Subgroup analysis revealed no significant performance disparities across age, sex, HIV status, and region of origin, and sputum tests for suspected TB showed a strong correlation with model predictions. This large-scale validation demonstrates AIRIS-TB's safety and efficiency in high-volume TB screening programs, reducing radiologist workload without compromising diagnostic accuracy.
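The deferral logic is a threshold rule: studies the model scores as normal are auto-reported and everything else goes to a radiologist, so the quantities of interest are the automation rate and the FNR among all studies with findings. A sketch with synthetic scores (prevalence, score model, and operating point are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
has_finding = rng.random(n) < 0.2  # hypothetical prevalence of findings
score = np.clip(has_finding * 0.6 + rng.normal(0.3, 0.2, n), 0, 1)

threshold = 0.45  # assumed operating point: below it, auto-report as normal
auto_reported = score < threshold

automation_rate = auto_reported.mean()
# FNR here: studies with findings that were auto-reported as normal,
# as a fraction of all studies with findings.
fnr = (auto_reported & has_finding).sum() / has_finding.sum()
print(f"automation rate: {automation_rate:.1%}, FNR: {fnr:.2%}")
```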

Development of a deep learning-based MRI diagnostic model for human Brucella spondylitis.

Wang B, Wei J, Wang Z, Niu P, Yang L, Hu Y, Shao D, Zhao W

PubMed · Jul 9 2025
Brucella spondylitis (BS) and tuberculous spondylitis (TS) are prevalent spinal infections with distinct treatment protocols. Rapid and accurate differentiation between these two conditions is crucial for effective clinical management; however, current imaging and pathogen-based diagnostic methods fall short of fully meeting clinical requirements. This study explores the feasibility of employing deep learning (DL) models based on conventional magnetic resonance imaging (MRI) to differentiate BS from TS. A total of 310 subjects were enrolled at our hospital, comprising 209 with BS and 101 with TS; they were randomly divided into a training set (n = 217) and a test set (n = 93). An additional 74 subjects from another hospital formed the external validation set. A Convolutional Block Attention Module (CBAM) was integrated into the ResNeXt-50 architecture, and the model was trained using sagittal T2-weighted images (T2WI). Classification performance was evaluated using the area under the receiver operating characteristic curve (AUC), and diagnostic accuracy was compared against general-purpose models such as ResNet50, GoogleNet, EfficientNetV2, and VGG16. The CBAM-ResNeXt model showed superior performance, with accuracy, precision, recall, F1-score, and AUC of 0.942, 0.940, 0.928, 0.934, and 0.953, respectively, outperforming the general-purpose models. The proposed model offers promising potential for differentiating BS and TS using conventional MRI and could serve as a valuable tool in clinical practice, providing a reliable reference for distinguishing between these two diseases.
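For readers unfamiliar with CBAM, the module applies channel attention followed by spatial attention to a feature map. Below is a compact PyTorch sketch of the standard CBAM design (Woo et al., 2018) as it might be inserted into a ResNeXt-50 stage; the reduction ratio and kernel size are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel then spatial attention."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: shared MLP over avg- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: conv over stacked channel-wise avg and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

# Hypothetical use: refine a mid-network feature map.
feat = torch.randn(2, 256, 14, 14)
print(CBAM(256)(feat).shape)  # torch.Size([2, 256, 14, 14])
```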

Feasibility study of "double-low" scanning protocol combined with artificial intelligence iterative reconstruction algorithm for abdominal computed tomography enhancement in patients with obesity.

Ji MT, Wang RR, Wang Q, Li HS, Zhao YX

PubMed · Jul 9 2025
To evaluate the efficacy of the "double-low" scanning protocol combined with the artificial intelligence iterative reconstruction (AIIR) algorithm for abdominal computed tomography (CT) enhancement in obese patients, and to identify the optimal AIIR algorithm level, patients with a body mass index ≥ 30.00 kg/m² who underwent abdominal CT enhancement were randomly assigned to group A or B. Group A underwent the conventional protocol with the Karl 3D iterative reconstruction algorithm at levels 3-5; group B underwent the "double-low" protocol with the AIIR algorithm at levels 1-5. Radiation dose and total iodine intake were recorded, along with subjective and objective image quality, and the optimal reconstruction levels for arterial-phase and portal-venous-phase images were identified. Comparisons were made in terms of radiation dose, iodine intake, and image quality. Overall, 150 patients with obesity were included, 75 in each group. Karl 3D level 5 was the optimal algorithm level for group A, while AIIR level 4 was the optimal algorithm level for group B. AIIR level 4 images in group B exhibited significantly superior subjective and objective image quality compared with Karl 3D level 5 images in group A (P < 0.001). Group B showed reductions in mean CT dose index values, dose-length product, size-specific dose estimate based on water-equivalent diameter, and total iodine intake compared with group A (P < 0.001). The "double-low" scanning protocol combined with the AIIR algorithm significantly reduces radiation dose and iodine intake during abdominal CT enhancement in obese patients, and AIIR level 4 is the optimal reconstruction level for the arterial and portal venous phases in this patient population.
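The dose metrics named in the abstract are related by simple formulas: the dose-length product is CTDIvol times scan length, and the size-specific dose estimate (SSDE) scales CTDIvol by a size-dependent conversion factor computed from the water-equivalent diameter. A sketch using the 32 cm body-phantom conversion-factor fit from AAPM Report 220; the constants are quoted from that report as I recall them, and the input values are made up.

```python
import math

def ssde_from_ctdivol(ctdi_vol_mgy, dw_cm):
    """Size-specific dose estimate from CTDIvol and water-equivalent diameter.

    Conversion factor f(Dw) = a * exp(-b * Dw) for the 32 cm body phantom,
    per AAPM Report 220 (a = 3.704369, b = 0.03671937).
    """
    f = 3.704369 * math.exp(-0.03671937 * dw_cm)
    return f * ctdi_vol_mgy

# Made-up example: an obese patient with a large water-equivalent diameter,
# where f < 1 and the SSDE falls below the displayed CTDIvol.
ctdi_vol = 12.0  # mGy, hypothetical
dw = 38.0        # cm, hypothetical
print(f"SSDE ≈ {ssde_from_ctdivol(ctdi_vol, dw):.1f} mGy")
# DLP = CTDIvol x scan length (cm): 12.0 mGy * 45 cm = 540 mGy*cm
```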