Page 96 of 1411408 results

CT-free attenuation and Monte-Carlo based scatter correction-guided quantitative <sup>90</sup>Y-SPECT imaging for improved dose calculation using deep learning.

Mansouri Z, Salimi Y, Wolf NB, Mainta I, Zaidi H

PubMed | Jul 1, 2025
This work aimed to develop deep learning (DL) models for CT-free attenuation and Monte Carlo-based scatter correction (AC, SC) in quantitative <sup>90</sup>Y SPECT imaging for improved dose calculation. Data from 190 patients who underwent <sup>90</sup>Y selective internal radiation therapy (SIRT) with glass microspheres were studied. Voxel-level dosimetry was performed on uncorrected and corrected SPECT images using the local energy deposition method. Three deep learning models were trained individually for AC, SC, and joint ASC using a modified 3D shifted-window UNet Transformer (Swin UNETR) architecture. Corrected and uncorrected dose maps served as reference and input, respectively. The data were split into a training set (~80%) and an unseen test set (~20%). Training was conducted in a five-fold cross-validation scheme, and the trained models were tested on the unseen test set. Model performance was thoroughly evaluated by comparing organ- and voxel-level dosimetry results between the reference and DL-generated dose maps on the unseen test dataset. The voxel- and organ-level evaluations also included gamma analysis with three different distance-to-agreement (DTA, mm) and dose-difference (DD, %) criteria to explore suitable criteria for SIRT dosimetry using SPECT. The average ± SD of the voxel-level quantitative metrics for the AC task are mean error (ME, Gy): -0.026 ± 0.06, structural similarity index (SSIM, %): 99.5 ± 0.25, and peak signal-to-noise ratio (PSNR, dB): 47.28 ± 3.31. For the SC task, these values are -0.014 ± 0.05, 99.88 ± 0.099, and 55.9 ± 4, respectively; for the ASC task, -0.04 ± 0.06, 99.57 ± 0.33, and 47.97 ± 3.6, respectively. The results of the voxel-level gamma evaluations with three different criteria, namely "DTA: 4.79 mm, DD: 1%", "DTA: 10 mm, DD: 5%", and "DTA: 15 mm, DD: 10%", were around 98%.
The mean absolute error (MAE, Gy) for tumor and whole normal liver across tasks is as follows: 7.22 ± 5.9 and 1.09 ± 0.86 for AC, 8 ± 9.3 and 0.9 ± 0.8 for SC, and 11.8 ± 12.02 and 1.3 ± 0.98 for ASC, respectively. We developed multiple models for three different clinical scenarios, namely AC, SC, and ASC, using the patient-specific Monte Carlo scatter-corrected and CT-based attenuation-corrected images. After training with a larger dataset, these task-specific models could be beneficial for performing the essential corrections where CT images are either unavailable or unreliable due to misalignment.
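The voxel-level error metrics reported above (ME, MAE, PSNR) can be sketched for a pair of dose maps as follows; a minimal illustration with toy numpy arrays (the SSIM and gamma computations are more involved and omitted here, and the variable names are hypothetical):

```python
import numpy as np

def voxel_metrics(reference, predicted):
    """Voxel-level error metrics between two dose maps (Gy)."""
    diff = predicted - reference
    me = diff.mean()                   # mean error (Gy)
    mae = np.abs(diff).mean()          # mean absolute error (Gy)
    mse = (diff ** 2).mean()
    peak = reference.max()             # peak dose used as the PSNR reference
    psnr = 10 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")
    return me, mae, psnr

# toy 3D dose maps: prediction uniformly 0.5 Gy above a 10 Gy reference
ref = np.full((4, 4, 4), 10.0)
pred = ref + 0.5
me, mae, psnr = voxel_metrics(ref, pred)
print(round(me, 2), round(mae, 2), round(psnr, 2))  # → 0.5 0.5 26.02
```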

Dual-threshold sample selection with latent tendency difference for label-noise-robust pneumoconiosis staging.

Zhang S, Ren X, Qiang Y, Zhao J, Qiao Y, Yue H

PubMed | Jul 1, 2025
Background: Precise pneumoconiosis staging suffers from progressive pair label noise (PPLN) in chest X-ray datasets, because adjacent stages are confused due to unidentifiable, diffuse opacities in the lung fields. When deep neural networks are employed to aid disease staging, performance is degraded under such label noise. Objective: This study improves the effectiveness of pneumoconiosis staging by mitigating the impact of PPLN through network architecture refinement and adjustment of the sample selection mechanism. Methods: We propose a novel multi-branch architecture that incorporates dual-threshold sample selection. Several auxiliary branches are integrated in a two-phase module to learn and predict the <i>progressive feature tendency</i>. A novel difference-based metric is introduced to iteratively obtain the instance-specific thresholds as a complementary criterion for dynamic sample selection. All samples are finally partitioned into <i>clean</i> and <i>hard</i> sets according to the dual-threshold criteria and treated differently by loss functions with penalty terms. Results: Compared with the state of the art, the proposed method obtains the best metrics (accuracy: 90.92%, precision: 84.25%, sensitivity: 81.11%, F1-score: 82.06%, and AUC: 94.64%) under real-world PPLN, and is less sensitive to increases in the synthetic PPLN rate. An ablation study validates the respective contributions of critical modules and demonstrates how variations of essential hyperparameters affect model performance. Conclusions: The proposed method achieves substantial effectiveness and robustness against PPLN in a pneumoconiosis dataset, and can further assist physicians in diagnosing the disease with higher accuracy and confidence.
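The dual-threshold partition into clean and hard sets can be illustrated with a minimal sketch, assuming per-sample losses, one global threshold, and hypothetical instance-specific thresholds (the paper's difference-based metric for obtaining those thresholds is not reproduced here):

```python
def partition_samples(losses, global_t, instance_t):
    """Dual-threshold split: a sample is 'clean' only if its loss passes
    both the global threshold and its own instance-specific threshold;
    otherwise it goes to the 'hard' set."""
    clean, hard = [], []
    for i, loss in enumerate(losses):
        if loss <= global_t and loss <= instance_t[i]:
            clean.append(i)
        else:
            hard.append(i)
    return clean, hard

losses     = [0.1, 0.9, 0.4, 2.0, 0.2]   # hypothetical per-sample losses
instance_t = [0.5, 0.5, 0.3, 0.5, 0.5]   # hypothetical per-sample thresholds
clean, hard = partition_samples(losses, global_t=1.0, instance_t=instance_t)
print(clean, hard)  # [0, 4] [1, 2, 3]
```

In the paper's setting, the two sets would then be handled by different loss functions with penalty terms.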

Ultrasound-based classification of follicular thyroid cancer using deep convolutional neural networks with transfer learning.

Agyekum EA, Yuzhi Z, Fang Y, Agyekum DN, Wang X, Issaka E, Li C, Shen X, Qian X, Wu X

PubMed | Jul 1, 2025
This study aimed to develop and validate convolutional neural network (CNN) models for distinguishing follicular thyroid carcinoma (FTC) from follicular thyroid adenoma (FTA). Additionally, it compared the performance of the CNN models with the American College of Radiology Thyroid Imaging Reporting and Data System (ACR-TIRADS) and Chinese Thyroid Imaging Reporting and Data System (C-TIRADS) ultrasound-based malignancy risk stratification systems. A total of 327 eligible patients with FTC and FTA who underwent preoperative thyroid ultrasound examination were retrospectively enrolled between August 2017 and August 2024. Patients were randomly assigned to a training cohort (n = 263) and a test cohort (n = 64) in an 8:2 ratio using stratified sampling. Five CNN models, including VGG16, ResNet101, MobileNetV2, ResNet152, and ResNet50, pre-trained on ImageNet, were developed and tested to distinguish FTC from FTA. The CNN models exhibited good performance, yielding areas under the receiver operating characteristic curve (AUC) ranging from 0.64 to 0.77. The ResNet152 model demonstrated the highest AUC (0.77; 95% CI, 0.67-0.87) for distinguishing between FTC and FTA. Decision curve and calibration curve analyses demonstrated the models' favorable clinical value and calibration. Furthermore, when comparing the performance of the developed models with that of the C-TIRADS and ACR-TIRADS systems, the models developed in this study demonstrated superior performance. These findings can potentially guide appropriate management of FTC in patients with follicular neoplasms.
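The stratified 8:2 split described above can be sketched in pure Python; the class counts below are illustrative, not the study's actual FTC/FTA breakdown:

```python
import random

def stratified_split(labels, test_frac=0.2, seed=42):
    """Split indices into train/test sets, preserving class proportions."""
    rng = random.Random(seed)
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    train, test = [], []
    for idx in by_class.values():
        rng.shuffle(idx)
        n_test = round(len(idx) * test_frac)  # per-class test count
        test.extend(idx[:n_test])
        train.extend(idx[n_test:])
    return sorted(train), sorted(test)

# 327 patients with hypothetical class counts
labels = ["FTC"] * 100 + ["FTA"] * 227
train, test = stratified_split(labels)
print(len(train), len(test))  # 262 65
```

Because the split is done per class, the FTC:FTA ratio in the test cohort matches the full cohort's, which is the point of stratified sampling.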

Developments in MRI radiomics research for vascular cognitive impairment.

Chen X, Luo X, Chen L, Liu H, Yin X, Chen Z

PubMed | Jul 1, 2025
Vascular cognitive impairment (VCI) is an umbrella term for diseases associated with cognitive decline induced by substantive brain damage following pathological changes in the cerebrovascular system. The primary clinical manifestations include behavioral abnormalities and diminished learning and memory cognitive functions. If the location and extent of brain injury are not identified early and therapeutic interventions are not promptly administered, it may lead to irreversible cognitive impairment. Therefore, the early diagnosis of VCI is crucial for its prevention and treatment. Prior to the onset of cognitive impairment in VCI, magnetic resonance imaging (MRI) radiomics can be utilized for early assessment and diagnosis, thereby guiding clinicians in providing precise treatment for patients, which holds significant potential for development. This article reviews the classification of VCI, the concept of radiomics, the application of MRI radiomics in VCI, and the limitations of radiomics in the context of advancements in its application within the central nervous system. CRITICAL RELEVANCE STATEMENT: This article explores how MRI radiomics can be used to detect VCI early, enhancing clinical radiology practice by offering a reliable method for prediction, diagnosis, and identification, which also promotes standardization in research and integration of disciplines. KEY POINTS: MRI radiomics can predict VCI early. MRI radiomics can diagnose VCI. MRI radiomics distinguishes VCI from Alzheimer's disease.

Breast cancer detection based on histological images using fusion of diffusion model outputs.

Akbari Y, Abdullakutty F, Al Maadeed S, Bouridane A, Hamoudi R

PubMed | Jul 1, 2025
The precise detection of breast cancer in histopathological images remains a critical challenge in computational pathology, where accurate tissue segmentation significantly enhances diagnostic accuracy. This study introduces a novel approach leveraging a Conditional Denoising Diffusion Probabilistic Model (DDPM) to improve breast cancer detection through advanced segmentation and feature fusion. The method employs a conditional channel within the DDPM framework, first trained on a breast cancer histopathology dataset and extended to additional datasets to achieve regional-level segmentation of tumor areas and other tissue regions. These segmented regions, combined with predicted noise from the diffusion model and the original images, are processed through an EfficientNet-B0 network to extract enhanced features. A transformer decoder then fuses these features to generate the final detection results. Extensive experiments optimizing the network architecture and fusion strategies were conducted, and the proposed method was evaluated across four distinct datasets, achieving a peak accuracy of 92.86% on the BRACS dataset, 100% on the BreCaHAD dataset, and 96.66% on the ICIAR2018 dataset. This approach represents a significant advancement in computational pathology, offering a robust tool for breast cancer detection with potential applications in broader medical imaging contexts.

Auto-Segmentation via deep-learning approaches for the assessment of flap volume after reconstructive surgery or radiotherapy in head and neck cancer.

Thariat J, Mesbah Z, Chahir Y, Beddok A, Blache A, Bourhis J, Fatallah A, Hatt M, Modzelewski R

PubMed | Jul 1, 2025
Reconstructive flap surgery aims to restore the substance and function losses associated with tumor resection. Automatic flap segmentation could allow quantification of flap volume and correlations with functional outcomes after surgery or post-operative radiotherapy (poRT). Because flaps are ectopic tissues of various components (fat, skin, fascia, muscle, bone), volumes, shapes, and textures, the anatomical modifications, inflammation, and edema of the postoperative bed make the segmentation task challenging. We built an artificial intelligence-enabled automatic soft-tissue flap segmentation method from CT scans of Head and Neck Cancer (HNC) patients. Ground-truth flap segmentation masks were delineated by two experts on postoperative CT scans of 148 HNC patients undergoing poRT. All CTs and flaps (free or pedicled, soft tissue only or bone) were kept, including those with artefacts, to ensure generalizability. A deep-learning nnUNetv2 framework was built using Hounsfield Unit (HU) windowing to mimic radiological assessment. A transformer-based 2D "Segment Anything Model" (MedSAM) was also built and fine-tuned on medical CTs. Models were compared with the Dice Similarity Coefficient (DSC) and 95th-percentile Hausdorff Distance (HD95) metrics. Flaps were in the oral cavity (N = 102), oropharynx (N = 26), or larynx/hypopharynx (N = 20). There were free flaps (N = 137) and pedicled flaps (N = 11), comprising soft-tissue-only flaps (N = 92), reconstructed bone (N = 42), or bone resected without reconstruction (N = 40). The nnUNet-windowing model outperformed the nnUNetv2 and MedSAM models, achieving a mean DSC of 0.69 and HD95 of 25.6 mm using 5-fold cross-validation. Segmentation performed better in the absence of artifacts and of rare situations such as pedicled flaps, laryngeal primaries, and resected bone without bone reconstruction (p < 0.01). Automatic flap segmentation demonstrates clinical performance that allows quantification of spontaneous and radiation-induced volume shrinkage of flaps. Free flaps achieved excellent performance; rare situations will be addressed by fine-tuning the network.
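The Dice Similarity Coefficient used to compare the segmentation models can be sketched as follows, with toy binary numpy masks (names hypothetical):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

gt = np.zeros((8, 8), dtype=int)
pred = np.zeros((8, 8), dtype=int)
gt[2:6, 2:6] = 1    # 16 ground-truth pixels
pred[3:7, 3:7] = 1  # 16 predicted pixels, partly overlapping (9-pixel intersection)
print(round(dice(gt, pred), 3))  # → 0.562
```

HD95 additionally requires the distribution of boundary-to-boundary distances, so it is not shown here.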

Differential dementia detection from multimodal brain images in a real-world dataset.

Leming M, Im H

PubMed | Jul 1, 2025
Artificial intelligence (AI) models have been applied to differential dementia detection tasks in brain images from curated, high-quality benchmark databases, but not real-world data in hospitals. We describe a deep learning model specially trained for disease detection in heterogeneous clinical images from electronic health records without focusing on confounding factors. It encodes up to 14 multimodal images, alongside age and demographics, and outputs the likelihood of vascular dementia, Alzheimer's, Lewy body dementia, Pick's disease, mild cognitive impairment, and unspecified dementia. We use data from Massachusetts General Hospital (183,018 images from 11,015 patients) for training and external data (125,493 images from 6,662 patients) for testing. Performance ranged between 0.82 and 0.94 area under the curve (AUC) on data from 1003 sites. Analysis shows that the model focused on subcortical brain structures as the basis for its decisions. By detecting biomarkers in real-world data, the presented techniques will help with clinical translation of disease detection AI. Our artificial intelligence (AI) model can detect neurodegenerative disorders in brain imaging electronic health record (EHR) data. It encodes up to 14 brain images and text information from a single patient's EHR. Attention maps show that the model focuses on subcortical brain structures. Performance ranged from 0.82 to 0.94 area under the curve (AUC) on data from 1003 external sites.
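A patient-level prediction from multiple images can be sketched as a simple late-fusion average of per-image class probabilities; this is a generic illustration, not the paper's multimodal encoder, and all names and values are hypothetical:

```python
def fuse_patient(prob_per_image):
    """Average per-image class-probability vectors into one patient-level vector."""
    n_images = len(prob_per_image)
    n_classes = len(prob_per_image[0])
    fused = [0.0] * n_classes
    for probs in prob_per_image:
        for k, p in enumerate(probs):
            fused[k] += p / n_images
    return fused

# three images from one patient; probabilities over three diagnoses
images = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.8, 0.1, 0.1]]
fused = fuse_patient(images)
print([round(p, 2) for p in fused])  # [0.7, 0.2, 0.1]
```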

Enhanced pulmonary nodule detection with U-Net, YOLOv8, and Swin Transformer.

Wang X, Wu H, Wang L, Chen J, Li Y, He X, Chen T, Wang M, Guo L

PubMed | Jul 1, 2025
Lung cancer remains the leading cause of cancer-related mortality worldwide, emphasizing the critical need for early pulmonary nodule detection to improve patient outcomes. Current methods encounter challenges in detecting small nodules and exhibit high false-positive rates, placing an additional diagnostic burden on radiologists. This study aimed to develop a two-stage deep learning model integrating U-Net, YOLOv8s, and the Swin Transformer to enhance pulmonary nodule detection in computed tomography (CT) images, particularly for small nodules, with the goal of improving detection accuracy and reducing false positives. We utilized the LUNA16 dataset (888 CT scans) and an additional 308 CT scans from Tianjin Chest Hospital. Images were preprocessed for consistency. The proposed model first employs U-Net for precise lung segmentation, followed by YOLOv8s augmented with the Swin Transformer for nodule detection. The Shape-aware IoU (SIoU) loss function was implemented to improve bounding-box predictions. On the LUNA16 dataset, the model achieved a precision of 0.898, a recall of 0.851, and a mean average precision at 50% IoU (mAP50) of 0.879, outperforming state-of-the-art models. On the Tianjin Chest Hospital dataset, it achieved a precision of 0.855, a recall of 0.872, and an mAP50 of 0.862. This study presents a two-stage deep learning model that leverages U-Net, YOLOv8s, and the Swin Transformer for enhanced pulmonary nodule detection in CT images. The model demonstrates high accuracy and a reduced false-positive rate, suggesting its potential as a useful tool for early lung cancer diagnosis and treatment.
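The mAP50 criterion rests on box IoU: a detection is counted as correct when its overlap with a ground-truth box reaches 50%. A minimal sketch of the IoU computation for axis-aligned boxes (coordinates hypothetical):

```python
def box_iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

gt = (10, 10, 30, 30)    # ground-truth nodule box
pred = (15, 15, 35, 35)  # predicted box, offset by 5 pixels in each axis
iou = box_iou(gt, pred)
print(round(iou, 3))  # → 0.391, so this detection would fail the mAP50 threshold
```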

2.5D deep learning radiomics and clinical data for predicting occult lymph node metastasis in lung adenocarcinoma.

Huang X, Huang X, Wang K, Bai H, Lu X, Jin G

PubMed | Jul 1, 2025
Occult lymph node metastasis (OLNM) refers to lymph node involvement that remains undetectable by conventional imaging techniques, posing a significant challenge in the accurate staging of lung adenocarcinoma. This study investigates the potential of combining 2.5D deep learning radiomics with clinical data to predict OLNM in lung adenocarcinoma. Contrast-enhanced CT images were retrospectively collected from 1,099 patients diagnosed with lung adenocarcinoma across two centers. Multivariable analysis was performed to identify independent clinical risk factors for constructing clinical signatures. Radiomics features were extracted from the enhanced CT images to develop radiomics signatures. A 2.5D deep learning approach was used to extract deep learning features from the images, which were then aggregated using multi-instance learning (MIL) to construct MIL signatures. Deep learning radiomics (DLRad) signatures were developed by integrating the deep learning features with the radiomics features, and these were subsequently combined with clinical features to form the combined signatures. The performance of the resulting signatures was evaluated using the area under the curve (AUC). In the training, validation, and external test cohorts, respectively, the clinical model achieved AUCs of 0.903, 0.866, and 0.785; the radiomics model, 0.865, 0.892, and 0.796; the MIL model, 0.903, 0.900, and 0.852; and the DLRad model, 0.910, 0.908, and 0.875. Notably, the combined model consistently outperformed all other models, achieving AUCs of 0.940, 0.923, and 0.898 in the training, validation, and external test cohorts.
The integration of 2.5D deep learning radiomics with clinical data demonstrates strong capability for predicting OLNM in lung adenocarcinoma, potentially aiding clinicians in developing more personalized treatment strategies.
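The AUCs compared above can be computed directly from scores via the rank-based (Mann-Whitney) formulation; a minimal sketch with hypothetical labels and scores:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive outscores a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count as half
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]                 # 1 = OLNM present (hypothetical)
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]     # model outputs (hypothetical)
print(round(auc(labels, scores), 3))  # → 0.889
```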

Deep Learning and Radiomics Discrimination of Coronary Chronic Total Occlusion and Subtotal Occlusion using CTA.

Zhou Z, Bo K, Gao Y, Zhang W, Zhang H, Chen Y, Chen Y, Wang H, Zhang N, Huang Y, Mao X, Gao Z, Zhang H, Xu L

PubMed | Jul 1, 2025
Coronary chronic total occlusion (CTO) and subtotal occlusion (STO) pose diagnostic challenges and differ in treatment strategies. Artificial intelligence and radiomics are promising tools for accurate discrimination. This study aimed to develop deep learning (DL) and radiomics models using coronary computed tomography angiography (CCTA) to differentiate CTO from STO lesions and compare their performance with that of the conventional method. Patients with CTO and STO lesions were retrospectively identified from a tertiary hospital and served as the training and validation sets for developing and validating the DL and radiomics models. An external test cohort was recruited from two additional tertiary hospitals with identical eligibility criteria. All participants underwent CCTA within 1 month before invasive coronary angiography. A total of 581 participants (mean age, 50 years ± 11 [SD]; 474 [81.6%] men) with 600 lesions were enrolled, including 403 CTO and 197 STO lesions. The DL and radiomics models exhibited better discrimination performance than the conventional method, with areas under the curve of 0.908 and 0.860, respectively, vs. 0.794 in the validation set (all p < 0.05), and 0.893 and 0.827, respectively, vs. 0.746 in the external test set (all p < 0.05). The proposed CCTA-based DL and radiomics models achieved efficient and accurate discrimination of coronary CTO and STO.