
Automatic Multiclass Tissue Segmentation Using Deep Learning in Brain MR Images of Tumor Patients.

Kandpal A, Kumar P, Gupta RK, Singh A

pubmed logopapers · Jun 30, 2025
Precise delineation of brain tissues, including lesions, in MR images is crucial for data analysis and for objectively assessing conditions such as neurological disorders and brain tumors. Existing methods for tissue segmentation often fall short in addressing patients with lesions, particularly those with brain tumors. This study aimed to develop and evaluate a robust pipeline utilizing convolutional neural networks for rapid and automatic segmentation of whole-brain tissues, including tumor lesions. The proposed pipeline was developed using BraTS'21 data (1251 patients) and tested on local hospital data (100 patients). Ground-truth masks for lesions as well as brain tissues were generated. Two convolutional neural networks based on the deep residual U-Net framework were trained for segmenting brain tissues and tumor lesions. The performance of the pipeline was evaluated on independent test data using the Dice similarity coefficient (DSC) and volume similarity (VS). The proposed pipeline achieved a mean DSC of 0.84 and a mean VS of 0.93 on the BraTS'21 test data set. On the local hospital test data set, it attained a mean DSC of 0.78 and a mean VS of 0.91. The proposed pipeline also generated satisfactory masks in cases where the SPM12 software performed inadequately. The proposed pipeline offers a reliable and automatic solution for segmenting brain tissues and tumor lesions in MR images. Its adaptability makes it a valuable tool for both research and clinical applications, potentially streamlining workflows and enhancing the precision of analyses in neurological and oncological studies.
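The reported evaluation metrics can be computed directly from binary masks. Below is a minimal sketch (not the authors' code) of the Dice similarity coefficient and one common definition of volume similarity, VS = 1 - |V1 - V2| / (V1 + V2), using NumPy; the example masks are hypothetical.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def volume_similarity(pred, truth):
    """One common volume similarity (VS): 1 - |Vp - Vt| / (Vp + Vt)."""
    vp, vt = int(pred.sum()), int(truth.sum())
    denom = vp + vt
    return 1.0 - abs(vp - vt) / denom if denom else 1.0

# Hypothetical 2D example masks (4 vs 6 foreground voxels, overlap of 4)
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(round(dice_coefficient(a, b), 3))   # 2*4/(4+6) = 0.8
print(round(volume_similarity(a, b), 3))  # 1 - |4-6|/10 = 0.8
```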

Statistical Toolkit for Analysis of Radiotherapy DICOM Data.

Kinz M, Molodowitch C, Killoran J, Hesser JW, Zygmanski P

pubmed logopapers · Jun 30, 2025

Background: Radiotherapy (RT) has become increasingly sophisticated, necessitating advanced tools for analyzing extensive treatment data in hospital databases. Such analyses can enhance future treatments, particularly through Knowledge-Based Planning, and aid in developing new treatment modalities like convergent kV RT.
Purpose: The objective is to develop automated software tools for large-scale retrospective analysis of over 10,000 MeV x-ray radiotherapy plans. This aims to identify trends and references in plans delivered at our institution across all treatment sites, focusing on: (A) Planning-Target-Volume, Clinical-Target-Volume, Gross-Tumor-Volume, and Organ-At-Risk (PTV/CTV/GTV/OAR) topology, morphology, and dosimetry, and (B) RT plan efficiency and complexity.
Methods:
The software tools are coded in Python. Topological metrics, including center of mass, volume, size, and depth, are evaluated using principal component analysis. Morphology is quantified using Hounsfield units, while dose distribution is characterized by conformity and homogeneity indexes. The ratio of the total dose within the target to that within the body is defined as the Dose Balance Index.
Results:
The primary outcome of this study is the toolkit and an analysis of our database. For example, the mean minimum and maximum PTV depths are about 2.5±2.3 cm and 9±3 cm, respectively.
Conclusions:
This study provides a statistical basis for RT plans and the necessary tools to generate them. It aids in selecting plans for knowledge-based models and deep-learning networks. The site-specific volume and depth results help identify the limitations and opportunities of current and future treatment modalities, in our case convergent kV RT. The compiled statistics and tools are versatile for training, quality assurance, comparing plans from different periods or institutions, and establishing guidelines. The toolkit is publicly available at https://github.com/m-kinz/STAR.
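For context, the dosimetric quantities named in the Methods can be sketched as follows. This is an illustrative implementation, not the toolkit's code: the conformity index here follows the common prescription-isodose-volume over target-volume form, the homogeneity index the (D2% - D98%) / D50% form, and `dose_balance_index` is a guess at the paper's target-versus-body dose ratio.

```python
import numpy as np

def conformity_index(dose, target_mask, rx_dose):
    """Common conformity index: prescription isodose volume / target volume."""
    piv = (dose >= rx_dose).sum()
    tv = target_mask.astype(bool).sum()
    return piv / tv

def homogeneity_index(dose, target_mask):
    """ICRU-style homogeneity: (D2% - D98%) / D50% inside the target."""
    d = dose[target_mask.astype(bool)]
    d2, d50, d98 = np.percentile(d, [98, 50, 2])
    return (d2 - d98) / d50

def dose_balance_index(dose, target_mask, body_mask):
    """Assumed form of the Dose Balance Index: integral target dose / integral body dose."""
    return dose[target_mask.astype(bool)].sum() / dose[body_mask.astype(bool)].sum()
```

With a perfectly conformal, perfectly uniform plan, the conformity index is 1 and the homogeneity index is 0.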

Efficient Cerebral Infarction Segmentation Using U-Net and U-Net3+ Models.

Yuce E, Sahin ME, Ulutas H, Erkoç MF

pubmed logopapers · Jun 30, 2025
Cerebral infarction remains a leading cause of mortality and long-term disability globally, underscoring the critical importance of early diagnosis and timely intervention to enhance patient outcomes. This study introduces an approach to cerebral infarction segmentation using a novel dataset comprising MRI scans of 110 patients, retrospectively collected from Yozgat Bozok University Research Hospital. Two convolutional neural network architectures, the basic U-Net and the advanced U-Net3+, are employed to segment infarction regions with high precision. Ground-truth annotations are generated under the supervision of an experienced radiologist, and data augmentation techniques are applied to address dataset limitations, resulting in 6732 balanced images for training, validation, and testing. Performance evaluation is conducted using metrics such as the Dice score, intersection over union (IoU), pixel accuracy, and specificity. The basic U-Net achieved superior performance with a Dice score of 0.8947, a mean IoU of 0.8798, a pixel accuracy of 0.9963, and a specificity of 0.9984, outperforming U-Net3+ despite its simpler architecture. U-Net3+, with its complex structure and advanced features, delivered competitive results, highlighting the potential trade-off between model complexity and performance in medical imaging tasks. This study underscores the significance of leveraging deep learning for precise and efficient segmentation of cerebral infarction. The results demonstrate the capability of CNN-based architectures to support medical decision-making, offering a promising pathway for advancing stroke diagnosis and treatment planning.
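The remaining metrics in this abstract (IoU, pixel accuracy, specificity) reduce to confusion-matrix counts over voxels. A minimal sketch, assuming binary masks:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """IoU, pixel accuracy, and specificity for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # true positives
    tn = np.logical_and(~pred, ~truth).sum()  # true negatives
    fp = np.logical_and(pred, ~truth).sum()   # false positives
    fn = np.logical_and(~pred, truth).sum()   # false negatives
    iou = tp / (tp + fp + fn)
    accuracy = (tp + tn) / pred.size
    specificity = tn / (tn + fp)
    return iou, accuracy, specificity
```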

Evaluation of Cone-Beam Computed Tomography Images with Artificial Intelligence.

Arı T, Bayrakdar IS, Çelik Ö, Bilgir E, Kuran A, Orhan K

pubmed logopapers · Jun 30, 2025
This study aims to evaluate the success of artificial intelligence models developed using convolutional neural network-based algorithms on CBCT images. Labeling was done by the segmentation method for 15 different conditions, including caries, restorative filling material, root-canal filling material, dental implant, implant-supported crown, crown, pontic, impacted tooth, supernumerary tooth, residual root, osteosclerotic area, periapical lesion, radiolucent jaw lesion, radiopaque jaw lesion, and mixed-appearing jaw lesion, on a data set consisting of 300 CBCT images. In model development, the Mask R-CNN architecture and the ResNet 101 model were used as a transfer learning method. The success metrics of the models were calculated with the confusion matrix method. When the F1 scores of the developed models were evaluated, the highest score was obtained for dental implant (1.00) and the lowest for mixed-appearing jaw lesion (0.80). The F1 scores were, respectively: dental implant, 1.00; root-canal filling material, 0.99; implant-supported crown, 0.98; restorative filling material, 0.98; radiopaque jaw lesion, 0.97; crown, 0.96; pontic, 0.96; impacted tooth, 0.95; caries, 0.94; residual tooth root, 0.94; radiolucent jaw lesion, 0.94; osteosclerotic area, 0.90; periapical lesion, 0.90; supernumerary tooth, 0.87; and mixed-appearing jaw lesion, 0.80. Interpreting CBCT images is a time-consuming process that requires expertise. In the era of digital transformation, artificial intelligence-based systems that can automatically evaluate images and convert them into report format as a decision-support mechanism will help reduce the workload of physicians, thus increasing the time available for the interpretation of pathologies.
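F1 scores like those above are derived from confusion-matrix counts. A small illustration with hypothetical counts (not from the study):

```python
def f1_score(tp, fp, fn):
    """F1 from confusion-matrix counts: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for one condition: 94 true positives,
# 6 false positives, 6 false negatives -> precision = recall = 0.94
print(round(f1_score(94, 6, 6), 2))  # 0.94
```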

Deep learning for automated, motion-resolved tumor segmentation in radiotherapy.

Sarkar S, Teo PT, Abazeed ME

pubmed logopapers · Jun 30, 2025
Accurate tumor delineation is foundational to radiotherapy. In the era of deep learning, the automation of this labor-intensive and variation-prone process is increasingly tractable. We developed a deep neural network model to segment gross tumor volumes (GTVs) in the lung and propagate them across 4D CT images to generate an internal target volume (ITV), capturing tumor motion during respiration. Using a multicenter cohort-based registry from 9 clinics across 2 health systems, we trained a 3D UNet model (iSeg) on pre-treatment CT images and corresponding GTV masks (n = 739, 5-fold cross-validation) and validated it on two independent cohorts (n = 161; n = 102). The internal cohort achieved a median Dice similarity coefficient (DSC) of 0.73 [IQR: 0.62-0.80], with comparable performance in the external cohorts (DSC = 0.70 [0.52-0.78] and 0.71 [0.59-0.79]), indicating successful multi-site validation. iSeg matched human inter-observer variability and was robust to image quality and tumor motion (DSC = 0.77 [0.68-0.86]). Machine-generated ITVs were significantly smaller than physician-delineated contours (p < 0.0001), indicating more precise delineation. Notably, higher false-positive voxel rates (regions segmented by the machine but not the human) were associated with increased local failure (HR: 1.01 per voxel, p = 0.03), suggesting the clinical relevance of these discordant regions. These results mark a leap in automated target volume segmentation and suggest that machine delineation can enhance the accuracy, reproducibility, and efficiency of this core task in radiotherapy.
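The ITV construction described here, propagating the GTV across respiratory phases and taking their envelope, amounts to a voxelwise union. A sketch, assuming one binary GTV mask per 4D CT phase (not the authors' pipeline):

```python
import numpy as np

def itv_from_phases(gtv_masks):
    """ITV as the voxelwise union of GTV masks across 4D CT respiratory phases."""
    itv = np.zeros_like(gtv_masks[0], dtype=bool)
    for mask in gtv_masks:
        itv |= mask.astype(bool)  # envelope of the moving target
    return itv
```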

Advancements in Herpes Zoster Diagnosis, Treatment, and Management: Systematic Review of Artificial Intelligence Applications.

Wu D, Liu N, Ma R, Wu P

pubmed logopapers · Jun 30, 2025
The application of artificial intelligence (AI) in medicine has garnered significant attention in recent years, offering new possibilities for improving patient care across various domains. For herpes zoster, a viral infection caused by the reactivation of the varicella-zoster virus, AI technologies have shown remarkable potential in enhancing disease diagnosis, treatment, and management. This study aims to investigate the current state of research on the use of AI for herpes zoster, offering a comprehensive synthesis of existing advancements. A systematic literature review was conducted following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Three databases (Web of Science Core Collection, PubMed, and IEEE) were searched on November 17, 2023, to identify relevant studies on AI applications in herpes zoster research. Inclusion criteria were as follows: (1) research articles, (2) published in English, (3) involving actual AI applications, and (4) focusing on herpes zoster. Exclusion criteria comprised nonresearch articles, non-English papers, and studies that only mentioned AI without applying it. Two independent clinicians screened the studies, with a third senior clinician resolving disagreements. In total, 26 articles were included. Data were extracted on AI task types; algorithms; data sources; data types; and clinical applications in diagnosis, treatment, and management. Trend analysis revealed increasing annual interest in AI applications for herpes zoster. Hospital-derived data were the primary source (15/26, 57.7%), followed by public databases (6/26, 23.1%) and internet data (5/26, 19.2%). Medical images (9/26, 34.6%) and electronic medical records (7/26, 26.9%) were the most commonly used data types. Classification tasks (85.2%) dominated AI applications, with neural networks, particularly multilayer perceptrons and convolutional neural networks, being the most frequently used algorithms.
AI applications were analyzed across three domains: (1) diagnosis, where mobile deep neural networks, convolutional neural network ensemble models, and mixed-scale attention-based models have improved diagnostic accuracy and efficiency; (2) treatment, where machine learning models, such as deep autoencoders combined with functional magnetic resonance imaging, electroencephalography, and clinical data, have enhanced treatment outcome predictions; and (3) management, where AI has facilitated case identification, epidemiological research, health care burden assessment, and risk factor exploration for postherpetic neuralgia and other complications. Overall, this study provides a comprehensive overview of AI applications in herpes zoster from clinical, data, and algorithmic perspectives, offering valuable insights for future research in this rapidly evolving field. AI has significantly advanced herpes zoster research by enhancing diagnostic accuracy, predicting treatment outcomes, and optimizing disease management. However, several limitations exist, including potential omissions from excluding databases like Embase and Scopus, language bias due to the inclusion of only English publications, and the risk of subjective bias in study selection. Broader studies and continuous updates are needed to fully capture the scope of AI applications in herpes zoster in the future.

Associations of CT Muscle Area and Density With Functional Outcomes and Mortality Across Anatomical Regions in Older Men.

Hetherington-Rauth M, Mansfield TA, Lenchik L, Weaver AA, Cawthon PM

pubmed logopapers · Jun 30, 2025
The automated segmentation of computed tomography (CT) images has made their opportunistic use more feasible; yet the association of muscle area and density from multiple anatomical regions with functional outcomes and mortality risk in older adults has not been fully explored. We aimed to determine whether muscle area and density at the L1 and L3 vertebrae and the right and left proximal thigh were similarly related to functional outcomes and 10-year mortality risk. Men from the Osteoporotic Fractures in Men (MrOS) study who had CT images and measures of grip strength, 6 m walking speed, and leg power (Nottingham Power Rig) at the baseline visit were included in the analyses (n = 3290, 73.7 ± 5.8 years). CT images were automatically segmented to derive muscle area and muscle density. Deaths were centrally adjudicated over a 10-year follow-up. Linear regression and proportional hazards models were used to relate CT muscle metrics to functional outcomes and mortality, respectively, while adjusting for covariates. Muscle area and density were positively related to functional outcomes regardless of anatomical region, with the most variance explained in leg power (adjusted R<sup>2</sup> = 0.40-0.46), followed by grip strength (adjusted R<sup>2</sup> = 0.25-0.29) and walking speed (adjusted R<sup>2</sup> = 0.18-0.20). A one-SD increase in muscle area and density was associated with a 5%-13% and 8%-21% decrease in the risk of all-cause mortality, respectively, with the strongest associations observed at the right and left thigh. Automated measures of CT muscle area and density are related to functional outcomes and risk of mortality in older men, regardless of CT anatomical region.
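The "per one-SD increase" hazard ratios reported here come from scaling a Cox model coefficient by the covariate's standard deviation. A sketch with hypothetical numbers (the beta and SD below are illustrative, not from the study):

```python
import math

def per_sd_hazard_ratio(beta_per_unit, sd):
    """Convert a Cox coefficient per unit of the covariate to a hazard ratio
    per one-SD increase: HR_SD = exp(beta * SD)."""
    return math.exp(beta_per_unit * sd)

# Hypothetical: beta = -0.005 per cm^2 of muscle area, SD = 25 cm^2
# -> HR ~ 0.88, i.e. roughly a 12% lower mortality risk per SD
print(round(per_sd_hazard_ratio(-0.005, 25), 3))
```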

Thin-slice T<sub>2</sub>-weighted images and deep-learning-based super-resolution reconstruction: improved preoperative assessment of vascular invasion for pancreatic ductal adenocarcinoma.

Zhou X, Wu Y, Qin Y, Song C, Wang M, Cai H, Zhao Q, Liu J, Wang J, Dong Z, Luo Y, Peng Z, Feng ST

pubmed logopapers · Jun 30, 2025
To evaluate the efficacy of thin-slice T<sub>2</sub>-weighted imaging (T<sub>2</sub>WI) and super-resolution reconstruction (SRR) for preoperative assessment of vascular invasion in pancreatic ductal adenocarcinoma (PDAC). Ninety-five patients with PDAC and preoperative MRI were retrospectively enrolled as a training set, with non-reconstructed T<sub>2</sub>WI (NRT<sub>2</sub>) at different slice thicknesses (NRT<sub>2</sub>-3, 3 mm; NRT<sub>2</sub>-5, ≥ 5 mm). A prospective test set was collected with NRT<sub>2</sub>-5 (n = 125) only. A deep-learning network was employed to generate reconstructed super-resolution T<sub>2</sub>WI (SRT<sub>2</sub>) at different slice thicknesses (SRT<sub>2</sub>-3, 3 mm; SRT<sub>2</sub>-5, ≥ 5 mm). Image quality was assessed, including the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and signal-intensity ratio (SIR<sub>t/p</sub>, tumor/pancreas; SIR<sub>t/b</sub>, tumor/background). Diagnostic efficacy for vascular invasion was evaluated using the area under the curve (AUC) and compared across different slice thicknesses before and after reconstruction. SRT<sub>2</sub>-5 demonstrated higher SNR and SIR<sub>t/p</sub> compared to NRT<sub>2</sub>-5 (74.18 vs 72.46; 1.42 vs 1.30; p < 0.05). SRT<sub>2</sub>-3 showed increased SIR<sub>t/p</sub> and SIR<sub>t/b</sub> over NRT<sub>2</sub>-3 (1.35 vs 1.31; 2.73 vs 2.58; p < 0.05). SRT<sub>2</sub>-5 showed higher CNR, SIR<sub>t/p</sub>, and SIR<sub>t/b</sub> than NRT<sub>2</sub>-3 (p < 0.05). NRT<sub>2</sub>-3 outperformed NRT<sub>2</sub>-5 in evaluating venous invasion (AUC: 0.732 vs 0.597, p = 0.021). SRR improved venous assessment (AUC: NRT<sub>2</sub>-3, 0.927 vs 0.732; NRT<sub>2</sub>-5, 0.823 vs 0.597; p < 0.05), and SRT<sub>2</sub>-5 exhibited comparable efficacy to NRT<sub>2</sub>-3 in venous assessment (AUC: 0.823 vs 0.732, p = 0.162). Thin-slice T<sub>2</sub>WI and SRR effectively improve image quality and diagnostic efficacy for assessing venous invasion in PDAC.
Thick-slice T<sub>2</sub>WI with SRR is a potential alternative to thin-slice T<sub>2</sub>WI. Both thin-slice T<sub>2</sub>WI and SRR effectively improve image quality and diagnostic performance, providing valuable options for optimizing preoperative vascular assessment in PDAC. Non-invasive and accurate assessment of vascular invasion supports treatment planning and avoids futile surgery. Vascular invasion evaluation is critical for determining surgical eligibility in PDAC. SRR improved image quality and vascular assessment in T<sub>2</sub>WI. Utilizing thin-slice T<sub>2</sub>WI and SRR aids clinical decision-making for PDAC.
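The image-quality metrics used in this study are ROI statistics; exact definitions vary between papers, so the forms below are common conventions rather than the authors' precise formulas:

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio within an ROI: mean signal / signal std."""
    return roi.mean() / roi.std()

def cnr(tumor_roi, reference_roi, noise_roi):
    """Contrast-to-noise ratio: |mean difference| / noise std."""
    return abs(tumor_roi.mean() - reference_roi.mean()) / noise_roi.std()

def sir(numerator_roi, denominator_roi):
    """Signal-intensity ratio, e.g. SIR_t/p = tumor mean / pancreas mean."""
    return numerator_roi.mean() / denominator_roi.mean()
```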

Limited-angle SPECT image reconstruction using deep image prior.

Hori K, Hashimoto F, Koyama K, Hashimoto T

pubmed logopapers · Jun 30, 2025
[Objective] In single-photon emission computed tomography (SPECT) image reconstruction, limited-angle conditions lead to a loss of frequency components, which distorts the reconstructed tomographic image along directions corresponding to the non-collected projection angle range. Although conventional iterative image reconstruction methods have been used to improve reconstructed images under limited-angle conditions, the image quality is still unsuitable for clinical use. We propose a limited-angle SPECT image reconstruction method that uses an end-to-end deep image prior (DIP) framework to improve reconstructed image quality.
[Approach] The proposed method is an end-to-end DIP framework that incorporates a forward projection model into the loss function to optimise the neural network. By also incorporating a binary mask indicating whether each data point in the measured projection data was collected, the proposed method restores the non-collected projection data and reconstructs a less distorted image.
[Main results] The proposed method was evaluated using 20 numerical phantoms and clinical patient data. In numerical simulations, it outperformed existing back-projection-based methods in terms of peak signal-to-noise ratio and structural similarity index measure. We analysed the reconstructed tomographic images in the frequency domain using an object-specific modulation transfer function, in simulations and on clinical patient data, to evaluate the response of the reconstruction method to different spatial frequencies of the object. The proposed method significantly improved the response at almost all spatial frequencies, even in the non-collected projection angle range. The results demonstrate that the proposed method reconstructs a less distorted tomographic image.
[Significance] The proposed end-to-end DIP-based reconstruction method restores lost frequency components and mitigates image distortion under limited-angle conditions by incorporating a binary mask into the loss function.
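The masked data-fidelity term described in the Approach can be sketched as a mean squared error restricted to collected projection bins. A NumPy illustration (the real method optimizes a neural network through a SPECT forward projector; `forward_project` here is a placeholder argument, not the authors' implementation):

```python
import numpy as np

def masked_projection_loss(network_output, forward_project, measured, collected_mask):
    """DIP-style data-fidelity loss restricted to collected projection bins:
    MSE between A(x) and y on bins where collected_mask == 1."""
    estimated = forward_project(network_output)
    diff = (estimated - measured) * collected_mask  # zero out non-collected bins
    return (diff ** 2).sum() / max(collected_mask.sum(), 1)
```

Because non-collected bins are masked out, the network is free to inpaint them, which is how the missing projection data gets restored.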

In-silico CT simulations of deep learning generated heterogeneous phantoms.

Salinas CS, Magudia K, Sangal A, Ren L, Segars PW

pubmed logopapers · Jun 30, 2025
Current virtual imaging phantoms primarily emphasize the geometric accuracy of anatomical structures. However, to enhance realism, it is also important to incorporate intra-organ detail. Because biological tissues are heterogeneous in composition, virtual phantoms should reflect this by including realistic intra-organ texture and material variation. We propose training two 3D Double U-Net conditional generative adversarial networks (3D DUC-GAN) to generate sixteen unique textures that encompass organs found within the torso. The model was trained on 378 CT image-segmentation pairs taken from a publicly available dataset, with 18 additional pairs reserved for testing. Textured phantoms were generated and imaged using DukeSim, a virtual CT simulation platform. Results showed that the deep learning model was able to synthesize realistic heterogeneous phantoms from a set of homogeneous phantoms. These phantoms were compared with the original CT scans and had a mean absolute difference of 46.15 ± 1.06 HU. The structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) were 0.86 ± 0.004 and 28.62 ± 0.14, respectively. The maximum mean discrepancy between the generated and actual distributions was 0.0016. These metrics marked improvements of 27%, 5.9%, 6.2%, and 28%, respectively, compared to current homogeneous texture methods. The generated phantoms that underwent a virtual CT scan bore a closer visual resemblance to the true CT scan than those from the previous method. The resulting heterogeneous phantoms offer a significant step toward more realistic in silico trials, enabling enhanced simulation of imaging procedures with greater fidelity to true anatomical variation.
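Two of the reported image metrics, the mean absolute difference in HU and PSNR, are straightforward to compute; a sketch follows (SSIM and maximum mean discrepancy are omitted since they require more machinery):

```python
import numpy as np

def mae_hu(generated, reference):
    """Mean absolute difference between two CT volumes, in Hounsfield units."""
    return np.abs(generated.astype(float) - reference.astype(float)).mean()

def psnr(generated, reference, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = ((generated.astype(float) - reference.astype(float)) ** 2).mean()
    return 10.0 * np.log10(data_range ** 2 / mse)
```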