
Associations of CT Muscle Area and Density With Functional Outcomes and Mortality Across Anatomical Regions in Older Men.

Hetherington-Rauth M, Mansfield TA, Lenchik L, Weaver AA, Cawthon PM

PubMed · Jun 30, 2025
The automated segmentation of computed tomography (CT) images has made their opportunistic use more feasible, yet the associations of muscle area and density from multiple anatomical regions with functional outcomes and mortality risk in older adults have not been fully explored. We aimed to determine whether muscle area and density at the L1 and L3 vertebrae and the right and left proximal thigh were similarly related to functional outcomes and 10-year mortality risk. Men from the Osteoporotic Fractures in Men (MrOS) study who had CT images and measures of grip strength, 6 m walking speed, and leg power (Nottingham Power Rig) at the baseline visit were included in the analyses (n = 3290, 73.7 ± 5.8 years). CT images were automatically segmented to derive muscle area and muscle density. Deaths were centrally adjudicated over a 10-year follow-up. Linear regression and proportional hazards models were used to model relationships of CT muscle metrics with functional outcomes and mortality, respectively, while adjusting for covariates. Muscle area and density were positively related to functional outcomes regardless of anatomical region, with the most variance explained in leg power (adjusted R<sup>2</sup> = 0.40-0.46), followed by grip strength (adjusted R<sup>2</sup> = 0.25-0.29) and walking speed (adjusted R<sup>2</sup> = 0.18-0.20). A one-SD increase in muscle area and density was associated with a 5%-13% and an 8%-21% decrease in the risk of all-cause mortality, respectively, with the strongest associations observed at the right and left thigh. Automated measures of CT muscle area and density are related to functional outcomes and risk of mortality in older men, regardless of CT anatomical region.
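The per-SD hazard ratios reported above follow a standard pattern: standardize each muscle metric, then fit a covariate-adjusted Cox model. A minimal sketch of that pattern with the lifelines package, on synthetic stand-in data (column names and values are hypothetical, not the MrOS data or the authors' code):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "muscle_area": rng.normal(230, 30, n),    # cm^2 (hypothetical values)
    "muscle_density": rng.normal(35, 5, n),   # HU (hypothetical values)
    "age": rng.normal(74, 6, n),
})
# Synthetic survival times loosely tied to muscle density, censored at 10 years.
df["followup_years"] = rng.exponential(12 + 0.1 * df["muscle_density"])
df["died"] = (df["followup_years"] < 10).astype(int)
df["followup_years"] = df["followup_years"].clip(upper=10)

# Standardize so hazard ratios are per one-SD increase, as in the abstract.
for col in ("muscle_area", "muscle_density"):
    df[col] = (df[col] - df[col].mean()) / df[col].std()

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="died")
print(cph.summary[["exp(coef)", "p"]])  # exp(coef) < 1 ~ lower mortality risk per SD
```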

Thin-slice T<sub>2</sub>-weighted images and deep-learning-based super-resolution reconstruction: improved preoperative assessment of vascular invasion for pancreatic ductal adenocarcinoma.

Zhou X, Wu Y, Qin Y, Song C, Wang M, Cai H, Zhao Q, Liu J, Wang J, Dong Z, Luo Y, Peng Z, Feng ST

PubMed · Jun 30, 2025
To evaluate the efficacy of thin-slice T<sub>2</sub>-weighted imaging (T<sub>2</sub>WI) and super-resolution reconstruction (SRR) for the preoperative assessment of vascular invasion in pancreatic ductal adenocarcinoma (PDAC). Ninety-five PDACs with preoperative MRI were retrospectively enrolled as a training set, with non-reconstructed T<sub>2</sub>WI (NRT<sub>2</sub>) in different slice thicknesses (NRT<sub>2</sub>-3, 3 mm; NRT<sub>2</sub>-5, ≥ 5 mm). A prospective test set was collected with NRT<sub>2</sub>-5 (n = 125) only. A deep-learning network was employed to generate reconstructed super-resolution T<sub>2</sub>WI (SRT<sub>2</sub>) in different slice thicknesses (SRT<sub>2</sub>-3, 3 mm; SRT<sub>2</sub>-5, ≥ 5 mm). Image quality was assessed, including the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and signal-intensity ratio (SIR<sub>t/p</sub>, tumor/pancreas; SIR<sub>t/b</sub>, tumor/background). Diagnostic efficacy for vascular invasion was evaluated using the area under the curve (AUC) and compared across different slice thicknesses before and after reconstruction. SRT<sub>2</sub>-5 demonstrated higher SNR and SIR<sub>t/p</sub> compared to NRT<sub>2</sub>-5 (74.18 vs 72.46; 1.42 vs 1.30; p < 0.05). SRT<sub>2</sub>-3 showed increased SIR<sub>t/p</sub> and SIR<sub>t/b</sub> over NRT<sub>2</sub>-3 (1.35 vs 1.31; 2.73 vs 2.58; p < 0.05). SRT<sub>2</sub>-5 showed higher CNR, SIR<sub>t/p</sub>, and SIR<sub>t/b</sub> than NRT<sub>2</sub>-3 (p < 0.05). NRT<sub>2</sub>-3 outperformed NRT<sub>2</sub>-5 in evaluating venous invasion (AUC: 0.732 vs 0.597, p = 0.021). SRR improved venous assessment (AUC: NRT<sub>2</sub>-3, 0.927 vs 0.732; NRT<sub>2</sub>-5, 0.823 vs 0.597; p < 0.05), and SRT<sub>2</sub>-5 exhibited efficacy comparable to NRT<sub>2</sub>-3 in venous assessment (AUC: 0.823 vs 0.732, p = 0.162). Thin-slice T<sub>2</sub>WI and SRR effectively improve image quality and diagnostic efficacy for assessing venous invasion in PDAC, and thick-slice T<sub>2</sub>WI with SRR is a potential alternative to thin-slice T<sub>2</sub>WI. Both thin-slice T<sub>2</sub>WI and SRR effectively improve image quality and diagnostic performance, providing valuable options for optimizing preoperative vascular assessment in PDAC. Non-invasive and accurate assessment of vascular invasion supports treatment planning and avoids futile surgery; vascular invasion evaluation is critical for determining the surgical eligibility of PDAC. SRR improved image quality and vascular assessment on T<sub>2</sub>WI, and utilizing thin-slice T<sub>2</sub>WI and SRR aids clinical decision making for PDAC.
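The image-quality comparison rests on a handful of ROI statistics. A sketch of how SNR, CNR, and the signal-intensity ratios are conventionally computed from ROI pixel values, assuming numpy arrays for each ROI; the paper's exact ROI placement and metric definitions may differ:

```python
import numpy as np

def snr(signal_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """Signal-to-noise ratio: mean signal over background noise SD."""
    return float(signal_roi.mean() / noise_roi.std())

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, noise_roi: np.ndarray) -> float:
    """Contrast-to-noise ratio between two tissues."""
    return float(abs(roi_a.mean() - roi_b.mean()) / noise_roi.std())

def sir(roi_num: np.ndarray, roi_den: np.ndarray) -> float:
    """Signal-intensity ratio, e.g. tumor/pancreas (SIR_t/p) or tumor/background."""
    return float(roi_num.mean() / roi_den.mean())

# Toy ROIs drawn from normal distributions, for illustration only.
rng = np.random.default_rng(0)
tumor, pancreas, background = (rng.normal(m, 10, 500) for m in (180, 130, 20))
print(snr(tumor, background), cnr(tumor, pancreas, background), sir(tumor, pancreas))
```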

Limited-angle SPECT image reconstruction using deep image prior.

Hori K, Hashimoto F, Koyama K, Hashimoto T

PubMed · Jun 30, 2025
[Objective] In single-photon emission computed tomography (SPECT) image reconstruction, limited-angle conditions lead to a loss of frequency components, which distorts the reconstructed tomographic image along directions corresponding to the non-collected projection angle range. Although conventional iterative image reconstruction methods have been used to improve reconstructed images in limited-angle conditions, the image quality is still unsuitable for clinical use. We propose a limited-angle SPECT image reconstruction method that uses an end-to-end deep image prior (DIP) framework to improve reconstructed image quality.
[Approach] The proposed method is an end-to-end DIP framework that incorporates a forward projection model into the loss function to optimise the neural network. By also incorporating a binary mask that indicates whether each data point in the measured projection data has been collected, the proposed method restores the non-collected projection data and reconstructs a less distorted image.
[Main results] The proposed method was evaluated using 20 numerical phantoms and clinical patient data. In numerical simulations, the proposed method outperformed existing back-projection-based methods in terms of peak signal-to-noise ratio and structural similarity index measure. We analysed the reconstructed tomographic images in the frequency domain using an object-specific modulation transfer function, in simulations and on clinical patient data, to evaluate the response of the reconstruction method to different frequencies of the object. The proposed method significantly improved the response at almost all spatial frequencies, even in the non-collected projection angle range. The results demonstrate that the proposed method reconstructs a less distorted tomographic image.
[Significance] The proposed end-to-end DIP-based reconstruction method restores lost frequency components and mitigates image distortion under limited-angle conditions by incorporating a binary mask into the loss function.
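The core idea is that the binary collection mask enters the data-fidelity loss, so only measured projection bins are penalized and the network is free to inpaint the non-collected angular range. A minimal PyTorch sketch of such a masked loss, with forward_project standing in for an assumed system (forward projection) model rather than the authors' implementation:

```python
import torch

def dip_loss(net: torch.nn.Module,
             z: torch.Tensor,
             forward_project,
             measured: torch.Tensor,
             mask: torch.Tensor) -> torch.Tensor:
    """Masked data-fidelity loss; mask is 1 where a projection bin was collected."""
    image = net(z)                             # current image estimate from a fixed input
    estimated = forward_project(image)         # assumed system (forward projection) model
    residual = (estimated - measured) * mask   # non-collected bins contribute nothing
    return (residual ** 2).sum() / mask.sum()

# Typical DIP usage: minimize dip_loss over the network weights with Adam,
# keeping z fixed; the reconstructed image is net(z) after optimization.
```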

In-silico CT simulations of deep-learning-generated heterogeneous phantoms.

Salinas CS, Magudia K, Sangal A, Ren L, Segars PW

PubMed · Jun 30, 2025
Current virtual imaging phantoms primarily emphasize geometric accuracy of anatomical structures. However, to enhance realism, it is also important to incorporate intra-organ detail. Because biological tissues are heterogeneous in composition, virtual phantoms should reflect this by including realistic intra-organ texture and material variation. We propose training two 3D Double U-Net conditional generative adversarial networks (3D DUC-GAN) to generate sixteen unique textures that encompass organs found within the torso. The model was trained on 378 CT image-segmentation pairs taken from a publicly available dataset, with 18 additional pairs reserved for testing. Textured phantoms were generated and imaged using DukeSim, a virtual CT simulation platform. Results showed that the deep learning model was able to synthesize realistic heterogeneous phantoms from a set of homogeneous phantoms. These phantoms were compared with original CT scans and had a mean absolute difference of 46.15 ± 1.06 HU. The structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) were 0.86 ± 0.004 and 28.62 ± 0.14, respectively. The maximum mean discrepancy between the generated and actual distributions was 0.0016. These metrics marked improvements of 27%, 5.9%, 6.2%, and 28%, respectively, compared to current homogeneous texture methods. The generated phantoms that underwent a virtual CT scan had a closer visual resemblance to the true CT scan compared to the previous method. The resulting heterogeneous phantoms offer a significant step toward more realistic in silico trials, enabling enhanced simulation of imaging procedures with greater fidelity to true anatomical variation.
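The reported comparison metrics (mean absolute difference in HU, SSIM, PSNR) can be reproduced with standard tools. A sketch using scikit-image on toy volumes; this illustrates the metric definitions only, not the study's evaluation pipeline:

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def compare_volumes(generated_hu: np.ndarray, reference_hu: np.ndarray):
    """Mean absolute difference (HU), SSIM, and PSNR between two CT volumes."""
    mad = float(np.abs(generated_hu - reference_hu).mean())
    drange = float(reference_hu.max() - reference_hu.min())
    ssim = structural_similarity(reference_hu, generated_hu, data_range=drange)
    psnr = peak_signal_noise_ratio(reference_hu, generated_hu, data_range=drange)
    return mad, ssim, psnr

rng = np.random.default_rng(0)
ref = rng.normal(40, 120, (32, 64, 64))      # toy CT volume in HU
gen = ref + rng.normal(0, 30, ref.shape)     # toy "generated" counterpart
print(compare_volumes(gen, ref))
```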

Precision and Personalization: How Large Language Models Are Redefining Diagnostic Accuracy in Personalized Medicine - A Systematic Literature Review.

Aththanagoda AKNL, Kulathilake KASH, Abdullah NA

PubMed · Jun 30, 2025
Personalized medicine aims to tailor medical treatments to the unique characteristics of each patient, but its effectiveness relies on achieving diagnostic accuracy to fully understand individual variability in disease response and treatment efficacy. This systematic literature review explores the role of large language models (LLMs) in enhancing diagnostic precision and supporting the advancement of personalized medicine. A comprehensive search was conducted across Web of Science, Science Direct, Scopus, and IEEE Xplore, targeting peer-reviewed articles published in English between January 2020 and March 2025 that applied LLMs within personalized medicine contexts. Following PRISMA guidelines, 39 relevant studies were selected and systematically analyzed. The findings indicate a growing integration of LLMs across key domains such as clinical informatics, medical imaging, patient-specific diagnosis, and clinical decision support. LLMs have shown potential in uncovering subtle data patterns critical for accurate diagnosis and personalized treatment planning. This review highlights the expanding role of LLMs in improving diagnostic accuracy in personalized medicine, offering insights into their performance, applications, and challenges, while acknowledging limitations in generalizability due to variable model performance and dataset biases. It also underscores the importance of addressing challenges related to data privacy, model interpretability, and reliability across diverse clinical scenarios. For successful clinical integration, future research must focus on refining LLM technologies, upholding ethical standards, and continuously validating models to safeguard their effective and responsible use in healthcare environments.

Thoracic staging of lung cancers by <sup>18</sup>FDG-PET/CT: impact of artificial intelligence on the detection of associated pulmonary nodules.

Trabelsi M, Romdhane H, Ben-Sellem D

PubMed · Jun 30, 2025
This study focuses on automating the classification of certain thoracic lung cancer stages in 3D <sup>18</sup>FDG-PET/CT images according to the 9th Edition of the TNM Classification for Lung Cancer (2024). By leveraging advanced segmentation and classification techniques, we aim to enhance the accuracy of distinguishing between the T4 (pulmonary nodules) Thoracic M0 and M1a (pulmonary nodules) stages. Precise segmentation of the pulmonary lobes using the Pulmonary Toolkit enables the identification of tumor locations and additional malignant nodules, ensuring reliable differentiation between ipsilateral and contralateral spread. A modified ResNet-50 model is employed to classify the segmented regions. The performance evaluation shows that the model achieves high accuracy. The unchanged class has the best recall (93%) and an excellent F1 score (91%). The M1a (pulmonary nodules) class performs well, with an F1 score of 94%, though its recall is slightly lower (91%). For T4 (pulmonary nodules) Thoracic M0, the model shows balanced performance, with an F1 score of 87%. The overall accuracy is 87%, indicating a robust classification model.
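The abstract does not detail the ResNet-50 modifications, so the following is an assumed, illustrative setup: the ImageNet classification head is swapped for a three-class output (unchanged, T4 (pulmonary nodules) Thoracic M0, M1a (pulmonary nodules)); how the 3D PET/CT volumes are fed to the 2D backbone (e.g., slice-wise) is likewise an assumption:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(weights=None)                 # backbone, trained from scratch here
model.fc = nn.Linear(model.fc.in_features, 3)  # 3 stage classes replace the 1000-way head

logits = model(torch.randn(2, 3, 224, 224))    # toy 2D input batch (e.g., slices)
print(logits.shape)                            # torch.Size([2, 3])
```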

Enhanced abdominal multi-organ segmentation with 3D UNet and UNet++ deep neural networks utilizing the MONAI framework.

Tejashwini PS, Thriveni J, Venugopal KR

PubMed · Jun 30, 2025
Accurate segmentation of organs in the abdomen is a primary requirement for any medical analysis and treatment planning. In this study, we propose an approach based on the 3D UNet and UNet++ architectures implemented in the MONAI framework to address challenges that arise from anatomical variability, the complex shapes of organs, and noise in CT/MRI scans. The models analyze volumetric data in three dimensions, make use of skip and dense connections, and optimize their parameters using Secretary Bird Optimization (SBO), which together enable better feature extraction and boundary delineation across multi-organ tissues. The models' performance was evaluated on multiple datasets, from Pancreas-CT to Liver-CT and BTCV. On the Pancreas-CT dataset, a DSC of 94.54% was achieved for 3D UNet, while a slightly higher DSC of 95.62% was achieved for 3D UNet++. Both models performed well on the Liver-CT dataset, with 3D UNet achieving a DSC of 95.67% and 3D UNet++ a DSC of 97.36%. On the BTCV dataset, both models had DSC values ranging from 93.42% to 95.31%. These results demonstrate the robustness and efficiency of the presented models for clinical applications and medical research in multi-organ segmentation. This study validates the proposed architectures, underscoring their accuracy in medical imaging and creating avenues for scalable solutions to complex abdominal-imaging tasks.
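MONAI ships the building blocks named above. A minimal 3D UNet configuration with a Dice loss, as a sketch only; the channel sizes and patch size are assumptions, and the SBO-based parameter optimization is omitted:

```python
import torch
from monai.networks.nets import UNet
from monai.losses import DiceLoss

model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=14,                  # e.g. BTCV: 13 abdominal organs + background
    channels=(16, 32, 64, 128, 256),  # assumed encoder widths
    strides=(2, 2, 2, 2),
    num_res_units=2,
)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)

x = torch.randn(1, 1, 96, 96, 96)             # toy CT patch
y = torch.randint(0, 14, (1, 1, 96, 96, 96))  # toy label map
print(loss_fn(model(x), y))                   # Dice loss on the toy batch
```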

Bidirectional Prototype-Guided Consistency Constraint for Semi-Supervised Fetal Ultrasound Image Segmentation.

Lyu C, Han K, Liu L, Chen J, Ma L, Pang Z, Liu Z

PubMed · Jun 30, 2025
Fetal ultrasound (US) image segmentation plays an important role in fetal development assessment, maternal pregnancy management, and intrauterine surgery planning. However, obtaining large-scale, accurately annotated fetal US imaging data is time-consuming and labor-intensive, posing challenges to the application of deep learning in this field. To address this challenge, we propose a semi-supervised fetal US image segmentation method based on a bidirectional prototype-guided consistency constraint (BiPCC). BiPCC uses prototypes to bridge labeled and unlabeled data and establish interaction between them. Specifically, the model generates pseudo-labels using prototypes from labeled data and then utilizes these pseudo-labels to generate pseudo-prototypes for segmenting the labeled data inversely, thereby achieving bidirectional consistency. Additionally, uncertainty-based cross-supervision is incorporated to provide additional supervision signals, thereby enhancing the quality of the pseudo-labels. Extensive experiments on two fetal US datasets demonstrate that BiPCC outperforms state-of-the-art methods for semi-supervised fetal US segmentation. Furthermore, experimental results on two additional medical segmentation datasets exhibit BiPCC's outstanding generalization capability across diverse medical image segmentation tasks. Our proposed method offers novel insight for semi-supervised fetal US image segmentation and holds promise for further advancing the development of intelligent healthcare.
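A common building block behind prototype-guided methods is masked average pooling: a class prototype is the mask-weighted mean of the feature map, and cosine similarity to that prototype yields a soft segmentation. A generic PyTorch sketch of this block (not the authors' BiPCC code):

```python
import torch
import torch.nn.functional as F

def class_prototype(features: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Masked average pooling: features (B, C, H, W), binary mask (B, 1, H, W)."""
    return (features * mask).sum(dim=(0, 2, 3)) / mask.sum().clamp(min=1e-6)

def prototype_segmentation(features: torch.Tensor, proto: torch.Tensor) -> torch.Tensor:
    """Cosine similarity to the prototype gives a soft foreground map."""
    proto = proto.view(1, -1, 1, 1).expand_as(features)
    return F.cosine_similarity(features, proto, dim=1)

feats = torch.randn(2, 64, 32, 32)                 # toy encoder features
mask = (torch.rand(2, 1, 32, 32) > 0.5).float()    # toy (pseudo-)label mask
proto = class_prototype(feats, mask)
print(prototype_segmentation(feats, proto).shape)  # torch.Size([2, 32, 32])
```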

Development and validation of a prognostic prediction model for lumbar-disc herniation based on machine learning and fusion of clinical text data and radiomic features.

Wang Z, Zhang H, Li Y, Zhang X, Liu J, Ren Z, Qin D, Zhao X

PubMed · Jun 30, 2025
Based on preoperative clinical text data and lumbar magnetic resonance imaging (MRI), we applied machine learning (ML) algorithms to construct a model to predict early recurrence in lumbar-disc herniation (LDH) patients who underwent percutaneous endoscopic lumbar discectomy (PELD). We then explored the clinical performance of this prognostic prediction model via multimodal-data fusion. Clinical text data and radiological images of LDH patients who underwent PELD at the Intervertebral Disc Center of the Affiliated Hospital of Gansu University of Traditional Chinese Medicine (AHGUTCM; Lanzhou, China) were retrospectively collected. Two radiologists with clinical image-reading experience independently outlined regions of interest (ROI) on the MRI images and extracted radiomic features using 3D Slicer software. We then randomly separated the samples into a training set and a test set at a 7:3 ratio, used eight ML algorithms to construct predictive radiomic-feature models, evaluated model performance by the area under the curve (AUC), and selected the optimal model for screening radiomic features and calculating radiomic scores (Rad-scores). Finally, after using logistic regression to construct a nomogram for predicting the early-recurrence rate, we evaluated the nomogram's clinical applicability using a clinical-decision curve. We initially extracted 851 radiomic features. After constructing our models, we determined based on AUC values that the optimal ML algorithm was least absolute shrinkage and selection operator (LASSO) regression, which had an AUC of 0.76 and an accuracy rate of 91%. After screening features with the LASSO model, we calculated a Rad-score for each sample from the nine retained radiomic features. Next, we fused the Rad-score with three clinical features (age, diabetes, and heavy manual labor) to construct a nomogram with an AUC of 0.86 (95% confidence interval [CI], 0.79-0.94). Analysis of the clinical-decision and impact curves showed that the prognostic prediction model with multimodal-data fusion had good clinical validity and applicability. We developed and analyzed a prognostic prediction model for LDH with multimodal-data fusion. Our model demonstrated good performance in predicting early postoperative recurrence in LDH patients; it therefore has good prospects for clinical application and can provide clinicians with objective, accurate information to help them decide on presurgical treatment plans. However, external-validation studies are still needed to further validate the model's comprehensive performance and improve its generalization and extrapolation.
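The LASSO screening plus Rad-score step can be sketched with scikit-learn. Here an L1-penalized logistic regression stands in for the paper's LASSO model, on synthetic data; the feature matrix, labels, and hyperparameters are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(140, 851))   # synthetic stand-in for 851 radiomic features
y = rng.integers(0, 2, 140)       # synthetic labels; 1 = early recurrence

X = StandardScaler().fit_transform(X)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
lasso.fit(X, y)

# The L1 penalty zeroes most coefficients; the survivors are the screened features.
selected = np.flatnonzero(lasso.coef_[0])
# Rad-score: linear combination of the retained features, as in radiomics nomograms.
rad_score = X[:, selected] @ lasso.coef_[0, selected] + lasso.intercept_[0]
print(len(selected), rad_score[:5])
```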

Radiation Dose Reduction and Image Quality Improvement of UHR CT of the Neck by Novel Deep-learning Image Reconstruction.

Messerle DA, Grauhan NF, Leukert L, Dapper AK, Paul RH, Kronfeld A, Al-Nawas B, Krüger M, Brockmann MA, Othman AE, Altmann S

PubMed · Jun 30, 2025
We evaluated a dedicated dose-reduced UHR-CT protocol for head and neck imaging, combined with a novel deep-learning reconstruction algorithm, to assess its impact on image quality and radiation exposure. We retrospectively analyzed ninety-eight consecutive patients examined using a new body-weight-adapted protocol. Images were reconstructed using adaptive iterative dose reduction (AIDR) and the advanced intelligent Clear-IQ engine with an already established (DL-1) and a newly implemented (DL-2) reconstruction algorithm. An additional thirty patients were scanned without body-weight-adapted dose reduction (DL-1-SD). Three readers evaluated subjective image quality, including overall quality and the assessability of several anatomic regions. For objective image quality, signal-to-noise and contrast-to-noise ratios were calculated for the temporalis and masseter muscles and the floor of the mouth. Radiation dose was evaluated by comparing computed tomography dose index (CTDI<sub>vol</sub>) values. The deep-learning-based reconstruction algorithms significantly improved subjective image quality (diagnostic acceptability: DL-1 vs AIDR, OR 25.16 [6.30; 38.85], p < 0.001; DL-2 vs AIDR, OR 720.15 [410.14; > 999.99], p < 0.001). Although higher doses (DL-1-SD) resulted in significantly enhanced image quality, DL-2 demonstrated significant superiority over all other techniques across all defined parameters (p < 0.001). Similar results were demonstrated for objective image quality, e.g., image noise (DL-1 vs AIDR, OR 19.0 [11.56; 31.24], p < 0.001; DL-2 vs AIDR, OR > 999.9 [825.81; > 999.99], p < 0.001). Using weight-adapted kV reduction, very low radiation doses could be achieved (CTDI<sub>vol</sub>: 7.4 ± 4.2 mGy). AI-based reconstruction algorithms in ultra-high-resolution head and neck imaging provide excellent image quality while achieving very low radiation exposure.
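For intuition on the reported effect sizes, an odds ratio with a 95% CI can be computed from a 2x2 table of acceptable/unacceptable ratings; the study's ORs come from modeling three readers' scores, so this toy calculation with invented counts is illustrative only:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """OR for a 2x2 table: a/b acceptable/unacceptable (method A), c/d (method B)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo, hi = math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

print(odds_ratio_ci(90, 8, 55, 43))  # invented counts, for illustration only
```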