
Yuan M, Jie B, Han R, Wang J, Zhang Y, Li Z, Zhu J, Zhang R, He Y

pubmed logopapers · Jul 1, 2025
With developments in computer science and technology, great progress has been made in three-dimensional (3D) ultrasound. Recently, ultrasound-based 3D bone modelling has attracted much attention, and its accuracy has been studied for the femur, tibia, and spine. Ultrasound allows bone-surface data to be acquired non-invasively and without radiation. Freehand 3D ultrasound of the bone surface can be roughly divided into two steps: segmentation of the bone surface from two-dimensional (2D) ultrasound images and 3D reconstruction of the bone surface using the segmented images. The aim of this study was to develop an automatic algorithm to segment the midface bone surface from 2D ultrasound images based on deep learning methods. Six deep learning networks were trained (nnU-Net, U-Net, ConvNeXt, Mask2Former, SegFormer, and DDRNet). The algorithms' outputs were compared against the ground truth and evaluated by the Dice coefficient (DC), intersection over union (IoU), 95th percentile Hausdorff distance (HD95), average symmetric surface distance (ASSD), precision, recall, and time. nnU-Net yielded the highest DC of 89.3% ± 13.6% and the lowest ASSD of 0.11 ± 0.40 mm. This study showed that nnU-Net can automatically and effectively segment the midfacial bone surface from 2D ultrasound images.
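For readers reproducing the evaluation, the overlap metrics named above (Dice and IoU) are straightforward to compute from binary masks. The sketch below is a minimal NumPy illustration with hypothetical predicted and ground-truth masks, not the authors' evaluation code.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice coefficient (DC) between two binary masks."""
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + 1e-8)

def intersection_over_union(pred, gt):
    """IoU (Jaccard index) between two binary masks."""
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / (union + 1e-8)

# Toy masks standing in for a segmented bone surface and its ground truth
pred = np.zeros((64, 64), dtype=bool); pred[20:40, 20:40] = True
gt = np.zeros((64, 64), dtype=bool); gt[22:42, 22:42] = True
print(f"DC = {dice_coefficient(pred, gt):.3f}, IoU = {intersection_over_union(pred, gt):.3f}")
```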

Deng T, Feng J, Le X, Xia Y, Shi F, Yu F, Zhan Y, Liu X, Li C

pubmed logopapers · Jul 1, 2025
In clinical work, it is difficult to distinguish pulmonary contusion (PC) from bacterial pneumonia (BP) on CT images by the naked eye alone when the history of trauma is unknown. Artificial intelligence is widely used in medical imaging, but its diagnostic performance for pulmonary contusion is unclear. In this study, artificial intelligence was used for the first time to differentiate pulmonary contusion from bacterial pneumonia, and its diagnostic performance was compared with that of radiologists. In this retrospective study, 2179 patients from two hospitals between April 2016 and July 2022 were collected and divided into a training set, an internal validation set, and an external validation set. PC and BP were automatically recognized and segmented using VB-Net, and radiomics features were automatically extracted. Four machine learning algorithms, including decision trees, logistic regression, random forests, and support vector machines (SVM), were used to build the models. The DeLong test was used to compare performance among the models. The best-performing model and four radiologists then diagnosed the external validation set to compare the diagnostic efficacy of humans and artificial intelligence. VB-Net automatically detected and segmented PC and BP. Among the four machine learning models, the DeLong test showed that the SVM model had the best performance, with AUC, accuracy, sensitivity, and specificity of 0.998 (95% CI: 0.995-1), 0.980, 0.979, and 0.982 in the training set; 0.891 (95% CI: 0.854-0.928), 0.979, 0.750, and 0.860 in the internal validation set; and 0.885 (95% CI: 0.850-0.920), 0.903, 0.976, and 0.794 in the external validation set. The diagnostic ability of the SVM model was superior to that of the human readers (P < 0.05). Our VB-Net automatically recognizes and segments PC and BP in chest CT images. An SVM model based on radiomics features can quickly and accurately differentiate between them, with higher accuracy than experienced radiologists.
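As a rough illustration of the modelling step described above, the sketch below fits an SVM classifier to a hypothetical radiomics feature matrix and reports a validation AUC with scikit-learn. The data and hyperparameters are placeholders, not the study's actual pipeline (which also used the DeLong test to compare model AUCs).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Hypothetical radiomics feature matrix: rows = lesions, columns = extracted features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))        # placeholder for real radiomics features
y = rng.integers(0, 2, size=200)      # e.g. 0 = bacterial pneumonia, 1 = pulmonary contusion

X_train, X_val, y_train, y_val = train_test_split(X, y, stratify=y, random_state=0)

# SVM with probability outputs so an ROC curve / AUC can be computed
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True, random_state=0))
model.fit(X_train, y_train)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"validation AUC: {auc:.3f}")
```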

Mehranian A, Wollenweber SD, Bradley KM, Fielding PA, Huellner M, Iagaru A, Dedja M, Colwell T, Kotasidis F, Johnsen R, Jansen FP, McGowan DR

pubmed logopapers · Jul 1, 2025
To evaluate a deep learning-based time-of-flight (DLToF) model trained to enhance the image quality of non-ToF PET images, reconstructed using the BSREM algorithm, towards ToF images for different tracers. A 3D residual U-Net model was trained using 8 different tracers (FDG: 75%; non-FDG: 25%) from 11 sites in the US, Europe, and Asia. A total of 309 training and 33 validation datasets scanned on GE Discovery MI (DMI) ToF scanners were used to develop DLToF models of three strengths: low (L), medium (M), and high (H). The training and validation pairs consisted of target ToF and input non-ToF BSREM reconstructions using site-preferred regularisation parameters (beta values). The contrast and noise properties of each model were defined by adjusting the beta value of the target ToF images. A total of 60 DMI datasets, consisting of 4 tracers (¹⁸F-FDG, ¹⁸F-PSMA, ⁶⁸Ga-PSMA, ⁶⁸Ga-DOTATATE) with 15 exams each, were collected for testing and quantitative analysis of the models based on standardized uptake value (SUV) in regions of interest (ROIs) placed in lesions, lungs, and liver. Each dataset included 5 image series: ToF and non-ToF BSREM and the three DLToF images. The image series (300 in total) were blindly scored on a 5-point Likert scale by 4 readers based on lesion detectability, diagnostic confidence, and image noise/quality. In lesion SUVmax quantification with respect to ToF BSREM, DLToF-H achieved the best results among the three models, reducing the non-ToF BSREM errors from -39% to -6% for ¹⁸F-FDG (38 lesions); from -42% to -7% for ¹⁸F-PSMA (35 lesions); from -34% to -4% for ⁶⁸Ga-PSMA (23 lesions); and from -34% to -12% for ⁶⁸Ga-DOTATATE (32 lesions). Quantification results in the liver and lung also showed ToF-like performance of the DLToF models. Clinical reader results showed that DLToF-H improved lesion detectability on average for all four radiotracers, whereas DLToF-L achieved the highest scores for image quality (noise level). DLToF-M, however, offered the best trade-off between lesion detection and noise level and hence achieved the highest score for diagnostic confidence on average across all radiotracers. This study demonstrated that the DLToF models are suitable for both FDG and non-FDG tracers and could be utilized on digital BGO PET/CT scanners to provide image quality and lesion detectability close to ToF.
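The lesion SUVmax error reported above is a simple relative difference against the ToF reference. The snippet below shows one plausible way to compute it from co-registered image volumes and a lesion ROI mask; the arrays are toy data, not the study's analysis code.

```python
import numpy as np

def suvmax_percent_error(test_img, ref_img, roi_mask):
    """Percent difference in SUVmax inside a lesion ROI, relative to the ToF reference."""
    suv_test = test_img[roi_mask].max()
    suv_ref = ref_img[roi_mask].max()
    return 100.0 * (suv_test - suv_ref) / suv_ref

# Toy example: a lesion ROI where the non-ToF image underestimates uptake
ref = np.full((32, 32, 32), 1.0); ref[10:14, 10:14, 10:14] = 8.0
nontof = ref.copy(); nontof[10:14, 10:14, 10:14] = 5.0
mask = np.zeros_like(ref, dtype=bool); mask[8:16, 8:16, 8:16] = True
print(f"SUVmax error: {suvmax_percent_error(nontof, ref, mask):.1f}%")
```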

Koett M, Melchior F, Artamonova N, Bektic J, Heidegger I

pubmed logopapers · Jul 1, 2025
This review provides a critical analysis of recent advancements in active surveillance (AS), emphasizing updates from major international guidelines and their implications for clinical practice. Recent revisions to international guidelines have broadened the eligibility criteria for AS to include selected patients with ISUP grade group 2 prostate cancer. This adjustment acknowledges that certain intermediate-risk cancers may be appropriate for AS, reflecting a heightened focus on balancing oncologic control with maintaining quality of life by minimizing the risk of overtreatment. This review explores key innovations in AS for prostate cancer, including multiparametric magnetic resonance imaging (mpMRI), genomic biomarkers, and risk calculators, which enhance patient selection and monitoring. While promising, their routine use remains debated due to guideline inconsistencies, cost, and accessibility. Special focus is given to biomarkers for identifying ISUP grade group 2 cancers suitable for AS. Additionally, the potential of artificial intelligence to improve diagnostic accuracy and risk stratification is examined. By integrating these advancements, this review provides a critical perspective on optimizing AS for more personalized and effective prostate cancer management.

Takahashi Y, Sugino T, Onogi S, Nakajima Y, Masuda K

pubmed logopapers · Jul 1, 2025
Accurate three-dimensional (3D) segmentation of hepatic vascular networks is crucial for supporting ultrasound-mediated theranostics for liver diseases. Despite advancements in deep learning techniques, accurate segmentation remains challenging due to ultrasound image quality issues, including intensity and contrast fluctuations. This study introduces intensity transformation-based data augmentation methods to improve deep convolutional neural network-based segmentation of hepatic vascular networks. We employed a 3D U-Net, which leverages spatial contextual information, as the baseline. To address intensity and contrast fluctuations and improve 3D U-Net performance, we implemented data augmentation using high-contrast intensity transformation with S-shaped tone curves and low-contrast intensity transformation with Gamma and inverse S-shaped tone curves. We conducted validation experiments on 78 ultrasound volumes to evaluate the effect of both geometric and intensity transformation-based data augmentations. We found that high-contrast intensity transformation-based data augmentation decreased segmentation accuracy, while low-contrast intensity transformation-based data augmentation significantly improved Recall and Dice. Additionally, combining geometric and low-contrast intensity transformation-based data augmentations, through an OR operation on their results, further enhanced segmentation accuracy, achieving improvements of 9.7% in Recall and 3.3% in Dice. This study demonstrated the effectiveness of low-contrast intensity transformation-based data augmentation in improving volumetric segmentation of hepatic vascular networks from ultrasound volumes.
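A minimal sketch of the intensity transformations mentioned above (gamma, S-shaped, and inverse S-shaped tone curves) is given below in NumPy, assuming intensities normalised to [0, 1]. The curve gains are illustrative; the exact parameters used in the study are not specified here.

```python
import numpy as np

def gamma_transform(img, gamma):
    """Gamma tone curve (grouped with the low-contrast augmentations in the study)."""
    return np.power(img, gamma)

def s_curve(img, gain=10.0):
    """S-shaped tone curve (sigmoid around 0.5) that increases contrast."""
    out = 1.0 / (1.0 + np.exp(-gain * (img - 0.5)))
    lo, hi = 1.0 / (1.0 + np.exp(gain * 0.5)), 1.0 / (1.0 + np.exp(-gain * 0.5))
    return (out - lo) / (hi - lo)                     # rescale back to [0, 1]

def inverse_s_curve(img, gain=10.0):
    """Inverse S-shaped (logit-like) curve that compresses mid-tone contrast."""
    lo, hi = 1.0 / (1.0 + np.exp(gain * 0.5)), 1.0 / (1.0 + np.exp(-gain * 0.5))
    y = np.clip(img * (hi - lo) + lo, 1e-6, 1.0 - 1e-6)
    return 0.5 - np.log(1.0 / y - 1.0) / gain

# Toy ultrasound volume in [0, 1]; each transform yields one augmented copy
volume = np.random.rand(8, 64, 64).astype(np.float32)
augmented = [gamma_transform(volume, 1.5), s_curve(volume), inverse_s_curve(volume)]
```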

Cai L, Golatta M, Sidey-Gibbons C, Barr RG, Pfob A

pubmed logopapers · Jul 1, 2025
Artificial intelligence (AI) models based on medical (imaging) data are increasingly developed. However, the imaging software with which the original data are generated is frequently updated. The impact of updated imaging software on the performance of AI models is unclear. We aimed to develop machine learning models using shear wave elastography (SWE) data to identify malignant breast lesions and to test the models' generalizability by validating them on external data generated by both the original and updated software versions. We developed and validated different machine learning models (GLM, MARS, XGBoost, SVM) using multicenter, international SWE data (NCT02638935) with tenfold cross-validation. Findings were compared to the histopathologic evaluation of the biopsy specimen or 2-year follow-up. The outcome measure was the area under the receiver operating characteristic curve (AUROC). We included 1288 cases in the development set using the original imaging software and 385 cases in the validation set using both the original and updated software. In the external validation set, the GLM and XGBoost models showed better performance with the updated software data than with the original software data (AUROC 0.941 vs. 0.902, p < 0.001 and 0.934 vs. 0.872, p < 0.001). The MARS model showed worse performance with the updated software data (0.847 vs. 0.894, p = 0.045). SVM was not calibrated. In this multicenter study using SWE data, some machine learning models demonstrated great potential to bridge the gap between original and updated software, whereas others exhibited weak generalizability.
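For orientation, the snippet below shows a generic tenfold cross-validated AUROC evaluation with scikit-learn for two of the model families mentioned (a GLM-style logistic regression and an SVM) on a placeholder feature matrix. It is not the study's code and omits MARS and XGBoost.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Hypothetical SWE feature matrix: rows = breast lesions, columns = elastography features
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))
y = rng.integers(0, 2, size=300)      # 1 = malignant on histopathology / follow-up

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
models = {
    "GLM": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
}
for name, model in models.items():
    aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: mean AUROC = {aucs.mean():.3f}")
```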

Lian L, Chang Q

pubmed logopapers · Jul 1, 2025
Deformable registration between brain tumor images and a brain atlas is an important tool for facilitating pathological analysis. However, registration of images with tumors is challenging due to absent correspondences induced by the tumor. Furthermore, tumor growth may displace surrounding tissue, causing larger deformations than those observed in healthy brains. Therefore, we propose a new reconstruction-driven cascade feature warping (RCFW) network for brain tumor images. We first introduce the symmetric-constrained feature reasoning (SFR) module, which reconstructs the missing normal appearance within tumor regions, allowing dense spatial correspondence between the reconstructed quasi-normal appearance and the atlas. A dilated multi-receptive feature fusion module is further introduced, which collects long-range features from different dimensions to facilitate tumor region reconstruction, especially for large tumors. The reconstructed tumor images and the atlas are then jointly fed into the multi-stage feature warping module (MFW) to progressively predict spatial transformations. The method was evaluated on the Multimodal Brain Tumor Segmentation (BraTS) 2021 challenge database and compared with six existing methods. Experimental results showed that the proposed method effectively handles brain tumor image registration, maintaining smooth deformation of the tumor region while maximizing the image similarity of normal regions.
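The core operation behind cascade feature warping, resampling an image or feature map along a predicted dense displacement field, can be sketched with PyTorch's grid_sample as below. This is a generic warping step under assumed conventions (pixel-unit displacements), not the RCFW implementation.

```python
import torch
import torch.nn.functional as F

def warp_2d(moving, flow):
    """Warp a moving image/feature map by a dense displacement field.

    moving: (N, C, H, W) tensor
    flow:   (N, 2, H, W) displacements in pixels, channel 0 = dx, channel 1 = dy
    """
    n, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0).expand(n, -1, -1, -1)
    coords = base + flow
    # Normalise pixel coordinates to [-1, 1], as grid_sample expects
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((coords_x, coords_y), dim=-1)   # (N, H, W, 2)
    return F.grid_sample(moving, grid, mode="bilinear", align_corners=True)

# Sanity check: a zero displacement field returns the original image
img = torch.rand(1, 1, 64, 64)
flow = torch.zeros(1, 2, 64, 64)
assert torch.allclose(warp_2d(img, flow), img, atol=1e-5)
```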

Banks SA, Yildirim G, Jachode G, Cox J, Anderson O, Jensen A, Cole JD, Kessler O

pubmed logopapers · Jul 1, 2025
Knee kinematics during daily activities reflect disease severity preoperatively and are associated with clinical outcomes after total knee arthroplasty (TKA). It is widely believed that measured kinematics would be useful for preoperative planning and postoperative assessment. Despite decades-long interest in measuring three-dimensional (3D) knee kinematics, no methods are available for routine, practical clinical examinations. We report a clinically practical method utilizing machine-learning-enhanced software and upgraded C-arm fluoroscopy for accurate and time-efficient measurement of pre-TKA and post-TKA 3D dynamic knee kinematics. Using a common C-arm with an upgraded detector and software, we performed an 8-s horizontal sweeping pulsed fluoroscopic scan of the weight-bearing knee joint. The patient's knee was then imaged using pulsed C-arm fluoroscopy while performing standing, kneeling, squatting, stair, chair, and gait activities. We used limited-arc cone-beam reconstruction methods to create 3D models of the femur and tibia/fibula bones with implants, which can then be used for model-image registration to quantify the 3D knee kinematics. The proposed protocol can be completed by a single radiology technician in ten minutes and does not require additional equipment beyond a step and a stool. The image analysis can be performed by a computer onboard the upgraded C-arm or in the cloud before loading the examination results into the Picture Archiving and Communication System and Electronic Medical Record systems. Weight-bearing kinematics affect knee function pre- and post-TKA, yet such measurements have long been exclusively the domain of researchers. We present an approach that leverages common, but digitally upgraded, imaging hardware and software to implement an efficient examination protocol for accurately assessing 3D knee kinematics. With these capabilities, it will be possible to include dynamic 3D knee kinematics as a component of the routine clinical workup for patients who have diseased or replaced knees.
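Once model-image registration yields a pose for each bone, joint kinematics follow from the relative transform between the femoral and tibial frames. The sketch below illustrates that step with hypothetical 4×4 poses and a simple Euler decomposition; it is not the clinical joint-coordinate convention used in practice.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def relative_pose(T_femur, T_tibia):
    """Pose of the tibia expressed in the femoral frame.

    T_femur, T_tibia: 4x4 homogeneous transforms from model-image registration
    (bone model -> fluoroscope coordinates).
    """
    return np.linalg.inv(T_femur) @ T_tibia

# Hypothetical poses: tibia rotated 30 degrees about the femoral x-axis (a flexion-like motion)
T_femur = np.eye(4)
T_tibia = np.eye(4)
T_tibia[:3, :3] = R.from_euler("x", 30, degrees=True).as_matrix()

rel = relative_pose(T_femur, T_tibia)
angles = R.from_matrix(rel[:3, :3]).as_euler("xyz", degrees=True)
print(f"rotation about x ~ {angles[0]:.1f} deg")   # simple Euler angles, not a clinical joint convention
```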

Khan SD, Basalamah S, Lbath A

pubmed logopapers · Jul 1, 2025
Retinal diseases are a serious global threat to human vision, and early identification is essential for effective prevention and treatment. However, current diagnostic methods rely on manual analysis of fundus images, which heavily depends on the expertise of ophthalmologists. This manual process is time-consuming and labor-intensive and can sometimes lead to missed diagnoses. With advancements in computer vision technology, several automated models have been proposed to improve diagnostic accuracy for retinal diseases and medical imaging in general. However, these methods face challenges in accurately detecting specific diseases within images due to inherent issues associated with fundus images, including inter-class similarities, intra-class variations, limited local information, insufficient contextual understanding, and class imbalances within datasets. To address these challenges, we propose a novel deep learning framework for accurate retinal disease classification, designed to achieve high accuracy while overcoming the inherent challenges of fundus images. The framework consists of three main modules. The first module is a Densely Connected Multidilated Convolutional Neural Network (DCM-CNN) that extracts global contextual information by effectively integrating novel Casual Dilated Dense Convolutional Blocks (CDDCBs). The second module, a Local-Patch-based Convolutional Neural Network (LP-CNN), utilizes the class activation map (CAM) obtained from DCM-CNN to extract local and fine-grained information. A synergic network then takes the feature maps of both DCM-CNN and LP-CNN and connects them in a fully connected fashion to identify the correct class and minimize errors. The framework is evaluated through a comprehensive set of experiments, both quantitative and qualitative, using two publicly available benchmark datasets: RFMiD and ODIR-5K. Our experimental results demonstrate the effectiveness of the proposed framework, which achieves higher performance on the RFMiD and ODIR-5K datasets than reference methods.
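The class activation map that links the two networks can be illustrated with the classic CAM formulation (a weighted sum of the final convolutional feature maps by the classification-layer weights). The sketch below uses toy arrays and is not the proposed framework's code.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Classic CAM: weight the last conv feature maps by the FC weights of one class.

    feature_maps: (C, H, W) output of the final convolutional layer
    fc_weights:   (num_classes, C) weights of the classifier after global average pooling
    """
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)   # (H, W)
    cam = np.maximum(cam, 0)                                          # keep positive evidence
    return cam / (cam.max() + 1e-8)                                   # normalise to [0, 1]

# Toy usage: 64 channels of 14x14 features, 5 disease classes
feats = np.random.rand(64, 14, 14)
weights = np.random.rand(5, 64)
cam = class_activation_map(feats, weights, class_idx=2)
# High-activation regions of `cam` would be the candidate local patches for the second network.
```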

Lee TY, Yoon JH, Park JY, Park SH, Kim H, Lee CM, Choi Y, Lee JM

pubmed logopapers · Jul 1, 2025
The aim of this study was to intraindividually compare the conspicuity of focal liver lesions (FLLs) between low- and ultra-low-dose computed tomography (CT) with deep learning reconstruction (DLR) and standard-dose CT with model-based iterative reconstruction (MBIR) from a single CT using a dual-split scan in patients with suspected liver metastasis, via a noninferiority design. This prospective study enrolled participants who met the eligibility criteria at 2 tertiary hospitals in South Korea from June 2022 to January 2023. The criteria included (a) being aged between 20 and 85 years and (b) having suspected or known liver metastases. Dual-source CT scans were conducted, with the standard radiation dose divided in a 2:1 ratio between tubes A and B (67% and 33%, respectively). Voltage settings of 100/120 kVp were selected based on the participant's body mass index (<30 vs ≥30 kg/m²). For image reconstruction, MBIR was used for standard-dose (100%) images, whereas DLR was employed for both low-dose (67%) and ultra-low-dose (33%) images. Three radiologists independently evaluated FLL conspicuity, the probability of metastasis, and subjective image quality using a 5-point Likert scale, in addition to quantitative signal-to-noise and contrast-to-noise ratios. The noninferiority margins were set at -0.5 for conspicuity and -0.1 for detection. One hundred thirty-three participants (male = 58, mean body mass index = 23.0 ± 3.4 kg/m²) were included in the analysis. The low- and ultra-low-dose scans had a lower radiation dose than the standard dose (median CT dose index volume: 3.75 and 1.87 vs 5.62 mGy, respectively, in the arterial phase; 3.89 and 1.95 vs 5.84 mGy in the portal venous phase; P < 0.001 for all). Median FLL conspicuity was lower in the low- and ultra-low-dose scans compared with the standard dose (3.0 [interquartile range, IQR: 2.0, 4.0] and 3.0 [IQR: 1.0, 4.0] vs 3.0 [IQR: 2.0, 4.0] in the arterial phase; 4.0 [IQR: 1.0, 5.0] and 3.0 [IQR: 1.0, 4.0] vs 4.0 [IQR: 2.0, 5.0] in the portal venous phase), yet within the noninferiority margin (P < 0.001 for all). FLL detection was also lower but remained within the margin (lesion detection rate: 0.772 [95% confidence interval, CI: 0.727, 0.812] and 0.754 [95% CI: 0.708, 0.795], respectively) compared with the standard dose (0.810 [95% CI: 0.770, 0.844]). Sensitivity for liver metastasis differed between the standard dose (80.6% [95% CI: 76.0, 84.5]) and the low and ultra-low doses (75.7% [95% CI: 70.2, 80.5] and 73.7% [95% CI: 68.3, 78.5], respectively; P < 0.001 for both), whereas specificity was similar (P > 0.05). Low- and ultra-low-dose CT with DLR showed noninferior FLL conspicuity and detection compared with standard-dose CT with MBIR. Caution is needed due to a potential decrease in sensitivity for metastasis (clinicaltrials.gov/NCT05324046).
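The quantitative signal-to-noise and contrast-to-noise ratios mentioned above are simple ROI statistics. The snippet below shows one common definition on hypothetical HU samples; definitions vary between studies, so treat this as an assumption rather than the authors' exact formula.

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of an ROI: mean attenuation over its standard deviation."""
    return roi.mean() / roi.std()

def cnr(lesion_roi, background_roi):
    """Contrast-to-noise ratio: lesion-to-background contrast over background noise."""
    return abs(lesion_roi.mean() - background_roi.mean()) / background_roi.std()

# Toy HU values for a hypodense metastasis and surrounding liver parenchyma
rng = np.random.default_rng(0)
lesion = rng.normal(60, 12, size=200)    # HU
liver = rng.normal(110, 10, size=500)    # HU
print(f"SNR(liver) = {snr(liver):.1f}, CNR = {cnr(lesion, liver):.1f}")
```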