Page 518 of 6346332 results

Li Z, Liang L, Zhang J, Fan X, Yang Y, Yang H, Wang Q, An J, Xue R, Zhuo Y, Qian H, Zhang Z

pubmed logopapers · Jun 1 2025
The pathological changes in deep medullary veins (DMVs) have been reported in various diseases. However, accurate modeling and quantification of DMVs remain challenging. We aim to propose and assess an automated approach for modeling and quantifying DMVs at 7 Tesla (7 T) MRI. A multi-echo-input Res-Net was developed for vascular segmentation, and a minimum path loss function was used for modeling and quantifying the geometric parameters of DMVs. Twenty-one patients diagnosed with subcortical vascular dementia (SVaD) and 20 condition-matched controls were included in this study. Amplitude and phase images of a five-echo gradient echo (GRE) sequence were acquired at 7 T. Ten GRE images were manually labeled by two neurologists and compared with the results obtained by our proposed method. Independent-samples t tests and Pearson correlation were used for statistical analysis, with p < 0.05 considered significant. No significant offset was found between centerlines obtained by human labeling and by our algorithm (p = 0.734). The length difference between the proposed method and manual labeling was smaller than the error between different clinicians (p < 0.001). Patients with SVaD exhibited fewer DMVs (mean difference = -60.710 ± 21.810, p = 0.011) and higher curvature (mean difference = 0.12 ± 0.022, p < 0.0001), corresponding to their higher Vascular Dementia Assessment Scale-Cog (VaDAS-Cog) scores (mean difference = 4.332 ± 1.992, p = 0.036) and lower Mini-Mental State Examination (MMSE) scores (mean difference = -3.071 ± 1.443, p = 0.047). The MMSE scores were positively correlated with the number of DMVs (r = 0.437, p = 0.037) and negatively correlated with curvature (r = -0.426, p = 0.042). In summary, we proposed a novel framework for automated quantification of the morphologic parameters of DMVs. These characteristics of DMVs are expected to aid the research and diagnosis of cerebral small vessel diseases with DMV lesions.
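The curvature metric reported above can be illustrated with a small sketch. The paper's centerline extraction (Res-Net segmentation plus minimum path loss) is not reproduced here; assuming a centerline is already available as an ordered list of 3D points, the discrete curvature κ = |r′ × r″| / |r′|³ can be computed as below. The function name and the use of `np.gradient` are illustrative choices, not the authors' implementation.

```python
import numpy as np

def polyline_curvature(points):
    """Discrete curvature kappa = |r' x r''| / |r'|^3 along a 3D centerline.

    `points` is an (N, 3) array of ordered centerline coordinates. The
    formula is invariant to the parametrization, so index-spaced derivatives
    from np.gradient suffice; endpoints use one-sided differences and are
    less reliable.
    """
    p = np.asarray(points, dtype=float)
    d1 = np.gradient(p, axis=0)      # first derivative r'
    d2 = np.gradient(d1, axis=0)     # second derivative r''
    cross = np.cross(d1, d2)
    num = np.linalg.norm(cross, axis=1)
    den = np.linalg.norm(d1, axis=1) ** 3
    return num / np.maximum(den, 1e-12)

# Sanity check: a circle of radius R has curvature 1/R everywhere.
R = 5.0
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.stack([R * np.cos(t), R * np.sin(t), np.zeros_like(t)], axis=1)
kappa = polyline_curvature(circle)
```

A per-vessel curvature summary (e.g. the mean of `kappa` over interior points) is one plausible way to arrive at a scalar like the group difference reported above, though the abstract does not specify the aggregation.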

Jia X, Wang W, Zhang M, Zhao B

pubmed logopapers · Jun 1 2025
Convolutional neural network (CNN)-based models have emerged as the predominant approach for medical image segmentation due to their effective inductive bias. However, their limitation lies in the lack of long-range information. In this study, we propose the Atten-Nonlocal Unet model, which integrates CNN and transformer components to overcome this limitation and precisely capture global context in 2D features. Specifically, we utilize the BCSM attention module and the Cross Non-local module to enhance feature representation, thereby improving segmentation accuracy. Experimental results on the Synapse, ACDC, and AVT datasets show that Atten-Nonlocal Unet achieves DSC scores of 84.15%, 91.57%, and 86.94%, and 95% Hausdorff distances (HD95) of 15.17, 1.16, and 4.78, respectively. Compared to existing methods for medical image segmentation, the proposed method demonstrates superior segmentation performance, ensuring high accuracy on large organs while improving segmentation of small organs.
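The DSC figures above are Dice similarity coefficients, DSC = 2|A ∩ B| / (|A| + |B|). A minimal numpy sketch of this metric for binary masks (not the paper's evaluation code) follows:

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|). `eps` guards against two empty masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

# Each mask has 4 foreground pixels, 2 of them shared:
# DSC = 2*2 / (4 + 4) = 0.5
a = np.zeros((4, 4), dtype=bool); a[0, :] = True
b = np.zeros((4, 4), dtype=bool); b[0, 2:] = True; b[1, :2] = True
d = dice_score(a, b)
```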

Ratiphunpong P, Inmutto N, Angkurawaranon S, Wantanajittikul K, Suwannasak A, Yarach U

pubmed logopapers · Jun 1 2025
To develop and evaluate a deep learning technique for the differentiation of hepatocellular carcinoma (HCC) using "simplified intravoxel incoherent motion (IVIM) parameters" derived from only three b-value images. Ninety-eight retrospective magnetic resonance imaging datasets were collected (68 men, 30 women; mean age 59 ± 14 years), including T2-weighted imaging with fat suppression, in-phase, out-of-phase, and diffusion-weighted imaging (b = 0, 100, 800 s/mm2). Ninety percent of the data were used for stratified 10-fold cross-validation. After data preprocessing, the diffusion-weighted images were used to compute simplified IVIM and apparent diffusion coefficient (ADC) maps. A 17-layer 3D convolutional neural network (3D-CNN) was implemented, and the input channels were modified for different input-image strategies. The 3D-CNN with IVIM maps (ADC, f, and D*) demonstrated superior performance compared with the other strategies, achieving an accuracy of 83.25 ± 6.24% and an area under the receiver-operating characteristic curve of 92.70 ± 8.24%, significantly surpassing the 50% baseline (P < 0.05) and outperforming the other strategies on all evaluation metrics. This success underscores the effectiveness of simplified IVIM parameters in combination with a 3D-CNN architecture for enhancing HCC differentiation accuracy. Simplified IVIM parameters derived from three b-values, when integrated with a 3D-CNN architecture, offer a robust framework for HCC differentiation.
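As a sketch of how simplified IVIM parameters can come out of just three b-values: assuming the standard bi-exponential IVIM model and that the pseudo-diffusion term is negligible for b ≥ 100 s/mm², the tissue diffusivity D and perfusion fraction f follow in closed form. The abstract does not specify the authors' exact estimator (their D* map in particular is not derivable this way), so the following is an illustrative approximation, not their method:

```python
import numpy as np

def simplified_ivim(s0, s100, s800, b1=100.0, b2=800.0):
    """Simplified IVIM estimates from three b-values (a sketch).

    Assumes the bi-exponential IVIM model
        S(b) = S0 * [(1 - f) * exp(-b * D) + f * exp(-b * D*)]
    and that the perfusion term f * exp(-b * D*) is negligible for b >= b1.
    Returns tissue diffusivity D, perfusion fraction f, and the
    mono-exponential ADC for comparison; D* is not recoverable here.
    """
    d = np.log(s100 / s800) / (b2 - b1)        # slope of the tissue compartment
    f = 1.0 - s100 / (s0 * np.exp(-b1 * d))    # perfusion fraction at b = 0
    adc = np.log(s0 / s800) / b2               # standard mono-exponential ADC
    return d, f, adc

# Synthetic voxel with known ground-truth parameters
f_true, d_true, dstar_true, S0 = 0.10, 1.0e-3, 5.0e-2, 1000.0
def S(b):
    return S0 * ((1 - f_true) * np.exp(-b * d_true)
                 + f_true * np.exp(-b * dstar_true))

d_est, f_est, adc = simplified_ivim(S(0.0), S(100.0), S(800.0))
```

Applied voxel-wise, these closed-form estimates yield the ADC and f maps that serve as CNN input channels in pipelines like the one described.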

Shi H, Ding K, Yang XT, Wu TF, Zheng JY, Wang LF, Zhou BY, Sun LP, Zhang YF, Zhao CK, Xu HX

pubmed logopapers · Jun 1 2025
Preoperative identification of genetic mutations is conducive to individualized treatment and management of papillary thyroid carcinoma (PTC) patients. Purpose: To investigate the predictive value of machine learning (ML)-based ultrasound (US) radiomics approaches for BRAF V600E and TERT promoter status (individually and in coexistence) in PTC. This multicenter study retrospectively collected data from 1076 PTC patients who underwent genetic testing for BRAF V600E and TERT promoter mutations between March 2016 and December 2021. Radiomics features were extracted from routine grayscale ultrasound images, and gene status-related features were selected. These features were then fed into nine different ML models to predict each mutation, and the optimal models combined with statistically significant clinical information were also constructed. The models underwent training and testing, and comparisons were performed. The Decision Tree-based US radiomics approach had superior prediction performance for the BRAF V600E mutation compared to the other eight ML models, with an area under the curve (AUC) of 0.767 versus 0.547-0.675 (p < 0.05). The US radiomics methodology employing Logistic Regression exhibited the highest accuracy in predicting TERT promoter mutations (AUC, 0.802 vs. 0.525-0.701, p < 0.001) and coexisting BRAF V600E and TERT promoter mutations (0.805 vs. 0.678-0.743, p < 0.001) within the test set. The incorporation of clinical factors enhanced predictive performance to 0.810 for the BRAF V600E mutation, 0.897 for TERT promoter mutations, and 0.900 for dual mutations in PTCs. The machine learning-based US radiomics methods, integrated with clinical characteristics, demonstrated effectiveness in predicting the BRAF V600E and TERT promoter mutations in PTCs.

Metsch JM, Hauschild AC

pubmed logopapers · Jun 1 2025
The increasing digitalization of multi-modal data in medicine and novel artificial intelligence (AI) algorithms open up a large number of opportunities for predictive models. In particular, deep learning models show great performance in the medical field. A major limitation of such powerful but complex models originates from their 'black-box' nature. Recently, a variety of explainable AI (XAI) methods have been introduced to address this lack of transparency and trust in medical AI. However, the majority of such methods have been evaluated only on single data modalities. Meanwhile, with the increasing number of XAI methods, integrative XAI frameworks and benchmarks are essential to compare their performance on different tasks. For that reason, we developed BenchXAI, a novel XAI benchmarking package supporting comprehensive evaluation of fifteen XAI methods, investigating their robustness, suitability, and limitations in biomedical data. We employed BenchXAI to validate these methods in three common biomedical tasks, namely clinical data, medical image and signal data, and biomolecular data. Our newly designed sample-wise normalization approach for post-hoc XAI methods enables the statistical evaluation and visualization of performance and robustness. We found that the XAI methods Integrated Gradients, DeepLift, DeepLiftShap, and GradientShap performed well across all three tasks, while methods like Deconvolution, Guided Backpropagation, and LRP-α1-β0 struggled on some tasks. With acts such as the EU AI Act, the application of XAI in the biomedical domain is becoming increasingly essential. Our evaluation study represents a first step towards verifying the suitability of different XAI methods for various medical domains.
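Integrated Gradients, one of the methods the benchmark found to perform well, attributes a prediction by integrating the model's gradient along a straight path from a baseline to the input; the attributions satisfy the completeness axiom Σᵢ IGᵢ = f(x) − f(baseline). A minimal numpy sketch on a toy model with an analytic gradient (illustrative only, not BenchXAI code):

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=256):
    """Riemann-sum (midpoint rule) approximation of Integrated Gradients:
        IG_i = (x_i - b_i) * integral_0^1  df/dx_i (b + a * (x - b)) da
    `grad_f` returns the analytic gradient of the model at a point."""
    alphas = (np.arange(steps) + 0.5) / steps    # midpoints of [0, 1]
    diff = x - baseline
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_f(baseline + a * diff)
    return diff * total / steps

# Toy model f(x) = sum(x^2); its gradient is 2x, so IG_i = x_i^2 - b_i^2.
f = lambda x: float(np.sum(x ** 2))
grad = lambda x: 2.0 * x
x = np.array([1.0, -2.0, 3.0])
b = np.zeros(3)
ig = integrated_gradients(grad, x, b)
```

Checking the completeness axiom numerically (attributions summing to the change in model output) is exactly the kind of sanity check a benchmark like BenchXAI can systematize across methods.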

Tehrani AKZ, Schoen S, Candel I, Gu Y, Guo P, Thomenius K, Pierce TT, Wang M, Tadross R, Washburn M, Rivaz H, Samir AE

pubmed logopapers · Jun 1 2025
Shear wave elastography (SWE) provides quantitative markers for tissue characterization by measuring the shear wave speed (SWS), which reflects tissue stiffness. SWE uses an acoustic radiation force pulse sequence to generate shear waves that propagate laterally through tissue with transient displacements. These waves travel perpendicular to the applied force, and their displacements are tracked using high-frame-rate ultrasound. Estimating the SWS map involves two main steps: speckle tracking and SWS estimation. Speckle tracking calculates particle velocity by measuring RF/IQ data displacement between adjacent firings, while SWS estimation methods typically compare particle velocity profiles of samples that are laterally a few millimeters apart. Deep learning (DL) methods have gained attention for SWS estimation, often relying on supervised training with simulated data. However, these methods may struggle with real-world data, which can differ significantly from the simulated training data, potentially leading to artifacts in the estimated SWS map. To address this challenge, we propose a physics-inspired learning approach that utilizes real data without known SWS values. Our method employs an adaptive unsupervised loss function, allowing the network to train on the real noisy data to minimize artifacts and improve robustness. We validate our approach using experimental phantom data and in vivo liver data from two human subjects, demonstrating enhanced accuracy and reliability in SWS estimation compared with conventional and supervised methods. This hybrid approach leverages the strengths of both data-driven and physics-inspired learning, offering a promising solution for more accurate and robust SWS mapping in clinical applications.
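The SWS estimation step described above — comparing particle-velocity profiles at laterally separated positions — is commonly implemented as a time-of-flight estimate from the cross-correlation peak. A minimal sketch (the frame rate, lateral gap, and pulse shape below are illustrative, not the paper's setup; real pipelines add sub-sample interpolation and outlier rejection):

```python
import numpy as np

def sws_from_profiles(v_near, v_far, dx_m, fs_hz):
    """Time-of-flight shear wave speed estimate: cross-correlate the
    particle-velocity profiles of two samples dx_m apart laterally and
    convert the peak lag (in frames) into a travel time."""
    xc = np.correlate(v_far, v_near, mode="full")
    lag = int(np.argmax(xc)) - (len(v_near) - 1)   # frames v_far trails v_near
    dt = lag / fs_hz
    return dx_m / dt

# Synthetic shear-wave pulse travelling at 2 m/s across a 2 mm lateral gap
fs = 10_000.0                                      # 10 kHz tracking frame rate
t = np.arange(512) / fs
pulse = lambda t0: np.exp(-0.5 * ((t - t0) / 1e-3) ** 2)
v1, v2 = pulse(0.010), pulse(0.011)                # 1 ms delay over 2 mm
sws = sws_from_profiles(v1, v2, dx_m=2e-3, fs_hz=fs)
```

It is exactly this peak-picking step that becomes fragile on noisy in vivo profiles, which motivates learning-based estimators such as the one proposed above.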

Kiso T, Okada Y, Kawata S, Shichiji K, Okumura E, Hatsumi N, Matsuura R, Kaminaga M, Kuwano H, Okumura E

pubmed logopapers · Jun 1 2025
To evaluate the usefulness of radiomics features extracted from ultrasonographic images in diagnosing and predicting the severity of knee osteoarthritis (OA). In this single-center, prospective, observational study, radiomics features were extracted from standing radiographs and ultrasonographic images of knees of patients aged 40-85 years with primary medial OA and without OA. Analysis was conducted using LIFEx software (version 7.2.n), ANOVA, and LASSO regression. The diagnostic accuracy of three different models, including a statistical model incorporating background factors and machine learning models, was evaluated. Among 491 limbs analyzed, 318 were OA and 173 were non-OA cases. The mean age was 72.7 (±8.7) and 62.6 (±11.3) years in the OA and non-OA groups, respectively. The OA group included 81 men (25.5%) and 237 women (74.5%), whereas the non-OA group included 73 men (42.2%) and 100 women (57.8%). A statistical model using the cutoff value of MORPHOLOGICAL_SurfaceToVolumeRatio (IBSI:2PR5) achieved a specificity of 0.98 and a sensitivity of 0.47. Machine learning diagnostic models (Model 2) demonstrated areas under the curve (AUCs) of 0.88 (discriminant analysis) and 0.87 (logistic regression), with sensitivities of 0.80 and 0.81 and specificities of 0.82 and 0.80, respectively. For severity prediction, the statistical model using MORPHOLOGICAL_SurfaceToVolumeRatio (IBSI:2PR5) showed sensitivity and specificity values of 0.78 and 0.86, respectively, whereas machine learning models achieved an AUC of 0.92, sensitivity of 0.81, and specificity of 0.85. The use of radiomics features in diagnosing knee OA shows potential as a supportive tool for enhancing clinicians' decision-making.
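The MORPHOLOGICAL_SurfaceToVolumeRatio feature (IBSI:2PR5) used as the statistical model's cutoff is an IBSI morphological feature. IBSI defines the surface area on a triangulated (marching-cubes) mesh of the ROI; the voxel-face approximation below is a simplified illustration of the quantity, not the LIFEx implementation, and it systematically overestimates the area of smooth shapes:

```python
import numpy as np

def surface_to_volume_ratio(mask, spacing=(1.0, 1.0, 1.0)):
    """Voxel-face approximation of the surface-to-volume ratio of a 3D
    binary ROI. Surface area is the total area of voxel faces that border
    background; volume is voxel count times voxel volume."""
    m = np.asarray(mask, dtype=bool)
    dz, dy, dx = spacing
    face_areas = (dy * dx, dz * dx, dz * dy)   # face normal to z, y, x
    padded = np.pad(m, 1).astype(np.int8)      # zero-pad so edges count
    surface = 0.0
    for axis, area in enumerate(face_areas):
        # Each 0->1 or 1->0 transition along an axis is one exposed face.
        surface += np.abs(np.diff(padded, axis=axis)).sum() * area
    volume = m.sum() * dz * dy * dx
    return surface / volume

# A solid 10x10x10 voxel cube at 1 mm spacing:
# A = 6 * 100 = 600 mm^2, V = 1000 mm^3, ratio = 0.6 mm^-1
cube = np.ones((10, 10, 10), dtype=bool)
sv = surface_to_volume_ratio(cube)
```

Compact, rounded ROIs give small ratios and irregular, elongated ones give large ratios, which is what makes the feature a plausible discriminator.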

Vierula JP, Merisaari H, Heikkinen J, Happonen T, Sirén A, Velhonoja J, Irjala H, Soukka T, Mattila K, Nyman M, Nurminen J, Hirvonen J

pubmed logopapers · Jun 1 2025
We assessed risk factors and developed a score to predict intensive care unit (ICU) admissions using MRI findings and clinical data in acute neck infections. This retrospective study included patients with MRI-confirmed acute neck infection. Abscess diameters were measured on post-gadolinium T1-weighted Dixon MRI, and specific edema patterns, retropharyngeal (RPE) and mediastinal edema, were assessed on fat-suppressed T2-weighted Dixon MRI. A multivariate logistic regression model identified ICU admission predictors, with risk scores derived from the regression coefficients. Model performance was evaluated using the area under the curve (AUC) from receiver operating characteristic analysis. Machine learning models (random forest, XGBoost, support vector machine, neural networks) were also tested. The sample included 535 patients, of whom 373 (70 %) had an abscess, and 62 (12 %) required ICU treatment. Significant predictors of ICU admission were RPE, maximal abscess diameter (≥40 mm), and C-reactive protein (CRP) (≥172 mg/L). The risk score (0-7) (AUC=0.82, 95 % confidence interval [CI] 0.77-0.88) outperformed CRP (AUC=0.73, 95 % CI 0.66-0.80, p = 0.001), maximal abscess diameter (AUC=0.72, 95 % CI 0.64-0.80, p < 0.001), and RPE (AUC=0.71, 95 % CI 0.65-0.77, p < 0.001). The risk score at a cut-off > 3 yielded the following metrics: sensitivity 66 %, specificity 82 %, positive predictive value 33 %, negative predictive value 95 %, accuracy 80 %, and odds ratio 9.0. Discriminative performance was robust in internal (AUC=0.83) and hold-out (AUC=0.81) validations. The machine learning models were not better than the regression model. A risk model incorporating RPE, abscess size, and CRP showed moderate accuracy and a high negative predictive value for ICU admission, supporting MRI's role in acute neck infections.
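A coefficient-derived additive risk score of this kind can be sketched as follows. The abstract gives the three predictors, the 0-7 range, and the > 3 cut-off, but not the per-predictor point weights, so the weights below are hypothetical placeholders chosen only to sum to 7:

```python
def icu_risk_score(retropharyngeal_edema, abscess_diam_mm, crp_mg_l,
                   weights=(3, 2, 2), threshold=3):
    """Sketch of a coefficient-derived ICU-admission risk score.

    Predictors follow the abstract (retropharyngeal edema, maximal abscess
    diameter >= 40 mm, CRP >= 172 mg/L). The integer point weights and
    their split across predictors are hypothetical -- the abstract states
    only that the score spans 0-7 and that a cut-off > 3 was used.
    Returns (points, high_risk).
    """
    points = (weights[0] * int(bool(retropharyngeal_edema))
              + weights[1] * int(abscess_diam_mm >= 40)
              + weights[2] * int(crp_mg_l >= 172))
    return points, points > threshold

# A patient with RPE and CRP 200 mg/L but only a 25 mm abscess
score, high_risk = icu_risk_score(True, 25, 200)
```

Rounding regression coefficients to small integer points like this trades a little AUC for bedside usability, which is consistent with the score's AUC sitting close to the full logistic model's.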

Snyder J, Blevins G, Smyth P, Wilman AH

pubmed logopapers · Jun 1 2025
Quantitative T1 and T2 mapping is a useful tool to assess properties of healthy and diseased tissues. However, clinical diagnostic imaging remains dominated by relaxation-weighted imaging without direct collection of relaxation maps. Dedicated research sequences such as MR fingerprinting can save time and improve resolution over classical gold-standard quantitative MRI (qMRI) methods, although they are not widely adopted in clinical studies. We investigate the use of clinical sequences in conjunction with prior knowledge provided by machine learning to elucidate T1 maps of the brain in routine imaging studies without the need for specialized sequences. A classification learner was trained on T1w (magnetization prepared rapid gradient echo [MPRAGE]) and T2w (fluid-attenuated inversion recovery [FLAIR]) data (2.6 million voxels) from multiple sclerosis (MS) patients at 3T, compared to gold-standard inversion recovery fast spin echo T1 maps in five healthy subjects, and tested on eight MS patients. In the MS patient test, the machine learner-produced T1 maps were compared to MP2RAGE and MR fingerprinting T1 maps in seven tissue regions of the brain: cortical grey matter, white matter, cerebrospinal fluid, caudate, putamen and globus pallidus. Additionally, T1s in lesion-segmented tissue were compared across the three methods. The machine learner (ML) method had excellent agreement with MP2RAGE, with all average tissue deviations less than 3.2%, and T1 lesion variation of 0.1%-5.3% across the eight patients. The machine learning method provides a valuable and accurate estimation of T1 values in the human brain while using data from standard clinical sequences, allowing retrospective reconstruction from past studies without the need for new quantitative techniques.
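The gold-standard comparison above rests on inversion-recovery T1 fitting. A minimal sketch of fitting the magnitude IR signal model S(TI) = S0·|1 − 2·exp(−TI/T1)| by grid search, with the optimal S0 obtained in closed form per candidate T1 (perfect inversion assumed; the inversion times and grid below are illustrative, not the study's protocol):

```python
import numpy as np

def fit_ir_t1(ti_ms, signal, t1_grid=np.arange(200.0, 4000.0, 1.0)):
    """Grid-search fit of the magnitude inversion-recovery model
        S(TI) = S0 * |1 - 2 * exp(-TI / T1)|
    (perfect 180-degree inversion assumed). For each candidate T1 the
    least-squares-optimal S0 is computed in closed form, and the T1
    with the smallest residual is returned."""
    ti = np.asarray(ti_ms, dtype=float)[None, :]
    s = np.asarray(signal, dtype=float)[None, :]
    model = np.abs(1.0 - 2.0 * np.exp(-ti / t1_grid[:, None]))   # (nT1, nTI)
    s0 = (model * s).sum(axis=1) / (model ** 2).sum(axis=1)      # optimal S0 per T1
    resid = ((s - s0[:, None] * model) ** 2).sum(axis=1)
    return t1_grid[int(np.argmin(resid))]

# Synthetic voxel: T1 = 1200 ms sampled at typical inversion times
ti = np.array([50.0, 200.0, 500.0, 900.0, 1500.0, 2500.0, 4000.0])
sig = 1000.0 * np.abs(1.0 - 2.0 * np.exp(-ti / 1200.0))
t1_est = fit_ir_t1(ti, sig)
```

Fitting this model voxel-by-voxel is what makes IR-based mapping slow, which is precisely the cost the classification-learner approach above avoids.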

Ren C, Kan S, Huang W, Xi Y, Ji X, Chen Y

pubmed logopapers · Jun 1 2025
Due to the presence of charge traps in amorphous silicon flat-panel detectors, lag signals are generated in consecutively captured projections. These signals lead to ghosting in projection images and severe lag artifacts in cone-beam computed tomography (CBCT) reconstructions. Traditional linear time-invariant (LTI) correction requires measured lag correction factors (LCFs) and may leave residual lag artifacts; this incomplete correction is partly attributable to its neglect of exposure dependency. To measure lag signals more accurately and suppress lag artifacts, we developed a novel hardware correction method. This method requires two scans of the same object, with adjustments to the operating timing of the CT instrumentation during the second scan to measure the lag signal from the first. While this hardware correction significantly mitigates lag artifacts, it is complex to implement and imposes high demands on the CT instrumentation. To enhance the process, we introduce a deep learning method called Lag-Net to remove the lag signal, using the nearly lag-free results of the hardware correction as training targets for the network. Qualitative and quantitative analyses of experimental results on both simulated and real datasets demonstrate that the deep learning correction significantly outperforms traditional LTI correction in terms of lag artifact suppression and image quality enhancement. Furthermore, the deep learning method achieves reconstruction results comparable to those obtained with hardware correction while avoiding its operational complexities. The proposed hardware correction method, despite its operational complexity, demonstrates superior artifact suppression compared to the LTI algorithm, particularly under low-exposure conditions. The introduced Lag-Net, which uses the results of the hardware correction method as training targets, leverages the end-to-end nature of deep learning to circumvent the intricate operational drawbacks of hardware correction, and its correction efficacy surpasses that of the LTI algorithm in low-exposure scenarios.
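The LTI lag model discussed above treats trapping as a sum of exponentially decaying memory terms carried over from previous frames. The sketch below implements one common form of that recursion and its exact inverse; the trap fractions and time constants are illustrative placeholders, and — as the abstract notes — real detectors deviate from any fixed-coefficient LTI model because trapping depends on exposure:

```python
import numpy as np

def add_lag(x, alphas, taus):
    """Forward LTI lag model (a sketch): each trap species k carries an
    exponentially decaying memory of previous frames,
        t_k[n] = alpha_k * x[n-1] + exp(-1/tau_k) * t_k[n-1],
    and the detector reads out y[n] = x[n] + sum_k t_k[n]."""
    alphas, decays = np.asarray(alphas), np.exp(-1.0 / np.asarray(taus))
    y, states = np.zeros_like(x, dtype=float), np.zeros(len(alphas))
    for n in range(len(x)):
        if n > 0:
            states = decays * states + alphas * x[n - 1]
        y[n] = x[n] + states.sum()
    return y

def lti_correct(y, alphas, taus):
    """Recursive LTI correction: run the same trap recursion on the
    already-corrected frames and subtract the predicted lag. With the
    true coefficients this inverts add_lag exactly; on real data the
    exposure dependence of trapping leaves residual artifacts."""
    alphas, decays = np.asarray(alphas), np.exp(-1.0 / np.asarray(taus))
    x, states = np.zeros_like(y, dtype=float), np.zeros(len(alphas))
    for n in range(len(y)):
        if n > 0:
            states = decays * states + alphas * x[n - 1]
        x[n] = y[n] - states.sum()
    return x

# Round trip on a synthetic projection sequence with two trap species
rng = np.random.default_rng(0)
frames = rng.uniform(0.0, 1.0, size=200)
lagged = add_lag(frames, alphas=[0.02, 0.005], taus=[3.0, 30.0])
recovered = lti_correct(lagged, alphas=[0.02, 0.005], taus=[3.0, 30.0])
```

The gap between this idealized inverse and a real detector's behavior is the residual that both the two-scan hardware measurement and Lag-Net are designed to remove.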