
Deep Learning-Based CT-Less Cardiac Segmentation of PET Images: A Robust Methodology for Multi-Tracer Nuclear Cardiovascular Imaging.

Salimi Y, Mansouri Z, Nkoulou R, Mainta I, Zaidi H

PubMed · May 6, 2025
Quantitative cardiovascular PET/CT imaging is useful in the diagnosis of multiple cardiac perfusion and motion pathologies. The common approach to cardiac segmentation relies on co-registered CT images and publicly available deep learning (DL)-based segmentation models. However, the mismatch between structural CT images and PET uptake limits the usefulness of these approaches. In addition, the performance of DL models is not consistent on the low-dose or ultra-low-dose CT images commonly used in clinical PET/CT imaging. In this work, we developed a DL-based methodology to tackle this issue by directly segmenting cardiac PET images. This study included 406 cardiac PET images from 146 patients (43 18F-FDG, 329 13N-NH3, and 37 82Rb images). Using DL nnU-Net models previously trained by our group, we segmented the whole heart and the three main cardiac components, namely the left myocardium (LM), left ventricle cavity (LV), and right ventricle (RV), on co-registered CT images. The segmentations were resampled to PET resolution and edited through a combination of automated image processing and manual correction. The corrected segmentation masks and SUV PET images were fed to an nnU-Net V2 pipeline trained with a fivefold data split strategy on two tasks: task #1, whole-heart segmentation, and task #2, segmentation of the three cardiac components. Fifteen cardiac images were used as an external validation set. The DL-delineated masks were compared with standard-of-reference masks using the Dice coefficient, Jaccard distance, mean surface distance, and segment volume relative error (%). The task #1 average Dice coefficient in the internal fivefold validation was 0.932 ± 0.033. The average Dice on the 15 external cases was comparable, reaching 0.941 ± 0.018. The task #2 average Dice coefficients in the fivefold validation were 0.88 ± 0.063, 0.828 ± 0.091, and 0.876 ± 0.062 for the LM, LV, and RV, respectively. There was no statistically significant difference among the Dice coefficients, either between images acquired with the three radiotracers or between the different folds (P-values > 0.05). The overall average volume prediction error in cardiac component segmentation was less than 2%. We developed an automated DL-based segmentation pipeline that segments the whole heart and cardiac components with acceptable accuracy and robust performance on the external test set and across the three radiotracers used in nuclear cardiovascular imaging. The proposed methodology can overcome unreliable segmentations performed on CT images.
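The evaluation metrics reported above are standard overlap and volume measures. As a minimal illustration (not the authors' code), the sketch below computes the Dice coefficient, Jaccard distance, and segment volume relative error for a pair of binary masks with NumPy; the mask arrays and voxel volume are hypothetical.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())

def jaccard_distance(pred: np.ndarray, ref: np.ndarray) -> float:
    """1 - IoU; 0 means perfect overlap."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return 1.0 - intersection / union

def volume_relative_error(pred: np.ndarray, ref: np.ndarray, voxel_volume_ml: float) -> float:
    """Signed volume error of the prediction relative to the reference, in percent."""
    v_pred = pred.sum() * voxel_volume_ml
    v_ref = ref.sum() * voxel_volume_ml
    return 100.0 * (v_pred - v_ref) / v_ref

# Hypothetical 3D masks at PET resolution (e.g., 2 x 2 x 2 mm voxels = 0.008 ml).
rng = np.random.default_rng(0)
ref_mask = rng.random((64, 64, 32)) > 0.7
pred_mask = ref_mask.copy()
print(dice_coefficient(pred_mask, ref_mask),
      jaccard_distance(pred_mask, ref_mask),
      volume_relative_error(pred_mask, ref_mask, 0.008))
```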

Deep Learning for Classification of Solid Renal Parenchymal Tumors Using Contrast-Enhanced Ultrasound.

Bai Y, An ZC, Du LF, Li F, Cai YY

PubMed · May 6, 2025
The purpose of this study is to assess the ability of deep learning models to classify different subtypes of solid renal parenchymal tumors using contrast-enhanced ultrasound (CEUS) images and to compare their classification performance. A retrospective study was conducted using CEUS images of 237 kidney tumors, including 46 angiomyolipomas (AML), 118 clear cell renal cell carcinomas (ccRCC), 48 papillary RCCs (pRCC), and 25 chromophobe RCCs (chRCC), collected from January 2017 to December 2019. Two deep learning models, based on the ResNet-18 and RepVGG architectures, were trained and validated to distinguish between these subtypes. The models' performance was assessed using sensitivity, specificity, positive predictive value, negative predictive value, F1 score, Matthews correlation coefficient, accuracy, area under the receiver operating characteristic curve (AUC), and confusion matrix analysis. Class activation mapping (CAM) was applied to visualize the specific regions that contributed to the models' predictions. The ResNet-18 and RepVGG-A0 models achieved overall accuracies of 76.7% and 84.5%, respectively, across all four subtypes. The AUCs for AML, ccRCC, pRCC, and chRCC were 0.832, 0.829, 0.806, and 0.795 for the ResNet-18 model, compared to 0.906, 0.911, 0.840, and 0.827 for the RepVGG-A0 model, respectively. The deep learning models could reliably differentiate between various histological subtypes of renal tumors using CEUS images in an objective and non-invasive manner.
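Class activation mapping with a ResNet-18 backbone is straightforward because the network ends with global average pooling followed by a fully connected layer: the map is the final convolutional features weighted by the fc weights of the predicted class. The sketch below is a generic illustration under that assumption, with a hypothetical four-class head and a dummy input, not the study's trained model.

```python
import torch
import torch.nn.functional as F
from torchvision import models

NUM_CLASSES = 4  # AML, ccRCC, pRCC, chRCC
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, NUM_CLASSES)
model.eval()

def class_activation_map(x: torch.Tensor) -> tuple[int, torch.Tensor]:
    """Classic CAM: weight the last conv feature maps by the fc weights of the predicted class."""
    features = {}
    hook = model.layer4.register_forward_hook(lambda m, i, o: features.update(maps=o))
    logits = model(x)
    hook.remove()
    pred = int(logits.argmax(dim=1))
    maps = features["maps"][0]                      # (512, H, W)
    weights = model.fc.weight[pred]                 # (512,)
    cam = torch.einsum("c,chw->hw", weights, maps)
    cam = F.relu(cam)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return pred, cam

# Usage with a hypothetical preprocessed CEUS frame (1 x 3 x 224 x 224).
dummy = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    predicted_class, cam = class_activation_map(dummy)
```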

Enhancing Breast Cancer Detection Through Optimized Thermal Image Analysis Using PRMS-Net Deep Learning Approach.

Khan M, Su'ud MM, Alam MM, Karimullah S, Shaik F, Subhan F

PubMed · May 6, 2025
Breast cancer remains one of the most frequent and life-threatening cancers in women globally, underscoring the need for better early-stage diagnostics to improve treatment effectiveness and survival. This work enhances breast cancer assessment by employing progressive residual networks (PRN) and ResNet-50 within the Progressive Residual Multi-Class Support Vector Machine-Net (PRMS-Net) framework. Built on deep learning concepts, this integration optimizes feature extraction and improves classification performance, reaching 99.63% accuracy in our tests. These findings indicate that PRMS-Net can serve as an efficient and reliable diagnostic tool for early breast cancer detection, aiding radiologists in improving diagnostic accuracy and reducing false positives. The architecture's reliability was assessed by partitioning the data with a fivefold cross-validation approach. The variability of precision, recall, and F1 scores depicted in the box plots further supports the model's sensitivity and specificity, both essential for limiting false positive and false negative cases in real clinical practice. The evaluation of the error distribution further supports the model's practical applicability in medical image processing. The combination of sensitive feature extraction and sophisticated classification makes PRMS-Net a powerful tool for improving early breast cancer detection and, in turn, patient prognosis.
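The fivefold evaluation described above can be reproduced in outline with scikit-learn; the sketch below is a generic illustration in which the features, binary labels, and SVM classifier stand in for the thermal-image pipeline, which is not detailed in the abstract.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import precision_recall_fscore_support

# Hypothetical feature matrix (e.g., deep features extracted from thermal images)
# and binary labels (benign vs malignant).
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 128))
y = rng.integers(0, 2, size=300)

fold_scores = []
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    clf = SVC(kernel="rbf")   # an SVM stage, loosely mirroring the SVM classifier named in PRMS-Net
    clf.fit(X[train_idx], y[train_idx])
    y_pred = clf.predict(X[test_idx])
    p, r, f1, _ = precision_recall_fscore_support(y[test_idx], y_pred, average="macro", zero_division=0)
    fold_scores.append((p, r, f1))

print("per-fold (precision, recall, F1):", fold_scores)
```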

Corticospinal tract reconstruction with tumor by using a novel direction filter based tractography method.

Zeng Q, Xia Z, Huang J, Xie L, Zhang J, Huang S, Xing Z, Zhuge Q, Feng Y

PubMed · May 6, 2025
The corticospinal tract (CST) is the primary neural pathway responsible for voluntary motor functions, and preoperative CST reconstruction is crucial for preserving nerve function during neurosurgery. Diffusion magnetic resonance imaging-based tractography is the only noninvasive method available to reconstruct the CST preoperatively in clinical practice. However, for a large bundle with complex fiber geometry (fanning fibers) such as the CST, reconstructing its full extent remains challenging for locally derived methods that do not incorporate global information, especially in the presence of tumors, where the mass effect and partial volume effect produce abnormal diffusion signals. In this work, a CST reconstruction tractography method based on a novel direction filter was proposed, designed to ensure robust CST reconstruction in clinical datasets with tumors. A direction filter based on a fourth-order differential equation was introduced for global direction estimation. By enforcing spatial consistency and leveraging anatomical prior knowledge, the direction filter was computed by minimizing the energy between the target directions and the initial fiber directions. On the basis of the new CST directions obtained by the direction filter, a fiber tracking method was implemented to reconstruct the fiber trajectories. Additionally, a deep learning-based method combined with tractography template prior information was employed to generate the regions of interest (ROIs) and initial fiber directions. Experimental results showed that the proposed method yields more valid connections, fewer no-connections, and the fewest broken and short-connected fibers. The proposed method offers an effective tool to enhance CST-related surgical outcomes by optimizing tumor resection and preserving the CST.
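The abstract describes the direction filter as the minimizer of an energy balancing fidelity to the initial fiber directions against spatial smoothness enforced by a fourth-order differential operator. The exact functional is not given; a generic energy of this form, written here purely as an illustrative assumption (d is the filtered direction field, d0 the initial directions, lambda a regularization weight), would be

```latex
E(\mathbf{d}) = \int_{\Omega}
\underbrace{\lVert \mathbf{d}(\mathbf{x}) - \mathbf{d}_0(\mathbf{x}) \rVert^2}_{\text{fidelity to initial directions}}
+ \lambda\, \underbrace{\lVert \Delta \mathbf{d}(\mathbf{x}) \rVert^2}_{\text{smoothness term}}
\, d\mathbf{x},
```

whose Euler-Lagrange equation contains a biharmonic term and is therefore fourth order, consistent with the kind of equation the abstract names.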

Artificial intelligence applications for the diagnosis of pulmonary nodules.

Ost DE

PubMed · May 6, 2025
This review evaluates the role of artificial intelligence (AI) in diagnosing solitary pulmonary nodules (SPNs), focusing on clinical applications and limitations in pulmonary medicine. It explores AI's utility in imaging and blood/tissue-based diagnostics, emphasizing practical challenges over the technical details of deep learning methods. AI enhances computed tomography (CT)-based computer-aided diagnosis (CAD) through steps such as nodule detection, false positive reduction, segmentation, and classification, leveraging convolutional neural networks and machine learning. Segmentation achieves Dice similarity coefficients of 0.70-0.92, while malignancy classification yields areas under the curve of 0.86-0.97. AI-driven blood tests, incorporating RNA sequencing and clinical data, report AUCs up to 0.907 for distinguishing benign from malignant nodules. However, most models lack prospective, multi-institutional validation, risking overfitting and limited generalizability. The "black box" nature of AI, coupled with inputs (e.g., nodule size, smoking history) that overlap with physician assessments, complicates integration into clinical workflows and precludes standard Bayesian analysis. AI shows promise for SPN diagnosis but requires rigorous validation in diverse populations and better clinician training for effective use. Rather than replacing judgment, AI should serve as a second opinion, with its reported performance metrics understood as study-specific and not directly applicable at the bedside because of double-counting issues.
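The double-counting point can be made concrete with a simple Bayesian calculation: treating an AI malignancy score as an independent test and updating a pretest probability that was itself estimated from nodule size and smoking history uses the same evidence twice. The sketch below shows only the mechanics of the update, with hypothetical numbers; the conditional-independence caveat in the comments is the reason the review says such an update is not valid at the bedside.

```python
def post_test_probability(pretest_prob: float, sensitivity: float, specificity: float) -> float:
    """Standard Bayesian update for a positive binary test result."""
    lr_positive = sensitivity / (1.0 - specificity)   # positive likelihood ratio
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    post_odds = pretest_odds * lr_positive
    return post_odds / (1.0 + post_odds)

# Hypothetical numbers: 30% pretest probability of malignancy, AI "positive" with
# sensitivity 0.90 and specificity 0.80. The update is only valid if the AI result is
# conditionally independent of the inputs (size, smoking history) already baked into
# the pretest estimate, which is exactly what overlapping inputs violate.
print(round(post_test_probability(0.30, 0.90, 0.80), 3))   # ~0.659
```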

V3DQutrit: a volumetric medical image segmentation based on a 3D qutrit optimized modified tensor ring model.

Verma P, Kumar H, Shukla DK, Satpathy S, Alsekait DM, Khalaf OI, Alzoubi A, Alqadi BS, AbdElminaam DS, Kushwaha A, Singh J

PubMed · May 6, 2025
This paper introduces 3D-QTRNet, a novel quantum-inspired neural network for volumetric medical image segmentation. Unlike conventional CNNs, which suffer from slow convergence and high complexity, and existing quantum-inspired neural networks (QINNs), which are limited to grayscale segmentation, our approach leverages qutrit encoding and tensor ring decomposition. These techniques improve segmentation accuracy, optimize memory usage, and accelerate model convergence. The proposed model demonstrates superior performance on the BRATS19 and Spleen datasets, outperforming state-of-the-art CNN and quantum models in terms of Dice similarity and segmentation precision. This work bridges the gap between quantum computing and medical imaging, offering a scalable solution for real-world applications.
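Tensor ring (TR) decomposition, one of the two ingredients named above, represents a d-way tensor by a closed ring of third-order cores; each entry is recovered as the trace of a product of core slices. The sketch below reconstructs a small tensor from hypothetical TR cores with NumPy and illustrates only the decomposition itself, not the 3D-QTRNet architecture.

```python
import numpy as np

def tensor_ring_reconstruct(cores: list[np.ndarray]) -> np.ndarray:
    """Reconstruct a full tensor from tensor-ring cores G_k of shape (r_k, n_k, r_{k+1}),
    with r_{d+1} = r_1. Entry T[i1,...,id] = trace(G_1[:, i1, :] @ ... @ G_d[:, id, :])."""
    shape = tuple(core.shape[1] for core in cores)
    full = np.zeros(shape)
    for idx in np.ndindex(shape):
        prod = cores[0][:, idx[0], :]
        for k in range(1, len(cores)):
            prod = prod @ cores[k][:, idx[k], :]
        full[idx] = np.trace(prod)
    return full

# Hypothetical cores for a small 4 x 5 x 6 tensor with TR ranks (2, 3, 2).
rng = np.random.default_rng(0)
ranks = [2, 3, 2, 2]   # r_1, r_2, r_3, r_4 = r_1 closes the ring
dims = [4, 5, 6]
cores = [rng.normal(size=(ranks[k], dims[k], ranks[k + 1])) for k in range(3)]
tensor = tensor_ring_reconstruct(cores)
print(tensor.shape)    # (4, 5, 6)
```

The storage cost of the cores grows linearly with the number of modes instead of exponentially, which is the memory advantage the abstract alludes to.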

Diagnosis of Sarcopenia Using Convolutional Neural Network Models Based on Muscle Ultrasound Images: Prospective Multicenter Study.

Chen ZT, Li XL, Jin FS, Shi YL, Zhang L, Yin HH, Zhu YL, Tang XY, Lin XY, Lu BL, Wang Q, Sun LP, Zhu XX, Qiu L, Xu HX, Guo LH

PubMed · May 6, 2025
Early detection is clinically crucial for the strategic handling of sarcopenia, yet the screening process, which includes assessments of muscle mass, strength, and function, remains complex and difficult to access. This study aims to develop a convolutional neural network model based on ultrasound images to simplify the diagnostic process and promote its accessibility. This study prospectively evaluated 357 participants (101 with sarcopenia and 256 without sarcopenia) for training, encompassing three types of data: muscle ultrasound images, clinical information, and laboratory information. Three monomodal models based on each data type were developed in the training cohort. The data type with the best diagnostic performance was selected to develop the bimodal and multimodal models by adding one or two further data types. Subsequently, the diagnostic performance of these models was compared. The contribution ratios of the different data types were further analyzed for the multimodal model. A sensitivity analysis was performed by excluding 86 cases with missing values and retaining 271 complete cases for robustness validation. By comprehensive comparison, we finally identified the optimal model (SARCO model) as the convenient solution. Moreover, the SARCO model underwent an external validation with 145 participants (68 with sarcopenia and 77 without sarcopenia) and a proof-of-concept validation with 82 participants (19 with sarcopenia and 63 without sarcopenia) from two other hospitals. The monomodal model based on ultrasound images achieved the highest area under the receiver operating characteristic curve (AUC) of 0.827 and F1-score of 0.738 among the three monomodal models. Sensitivity analysis on complete data further confirmed the superiority of the ultrasound images model (AUC: 0.851; F1-score: 0.698). The performance of the multimodal model demonstrated statistically significant differences compared to the best monomodal model (AUC: 0.845 vs 0.827; P=.02) as well as the two bimodal models based on ultrasound images+clinical information (AUC: 0.845 vs 0.826; P=.03) and ultrasound images+laboratory information (AUC: 0.845 vs 0.832; P=.035). On the other hand, ultrasound images contributed the most evidence for diagnosing sarcopenia (0.787) and nonsarcopenia (0.823) in the multimodal model. Sensitivity analysis showed consistent performance trends, with ultrasound images remaining the dominant contributor (Shapley additive explanation values: 0.810 for sarcopenia and 0.795 for nonsarcopenia). After comprehensive clinical analysis, the monomodal model based on ultrasound images was identified as the SARCO model. Subsequently, the SARCO model achieved satisfactory prediction performance in the external validation and proof-of-concept validation, with AUCs of 0.801 and 0.757 and F1-scores of 0.727 and 0.666, respectively. All three types of data contributed to sarcopenia diagnosis, while ultrasound images played a dominant role in model decision-making. The SARCO model based on ultrasound images is potentially the most convenient solution for diagnosing sarcopenia. Chinese Clinical Trial Registry ChiCTR2300073651; https://www.chictr.org.cn/showproj.html?proj=199199.
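The multimodal design compared above fuses image features with clinical and laboratory variables. The sketch below is a hypothetical late-fusion layout in PyTorch (backbone choice, feature sizes, and tabular dimensions are assumptions; the paper does not specify its architecture at this level), intended only to show how such a bimodal/multimodal model is typically wired.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultimodalSarcopeniaNet(nn.Module):
    """Hypothetical late fusion: CNN features from muscle ultrasound concatenated with
    clinical and laboratory variables, followed by a small classification head."""
    def __init__(self, n_clinical: int = 8, n_laboratory: int = 6):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any 2D CNN backbone would do
        backbone.fc = nn.Identity()                # keep the 512-d pooled image feature
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(512 + n_clinical + n_laboratory, 64),
            nn.ReLU(),
            nn.Linear(64, 2),                      # sarcopenia vs non-sarcopenia
        )

    def forward(self, image, clinical, laboratory):
        img_feat = self.backbone(image)            # (B, 512)
        fused = torch.cat([img_feat, clinical, laboratory], dim=1)
        return self.head(fused)

# Hypothetical batch: 4 ultrasound images plus tabular covariates.
model = MultimodalSarcopeniaNet()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 8), torch.randn(4, 6))
print(logits.shape)    # torch.Size([4, 2])
```

Dropping the `clinical` and `laboratory` branches recovers a monomodal image model of the kind the study ultimately selected as the SARCO model.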

Machine Learning Approach to 3×4 Mueller Polarimetry for Complete Reconstruction of Diagnostic Polarimetric Images of Biological Tissues.

Chae S, Huang T, Rodriguez-Nunez O, Lucas T, Vanel JC, Vizet J, Pierangelo A, Piavchenko G, Genova T, Ajmal A, Ramella-Roman JC, Doronin A, Ma H, Novikova T

PubMed · May 6, 2025
The translation of imaging Mueller polarimetry to clinical practice is often hindered by the large footprint and relatively slow acquisition speed of existing instruments. Using a polarization-sensitive camera as a detector may reduce instrument dimensions and allow data streaming at video rate. However, only the first three rows of a complete 4×4 Mueller matrix can be measured. To overcome this hurdle, we developed a machine learning approach using a sequential neural network algorithm to reconstruct the missing elements of a Mueller matrix from the measured elements of the first three rows. The algorithm was trained and tested on a dataset of polarimetric images of various excised human tissues (uterine cervix, colon, skin, brain) acquired with two different imaging Mueller polarimeters operating in either reflection (wide-field imaging system) or transmission (microscope) configuration at wavelengths of 550 nm and 385 nm, respectively. Reconstruction performance was evaluated using various error metrics, all of which confirmed low error values. The reconstruction of full images of the fourth row of the Mueller matrix with GPU parallelization and an increased batch size took less than 50 milliseconds. This suggests that a machine learning approach with parallel processing of all image pixels, combined with a partial Mueller polarimeter operating at video rate, can effectively substitute for a complete Mueller polarimeter and produce accurate maps of depolarization, linear retardance, and orientation of the optical axis of biological tissues, which can be used for medical diagnosis in clinical settings.
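The reconstruction task reduces to a per-pixel regression from the 12 measured Mueller matrix elements (rows 1-3) to the 4 missing elements of the fourth row. A minimal sequential network of the kind described, written in PyTorch with a hypothetical layer layout and training loop (the paper's exact architecture and optimizer settings are not reproduced here), could look like this:

```python
import torch
import torch.nn as nn

# Per-pixel regression: 12 measured Mueller elements (rows 1-3) -> 4 missing elements (row 4).
# Layer sizes below are assumptions; the published architecture may differ.
model = nn.Sequential(
    nn.Linear(12, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 4),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Hypothetical training batch: all pixels of an image flattened into rows.
measured = torch.randn(4096, 12)   # first three rows, per pixel
target = torch.randn(4096, 4)      # fourth row from a complete 4x4 polarimeter

for _ in range(5):                 # a few illustrative optimization steps
    optimizer.zero_grad()
    loss = loss_fn(model(measured), target)
    loss.backward()
    optimizer.step()

# At inference, every pixel of a 3x4 acquisition is mapped to its missing row in one batched pass,
# which is what makes GPU-parallel, sub-50-ms full-image reconstruction plausible.
with torch.no_grad():
    reconstructed_row4 = model(measured)
```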

Transfer learning-based attenuation correction in 99mTc-TRODAT-1 SPECT for Parkinson's disease using realistic simulation and clinical data.

Huang W, Jiang H, Du Y, Wang H, Sun H, Hung GU, Mok GSP

PubMed · May 6, 2025
Dopamine transporter (DAT) SPECT is an effective tool for early Parkinson's disease (PD) detection but is heavily hampered by attenuation, and attenuation correction (AC) is the most important of the corrections required. Transfer learning (TL) with fine-tuning (FT) of a pre-trained model has shown potential to enhance deep learning (DL)-based AC methods. In this study, we investigate leveraging realistic Monte Carlo (MC) simulation data to create a pre-trained model for TL-based AC (TLAC) to improve AC performance for DAT SPECT. A total of 200 digital brain phantoms with realistic 99mTc-TRODAT-1 distribution were used to generate realistic noisy SPECT projections using the MC SIMIND program and an analytical projector. One hundred real clinical 99mTc-TRODAT-1 brain SPECT datasets were also retrospectively analyzed. All projections were reconstructed with and without CT-based attenuation correction (CTAC/NAC). A 3D conditional generative adversarial network (cGAN) was pre-trained using 200 pairs of simulated NAC and CTAC SPECT data. Subsequently, 8, 24, and 80 pairs of clinical NAC and CTAC DAT SPECT data were employed to fine-tune the pre-trained U-Net generator of the cGAN (TLAC-MC). Comparisons were made against the model without FT (DLAC-MC), training on purely limited clinical data (DLAC-CLI), clinical data with data augmentation (DLAC-AUG), mixed MC and clinical data (DLAC-MIX), TL using analytical simulation data (TLAC-ANA), and Chang's AC (ChangAC). All datasets used for the DL-based methods were split into 7/8 for training and 1/8 for validation, and 1-/2-/5-fold cross-validation was applied to test all 100 clinical datasets, depending on the number of clinical datasets used in the training model. With 8 available clinical datasets, TLAC-MC achieved the best results in normalized mean squared error (NMSE) and structural similarity index measure (SSIM) (TLAC-MC: NMSE = 0.0143 ± 0.0082, SSIM = 0.9355 ± 0.0203), followed by DLAC-AUG, DLAC-MIX, TLAC-ANA, DLAC-CLI, DLAC-MC, ChangAC, and NAC. Similar trends exist when increasing the number of clinical datasets. For TL-based AC methods, the fewer clinical datasets available for FT, the greater the improvement compared with DLAC-CLI trained on the same number of clinical datasets. Joint histogram analysis and Bland-Altman plots of SBR results also demonstrate consistent findings. TLAC is feasible for DAT SPECT with a pre-trained model generated purely from simulation data. TLAC-MC demonstrates superior performance over other DL-based AC methods, particularly when limited clinical datasets are available. The closer the pre-training data are to the target domain, the better the performance of the TLAC model.
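The quantitative comparison above relies on NMSE and SSIM between corrected volumes. The sketch below computes both for a pair of hypothetical 3D volumes, using scikit-image for SSIM and the usual sum-of-squares normalization for NMSE (the exact normalization convention used in the paper is an assumption).

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def nmse(predicted: np.ndarray, reference: np.ndarray) -> float:
    """Normalized mean squared error: ||pred - ref||^2 / ||ref||^2 (one common convention)."""
    return float(np.sum((predicted - reference) ** 2) / np.sum(reference ** 2))

# Hypothetical DL-based AC output vs. CT-based AC reference, as 3D SPECT volumes.
rng = np.random.default_rng(1)
reference = rng.random((64, 64, 64)).astype(np.float32)
predicted = reference + 0.05 * rng.normal(size=reference.shape).astype(np.float32)

print("NMSE:", nmse(predicted, reference))
print("SSIM:", ssim(predicted, reference,
                    data_range=float(reference.max() - reference.min())))
```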

Molecular mechanisms explaining sex-specific functional connectivity changes in chronic insomnia disorder.

Yu L, Shen Z, Wei W, Dou Z, Luo Y, Hu D, Lin W, Zhao G, Hong X, Yu S

PubMed · May 6, 2025
This study investigates the hypothesis that chronic insomnia disorder (CID) is characterized by sex-specific changes in resting-state functional connectivity (rsFC), with certain molecular mechanisms potentially influencing CID's pathophysiology by altering rsFC in relevant networks. Utilizing a resting-state functional magnetic resonance imaging (fMRI) dataset of 395 participants, including 199 CID patients and 196 healthy controls, we examined sex-specific rsFC effects, particularly in the default mode network (DMN) and five insomnia-genetically vulnerable regions of interest (ROIs). By integrating gene expression data from the Allen Human Brain Atlas, we identified genes linked to these sex-specific rsFC alterations and conducted enrichment analysis to uncover underlying molecular mechanisms. Additionally, we simulated the impact of sex differences in rsFC using different sex compositions in our dataset and employed machine learning classifiers to distinguish CID from healthy controls based on sex-specific rsFC data. We identified both shared and sex-specific rsFC changes in the DMN and the five genetically vulnerable ROIs, with gene expression variations associated with these sex-specific connectivity differences. Enrichment analysis highlighted genes involved in synaptic signaling, ion channels, and immune function as potential contributors to CID pathophysiology through their influence on connectivity. Furthermore, our findings demonstrate that different sex compositions significantly affect study outcomes and that sex-specific rsFC data yield higher diagnostic performance than combined-sex data. This study uncovered both shared and sex-specific connectivity alterations in CID, providing molecular insights into its pathophysiology and suggesting that sex differences be considered in future fMRI-based diagnostic and treatment strategies.
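Resting-state functional connectivity of the kind analyzed here is typically computed as pairwise correlations between ROI time series, and the resulting connectivity vectors can feed a classifier. The sketch below is a generic illustration with hypothetical data and a logistic regression classifier, not the study's pipeline or ROI set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_subjects, n_rois, n_timepoints = 100, 10, 200

# Hypothetical BOLD time series per subject; labels 1 = CID, 0 = healthy control.
timeseries = rng.normal(size=(n_subjects, n_timepoints, n_rois))
labels = rng.integers(0, 2, size=n_subjects)

def connectivity_features(ts: np.ndarray) -> np.ndarray:
    """Upper-triangular Pearson correlations between ROI time series (the rsFC vector)."""
    corr = np.corrcoef(ts.T)                 # (n_rois, n_rois)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

X = np.stack([connectivity_features(ts) for ts in timeseries])
scores = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5, scoring="roc_auc")
print("cross-validated AUC:", scores.mean())
```

Fitting the same pipeline separately on male-only, female-only, and combined-sex subsets is the kind of comparison behind the finding that sex-specific rsFC data classify better than combined-sex data.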