
Rau S, Fink A, Strecker R, Nickel MD, Michel LJ, Sacalean V, Kästingschäfer KF, Klemm D, Rau A, Bamberg F, Weiss J, Russe MF

PubMed · Oct 17, 2025
To evaluate the impact of accelerated, deep learning-based reconstructed T1-weighted VIBE Dixon images on fat-signal fraction (FSF) quantification compared with standard protocols. In this prospective single-center study, patients undergoing clinically indicated abdominal MRI received three T1-weighted VIBE acquisitions on a 1.5 T scanner: a standard sequence and two accelerated sequences ("fast" and "ultra-fast"). The accelerated scans employed higher CAIPIRINHA parallel-imaging factors, partial Fourier sampling, and deep learning-based image reconstruction. Whole-liver FSF was then determined from the in-phase and opposed-phase reconstructions using a validated automated liver segmentation tool. Segmentation quality was assessed visually and by comparing liver volumes. Statistical analyses included the mean absolute error and Spearman's correlation for FSF agreement. Between March 2025 and May 2025, 60 patients (mean age, 63.7 ± 13.9 y; 55% female) were enrolled. Acquisition times were 15 seconds for the standard sequence and 10 and 6 seconds for the fast and ultra-fast sequences, respectively. Whole-liver segmentations from the fast and ultra-fast sequences correlated highly with the standard sequence (ρ > 0.975, both P < 0.001), with mean absolute errors of only 1.1% and 1.5%, respectively. Liver fat quantification likewise showed high concordance across protocols: median FSF was 2.3% (standard), 2.6% (fast), and 2.4% (ultra-fast), with a mean absolute error <0.6% from standard for both accelerated protocols (all ρ > 0.92, P < 0.001). Highly accelerated, deep learning-enhanced MRI sequences thus enable reliable quantification of liver fat content in the low fat-fraction range, with a substantial reduction in scan time.
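A minimal sketch of the agreement analysis described above, assuming per-patient whole-liver FSF values from each protocol are already available as arrays (all values and variable names here are hypothetical placeholders):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-patient whole-liver fat-signal fractions (%) per protocol
fsf_standard = np.array([2.1, 3.4, 1.8, 5.2, 2.6])
fsf_fast = np.array([2.3, 3.6, 1.9, 5.5, 2.8])

# Mean absolute error of the accelerated protocol relative to the standard protocol
mae = np.mean(np.abs(fsf_fast - fsf_standard))

# Spearman rank correlation for FSF agreement
rho, p_value = spearmanr(fsf_fast, fsf_standard)

print(f"MAE = {mae:.2f}%  rho = {rho:.3f}  p = {p_value:.4f}")
```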

Wen Y, Mahmoud MA, Wu W, Chen H, Zhang Y, Pan X, Guan Y

PubMed · Oct 17, 2025
To explore the diagnostic value of a deep learning-based multi-classification model in distinguishing the pathological subtypes of lung adenocarcinoma or precursor glandular lesions presenting as pure ground-glass nodules (pGGN) on CT. A total of 590 pGGN cases with pathologically confirmed lung adenocarcinoma or precursor glandular lesions were collected retrospectively; 462 cases were used as the training and testing set, and 128 cases as the external validation set. The model was built on the Swin Transformer network and trained with five-fold cross-validation. The diagnostic performance of the deep learning model was compared with that of radiologists on the external validation set, and classification performance was evaluated using the confusion matrix, accuracy, precision, and F1-score. The accuracies of the deep learning model on the training and testing sets were 95.21% and 91.41%, respectively, with an ensemble accuracy of 94.65%. The accuracy, precision, and recall of the optimal model were 87.01%, 87.57%, and 87.01%, respectively, with an F1-score of 87.09%. On the external validation set, the model achieved an accuracy of 91.41% and an F1-score of 91.42%, outperforming the radiologists. The Swin Transformer-based multi-classification model can therefore noninvasively predict the pathological subtypes of lung adenocarcinoma or precursor glandular lesions presenting as pGGN, with classification performance superior to that of radiologists, and may serve as a preoperative auxiliary diagnostic tool to improve diagnostic accuracy and help optimize patient prognosis.
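The paper's code is not published; the sketch below shows one plausible way to set up the described five-fold cross-validation around a Swin Transformer classifier using timm. The model variant, the four-class setup, and the placeholder tensors are all assumptions for illustration:

```python
import numpy as np
import timm
import torch
from sklearn.model_selection import StratifiedKFold

NUM_CLASSES = 4  # assumption: number of pathological subtypes

# Hypothetical placeholder data: 462 nodule crops resized to 224x224
images = torch.randn(462, 3, 224, 224)
labels = np.random.randint(0, NUM_CLASSES, size=462)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(skf.split(np.zeros(len(labels)), labels)):
    # Swin Transformer backbone with a fresh classification head per fold
    model = timm.create_model(
        "swin_tiny_patch4_window7_224",  # variant is an assumption
        pretrained=False,
        num_classes=NUM_CLASSES,
    )
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    criterion = torch.nn.CrossEntropyLoss()
    # ... standard training loop over images[train_idx], evaluation on images[val_idx] ...
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val")
```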

Padda I, Sebastian SA, Sethi Y, Sineri C

PubMed · Oct 17, 2025
Chronic Total Occlusions (CTOs) remain among the most complex lesions encountered in percutaneous coronary intervention (PCI), presenting significant technical and clinical challenges due to ambiguous vessel anatomy, lesion heterogeneity, and high operator variability. Although recent advancements in interventional techniques have improved success rates, procedural outcomes remain variable. The integration of Artificial Intelligence (AI) into CTO management offers the potential to optimize each stage of care, including lesion assessment, procedural planning, real-time intra-procedural support, and post-procedural outcome prediction. This review synthesizes current evidence on AI applications across the CTO care continuum, highlighting the role of deep learning in imaging modalities such as optical coherence tomography (OCT) and coronary computed tomography angiography (CCTA), as well as machine learning models such as XGBoost for procedural strategy and outcome forecasting. Commercial platforms including Ultreon OCT and HeartFlow FFRCT demonstrate early translational value, although validation in CTO-specific contexts remains limited. Ethical considerations such as algorithmic transparency, data generalizability, and clinician trust are also addressed, with attention to explainable AI methods such as SHAP and LIME. As AI technologies continue to advance, future research should prioritize the development of interpretable, clinically validated models and encourage multidisciplinary collaboration to support ethical integration into interventional cardiology and improve patient outcomes.
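For readers unfamiliar with the techniques the review cites, here is a minimal, hypothetical sketch of an XGBoost outcome model explained with SHAP. The features and labels are synthetic placeholders, not CTO data:

```python
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)

# Synthetic placeholder features (stand-ins for, e.g., lesion length or calcification score)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Gradient-boosted trees for a binary outcome (illustrative only)
model = xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X, y)

# SHAP values attribute each prediction to individual input features,
# which is the explainability approach the review highlights
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```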

Li S, Wu H, Guo L, Wang X, Shu G, Li X, Sun SK

PubMed · Oct 17, 2025
Achieving high resolution while minimizing contrast agent dose remains a key goal, and a major challenge, in contrast-enhanced computed tomography (CT) imaging. Herein, we propose an artificial intelligence-assisted, low-dose, high-atomic-number contrast agent for ultrahigh-resolution CT imaging. As a proof of concept, high-quality PEGylated hafnium oxide nanoparticles (DA-HfO₂ NPs) are synthesized, exhibiting superior X-ray attenuation, high hafnium content (36%), excellent water solubility, an appropriate hydrodynamic size (13.5 nm), and a prolonged circulation half-life (161.9 min). At high dose, DA-HfO₂ NPs enable extended ultrahigh-resolution vascular imaging with a spatial resolution of 0.15 mm and a time window of at least 60 min. More importantly, when combined with artificial intelligence, the low-dose contrast agent (at 25% of the standard dose) achieves imaging quality comparable to that of the high-dose agent in both contrast density and spatial resolution, while simultaneously improving biosafety. This strategy enables high-resolution imaging at reduced contrast agent doses and offers a promising approach for sensitive and safe CT angiography.
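The paper does not describe its network; purely as an illustration of the general idea (learning to map low-dose contrast images toward high-dose quality), a toy residual CNN might look like the sketch below. The architecture, shapes, and training pair are all assumptions:

```python
import torch
import torch.nn as nn

class ResidualEnhancer(nn.Module):
    """Toy enhancer: predicts a residual that is added back to the low-dose image."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # residual connection preserves the input signal

# Hypothetical training pair: low-dose (25%) input, high-dose target
low_dose = torch.randn(1, 1, 256, 256)
high_dose = torch.randn(1, 1, 256, 256)

model = ResidualEnhancer()
loss = nn.functional.mse_loss(model(low_dose), high_dose)
loss.backward()
print(f"toy MSE loss: {loss.item():.4f}")
```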

Lotfimarangloo S, Rahman ST, Park MA, Fei B, Badawi RD, Bowen SL

PubMed · Oct 17, 2025
PET data-driven attenuation correction (AC) methods, including deep learning, are attractive options for quantitative brain imaging on CT-less brain PET systems and low-dose PET/CT. However, current schemes have performance and practical limitations. We previously developed a CT-less, transmission-aided AC that combines coincidences from a weak positron source and from the patient to estimate attenuation with physics alone. In this work, we aim to optimize and assess this new AC method for human [¹⁸F]FDG neuroimaging on whole-body PET/CT. Our approach, TRansmission-aided μ-map reconstruction (TRU) AC, includes 1) a low-profile, physically fixed transmission source filled with ~14 MBq of ¹⁸F, 2) a modified maximum-likelihood reconstruction of attenuation and activity algorithm, and 3) scatter corrections using the exam data alone. We imaged N = 5 patients with the transmission source immediately after their clinical [¹⁸F]FDG PET/CT. The clinically consistent protocol included a CT and a 10-minute brain-focused PET exam. Using only this 10-minute patient PET acquisition, radiotracer images were reconstructed with the vendor's algorithm and either TRU-AC or CT-AC (reference standard), with all else matched. For quantitative analysis, we placed brain-structure volumes of interest (VOIs) using an atlas and computed the error in mean standardized uptake values (SUVs) of TRU-AC relative to CT-AC. TRU-AC PET showed qualitatively strong agreement with CT-AC. In the VOI analysis, the absolute relative error in SUVs for TRU-AC was within 3.6% across all brain structures and patients. The normalized root mean square error of activity bias for TRU-AC was 1.8%, and voxel-wise noise in the cerebellum showed a very minor increase of 0.2%. Bland-Altman analysis demonstrated statistically significant agreement between TRU-AC and CT-AC, assuming a maximum allowed difference of ±5%. TRU-AC enables quantitative PET for human neuroimaging. This approach may particularly benefit exams where deep learning-based AC schemes show reduced performance, including those focused on radiotracer development, new patient cohorts, and/or pathologies that often lack sufficient training data.
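A minimal sketch of the quantitative comparison described above, assuming per-VOI mean SUVs under both attenuation corrections are available (the values below are placeholders, not study data):

```python
import numpy as np

# Hypothetical per-VOI mean SUVs under each attenuation correction
suv_ct = np.array([6.2, 5.1, 7.8, 4.4, 6.9])   # CT-AC (reference standard)
suv_tru = np.array([6.3, 5.0, 7.9, 4.5, 6.8])  # TRU-AC

# Relative error of TRU-AC vs. CT-AC per VOI (%)
rel_err = 100.0 * (suv_tru - suv_ct) / suv_ct
print("max |relative error|: %.1f%%" % np.max(np.abs(rel_err)))

# Normalized root mean square error of activity bias (%)
nrmse = 100.0 * np.sqrt(np.mean((suv_tru - suv_ct) ** 2)) / np.mean(suv_ct)
print("NRMSE: %.1f%%" % nrmse)

# Bland-Altman quantities: per-VOI mean vs. difference, with limits of agreement
diffs = suv_tru - suv_ct
loa = 1.96 * np.std(diffs, ddof=1)
print(f"bias = {np.mean(diffs):.3f}, limits of agreement = +/- {loa:.3f}")
```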

da Silva-Filho JE, de Aguiar AWO, Silva CM, de Albuquerque DF, Gurgel-Filho ED

PubMed · Oct 17, 2025
Artificial intelligence (AI) is rapidly transforming diagnostic imaging, raising important questions about its role as a collaborative tool or a potential replacement for human expertise. This rapid communication reviews current evidence on AI applications in diagnostic imaging, focusing on clinical, ethical, and legal challenges. Although AI models show promise in detecting abnormalities and optimizing workflows, many remain limited by narrow training datasets and lack external validation. Ethical issues such as algorithm transparency, bias, and accountability are discussed, alongside the financial and practical implications of integrating AI tools into clinical practice, highlighting the need for clear guidelines and regulatory oversight. Radiologists continue to play a crucial role in interpreting images and validating AI outputs to avoid diagnostic errors, while the potential risks of overreliance on AI, including erosion of diagnostic skills among clinicians, are also emphasized. This communication advocates for responsible AI implementation that supports, rather than replaces, the expertise and judgment of healthcare professionals.

Meau-Petit V, Mottez C, Bhojnagarwala B, Montasser M, Singh Y, Loganathan PK

PubMed · Oct 17, 2025
Six-region lung ultrasound (LUS) scores show good predictive value for surfactant need in preterm infants but rely on a fixed threshold, which may lead to misclassification near the cut-off, and the selection of these 6 regions lacks data-driven justification. This study explored whether evaluating individual regions, and combinations of regions, could improve predictive accuracy and utility. Data from preterm infants born at ≤34 weeks and enrolled in the Serial Lung Ultrasound for Surfactant Replacement Therapy (SLURP) cohort study were analyzed to develop predictive models for surfactant administration based on regional LUS scores. Univariate, bivariate, and machine learning analyses were conducted to identify the most informative lung regions. Rule-based, decision tree, and logistic regression models were then developed, compared with the 6-region model, and validated on an external dataset. The training set consisted of 77 patients from the SLURP cohort study. The rule-based, decision tree, and logistic regression models showed the best performance, primarily using 2 lung regions: left lateral and left upper posterior. A refined model that included the right upper anterior (RUA) region further improved performance. On the external test set (n = 42), the rule-based model with RUA achieved the highest accuracy (0.93) and the lowest false-negative rate (0.11), outperforming the 6-region model. Adding more regions did not enhance accuracy. A simplified, rule-based model that accounts for the differential predictive value of individual lung regions may enhance the accuracy of LUS-based prediction of surfactant need in preterm infants, while being more accessible, effective, and time-efficient for clinicians.
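The abstract does not give the derived rules or cut-offs; a hypothetical sketch of what a two-to-three-region rule-based predictor could look like is shown below, with thresholds invented purely for illustration:

```python
def predict_surfactant_need(left_lateral: int,
                            left_upper_posterior: int,
                            right_upper_anterior: int) -> bool:
    """Toy rule-based predictor from regional LUS scores (0-3 per region).

    The thresholds below are illustrative assumptions, not the SLURP-derived rules.
    """
    # Primary rule: the two most informative regions
    if left_lateral >= 2 and left_upper_posterior >= 2:
        return True
    # Refinement: the right upper anterior region rescues borderline cases
    if left_lateral + left_upper_posterior >= 3 and right_upper_anterior >= 2:
        return True
    return False

print(predict_surfactant_need(2, 2, 0))  # True
print(predict_surfactant_need(1, 2, 2))  # True
print(predict_surfactant_need(1, 1, 1))  # False
```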

Lu D, Qin C, Wang LF, Li LL, Li Y, Sun LP, Shi H, Zhou BY, Guan X, Miao Y, Han H, Zhou JH, Xu HX, Zhao CK

PubMed · Oct 17, 2025
This study aimed to develop and validate an interpretable radiomics model using quantitative features from B-mode ultrasound (BMUS) and contrast-enhanced ultrasound (CEUS) for predicting macrotrabecular-massive (MTM) hepatocellular carcinoma (HCC). From October 2020 to September 2023, 344 patients (mean age: 58.20 ± 10.70 years; 275 men) with surgically resected HCC were retrospectively enrolled from three medical centers. Radiomics features were extracted from BMUS and CEUS, followed by a multi-step feature selection process. Three models were established: a BMUS_R model (BMUS radiomics features), a BM+CEUS_R model (BMUS and CEUS radiomics features), and a hybrid_R+C model (integrating clinical indicators and radiomics features). Their performance was compared with a conventional clinic-radiological (C_C+R) model using the area under the receiver operating characteristic curve (AUC), and the SHapley Additive exPlanations (SHAP) method was used to interpret model behavior. The models' potential for predicting recurrence-free survival (RFS) was further analyzed. Among ten distinct machine learning classifiers evaluated, the AdaBoost algorithm demonstrated the highest classification performance. The AUCs of the BM+CEUS_R model for identifying MTM-HCC were higher than those of the BMUS_R model and the C_C+R model in both the validation set (0.880 vs. 0.720 and 0.658, both p < 0.05) and the test set (0.878 vs. 0.605 and 0.594, both p < 0.05). No statistical differences were observed between the BM+CEUS_R model and the hybrid_R+C model in either set (p > 0.05). Additionally, the AdaBoost-based BM+CEUS_R model showed promise in stratifying patients by early RFS (p < 0.001). The AdaBoost-based BM+CEUS_R model is therefore a promising tool for preoperative identification of MTM-HCC and may also be useful in predicting prognosis.
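A minimal sketch of the classifier-evaluation step, assuming radiomics features have already been extracted and selected; the data below are synthetic placeholders standing in for the BMUS + CEUS feature matrix:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic placeholder for selected BMUS + CEUS radiomics features
X = rng.normal(size=(344, 20))
y = (X[:, 0] + rng.normal(scale=0.8, size=344) > 0.5).astype(int)  # MTM vs. non-MTM

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# AdaBoost was the best of the ten classifiers evaluated in the study
clf = AdaBoostClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.3f}")
```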

Fu J, Cong P, Zeng T, Hou X, Zhao B, Liu X, Sun Y

PubMed · Oct 17, 2025
Background: Non-destructive testing (NDT) is crucial for the preservation and restoration of ancient wooden structures, with computed tomography (CT) increasingly utilized in this field. However, practical CT examinations of these structures, which are often characterized by complex configurations, large dimensions, and on-site constraints, frequently encounter difficulties in acquiring full-angle projection data. Consequently, images reconstructed under limited-angle conditions suffer from poor quality and severe artifacts, hindering accurate assessment of critical internal features such as mortise-tenon joints and incipient damage. Objective: This study aims to develop a novel algorithm capable of achieving high-quality image reconstruction from incomplete, limited-angle projection data. Methods: We propose CADRE (Contour-guided Alternating Direction Method of Multipliers-optimized Deep Radon Enhancement), an unsupervised deep learning reconstruction framework. CADRE integrates the ADMM optimization strategy, the learning paradigm of Deep Radon Prior (DRP) networks, and a geometric contour-guidance mechanism. This approach enhances reconstruction performance by iteratively optimizing network parameters and input images, without requiring large-scale paired training data, rendering it particularly suitable for cultural heritage applications. Results: Systematic validation using both a digital dougong simulation model of the Yingxian Wooden Pagoda and a physical wooden dougong model from Foguang Temple demonstrates that, under typical 90° and 120° limited-angle conditions, the CADRE algorithm significantly outperforms traditional FBP, the iterative reconstruction algorithms SART and ADMM-TV, and other representative unsupervised deep learning methods (Deep Image Prior, DIP; Residual Back-Projection with DIP, RBP-DIP; DRP). This superiority is evident in quantitative metrics such as PSNR and SSIM, as well as in visual quality, including artifact suppression and preservation of structural detail. CADRE accurately reproduces internal mortise-tenon configurations and fine features within ancient timber. Conclusion: The CADRE algorithm provides a robust and efficient solution for limited-angle CT image reconstruction of ancient wooden structures. It overcomes the limitations of existing methods in handling incomplete data, significantly enhances the quality of reconstructed images and the characterization of internal fine structures, and offers strong technical support for the scientific understanding, condition assessment, and precise conservation of cultural heritage.
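CADRE itself is not public; the snippet below only reproduces the limited-angle problem the paper targets, by computing a 120-degree filtered back-projection of a standard phantom with scikit-image. Everything here is illustrative, including the use of the Shepp-Logan phantom as a stand-in for a CT slice of a wooden structure:

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import iradon, radon, resize

# Ground-truth phantom standing in for a CT slice of a wooden structure
image = resize(shepp_logan_phantom(), (256, 256))

# Limited-angle acquisition: projections only over 0-120 degrees
theta_limited = np.linspace(0.0, 120.0, 121, endpoint=False)
sinogram = radon(image, theta=theta_limited)

# Classical FBP reconstruction; expect strong limited-angle artifacts
fbp = iradon(sinogram, theta=theta_limited, filter_name="ramp")

rmse = np.sqrt(np.mean((fbp - image) ** 2))
print(f"FBP RMSE under 120-degree limited angle: {rmse:.4f}")
```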

Oettl FC, Pruneski J, Zsidai B, Yu Y, Cong T, Feldt R, Winkler PW, Hirschmann MT, Samuelsson K

PubMed · Oct 17, 2025
Artificial intelligence (AI) is increasingly used in orthopaedics, yet current models are often limited to narrow, isolated tasks such as analysing an X-ray or predicting a single outcome. This paper introduces AI agents, a new class of AI systems designed to overcome these limitations. Unlike traditional AI, agents can autonomously manage complex, multistep processes that mirror the complete patient journey. They can coordinate tasks from initial diagnosis and surgical scheduling to postoperative monitoring and rehabilitation, acting as intelligent assistants for clinical teams. This review explains what distinguishes AI agents from conventional AI, explores their potential applications in orthopaedic practice, including perioperative workflow optimisation, research acceleration, and intelligent physician support, and discusses the significant implementation and ethical challenges that must be addressed. For the orthopaedic surgeon, understanding AI agents is becoming essential, as these systems offer transformative potential to enhance efficiency, improve patient outcomes, and shape the future of clinical leadership in a technologically advancing field. LEVEL OF EVIDENCE: Level V.