
XVertNet: Unsupervised Contrast Enhancement of Vertebral Structures with Dynamic Self-Tuning Guidance and Multi-Stage Analysis.

Eidlin E, Hoogi A, Rozen H, Badarne M, Netanyahu NS

PubMed | Jul 25, 2025
Chest X-ray is one of the main diagnostic tools in emergency medicine, yet its limited ability to capture fine anatomical details can result in missed or delayed diagnoses. To address this, we introduce XVertNet, a novel deep-learning framework designed to significantly enhance the visualization of vertebral structures in X-ray images. Our framework introduces two key innovations: (1) an unsupervised learning architecture that eliminates reliance on manually labeled training data, a persistent bottleneck in medical imaging, and (2) a dynamic self-tuned internal guidance mechanism featuring an adaptive feedback loop for real-time image optimization. Extensive validation across four major public datasets showed that XVertNet outperforms state-of-the-art enhancement methods, as demonstrated by improvements in evaluation measures such as entropy, the Tenengrad criterion, LPC-SI, TMQI, and PIQE. Furthermore, clinical validation by two board-certified clinicians confirmed that the enhanced images enabled more sensitive examination of vertebral structural changes. The unsupervised nature of XVertNet facilitates immediate clinical deployment without additional training overhead. This represents a substantial advance for emergency radiology, providing a scalable and time-efficient way to improve diagnostic accuracy in high-pressure clinical environments.
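
The abstract does not disclose the internal guidance mechanism, but the feedback-loop idea can be sketched using one of the paper's own evaluation measures, the Tenengrad criterion, to steer a simple gamma enhancement. This is a minimal illustration under assumed details, not the authors' method:

```python
import numpy as np
from scipy import ndimage

def tenengrad(img: np.ndarray) -> float:
    """Tenengrad sharpness: mean squared Sobel gradient magnitude."""
    gx = ndimage.sobel(img, axis=0)
    gy = ndimage.sobel(img, axis=1)
    return float(np.mean(gx**2 + gy**2))

def self_tuned_enhance(img: np.ndarray, gammas=np.linspace(0.4, 2.5, 22)):
    """Pick the gamma correction that maximizes Tenengrad sharpness.

    A crude stand-in for a learned feedback loop: evaluate candidate
    enhancements and keep the one the guidance metric prefers.
    """
    img = (img - img.min()) / (np.ptp(img) + 1e-8)  # normalize to [0, 1]
    best_g, best_score = 1.0, -np.inf
    for g in gammas:
        score = tenengrad(img ** g)
        if score > best_score:
            best_g, best_score = g, score
    return img ** best_g, best_g
```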

Deep Learning-Based Multi-View Echocardiographic Framework for Comprehensive Diagnosis of Pericardial Disease

Jeong, S., Moon, I., Jeon, J., Jeong, D., Lee, J., Kim, J., Lee, S.-A., Jang, Y., Yoon, Y. E., Chang, H.-J.

medRxiv preprint | Jul 25, 2025
Background: Pericardial disease exhibits a wide clinical spectrum, ranging from mild effusions to life-threatening tamponade or constrictive pericarditis. While transthoracic echocardiography (TTE) is the primary diagnostic modality, its effectiveness is limited by operator dependence and incomplete evaluation of functional impact. Existing artificial intelligence models focus primarily on effusion detection, lacking comprehensive disease assessment.
Methods: We developed a deep learning (DL)-based framework that sequentially assesses pericardial disease: (1) morphological changes, including pericardial effusion amount (normal/small/moderate/large) and pericardial thickening or adhesion (yes/no), using five B-mode views, and (2) hemodynamic significance (yes/no), incorporating additional inputs from Doppler and inferior vena cava measurements. The development dataset comprises 2,253 TTEs from multiple Korean institutions (225 for internal testing), and the independent external test set consists of 274 TTEs.
Results: In the internal test set, the model achieved diagnostic accuracies of 81.8-97.3% for pericardial effusion classification, 91.6% for pericardial thickening/adhesion, and 86.2% for hemodynamic significance. Corresponding accuracies in the external test set were 80.3-94.2%, 94.5%, and 85.5%, respectively. Areas under the receiver operating characteristic curve (AUROC) for the three tasks were 0.92-0.99, 0.90, and 0.79 in the internal test set, and 0.95-0.98, 0.85, and 0.76 in the external test set. Sensitivity for detecting pericardial thickening/adhesion and hemodynamic significance was modest (66.7% and 68.8% in the internal test set) but improved substantially when cases with poor image quality were excluded (77.3% and 80.8%). Similar performance gains were observed in subgroups with complete target views and a higher number of available video clips.
Conclusions: This study presents the first DL-based TTE model capable of comprehensive evaluation of pericardial disease, integrating both morphological and functional assessments. The proposed framework demonstrated strong generalizability and aligned with the real-world diagnostic workflow. However, caution is warranted when interpreting results under suboptimal imaging conditions.
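
The staged multi-view design can be illustrated schematically. The sketch below assumes a shared per-view encoder with mean-pooled embeddings and three task heads matching the paper's outputs; the actual architecture is not specified in the abstract:

```python
import torch
import torch.nn as nn

class MultiViewPericardialNet(nn.Module):
    """Sketch of a multi-view classifier: one shared encoder applied per
    view, mean-pooled embeddings, and three task heads (effusion amount,
    thickening/adhesion, hemodynamic significance). All architecture
    details here are assumptions, not the authors' published design."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(          # toy per-view encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        self.effusion_head = nn.Linear(embed_dim, 4)    # normal/small/moderate/large
        self.thickening_head = nn.Linear(embed_dim, 2)  # yes/no
        self.hemo_head = nn.Linear(embed_dim, 2)        # yes/no

    def forward(self, views: torch.Tensor):
        # views: (batch, n_views, 1, H, W); encode each view, then pool
        b, v = views.shape[:2]
        z = self.encoder(views.flatten(0, 1)).view(b, v, -1).mean(dim=1)
        return self.effusion_head(z), self.thickening_head(z), self.hemo_head(z)

model = MultiViewPericardialNet()
logits = model(torch.randn(2, 5, 1, 112, 112))  # five B-mode views per study
```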

CT-free kidney single-photon emission computed tomography for glomerular filtration rate.

Kwon K, Oh D, Kim JH, Yoo J, Lee WW

PubMed | Jul 25, 2025
This study explores an artificial intelligence-based approach to CT-free quantitative SPECT for kidney imaging with Tc-99m DTPA, aiming to estimate glomerular filtration rate (GFR) without relying on CT. A total of 1,000 SPECT/CT scans were used to train and test a deep-learning model that automatically segments the kidneys based on synthetic attenuation maps (µ-maps) derived from SPECT alone. The model employed a residual U-Net with edge attention and was optimized using windowing-maximum normalization and a generalized Dice similarity loss function. Performance evaluation showed strong agreement with manual CT-based segmentation, achieving a Dice score of 0.818 ± 0.056 and minimal volume differences of 17.9 ± 43.6 mL (mean ± standard deviation). An additional set of 50 scans confirmed that GFR calculated from the AI-based CT-free SPECT (109.3 ± 17.3 mL/min) was nearly identical to that from the conventional SPECT/CT method (109.2 ± 18.4 mL/min, p = 0.9396). The CT-free method reduced radiation exposure by up to 78.8% and shortened segmentation time from 40 minutes to under 1 minute. These findings suggest that AI can effectively replace CT in kidney SPECT imaging, maintaining quantitative accuracy while improving safety and efficiency.
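
The named loss, generalized Dice similarity loss, has a standard formulation (Sudre et al., 2017) in which class weights are the inverse squared reference volumes, countering class imbalance. A minimal PyTorch version, assuming one-hot targets; the authors' exact variant may differ:

```python
import torch

def generalized_dice_loss(probs: torch.Tensor, target: torch.Tensor,
                          eps: float = 1e-6) -> torch.Tensor:
    """Generalized Dice loss: weight each class by the inverse square of
    its reference volume so small structures are not swamped.
    probs, target: (batch, classes, ...) with target one-hot encoded."""
    dims = tuple(range(2, probs.ndim))            # spatial axes
    w = 1.0 / (target.sum(dim=dims) ** 2 + eps)   # (batch, classes)
    inter = (probs * target).sum(dim=dims)
    union = (probs + target).sum(dim=dims)
    gds = 2 * (w * inter).sum(dim=1) / ((w * union).sum(dim=1) + eps)
    return 1.0 - gds.mean()
```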

Automatic Prediction of TMJ Disc Displacement in CBCT Images Using Machine Learning.

Choi H, Jeon KJ, Lee C, Choi YJ, Jo GD, Han SS

PubMed | Jul 25, 2025
Magnetic resonance imaging (MRI) is the gold standard for diagnosing disc displacement in temporomandibular joint (TMJ) disorders, but its high cost and practical challenges limit its accessibility. This study aimed to develop a machine learning (ML) model that predicts TMJ disc displacement using only cone-beam computed tomography (CBCT)-based radiomics features, without MRI. CBCT images of 247 mandibular condyles from 134 patients who also underwent MRI were analyzed. To conduct three experiments based on different patient groupings, we trained two ML models: random forest (RF) and extreme gradient boosting (XGBoost). Experiment 1 classified the data into three groups: Normal, disc displacement with reduction (DDWR), and disc displacement without reduction (DDWOR). Experiment 2 classified Normal versus the disc displacement group (DDWR and DDWOR), and Experiment 3 classified Normal and DDWR versus the DDWOR group. The RF model outperformed XGBoost across all three experiments. Experiment 3, which differentiated DDWOR from the other conditions, achieved the highest performance, with area under the receiver operating characteristic curve (AUC) values of 0.86 (RF) and 0.85 (XGBoost). Experiment 2 followed with AUC values of 0.76 (RF) and 0.75 (XGBoost), while Experiment 1, which classified all three groups, had the lowest values at 0.63 (RF) and 0.59 (XGBoost). The RF model, utilizing radiomics features from CBCT images, demonstrated potential as an assistive tool for predicting DDWOR, which requires the most careful management.
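
A radiomics-plus-random-forest pipeline of this kind is straightforward to reproduce with scikit-learn. The sketch below uses synthetic stand-in features (real ones would come from a radiomics extractor such as PyRadiomics) and mirrors the paper's Experiment 3 grouping, DDWOR versus everything else:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for CBCT radiomics: 247 condyles x 100 features,
# label 1 = DDWOR, 0 = Normal/DDWR (Experiment 3 grouping).
rng = np.random.default_rng(0)
X = rng.normal(size=(247, 100))
y = rng.integers(0, 2, size=247)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```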

Exploring AI-Based System Design for Pixel-Level Protected Health Information Detection in Medical Images.

Truong T, Baltruschat IM, Klemens M, Werner G, Lenga M

PubMed | Jul 25, 2025
De-identification of medical images is a critical step to ensure privacy during data sharing in research and clinical settings. The initial step in this process is detecting Protected Health Information (PHI), which can be found in image metadata or imprinted within image pixels. Despite the importance of such systems, existing AI-based solutions have received limited evaluation, creating barriers to the development of reliable and robust tools. In this study, we present an AI-based pipeline for PHI detection comprising three key modules: text detection, text extraction, and text analysis. We benchmark three models (YOLOv11, EasyOCR, and GPT-4o) across different setups corresponding to these modules, evaluating their performance on two datasets encompassing multiple imaging modalities and PHI categories. Our findings indicate that the optimal setup uses dedicated vision and language models for each module, striking a good balance between performance, latency, and the cost associated with large language models (LLMs). Additionally, we show that LLMs not only identify PHI content but also enhance OCR performance and enable an end-to-end PHI detection pipeline, with promising results in our analysis.
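
The modular detect/extract/analyze design can be approximated with off-the-shelf tools. The sketch below pairs EasyOCR with simple regex rules as a stand-in for the paper's LLM analysis stage; the patterns are illustrative only, and a fuller pipeline would add a dedicated text detector (e.g., YOLOv11) before OCR:

```python
import re
import easyocr  # pip install easyocr

reader = easyocr.Reader(['en'], gpu=False)

# Illustrative PHI patterns, not the paper's rules.
PHI_PATTERNS = {
    "date": re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),
    "mrn":  re.compile(r"\b(MRN|ID)[:# ]?\d{5,}\b", re.I),
}

def detect_phi(image_path: str):
    """OCR the pixel data, then flag text that matches PHI-like patterns."""
    hits = []
    for bbox, text, conf in reader.readtext(image_path):
        for label, pat in PHI_PATTERNS.items():
            if pat.search(text):
                hits.append({"bbox": bbox, "text": text,
                             "type": label, "ocr_conf": conf})
    return hits
```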

Multimodal prediction based on ultrasound for response to neoadjuvant chemotherapy in triple negative breast cancer.

Lyu M, Yi S, Li C, Xie Y, Liu Y, Xu Z, Wei Z, Lin H, Zheng Y, Huang C, Lin X, Liu Z, Pei S, Huang B, Shi Z

PubMed | Jul 25, 2025
Pathological complete response (pCR) can guide surgical strategy and postoperative treatment in triple-negative breast cancer (TNBC). In this study, we developed a Breast Cancer Response Prediction (BCRP) model to predict pCR in patients with TNBC. The BCRP model integrates multi-dimensional longitudinal quantitative imaging features, clinical factors, and features from the Breast Imaging Reporting and Data System (BI-RADS). The longitudinal quantitative imaging features, comprising deep learning and radiomics features, were extracted from multi-view B-mode and colour Doppler ultrasound images acquired before and after treatment. The BCRP model achieved areas under the receiver operating characteristic curve (AUC) of 0.94 [95% confidence interval (CI), 0.91-0.98] and 0.84 [95% CI, 0.75-0.92] in the training and external test cohorts, respectively. Additionally, a low BCRP score was an independent risk factor for event-free survival (P < 0.05). The BCRP model showed promising ability to predict response to neoadjuvant chemotherapy in TNBC and could provide valuable prognostic information.
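
The abstract does not describe the fusion mechanism, but the multimodal concatenation idea, combining pre- and post-treatment imaging features, their longitudinal difference, and clinical descriptors, can be sketched with a linear classifier. All feature names and dimensions here are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 200
deep_pre, deep_post = rng.normal(size=(n, 64)), rng.normal(size=(n, 64))
radiomics = rng.normal(size=(n, 30))
clinical = rng.normal(size=(n, 5))      # e.g., age, Ki-67, tumor size
y_pcr = rng.integers(0, 2, size=n)

# Delta features capture the longitudinal change between timepoints.
X = np.hstack([deep_pre, deep_post, deep_post - deep_pre, radiomics, clinical])
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y_pcr)
bcrp_score = model.predict_proba(X)[:, 1]  # higher = more likely pCR
```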

Automated characterization of abdominal MRI exams using deep learning.

Kim J, Chae A, Duda J, Borthakur A, Rader DJ, Gee JC, Kahn CE, Witschey WR, Sagreiya H

PubMed | Jul 25, 2025
Advances in magnetic resonance imaging (MRI) have revolutionized disease detection and treatment planning. However, the growing volume and complexity of MRI data, along with heterogeneity in imaging protocols, scanner technology, and labeling practices, create a need for standardized tools that automatically identify and characterize key imaging attributes. Such tools are essential for large-scale, multi-institutional studies that rely on harmonized data to train robust machine learning models. In this study, we developed convolutional neural networks (CNNs) to automatically classify three core attributes of abdominal MRI: pulse sequence type, imaging orientation, and contrast enhancement status. Three distinct CNNs with similar backbone architectures were trained to classify single image slices into one of 12 pulse sequences, 4 orientations, or 2 contrast classes. The models achieved high classification accuracies of 99.51%, 99.87%, and 99.99% for pulse sequence, orientation, and contrast, respectively. We applied Grad-CAM to visualize the image regions influencing pulse-sequence predictions and to highlight relevant anatomical features. To enhance performance, we implemented majority voting to aggregate slice-level predictions, achieving 100% accuracy at the volume level for all tasks. External validation on the Duke Liver Dataset demonstrated strong generalizability; after adjusting for class-label mismatch, volume-level accuracies exceeded 96.9% across all classification tasks.
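
The slice-to-volume aggregation step is simple to state precisely. A sketch of majority voting over per-slice predictions, assuming model outputs of shape (slices, classes):

```python
import numpy as np

def volume_label(slice_logits: np.ndarray) -> int:
    """Aggregate per-slice predictions to one volume-level label by
    majority vote, as described for the MRI-characterization models.
    slice_logits: (n_slices, n_classes) array of model outputs."""
    slice_preds = slice_logits.argmax(axis=1)
    counts = np.bincount(slice_preds, minlength=slice_logits.shape[1])
    return int(counts.argmax())

# e.g., 30 slices scored over the 12 pulse-sequence classes
logits = np.random.default_rng(2).normal(size=(30, 12))
print(volume_label(logits))
```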

Enhancing the Characterization of Dural Tears on Photon Counting CT Myelography: An Analysis of Reconstruction Techniques.

Madhavan AA, Kranz PG, Kodet ML, Yu L, Zhou Z, Amrhein TJ

PubMed | Jul 25, 2025
Photon counting detector CT myelography is an effective modality for the localization of spinal CSF leaks. The initial studies describing this technique employed a relatively smooth Br56 kernel. However, subsequent studies have demonstrated that the use of the sharpest quantitative kernel on photon counting CT (Qr89), particularly when denoised with techniques such as quantum iterative reconstruction or convolutional neural networks, enhances detection of CSF-venous fistulas. In this clinical report, we sought to determine whether the Qr89 kernel has utility in patients with dural tears, the other main type of spinal CSF leak. We performed a retrospective review of patients with dural tears diagnosed on photon counting CT myelography, comparing Br56, Qr89 denoised with quantum iterative reconstruction, and Qr89 denoised with a trained convolutional neural network. We specifically assessed spatial resolution, noise level, and diagnostic confidence in eight such cases, finding that the sharper Qr89 kernel outperformed the smoother Br56 kernel. This was particularly true when Qr89 was denoised using a convolutional neural network. Furthermore, in two cases, the dural tear was only seen on the Qr89 reconstructions and missed on the Br56 kernel. Overall, our study demonstrates the potential value of further optimizing post-processing techniques for photon counting CT myelography aimed at localizing dural tears.
ABBREVIATIONS: CNN = convolutional neural network; CVF = CSF-venous fistula; DSM = digital subtraction myelography; EID = energy integrating detector; PCD = photon counting detector; QIR = quantum iterative reconstruction.
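
The report's noise comparisons were reader-based, but a common quantitative proxy is the standard deviation of attenuation values in a homogeneous region of interest. A hypothetical helper for comparing two reconstructions of the same slice:

```python
import numpy as np

def roi_noise(img: np.ndarray, center: tuple, size: int = 20) -> float:
    """Standard deviation of HU values in a square ROI: a simple noise
    proxy for comparing reconstructions (e.g., Br56 vs. denoised Qr89).
    Hypothetical helper; the report's assessments were reader-based."""
    r, c = center
    h = size // 2
    return float(np.std(img[r - h:r + h, c - h:c + h]))

# recon_br56, recon_qr89 would be 2D HU slices from the two kernels:
# print(roi_noise(recon_br56, (256, 256)), roi_noise(recon_qr89, (256, 256)))
```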

Deep learning-based image classification for integrating pathology and radiology in AI-assisted medical imaging.

Lu C, Zhang J, Liu R

PubMed | Jul 25, 2025
The integration of pathology and radiology in medical imaging has emerged as a critical need for advancing diagnostic accuracy and improving clinical workflows. Current AI-driven approaches for medical image analysis, despite significant progress, face several challenges, including handling multi-modal imaging, imbalanced datasets, and the lack of robust interpretability and uncertainty quantification. These limitations often hinder the deployment of AI systems in real-world clinical settings, where reliability and adaptability are essential. To address these issues, this study introduces a novel framework, the Domain-Informed Adaptive Network (DIANet), combined with an Adaptive Clinical Workflow Integration (ACWI) strategy. DIANet leverages multi-scale feature extraction, domain-specific priors, and Bayesian uncertainty modeling to enhance interpretability and robustness. The proposed model is tailored for multi-modal medical imaging tasks, integrating adaptive learning mechanisms to mitigate domain shifts and imbalanced datasets. Complementing the model, the ACWI strategy ensures seamless deployment through explainable AI (XAI) techniques, uncertainty-aware decision support, and modular workflow integration compatible with clinical systems like PACS. Experimental results demonstrate significant improvements in diagnostic accuracy, segmentation precision, and reconstruction fidelity across diverse imaging modalities, validating the potential of this framework to bridge the gap between AI innovation and clinical utility.
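
The abstract names Bayesian uncertainty modeling without detail; Monte Carlo dropout is one standard way to realize it. A sketch with a placeholder network, since DIANet's internals are not specified:

```python
import torch
import torch.nn as nn

# Keep dropout active at inference and average several stochastic
# forward passes; per-class variance serves as an uncertainty estimate.
net = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                    nn.Dropout(p=0.3), nn.Linear(128, 2))

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 30):
    model.train()                       # keep dropout stochastic
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(0), probs.var(0)  # prediction, uncertainty

mean_p, var_p = mc_dropout_predict(net, torch.randn(4, 256))
```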

A DCT-UNet-based framework for pulmonary airway segmentation integrating label self-updating and terminal region growing.

Zhao S, Wu Y, Xu J, Li M, Feng J, Xia S, Chen R, Liang Z, Qian W, Qi S

PubMed | Jul 25, 2025
Intrathoracic airway segmentation in computed tomography (CT) is important for quantitative and qualitative analysis of various chronic respiratory diseases and for bronchial surgery navigation. However, the airway tree's morphological complexity, incomplete labels resulting from annotation difficulty, and intra-class imbalance between main and terminal airways limit segmentation performance.
Methods: Three methodological improvements are proposed to address these challenges. First, we design a DCT-UNet to better collect information from neighbouring voxels and from voxels within a larger spatial region. Second, an airway label self-updating (ALSU) strategy iteratively updates the reference labels to overcome the problem of incomplete labels. Third, a deep learning-based terminal region growing (TRG) is adopted to extract terminal airways. Extensive experiments were conducted on two internal datasets and three public datasets.
Results: Compared to the counterparts, the proposed method achieves higher Branch Detected, Tree-length Detected, Branch Ratio, and Tree-length Ratio (ISICDM2021 dataset: 95.19%, 94.89%, 166.45%, and 172.29%; BAS dataset: 96.03%, 95.11%, 129.35%, and 137.00%). Ablation experiments show the effectiveness of the three proposed solutions. Our method was also applied to an in-house Chronic Obstructive Pulmonary Disease (COPD) dataset, where branch count, tree length, endpoint count, airway volume, and airway surface area differed significantly between COPD severity stages.
Conclusions: The proposed methods segment more terminal bronchi and greater airway length; even some real bronchi missed in the manual annotation can be detected. These measures show potential for characterizing COPD airway lesions and severity stages.
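
The deep-learning-based terminal region growing (TRG) is not specified in the abstract; a classical threshold-based region grower over a model's probability map conveys the idea. A sketch assuming a 3D probability volume and a seed voxel at an airway endpoint:

```python
import numpy as np
from collections import deque

def region_grow(prob: np.ndarray, seed: tuple, thresh: float = 0.3) -> np.ndarray:
    """Grow a region from a terminal-airway seed through all 26-connected
    voxels whose probability exceeds `thresh`. A classical stand-in for
    the paper's deep-learning-based terminal region growing (TRG)."""
    grown = np.zeros(prob.shape, dtype=bool)
    q = deque([seed])
    grown[seed] = True
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
               for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < prob.shape[0] and 0 <= ny < prob.shape[1]
                    and 0 <= nx < prob.shape[2]
                    and not grown[nz, ny, nx] and prob[nz, ny, nx] > thresh):
                grown[nz, ny, nx] = True
                q.append((nz, ny, nx))
    return grown
```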