
Deep Learning-Based Multi-View Echocardiographic Framework for Comprehensive Diagnosis of Pericardial Disease

Jeong, S., Moon, I., Jeon, J., Jeong, D., Lee, J., Kim, J., Lee, S.-A., Jang, Y., Yoon, Y. E., Chang, H.-J.

medRxiv preprint · Jul 25, 2025
Background: Pericardial disease exhibits a wide clinical spectrum, ranging from mild effusions to life-threatening tamponade or constrictive pericarditis. While transthoracic echocardiography (TTE) is the primary diagnostic modality, its effectiveness is limited by operator dependence and incomplete evaluation of functional impact. Existing artificial intelligence models focus primarily on effusion detection, lacking comprehensive disease assessment.

Methods: We developed a deep learning (DL)-based framework that sequentially assesses pericardial disease: (1) morphological changes, including pericardial effusion amount (normal/small/moderate/large) and pericardial thickening or adhesion (yes/no), using five B-mode views, and (2) hemodynamic significance (yes/no), incorporating additional inputs from Doppler and inferior vena cava measurements. The development dataset comprised 2,253 TTEs from multiple Korean institutions (225 for internal testing), and the independent external test set consisted of 274 TTEs.

Results: In the internal test set, the model achieved diagnostic accuracies of 81.8-97.3% for pericardial effusion classification, 91.6% for pericardial thickening/adhesion, and 86.2% for hemodynamic significance. Corresponding accuracies in the external test set were 80.3-94.2%, 94.5%, and 85.5%, respectively. Areas under the receiver operating characteristic curves (AUROCs) for the three tasks were 0.92-0.99, 0.90, and 0.79 in the internal test set, and 0.95-0.98, 0.85, and 0.76 in the external test set. Sensitivity for detecting pericardial thickening/adhesion and hemodynamic significance was modest (66.7% and 68.8% in the internal test set) but improved substantially when cases with poor image quality were excluded (77.3% and 80.8%). Similar performance gains were observed in subgroups with complete target views and a higher number of available video clips.

Conclusions: This study presents the first DL-based TTE model capable of comprehensive evaluation of pericardial disease, integrating both morphological and functional assessments. The proposed framework demonstrated strong generalizability and aligned with the real-world diagnostic workflow. However, caution is warranted when interpreting results under suboptimal imaging conditions.
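As a rough sketch of how such a sequential, multi-view pipeline could be wired (not the authors' architecture; the shared video encoder, feature sizes, and the eight Doppler/IVC input features are all assumptions for illustration):

```python
import torch
import torch.nn as nn

class PericardialPipelineSketch(nn.Module):
    """Two-stage sketch: morphology heads over five B-mode views, then a
    hemodynamic head that also consumes Doppler/IVC-derived measurements."""
    def __init__(self, n_views=5, feat_dim=128, n_doppler_feats=8):
        super().__init__()
        # hypothetical shared video encoder applied to each B-mode view
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.effusion_head = nn.Linear(n_views * feat_dim, 4)    # normal/small/moderate/large
        self.thickening_head = nn.Linear(n_views * feat_dim, 1)  # thickening/adhesion yes-no
        self.hemodynamic_head = nn.Linear(n_views * feat_dim + n_doppler_feats, 1)

    def forward(self, views, doppler_ivc):
        # views: (B, n_views, 1, T, H, W); doppler_ivc: (B, n_doppler_feats)
        feats = torch.cat([self.encoder(views[:, i]) for i in range(views.size(1))], dim=1)
        effusion_logits = self.effusion_head(feats)       # stage 1a: effusion amount
        thickening_logit = self.thickening_head(feats)    # stage 1b: thickening/adhesion
        hemo_logit = self.hemodynamic_head(torch.cat([feats, doppler_ivc], dim=1))  # stage 2
        return effusion_logits, thickening_logit, hemo_logit
```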

Agentic AI in radiology: Emerging Potential and Unresolved Challenges.

Dietrich N

PubMed · Jul 24, 2025
This commentary introduces agentic artificial intelligence (AI) as an emerging paradigm in radiology, marking a shift from passive, user-triggered tools to systems capable of autonomous workflow management, task planning, and clinical decision support. Agentic AI models may dynamically prioritize imaging studies, tailor recommendations based on patient history and scan context, and automate administrative follow-up tasks, offering potential gains in efficiency, triage accuracy, and cognitive support. While not yet widely implemented, early pilot studies and proof-of-concept applications highlight promising utility across high-volume and high-acuity settings. Key barriers, including limited clinical validation, evolving regulatory frameworks, and integration challenges, must be addressed to ensure safe, scalable deployment. Agentic AI represents a forward-looking evolution in radiology that warrants careful development and clinician-guided implementation.
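As a purely conceptual illustration (the commentary describes no implementation), a worklist re-ranking step for such a system might look like the toy sketch below, with every field and scoring rule invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Study:
    study_id: str
    modality: str
    acuity_flag: bool   # e.g., suspected stroke or trauma
    wait_minutes: int

def triage_score(study: Study) -> float:
    """Toy priority heuristic: acuity dominates, waiting time breaks ties."""
    return (100.0 if study.acuity_flag else 0.0) + 0.1 * study.wait_minutes

def agent_step(worklist: list[Study]) -> Study:
    # an agentic system would re-rank the worklist continuously, then plan
    # downstream tasks (reads, follow-up reminders) for the top study
    return max(worklist, key=triage_score)
```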

SUP-Net: Slow-time Upsampling Network for Aliasing Removal in Doppler Ultrasound.

Nahas H, Yu ACH

PubMed · Jul 24, 2025
Doppler ultrasound modalities, which include spectral Doppler and color flow imaging, are frequently used tools for flow diagnostics because of their real-time point-of-care applicability and high temporal resolution. When implemented using pulse-echo sensing and phase shift estimation principles, this modality's pulse repetition frequency (PRF) is known to influence the maximum detectable velocity. If the PRF must be set below the Nyquist limit due to imaging requirements or hardware constraints, aliasing errors or spectral overlap may corrupt the estimated flow data. To solve this issue, we have devised a deep learning-based framework, powered by a custom slow-time upsampling network (SUP-Net), that leverages spatiotemporal characteristics to upsample the received ultrasound signals across pulse echoes acquired using high-frame-rate ultrasound (HiFRUS). Our framework infers high-PRF signals from signals acquired at low PRF, thereby improving Doppler ultrasound's flow estimation quality. SUP-Net was trained and evaluated on in vivo femoral acquisitions from 20 participants and was applied recursively to resolve scenarios with excessive aliasing across a range of PRFs. We report the successful reconstruction of slow-time signals whose frequency content exceeds the Nyquist limit once and twice over. By operating on the fundamental slow-time signals, our framework can resolve aliasing-related artifacts in several downstream modalities, including color Doppler and pulse wave Doppler.
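The recursive use of the network can be pictured with the sketch below, in which a naive linear interpolator stands in for the trained SUP-Net (whose architecture is not reproduced here); each pass doubles the effective PRF, so two passes target content exceeding the Nyquist limit twice:

```python
import numpy as np

def linear_upsampler(slow_time: np.ndarray) -> np.ndarray:
    """Placeholder for a trained SUP-Net-like model: doubles the number of
    slow-time samples (pulse echoes) per depth sample via linear interpolation."""
    n = slow_time.shape[-1]
    new_t = np.linspace(0, n - 1, 2 * n)
    return np.stack([np.interp(new_t, np.arange(n), row) for row in slow_time])

def recursive_prf_upsample(slow_time: np.ndarray, upsampler=linear_upsampler,
                           n_passes: int = 2) -> np.ndarray:
    # slow_time: (depth_samples, pulse_echoes); each pass doubles the
    # effective PRF, mirroring the recursion described in the abstract
    out = slow_time
    for _ in range(n_passes):
        out = upsampler(out)
    return out
```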

Vox-MMSD: Voxel-wise Multi-scale and Multi-modal Self-Distillation for Self-supervised Brain Tumor Segmentation.

Zhou Y, Wu J, Fu J, Yue Q, Liao W, Zhang S, Zhang S, Wang G

PubMed · Jul 24, 2025
Many deep learning methods have been proposed for brain tumor segmentation from multi-modal Magnetic Resonance Imaging (MRI) scans, which is important for accurate diagnosis and treatment planning. However, supervised learning needs a large amount of labeled data to perform well, and a time-consuming and expensive annotation process or a small training set will limit a model's performance. To deal with these problems, self-supervised pre-training is an appealing solution because features learned from a set of unlabeled images are transferable to small downstream datasets. However, existing methods often overlook the utilization of multi-modal information and multi-scale features. Therefore, we propose a novel Self-Supervised Learning (SSL) framework that fully leverages multi-modal MRI scans to extract modality-invariant features for brain tumor segmentation. First, we employ a Siamese Block-wise Modality Masking (SiaBloMM) strategy that creates more diverse model inputs for image restoration to simultaneously learn contextual and modality-invariant features. Meanwhile, we propose Overlapping Random Modality Sampling (ORMS) to sample voxel pairs with multi-scale features for self-distillation, enhancing the voxel-wise representations that are important for segmentation tasks. Experiments on the BraTS 2024 adult glioma segmentation dataset showed that, with a small amount of labeled data for fine-tuning, our method improved the average Dice by 3.80 percentage points. In addition, when transferred to three other small downstream datasets with brain tumors from different patient groups, our method also improved the Dice by 3.47 percentage points on average and outperformed several existing SSL methods. The code is available at https://github.com/HiLab-git/Vox-MMSD.
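A toy version of block-wise modality masking for the Siamese inputs might look like the sketch below; the block size, masking probability, and exact masking rule are assumptions rather than the paper's SiaBloMM specifics:

```python
import torch

def block_wise_modality_mask(volume: torch.Tensor, block: int = 16,
                             p: float = 0.5) -> torch.Tensor:
    """Toy block-wise modality masking. volume: (modalities, D, H, W), each
    spatial dim assumed divisible by `block`. For every (modality, block)
    pair, a coin flip decides whether that block of that modality is zeroed."""
    m, d, h, w = volume.shape
    drop = torch.rand(m, d // block, h // block, w // block) < p
    mask = (drop.repeat_interleave(block, dim=1)
                .repeat_interleave(block, dim=2)
                .repeat_interleave(block, dim=3))
    masked = volume.clone()
    masked[mask] = 0.0
    return masked

# Siamese inputs for restoration: two independent maskings of the same scan
# view_a, view_b = block_wise_modality_mask(x), block_wise_modality_mask(x)
```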

Deep Learning-Driven High Spatial Resolution Attenuation Imaging for Ultrasound Tomography (AI-UT).

Liu M, Kou Z, Wiskin JW, Czarnota GJ, Oelze ML

PubMed · Jul 24, 2025
Ultrasonic attenuation can be used to characterize tissue properties of the human breast. Both quantitative ultrasound (QUS) and ultrasound tomography (USCT) can provide attenuation estimation. However, limitations have been identified for both approaches. In QUS, the generation of attenuation maps involves separating the whole image into different data blocks. The optimal size of a data block is around 15 to 30 pulse lengths, which dramatically decreases the spatial resolution of attenuation imaging. In USCT, attenuation is often estimated with a full wave inversion (FWI) method, which is affected by background noise. To achieve a high-resolution attenuation image with low variance, a deep learning (DL)-based method was proposed. In this approach, RF data from 60 angular views from the QTI Breast Acoustic CT™ Scanner were acquired as the input and attenuation images as the output. To improve image quality for the DL method, the spatial correlation between speed of sound (SOS) and attenuation was used as a constraint in the model. The results indicated that including the SOS structural information improved the performance of the model. With a higher-spatial-resolution attenuation image, further segmentation of the breast can be achieved. The structural information and actual attenuation values provided by DL-generated attenuation images were validated against values from the literature and the SOS-based segmentation map. The information provided by DL-generated attenuation images can be used as an additional biomarker for breast cancer diagnosis.
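One plausible way to encode such an SOS constraint is an edge-aware penalty that discourages attenuation edges where the SOS map is smooth; the sketch below is an assumption about the form of the constraint, not the paper's actual loss:

```python
import torch
import torch.nn.functional as F

def spatial_gradients(img: torch.Tensor):
    # finite-difference gradients; img: (batch, 1, H, W)
    gx = img[..., :, 1:] - img[..., :, :-1]
    gy = img[..., 1:, :] - img[..., :-1, :]
    return gx, gy

def attenuation_loss(pred_att, true_att, sos_map, lam=0.1):
    """Illustrative composite loss: L1 data fidelity plus a structural term
    encouraging the predicted attenuation map to share edges with the SOS map."""
    fidelity = F.l1_loss(pred_att, true_att)
    pgx, pgy = spatial_gradients(pred_att)
    sgx, sgy = spatial_gradients(sos_map)
    # penalize prediction edges where the SOS map has no edge
    structure = (pgx.abs() * torch.exp(-sgx.abs())).mean() + \
                (pgy.abs() * torch.exp(-sgy.abs())).mean()
    return fidelity + lam * structure
```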

Minimal Ablative Margin Quantification Using Hepatic Arterial Versus Portal Venous Phase CT for Colorectal Metastases Segmentation: A Dual-center, Retrospective Analysis.

Siddiqi NS, Lin YM, Marques Silva JA, Laimer G, Schullian P, Scharll Y, Dunker AM, O'Connor CS, Jones KA, Brock KK, Bale R, Odisio BC, Paolucci I

PubMed · Jul 24, 2025
To compare the predictive value of minimal ablative margin (MAM) quantification using tumor segmentation on intraprocedural contrast-enhanced hepatic arterial phase (HAP) versus portal venous phase (PVP) CT for local outcomes following percutaneous thermal ablation of colorectal liver metastases (CRLM). This dual-center retrospective study included patients undergoing thermal ablation of CRLM with intraprocedural pre-ablation and post-ablation contrast-enhanced CT imaging between 2009 and 2021. Tumors were segmented in both the HAP and PVP CT phases using an artificial intelligence-based auto-segmentation model and reviewed by a trained radiologist. The MAM was quantified using a biomechanical deformable image registration process. The area under the receiver operating characteristic curve (AUROC) was used to compare the prognostic value for predicting local tumor progression (LTP). Among 81 patients (mean age, 60 ± 13 years; 53 men), 151 CRLMs were included. During a median follow-up of 29.4 months, LTP was noted in 24/151 (15.9%). Median tumor volumes on HAP and PVP CT were 1.7 mL and 1.2 mL, respectively, with respective median MAMs of 2.3 and 4.0 mm (both P < 0.001). The AUROC for 1-year LTP prediction was 0.78 (95% CI: 0.70-0.85) on HAP and 0.84 (95% CI: 0.78-0.91) on PVP (P = 0.002). During CT-guided percutaneous thermal ablation, MAM measured from tumors segmented on PVP images conferred a higher predictive accuracy for ablation outcomes among CRLM patients than MAM from tumors segmented on HAP images, supporting the use of PVP rather than HAP images for segmentation during ablation of CRLMs.
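The AUROC comparison itself is straightforward to reproduce in outline; a minimal sketch follows, with placeholder data standing in for the study's labels and margins (smaller margins imply higher progression risk, hence the negated scores):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Placeholder arrays: 1-year LTP labels plus MAMs measured on each phase
ltp_1yr = np.array([0, 1, 0, 0, 1, 0])
mam_hap_mm = np.array([2.3, 0.0, 5.1, 3.2, 1.0, 6.4])
mam_pvp_mm = np.array([4.0, 0.5, 7.2, 5.0, 1.2, 8.1])

# negate margins so that higher score = higher predicted risk of LTP
auroc_hap = roc_auc_score(ltp_1yr, -mam_hap_mm)
auroc_pvp = roc_auc_score(ltp_1yr, -mam_pvp_mm)
print(f"AUROC HAP={auroc_hap:.2f}, PVP={auroc_pvp:.2f}")
```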

An approach for cancer outcomes modelling using a comprehensive synthetic dataset.

Tu L, Choi HHF, Clark H, Lloyd SAM

PubMed · Jul 24, 2025
Limited patient data availability presents a challenge for efficient machine learning (ML) model development. Recent studies have proposed methods to generate synthetic medical images but lack the corresponding prognostic information required for predicting outcomes. We present a cancer outcomes modelling approach that involves generating a comprehensive synthetic dataset which can accurately mimic a real dataset. A real public dataset containing computed tomography-based radiomic features and clinical information for 132 non-small cell lung cancer patients was used. A synthetic dataset of virtual patients was synthesized using a conditional tabular generative adversarial network. Models to predict two-year overall survival were trained on real or synthetic data using combinations of four feature selection methods (mutual information, ANOVA F-test, recursive feature elimination, random forest (RF) importance weights) and six ML algorithms (RF, k-nearest neighbours, logistic regression, support vector machine, XGBoost, Gaussian Naïve Bayes). Models were tested on withheld real data and externally validated. Real and synthetic datasets were similar, with an average one minus Kolmogorov-Smirnov test statistic of 0.871 for continuous features. Chi-square test confirmed agreement for discrete features (p < 0.001). XGBoost using RF importance-based features performed the most consistently for both datasets, with percent differences in balanced accuracy and area under the precision-recall curve of < 1.3%. Preliminary findings demonstrate the potential application of synthetic radiomic and clinical data augmentation for cancer outcomes modelling, although further validation with larger diverse datasets is crucial. While our approach was described in a lung context, it may be applied to other sites or endpoints.
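For a sense of what the tabular synthesis step might look like, the sketch below uses the open-source ctgan package; the CSV path and column names are placeholders, and the paper's exact training configuration is not specified:

```python
import pandas as pd
from ctgan import CTGAN  # open-source conditional tabular GAN implementation

# real table of radiomic features + clinical variables (placeholder path)
df = pd.read_csv("nsclc_radiomics_clinical.csv")
discrete_columns = ["sex", "histology", "stage", "two_year_survival"]  # assumed names

# fit the conditional tabular GAN on the real cohort
synthesizer = CTGAN(epochs=300)
synthesizer.fit(df, discrete_columns)

# generate a cohort of virtual patients for downstream model training
synthetic_df = synthesizer.sample(1000)
```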

MSA-Net: a multi-scale and adversarial learning network for segmenting bone metastases in low-resolution SPECT imaging.

Wu Y, Lin Q, He Y, Zeng X, Cao Y, Man Z, Liu C, Hao Y, Cai Z, Ji J, Huang X

PubMed · Jul 24, 2025
Single-photon emission computed tomography (SPECT) plays a crucial role in detecting bone metastases from lung cancer. However, its low spatial resolution and the similarity of lesions to benign structures present significant challenges for accurate segmentation, especially for lesions of varying sizes. We propose a deep learning-based segmentation framework that integrates conditional adversarial learning with a multi-scale feature extraction generator. The generator employs cascaded dilated convolutions, multi-scale modules, and deep supervision, while the discriminator utilizes a multi-scale L1 loss computed on image-mask pairs to guide segmentation learning. The proposed model was evaluated on a dataset of 286 clinically annotated SPECT scintigrams. It achieved a Dice Similarity Coefficient (DSC) of 0.6671, precision of 0.7228, and recall of 0.6196, outperforming both classical and recent adversarial segmentation models in multi-scale lesion detection, especially for small and clustered lesions. Our results demonstrate that integrating multi-scale feature learning with adversarial supervision significantly improves the segmentation of bone metastases in SPECT imaging. This approach shows potential for clinical decision support in the management of lung cancer.
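One plausible reading of the discriminator's multi-scale L1 loss is sketched below; the critic architecture and the exact pairing/scaling scheme are assumptions, not the paper's specification:

```python
import torch
import torch.nn.functional as F

def multiscale_l1_loss(disc, image, pred_mask, true_mask,
                       scales=(1.0, 0.5, 0.25)):
    """Toy multi-scale L1: compare discriminator features of
    (image, predicted mask) vs (image, true mask) pairs at several
    resolutions. `disc` stands in for a fully convolutional critic."""
    loss = 0.0
    for s in scales:
        def down(x):
            return x if s == 1.0 else F.interpolate(
                x, scale_factor=s, mode="bilinear", align_corners=False)
        f_pred = disc(torch.cat([down(image), down(pred_mask)], dim=1))
        f_true = disc(torch.cat([down(image), down(true_mask)], dim=1))
        loss = loss + F.l1_loss(f_pred, f_true)
    return loss / len(scales)
```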

Deep Learning to Differentiate Parkinsonian Syndromes Using Multimodal Magnetic Resonance Imaging: A Proof-of-Concept Study.

Mattia GM, Chougar L, Foubert-Samier A, Meissner WG, Fabbri M, Pavy-Le Traon A, Rascol O, Grabli D, Degos B, Pyatigorskaya N, Faucher A, Vidailhet M, Corvol JC, Lehéricy S, Péran P

PubMed · Jul 24, 2025
The differentiation between multiple system atrophy (MSA) and Parkinson's disease (PD) based on clinical diagnostic criteria can be challenging, especially at an early stage. Leveraging deep learning methods and magnetic resonance imaging (MRI) data has shown great potential in aiding automatic diagnosis. The aim was to determine the feasibility of a three-dimensional convolutional neural network (3D CNN)-based approach using multimodal, multicentric MRI data for differentiating MSA and its variants from PD. MRI data were retrospectively collected from three MSA French reference centers. We computed quantitative maps of gray matter density (GD) from a T1-weighted sequence and mean diffusivity (MD) from diffusion tensor imaging. These maps were used as input to a 3D CNN, either individually ("monomodal," "GD" or "MD") or in combination ("bimodal," "GD-MD"). Classification tasks included the differentiation of PD and MSA patients. Model interpretability was investigated by analyzing misclassified patients and providing a visual interpretation of the most activated regions in CNN predictions. The study population included 92 patients with MSA (50 with MSA-P, parkinsonian variant; 33 with MSA-C, cerebellar variant; 9 with MSA-PC, mixed variant) and 64 with PD. The best accuracies were obtained for the PD/MSA (0.88 ± 0.03 with GD-MD), PD/MSA-C&PC (0.84 ± 0.08 with MD), and PD/MSA-P (0.78 ± 0.09 with GD) tasks. Patients misclassified by the CNN exhibited fewer and milder image alterations, as found using an image-based z-score analysis. Activation maps highlighted regions involved in MSA pathophysiology, namely the putamen and cerebellum. Our findings hold promise for developing an efficient, MRI-based, and user-independent diagnostic tool suitable for differentiating parkinsonian syndromes in clinical practice. © 2025 The Author(s). Movement Disorders published by Wiley Periodicals LLC on behalf of International Parkinson and Movement Disorder Society.
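A minimal sketch of the bimodal ("GD-MD") input arrangement follows, with the two quantitative maps stacked as channels of a 3D CNN; layer sizes and depths are illustrative assumptions, not the authors' network:

```python
import torch
import torch.nn as nn

class Bimodal3DCNN(nn.Module):
    """Illustrative 3D CNN for the bimodal setting: gray matter density (GD)
    and mean diffusivity (MD) maps stacked as two input channels."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, n_classes)  # e.g., PD vs MSA

    def forward(self, gd_md_maps):
        # gd_md_maps: (batch, 2, D, H, W), channel 0 = GD, channel 1 = MD
        return self.classifier(self.features(gd_md_maps))
```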

Patient Perspectives on Artificial Intelligence in Health Care: Focus Group Study for Diagnostic Communication and Tool Implementation.

Foresman G, Biro J, Tran A, MacRae K, Kazi S, Schubel L, Visconti A, Gallagher W, Smith KM, Giardina T, Haskell H, Miller K

PubMed · Jul 24, 2025
Artificial intelligence (AI) is rapidly transforming health care, offering potential benefits in diagnosis, treatment, and workflow efficiency. However, limited research explores patient perspectives on AI, especially its role in diagnosis and communication. This study aimed to examine patient perspectives on AI use in health care, particularly in diagnostic processes and communication, identifying key concerns, expectations, and opportunities to guide the development and implementation of AI tools. This study used a qualitative focus group methodology with co-design principles to explore patient and family member perspectives on AI in clinical practice. A single 2-hour session was conducted with 17 adult participants. The session included interactive activities and breakout sessions focused on five specific AI scenarios relevant to diagnosis and communication: (1) portal messaging, (2) radiology review, (3) digital scribe, (4) virtual human, and (5) decision support. The session was audio-recorded and transcribed, and facilitator notes and demographic questionnaires were collected. Data were analyzed using inductive thematic analysis by 2 independent researchers (GF and JB), with discrepancies resolved via consensus. Participants reported varying comfort levels with AI applications contingent on the level of patient interaction, with digital scribe (average 4.24, range 2-5) and radiology review (average 4.00, range 2-5) rated highest, and virtual human (average 1.68, range 1-4) rated lowest. In total, five cross-cutting themes emerged: (1) validation (concerns about model reliability), (2) usability (impact on diagnostic processes), (3) transparency (expectations for disclosing AI usage), (4) opportunities (potential for AI to improve care), and (5) privacy (concerns about data security). Participants valued the co-design session and felt they had a significant say in the discussions. This study highlights the importance of incorporating patient perspectives in the design and implementation of AI tools in health care. Transparency, human oversight, clear communication, and data privacy are crucial for patient trust and acceptance of AI in diagnostic processes. These findings inform strategies for individual clinicians, health care organizations, and policy makers to ensure responsible and patient-centered AI deployment in health care.