
Ultrasound Radio Frequency Time Series for Tissue Typing: Experiments on In-Vivo Breast Samples Using Texture-Optimized Features and Multi-Origin Method of Classification (MOMC).

Arab M, Fallah A, Rashidi S, Dastjerdi MM, Ahmadinejad N

PubMed · Jun 30 2025
One of the most promising adjuncts for breast cancer (BC) screening is the ultrasound (US) radio-frequency (RF) time series, which, unlike other methods, requires no supplementary equipment. This article proposes a machine learning (ML) method for the automated categorization of breast lesions (benign, probably benign, suspicious, or malignant) using features extracted from the accumulated US RF time series. In this research, 220 samples from the aforementioned categories, recorded from 118 patients, were analyzed. The RFTSBU dataset was acquired with a SuperSonic Imagine Aixplorer® medical/research system fitted with a linear transducer. An expert radiologist manually selected regions of interest (ROIs) in B-mode images, after which 283 features were extracted from each ROI, including texture features based on Gabor filters (GF), the gray-level co-occurrence matrix (GLCM), gray-level run-length matrix (GLRLM), gray-level size zone matrix (GLSZM), and gray-level dependence matrix (GLDM). Particle swarm optimization (PSO) then narrowed the feature set to the 131 most effective features. Finally, the features were classified using the proposed multi-origin method of classification (MOMC). Employing 5-fold cross-validation, the study achieved accuracies of 98.57 ± 1.09%, 91.53 ± 0.89%, and 83.71 ± 1.30% for 2-, 3-, and 4-class classification, respectively, using MOMC-SVM and MOMC-ensemble classifiers. This research introduces an ML-based approach to differentiating diverse breast lesion types using in vivo US RF time series data; the findings underscore its efficacy in enhancing classification accuracy, with promise for computer-aided diagnosis (CAD) in BC screening.
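
As an illustration of one step in this kind of pipeline, the following minimal sketch (not the authors' code) extracts GLCM texture features from grayscale ROIs and scores an SVM with 5-fold cross-validation; the ROI data, feature choices, and SVM parameters are placeholders.

```python
# Minimal sketch: GLCM texture features + SVM with 5-fold CV (illustrative only).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def glcm_features(roi_8bit: np.ndarray) -> np.ndarray:
    """Compute a small GLCM descriptor for one grayscale ROI (uint8)."""
    glcm = graycomatrix(roi_8bit, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
# Placeholder data: 40 random 32x32 ROIs with binary labels (benign/malignant).
rois = rng.integers(0, 256, size=(40, 32, 32), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)

X = np.array([glcm_features(r) for r in rois])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, labels, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```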

A Deep Learning-Based De-Artifact Diffusion Model for Removing Motion Artifacts in Knee MRI.

Li Y, Gong T, Zhou Q, Wang H, Yan X, Xi Y, Shi Z, Deng W, Shi F, Wang Y

PubMed · Jun 30 2025
Motion artifacts are common in knee MRI and often necessitate rescanning, so their effective removal would be clinically useful. The aim was to construct a deep learning-based model that removes motion artifacts from knee MRI using real-world data. Retrospective. Model construction: 90 consecutive patients (1997 2D slices) whose knee MRI images contained motion artifacts, paired with immediately rescanned artifact-free images serving as ground truth. Internal test dataset: 25 patients (795 slices) from another period; external test dataset: 39 patients (813 slices) from another hospital. 3-T/1.5-T knee MRI with T1-weighted, T2-weighted, and proton density-weighted imaging. A deep learning-based supervised conditional diffusion model was constructed. Objective metrics (root mean square error [RMSE], peak signal-to-noise ratio [PSNR], structural similarity [SSIM]) and subjective ratings were used for image quality assessment and compared against three other algorithms (enhanced super-resolution [ESR], enhanced deep super-resolution, and ESR using a generative adversarial network). Diagnostic performance of the output images was compared with that of the rescanned images. The kappa test, Pearson's chi-square test, Friedman's rank-sum test, and the marginal homogeneity test were used; p < 0.05 was considered statistically significant. Subjective ratings showed significant improvements in the output images compared to the input, with no significant difference from the ground truth. The constructed method demonstrated the smallest RMSE (11.44 ± 5.47 in the validation cohort; 13.95 ± 4.32 in the external test cohort), the largest PSNR (27.61 ± 3.20 in the validation cohort; 25.64 ± 2.67 in the external test cohort), and the largest SSIM (0.97 ± 0.04 in the validation cohort; 0.94 ± 0.04 in the external test cohort) among the four algorithms. The output images achieved diagnostic capability comparable to the ground truth for multiple anatomical structures. The constructed model is feasible and effective and outperformed multiple other algorithms for removing motion artifacts in knee MRI. Evidence Level: 3. Technical Efficacy: Stage 2.
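
The objective metrics reported above are standard; a minimal sketch of how RMSE, PSNR, and SSIM might be computed per slice follows, with synthetic arrays standing in for the model output and the rescanned ground truth.

```python
# Minimal sketch: per-slice RMSE, PSNR, and SSIM (placeholder arrays).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_metrics(output: np.ndarray, ground_truth: np.ndarray):
    """Compare a de-artifacted slice against the rescanned ground truth."""
    rmse = np.sqrt(np.mean((output.astype(float) - ground_truth.astype(float)) ** 2))
    data_range = ground_truth.max() - ground_truth.min()
    psnr = peak_signal_noise_ratio(ground_truth, output, data_range=data_range)
    ssim = structural_similarity(ground_truth, output, data_range=data_range)
    return rmse, psnr, ssim

rng = np.random.default_rng(0)
gt = rng.random((256, 256))                        # stands in for a rescanned slice
out = gt + 0.05 * rng.standard_normal((256, 256))  # stands in for model output
print("RMSE %.3f  PSNR %.2f dB  SSIM %.3f" % quality_metrics(out, gt))
```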

Multimodal, Multi-Disease Medical Imaging Foundation Model (MerMED-FM)

Yang Zhou, Chrystie Wan Ning Quek, Jun Zhou, Yan Wang, Yang Bai, Yuhe Ke, Jie Yao, Laura Gutierrez, Zhen Ling Teo, Darren Shu Jeng Ting, Brian T. Soetikno, Christopher S. Nielsen, Tobias Elze, Zengxiang Li, Linh Le Dinh, Lionel Tim-Ee Cheng, Tran Nguyen Tuan Anh, Chee Leong Cheng, Tien Yin Wong, Nan Liu, Iain Beehuat Tan, Tony Kiat Hon Lim, Rick Siow Mong Goh, Yong Liu, Daniel Shu Wei Ting

arXiv preprint · Jun 30 2025
Current artificial intelligence models for medical imaging are predominantly single-modality and single-disease. Attempts to create multimodal, multi-disease models have resulted in inconsistent clinical accuracy. Furthermore, training these models typically requires large, labour-intensive, well-labelled datasets. We developed MerMED-FM, a state-of-the-art multimodal, multi-specialty foundation model trained using self-supervised learning and a memory module. MerMED-FM was trained on 3.3 million medical images from over ten specialties and seven modalities, including computed tomography (CT), chest X-rays (CXR), ultrasound (US), pathology patches, color fundus photography (CFP), optical coherence tomography (OCT), and dermatology images. MerMED-FM was evaluated across multiple diseases and compared against existing foundation models. Strong performance was achieved across all modalities, with AUROCs of 0.988 (OCT), 0.982 (pathology), 0.951 (US), 0.943 (CT), 0.931 (skin), 0.894 (CFP), and 0.858 (CXR). MerMED-FM has the potential to be a highly adaptable, versatile, cross-specialty foundation model that enables robust medical imaging interpretation across diverse medical disciplines.
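
As a hedged sketch of the per-modality evaluation, the snippet below computes an AUROC for each modality with scikit-learn; the labels and scores are synthetic stand-ins, not MerMED-FM outputs.

```python
# Minimal sketch: per-modality AUROC evaluation on synthetic predictions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
modalities = ["OCT", "pathology", "US", "CT", "skin", "CFP", "CXR"]
for name in modalities:
    y_true = rng.integers(0, 2, size=500)            # disease labels (synthetic)
    # Scores loosely correlated with the labels to mimic a trained model.
    y_score = 0.6 * y_true + 0.4 * rng.random(500)
    print(f"{name}: AUROC = {roc_auc_score(y_true, y_score):.3f}")
```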

Using a large language model for post-deployment monitoring of FDA approved AI: pulmonary embolism detection use case.

Sorin V, Korfiatis P, Bratt AK, Leiner T, Wald C, Butler C, Cook CJ, Kline TL, Collins JD

PubMed · Jun 30 2025
Artificial intelligence (AI) is increasingly integrated into clinical workflows, but the performance of AI in production can diverge from initial evaluations, and post-deployment monitoring (PDM) remains a challenging component of ongoing quality assurance once AI is deployed in clinical production. The aim was to develop and evaluate a PDM framework that uses large language models (LLMs) for free-text classification of radiology reports, together with human oversight, and to demonstrate its application in monitoring a commercially vended pulmonary embolism (PE) detection AI (CVPED). We retrospectively analyzed 11,999 CT pulmonary angiography (CTPA) studies performed between 04/30/2023 and 06/17/2024. Ground truth was determined by combining LLM-based radiology-report classification with the CVPED outputs, with human review of discrepancies. We simulated a daily monitoring framework to track discrepancies between CVPED and the LLM. Drift was defined as the discrepancy rate exceeding a fixed 95% confidence interval (CI) for seven consecutive days; the CI and the optimal retrospective assessment period were determined from a stable dataset with consistent performance. We simulated drift by systematically altering CVPED or LLM sensitivity and specificity, and we modeled an approach to detect data shifts. We incorporated a human-in-the-loop selective-alerting framework for continuous prospective evaluation and to investigate the potential for incremental detection. Of 11,999 CTPAs, 1,285 (10.7%) had PE. Overall, 373 (3.1%) had discrepant classifications between CVPED and the LLM. Among 111 CVPED-positive, LLM-negative cases, 29 would have triggered an alert because the radiologist did not interact with CVPED; of those, 24 were CVPED false positives, one was an LLM false negative, and the framework ultimately identified four true alerts for incremental PE cases. The optimal retrospective assessment period for drift detection was two months. A 2-3% decline in model specificity caused a 2- to 3-fold increase in discrepancies, while a 10% drop in sensitivity was required to produce a similar effect. For example, a 2.5% drop in LLM specificity led to a 1.7-fold increase in CVPED-negative, LLM-positive discrepancies, which would have taken 22 days to detect using the proposed framework. A PDM framework combining LLM-based free-text classification with a human-in-the-loop alerting system can continuously track an image-based AI's performance, alert for performance drift, and provide incremental clinical value.
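
A minimal sketch of the drift rule as described (using an assumed normal-approximation CI, not necessarily the authors' exact formulation): flag drift when the daily discrepancy rate exceeds the upper 95% bound around a stable baseline for seven consecutive days.

```python
# Minimal sketch: consecutive-day drift detection on daily discrepancy rates.
import numpy as np

def drift_day(daily_rates: np.ndarray, baseline_rate: float, n_daily: int,
              consecutive: int = 7):
    """Return the first day index at which drift is declared, else None."""
    # Normal-approximation upper 95% bound around the stable baseline rate.
    upper = baseline_rate + 1.96 * np.sqrt(
        baseline_rate * (1 - baseline_rate) / n_daily)
    streak = 0
    for day, rate in enumerate(daily_rates):
        streak = streak + 1 if rate > upper else 0
        if streak >= consecutive:
            return day
    return None

rng = np.random.default_rng(0)
baseline = 0.031                     # ~3.1% stable discrepancy rate (from abstract)
n_per_day = 30                       # daily CTPA volume (placeholder)
rates = rng.binomial(n_per_day, baseline, size=60) / n_per_day
rates[40:] = rng.binomial(n_per_day, 2 * baseline, size=20) / n_per_day  # simulated drift
print("Drift declared on day:", drift_day(rates, baseline, n_per_day))
```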

Genetically Optimized Modular Neural Networks for Precision Lung Cancer Diagnosis

Agrawal, V. L., Agrawal, T.

medRxiv preprint · Jun 30 2025
Lung cancer remains one of the leading causes of cancer mortality, and while low-dose CT screening reduces mortality, radiological detection is challenging given the growing shortage of radiologists. Artificial intelligence can significantly improve the procedure and reduce the overall workload of the healthcare system. Building on existing applications of genetic algorithms, this study aims to create a novel algorithm for highly precise lung cancer diagnosis. We included a total of 156 CT scans of patients, divided into two databases, followed by feature extraction using image statistics, histograms, and 2D transforms (FFT, DCT, WHT). Optimal feature vectors were formed and organized into Excel-based knowledge bases. Genetically trained classifiers (MLP, GFF-NN, MNN, and SVM) were then optimized through experiments with different combinations of parameters, activation functions, and data-partitioning percentages. Evaluation metrics included classification accuracy, mean squared error (MSE), area under the receiver operating characteristic (ROC) curve, and computational efficiency. Computer simulations demonstrated that the MNN (Topology II) classifier, specifically when trained with FFT coefficients and a momentum learning rule, consistently achieved 100% average classification accuracy on the cross-validation dataset for both Database I and Database II, outperforming MLP-based classifiers. This genetically optimized and trained MNN (Topology II) classifier is therefore recommended as the optimal solution for lung cancer diagnosis from CT scan images.
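
As an illustration of the 2D-transform feature extraction described above, this hedged sketch keeps low-frequency FFT magnitudes and DCT coefficients of a slice as a feature vector; the coefficient count and input are placeholders, and the WHT is omitted.

```python
# Minimal sketch: low-frequency 2D-transform features from a CT slice.
import numpy as np
from scipy.fft import fft2, dctn

def transform_features(slice_2d: np.ndarray, k: int = 8) -> np.ndarray:
    """Concatenate the k x k low-frequency FFT magnitudes and DCT coefficients."""
    f = np.abs(fft2(slice_2d))[:k, :k]        # FFT magnitude, low frequencies
    d = dctn(slice_2d, norm="ortho")[:k, :k]  # 2D DCT, low frequencies
    return np.hstack([f.ravel(), d.ravel()])

rng = np.random.default_rng(0)
ct_slice = rng.random((128, 128))             # placeholder for a CT slice
print(transform_features(ct_slice).shape)     # (128,) feature vector
```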

CMT-FFNet: A CMT-based feature-fusion network for predicting TACE treatment response in hepatocellular carcinoma.

Wang S, Zhao Y, Cai X, Wang N, Zhang Q, Qi S, Yu Z, Liu A, Yao Y

PubMed · Jun 30 2025
Accurate preoperative prediction of tumor response to transarterial chemoembolization (TACE) is crucial for individualized treatment decision-making in hepatocellular carcinoma (HCC). In this study, we propose a novel feature-fusion network based on the Convolutional Neural Networks Meet Vision Transformers (CMT) architecture, termed CMT-FFNet, to predict TACE efficacy using preoperative multiphase magnetic resonance imaging (MRI) scans. CMT-FFNet combines local feature extraction with global dependency modeling through attention mechanisms, enabling the extraction of complementary information from multiphase MRI data. Additionally, we introduce an orthogonality loss to optimize the fusion of imaging and clinical features, further enhancing the complementarity of cross-modal features. Moreover, visualization techniques were employed to highlight the key regions contributing to model decisions. Extensive experiments were conducted to evaluate the effectiveness of the proposed modules and network architecture. Experimental results demonstrate that our model effectively captures latent correlations among features extracted from multiphase MRI data and multimodal inputs, significantly improving prediction of TACE treatment response in HCC patients.
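
The abstract does not spell out the loss, but a common form of orthogonality loss penalizes cross-correlation between the two branches' embeddings; the sketch below (an assumed form, not CMT-FFNet's exact definition) illustrates the idea in PyTorch.

```python
# Minimal sketch: an assumed orthogonality loss between imaging and clinical
# feature embeddings, encouraging complementary, non-redundant representations.
import torch

def orthogonality_loss(img_feat: torch.Tensor, clin_feat: torch.Tensor) -> torch.Tensor:
    """Penalize correlation between L2-normalized imaging and clinical features."""
    img = torch.nn.functional.normalize(img_feat, dim=1)
    clin = torch.nn.functional.normalize(clin_feat, dim=1)
    # Squared Frobenius norm of the batch-averaged cross-correlation matrix.
    cross = img.T @ clin / img.shape[0]
    return (cross ** 2).sum()

img = torch.randn(16, 64, requires_grad=True)   # imaging-branch features (toy)
clin = torch.randn(16, 64, requires_grad=True)  # clinical-branch features (toy)
loss = orthogonality_loss(img, clin)
loss.backward()
print(float(loss))
```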

Multicenter Evaluation of Interpretable AI for Coronary Artery Disease Diagnosis from PET Biomarkers

Zhang, W., Kwiecinski, J., Shanbhag, A., Miller, R. J., Ramirez, G., Yi, J., Han, D., Dey, D., Grodecka, D., Grodecki, K., Lemley, M., Kavanagh, P., Liang, J. X., Zhou, J., Builoff, V., Hainer, J., Carre, S., Barrett, L., Einstein, A. J., Knight, S., Mason, S., Le, V., Acampa, W., Wopperer, S., Chareonthaitawee, P., Berman, D. S., Di Carli, M. F., Slomka, P.

medRxiv preprint · Jun 30 2025
Background: Positron emission tomography (PET)/CT for myocardial perfusion imaging (MPI) provides multiple imaging biomarkers that are often evaluated separately. We developed an artificial intelligence (AI) model integrating key clinical PET MPI parameters to improve the diagnosis of obstructive coronary artery disease (CAD). Methods: From 17,348 patients undergoing cardiac PET/CT across four sites, we retrospectively enrolled 1,664 subjects who had invasive coronary angiography within 180 days and no prior CAD. Deep learning was used to derive the coronary artery calcium score (CAC) from CT attenuation correction maps. An XGBoost machine learning model was developed using data from one site to detect CAD, defined as left main stenosis ≥50% or ≥70% stenosis in other arteries. The model utilized 10 image-derived parameters from clinical practice: CAC, stress/rest left ventricular ejection fraction, stress myocardial blood flow (MBF), myocardial flow reserve (MFR), ischemic and stress total perfusion deficit (TPD), transient ischemic dilation ratio, rate-pressure product, and sex. Generalizability was evaluated in the remaining three sites, chosen to maximize testing power and capture inter-site variability, and model performance was compared with quantitative analyses using the area under the receiver operating characteristic curve (AUC). Patient-specific predictions were explained using Shapley additive explanations (SHAP). Results: CAD prevalence was 61% in the training set (n=386) and 53% in the external testing set (n=1,278). In the external evaluation, the AI model achieved a higher AUC (0.83 [95% confidence interval (CI): 0.81-0.85]) than the clinical score assigned by experienced physicians (0.80 [0.77-0.82], p=0.02), ischemic TPD (0.79 [0.77-0.82], p<0.001), MFR (0.75 [0.72-0.78], p<0.001), and CAC (0.69 [0.66-0.72], p<0.001). The model's performance was consistent across sex, body mass index, and age groups. The top features driving the prediction were stress/ischemic TPD, CAC, and MFR. Conclusion: AI integrating perfusion, flow, and CAC scoring improves PET MPI diagnostic accuracy, offering automated and interpretable predictions for CAD diagnosis.
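
As a hedged sketch of the modeling and explanation steps, the snippet below trains an XGBoost classifier on ten synthetic stand-ins for the image-derived parameters and produces per-patient SHAP attributions; the feature names mirror the abstract, but the data and hyperparameters are illustrative.

```python
# Minimal sketch: XGBoost on PET MPI-style features with SHAP explanations.
import numpy as np
import xgboost as xgb
import shap

features = ["CAC", "stress_LVEF", "rest_LVEF", "stress_MBF", "MFR",
            "ischemic_TPD", "stress_TPD", "TID_ratio", "RPP", "sex"]
rng = np.random.default_rng(0)
X = rng.random((400, len(features)))
# Toy CAD label driven mostly by ischemic TPD and CAC, as in the abstract's findings.
y = (X[:, 5] + 0.5 * X[:, 0] + 0.2 * rng.random(400) > 0.9).astype(int)

model = xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])      # per-patient attributions
print(np.round(shap_values[0], 3))              # explanation for one patient
```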

Cognition-Eye-Brain Connection in Alzheimer's Disease Spectrum Revealed by Multimodal Imaging.

Shi Y, Shen T, Yan S, Liang J, Wei T, Huang Y, Gao R, Zheng N, Ci R, Zhang M, Tang X, Qin Y, Zhu W

PubMed · Jun 29 2025
The connection between cognition, the eye, and the brain remains inconclusive in Alzheimer's disease (AD) spectrum disorders. The aim was to explore the relationship between cognitive function, retinal biometrics, and brain alterations in the AD spectrum. Prospective. Healthy control (HC) (n = 16), subjective cognitive decline (SCD) (n = 35), mild cognitive impairment (MCI) (n = 18), and AD (n = 7) groups. 3-T, 3D T1-weighted Brain Volume (BRAVO) and resting-state functional MRI (fMRI). In all subgroups, cortical thickness was measured from BRAVO images and segmented using the Desikan-Killiany-Tourville (DKT) atlas. The fractional amplitude of low-frequency fluctuations (FALFF) and regional homogeneity (ReHo) were measured from fMRI using voxel-based analysis. The eye was imaged by optical coherence tomography angiography (OCTA), with the deep learning model FARGO segmenting the foveal avascular zone (FAZ) and retinal vessels. The FAZ area and perimeter, retinal blood vessel curvature (RBVC), and thicknesses of the retinal nerve fiber layer (RNFL) and ganglion cell layer-inner plexiform layer (GCL-IPL) were calculated. Cognition-eye-brain associations were compared across the HC group and each AD spectrum stage using multivariable linear regression. Statistical significance was set at p < 0.05 with FWE correction for fMRI and at p < 1/62 (Bonferroni-corrected) for structural analyses. Reductions of FALFF in temporal regions, especially the left superior temporal gyrus (STG), in MCI patients were significantly linked to decreased RNFL thickness and increased FAZ area. In AD patients, reduced ReHo values in occipital regions, especially the right middle occipital gyrus (MOG), were significantly associated with an enlarged FAZ area. The SCD group showed widespread cortical thickening significantly associated with all the aforementioned retinal biometrics, with notable thickening in the right fusiform gyrus (FG) and right parahippocampal gyrus (PHG) correlating with reduced GCL-IPL thickness. Brain function and structure may be associated with cognition and retinal biometrics across the AD spectrum; specifically, cognition-eye-brain connections may already be present in SCD. Evidence Level: 2. Technical Efficacy: Stage 3.
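
A minimal sketch (assumed form) of the multivariable linear regression used for such associations: regressing a regional fMRI metric on a retinal biometric with age and sex as covariates. The variable names and data are synthetic placeholders.

```python
# Minimal sketch: multivariable linear regression for an eye-brain association.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 76  # roughly the pooled sample size across subgroups
df = pd.DataFrame({
    "falff_stg": rng.normal(0, 1, n),        # FALFF in left STG (placeholder)
    "rnfl": rng.normal(100, 10, n),          # RNFL thickness, um (placeholder)
    "age": rng.normal(68, 8, n),
    "sex": rng.integers(0, 2, n),
})
model = smf.ols("falff_stg ~ rnfl + age + sex", data=df).fit()
print(model.summary().tables[1])             # coefficients and p-values
```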

MedRegion-CT: Region-Focused Multimodal LLM for Comprehensive 3D CT Report Generation

Sunggu Kyung, Jinyoung Seo, Hyunseok Lim, Dongyeong Kim, Hyungbin Park, Jimin Sung, Jihyun Kim, Wooyoung Jo, Yoojin Nam, Namkug Kim

arXiv preprint · Jun 29 2025
The recent release of RadGenome-Chest CT has significantly advanced CT-based report generation. However, existing methods primarily focus on global features, making it challenging to capture region-specific details, which may cause certain abnormalities to go unnoticed. To address this, we propose MedRegion-CT, a region-focused Multi-Modal Large Language Model (MLLM) framework, featuring three key innovations. First, we introduce Region Representative ($R^2$) Token Pooling, which utilizes a 2D-wise pretrained vision model to efficiently extract 3D CT features. This approach generates global tokens representing overall slice features and region tokens highlighting target areas, enabling the MLLM to process comprehensive information effectively. Second, a universal segmentation model generates pseudo-masks, which are then processed by a mask encoder to extract region-centric features. This allows the MLLM to focus on clinically relevant regions, using six predefined region masks. Third, we leverage segmentation results to extract patient-specific attributions, including organ size, diameter, and locations. These are converted into text prompts, enriching the MLLM's understanding of patient-specific contexts. To ensure rigorous evaluation, we conducted benchmark experiments on report generation using the RadGenome-Chest CT. MedRegion-CT achieved state-of-the-art performance, outperforming existing methods in natural language generation quality and clinical relevance while maintaining interpretability. The code for our framework is publicly available.
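
As a hedged guess at the mechanism (not the paper's implementation), mask-guided region token pooling can be sketched as masked averaging of visual features: one token per predefined region plus a global token.

```python
# Minimal sketch: mask-guided region token pooling (assumed mechanism).
import torch

def region_tokens(feat: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
    """feat: (C, H, W) visual features; masks: (R, H, W) binary region masks.
    Returns (R + 1, C): one global token followed by R region tokens."""
    c, h, w = feat.shape
    flat = feat.reshape(c, -1)                       # (C, H*W)
    m = masks.reshape(masks.shape[0], -1).float()    # (R, H*W)
    denom = m.sum(dim=1, keepdim=True).clamp(min=1)  # avoid empty-mask division
    region = (m @ flat.T) / denom                    # (R, C) masked means
    global_tok = flat.mean(dim=1, keepdim=True).T    # (1, C) global average
    return torch.cat([global_tok, region], dim=0)

feat = torch.randn(256, 16, 16)                  # slice-level visual features (toy)
masks = torch.rand(6, 16, 16) > 0.5              # six predefined region masks (toy)
print(region_tokens(feat, masks).shape)          # torch.Size([7, 256])
```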

Cardiac Measurement Calculation on Point-of-Care Ultrasonography with Artificial Intelligence

Mercaldo, S. F., Bizzo, B. C., Sadore, T., Halle, M. A., MacDonald, A. L., Newbury-Chaet, I., L'Italien, E., Schultz, A. S., Tam, V., Hegde, S. M., Mangion, J. R., Mehrotra, P., Zhao, Q., Wu, J., Hillis, J.

medRxiv preprint · Jun 28 2025
Introduction: Point-of-care ultrasonography (POCUS) enables clinicians to obtain critical diagnostic information at the bedside, especially in resource-limited settings. This information may include 2D cardiac quantitative data, although measuring such data manually can be time-consuming and dependent on user experience. Artificial intelligence (AI) can potentially automate this quantification. This study assessed the interpretation of key cardiac measurements on POCUS images by an AI-enabled device (AISAP Cardio V1.0). Methods: This retrospective diagnostic accuracy study included 200 POCUS cases from four hospitals (two in Israel and two in the United States). Each case was independently interpreted by three cardiologists and by the device for seven measurements: left ventricular (LV) ejection fraction, inferior vena cava (IVC) maximal diameter, left atrial (LA) area, right atrial (RA) area, LV end-diastolic diameter, right ventricular (RV) fractional area change, and aortic root diameter. The endpoints were the root mean square error (RMSE) of the device compared to the average cardiologist measurement (LV ejection fraction and IVC maximal diameter were primary endpoints; the other measurements were secondary endpoints). Predefined passing criteria were based on the upper bounds of the RMSE 95% confidence intervals (CIs). The inter-cardiologist RMSE was also calculated for reference. Results: The device achieved the passing criteria for six of the seven measurements. While it did not achieve the passing criterion for RV fractional area change, its RMSE was still better than the inter-cardiologist RMSE. The RMSE was 6.20% (95% CI: 5.57 to 6.83; inter-cardiologist RMSE 8.23%) for LV ejection fraction, 0.25 cm (95% CI: 0.20 to 0.29; 0.36 cm) for IVC maximal diameter, 2.39 cm² (95% CI: 1.96 to 2.82; 4.39 cm²) for LA area, 2.11 cm² (95% CI: 1.75 to 2.47; 3.49 cm²) for RA area, 5.06 mm (95% CI: 4.58 to 5.55; 4.67 mm) for LV end-diastolic diameter, 10.17% (95% CI: 9.01 to 11.33; 14.12%) for RV fractional area change, and 0.19 cm (95% CI: 0.16 to 0.21; 0.24 cm) for aortic root diameter. Discussion: The device accurately calculated these cardiac measurements, especially when benchmarked against inter-cardiologist variability. Its use could assist clinicians who utilize POCUS and support their clinical decision-making.
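
A minimal sketch (assumed approach) of the accuracy endpoint: RMSE between device and average-cardiologist measurements, with a bootstrap 95% CI of the kind the passing criteria rely on. All values here are synthetic.

```python
# Minimal sketch: RMSE endpoint with a bootstrap 95% confidence interval.
import numpy as np

def rmse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.sqrt(np.mean((a - b) ** 2)))

rng = np.random.default_rng(0)
cardiologist_mean = rng.normal(55, 8, 200)              # e.g., LVEF (%), synthetic
device = cardiologist_mean + rng.normal(0, 6, 200)      # synthetic device estimates

point = rmse(device, cardiologist_mean)
boots = []
for _ in range(2000):
    idx = rng.integers(0, 200, 200)                     # resample cases with replacement
    boots.append(rmse(device[idx], cardiologist_mean[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"RMSE {point:.2f}% (95% CI: {lo:.2f} to {hi:.2f})")
```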