
Low-Count PET Image Reconstruction with Generalized Sparsity Priors via Unrolled Deep Networks.

Fu M, Fang M, Liao B, Liang D, Hu Z, Wu FX

PubMed · Sep 29, 2025
Deep learning has demonstrated remarkable efficacy in reconstructing low-count PET (positron emission tomography) images, attracting considerable attention in the medical imaging community. However, most existing deep learning approaches have not fully exploited the unique physical characteristics of PET imaging in the design of their fidelity and prior regularization terms, constraining model performance and interpretability. In light of these considerations, we introduce an unrolled deep network based on maximum likelihood estimation under the Poisson distribution and a Generalized domain transformation for Sparsity learning, dubbed GS-Net. To solve the resulting optimization problem, we employ the Alternating Direction Method of Multipliers (ADMM) framework, integrating a modified Expectation Maximization (EM) step for the data-fidelity subproblem and a shrinkage-thresholding step for the L1-norm term. Within this unrolled network, all hyperparameters are adjusted adaptively through end-to-end learning, eliminating manual parameter tuning. In extensive experiments on simulated patient brain datasets and real patient whole-body clinical datasets at multiple count levels, our method outperforms traditional non-iterative and iterative reconstruction, deep learning-based direct reconstruction, and hybrid unrolled methods in both qualitative and quantitative evaluations.
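As a rough illustration of the optimization the abstract describes, the sketch below implements one ADMM iteration combining an ML-EM update for the Poisson fidelity term with soft-thresholding for the L1 term. All shapes and names are hypothetical, and the x-update is simplified; in GS-Net the transform W, the penalty rho, and the threshold tau would be learned end to end rather than fixed as here.

```python
import numpy as np

def soft_threshold(v, tau):
    """Shrinkage operator: the proximal map of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ml_em_update(x, A, y, eps=1e-8):
    """Classic ML-EM step for Poisson data y ~ Poisson(Ax)."""
    sens = A.T @ np.ones_like(y)               # sensitivity image
    return x / (sens + eps) * (A.T @ (y / (A @ x + eps)))

def unrolled_admm_step(x, z, u, A, y, W, rho, tau):
    """One ADMM iteration for: min -loglik_Poisson(Ax; y) + tau * ||W x||_1."""
    # x-update: approximated here by a plain EM step; a modified EM update
    # would also fold in the quadratic ADMM penalty rho/2 * ||Wx - z + u||^2.
    x = ml_em_update(x, A, y)
    # z-update: shrinkage in the sparsifying transform domain
    z = soft_threshold(W @ x + u, tau / rho)
    # dual ascent on the scaled multiplier
    u = u + W @ x - z
    return x, z, u
```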

Enhancing Spinal Cord and Canal Segmentation in Degenerative Cervical Myelopathy: The Role of Interactive Learning Models with Manual Clicks.

Han S, Oh JK, Cho W, Kim TJ, Hong N, Park SB

PubMed · Sep 29, 2025
We aimed to develop an interactive segmentation model that offers accurate and reliable segmentation of the irregularly shaped spinal cord and canal in degenerative cervical myelopathy (DCM) through manual clicks and model refinement. A dataset of 1444 frames from 294 magnetic resonance imaging records of DCM patients was used, and we developed two segmentation models for comparison: auto-segmentation and interactive segmentation. The former was based on U-Net and used a pretrained ConvNeXt-tiny encoder. For the latter, we employed an interactive segmentation model built on SimpleClick, a large model with a vision transformer backbone, together with simple fine-tuning. The segmentation performance of the two models was compared in terms of Dice score, mean intersection over union (mIoU), average precision, and Hausdorff distance. The efficiency of the interactive segmentation model was evaluated by the number of clicks required to reach a target mIoU. With 15 clicks, our model achieved better scores across all four evaluation metrics, showing improvements of +6.4%, +1.8%, +3.7%, and -53.0% for canal segmentation, and +11.7%, +6.0%, +18.2%, and -70.9% for cord segmentation, respectively. The interactive segmentation model required an average of 11.71 clicks to reach 90% mIoU on spinal canal-with-cord cases and 11.99 clicks to reach 80% mIoU on spinal cord cases. Overall, the interactive segmentation model significantly outperformed the auto-segmentation model. By incorporating simple manual inputs, the interactive model effectively identified regions of interest, particularly the complex and irregular shape of the spinal cord, demonstrating both enhanced accuracy and adaptability.
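For readers unfamiliar with click-based interaction, a common way such models (SimpleClick included) consume manual clicks is to rasterize positive and negative clicks into heatmap channels stacked with the image. The sketch below is a generic illustration under that assumption, not this paper's exact encoding.

```python
import numpy as np

def click_map(shape, clicks, sigma=5.0):
    """Render (row, col) clicks as a single Gaussian heatmap channel."""
    H, W = shape
    yy, xx = np.mgrid[0:H, 0:W]
    m = np.zeros(shape, dtype=np.float32)
    for r, c in clicks:
        m = np.maximum(m, np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / (2 * sigma ** 2)))
    return m

def build_model_input(image, pos_clicks, neg_clicks):
    """Stack the image with positive/negative click channels -> (3, H, W)."""
    return np.stack([image.astype(np.float32),
                     click_map(image.shape, pos_clicks),
                     click_map(image.shape, neg_clicks)])
```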

Evaluation of Context-Aware Prompting Techniques for Classification of Tumor Response Categories in Radiology Reports Using a Large Language Model.

Park J, Sim WS, Yu JY, Park YR, Lee YH

PubMed · Sep 29, 2025
Radiology reports are essential for medical decision-making, providing crucial data for diagnosing diseases, devising treatment plans, and monitoring disease progression. While large language models (LLMs) have shown promise in processing free-text reports, research on effective prompting techniques for radiologic applications remains limited. To evaluate the effectiveness of LLM-driven classification of radiology reports into tumor response categories (TRCs), and to optimize the model by comparing four prompt engineering techniques for this clinical classification task, we included 3062 whole-spine contrast-enhanced magnetic resonance imaging (MRI) radiology reports for prompt engineering and validation. TRCs were labeled by two radiologists using criteria modified from the Response Evaluation Criteria in Solid Tumors (RECIST) guidelines. The Llama 3 instruct model classified TRCs using four different prompts: general, In-Context Learning (ICL), Chain-of-Thought (CoT), and ICL with CoT. AUROC, accuracy, precision, recall, and F1-score were calculated for each prompt and model size (8B, 70B) on the test report dataset. The average AUROC for the ICL prompt (0.96 internal, 0.93 external) and the ICL-with-CoT prompt (0.97 internal, 0.94 external) outperformed the other prompts. Errors increased with prompt complexity, including 0.8% incomplete-sentence errors and 11.3% probability-classification inconsistencies. This study demonstrates that context-aware LLM prompts substantially improved the efficiency and effectiveness of classifying TRCs from radiology reports, despite potential intrinsic hallucinations. While further improvements are required for real-world application, our findings suggest that context-aware prompts have significant potential for segmenting complex radiology reports and enhancing oncology clinical workflows.
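To make the four prompt styles concrete, here is a minimal sketch of how an ICL-with-CoT prompt for TRC classification might be assembled. The label set (CR/PR/SD/PD) and field names are illustrative assumptions; the abstract does not give the paper's exact prompt wording or modified-RECIST categories.

```python
def build_icl_cot_prompt(report: str, examples: list[dict]) -> str:
    """Assemble an in-context-learning prompt whose exemplars include
    chain-of-thought reasoning before the final label."""
    header = (
        "You classify whole-spine MRI radiology reports into tumor response "
        "categories adapted from RECIST. Think step by step, then answer "
        "with exactly one label: CR, PR, SD, or PD.\n\n"
    )
    shots = "".join(
        f"Report: {ex['report']}\n"
        f"Reasoning: {ex['reasoning']}\n"
        f"Answer: {ex['label']}\n\n"
        for ex in examples
    )
    return header + shots + f"Report: {report}\nReasoning:"
```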

Elemental composition analysis of calcium-based urinary stones via laser-induced breakdown spectroscopy for enhanced clinical insights.

Xie H, Huang J, Wang R, Ma X, Xie L, Zhang H, Li J, Liu C

PubMed · Sep 29, 2025
The purpose of this study was to profile the elemental composition of calcium-based urinary stones using laser-induced breakdown spectroscopy (LIBS) and to develop a machine learning model that distinguishes recurrence-associated profiles by integrating elemental and clinical data. A total of 122 calcium-based stones (41 calcium oxalate, 11 calcium phosphate, 49 calcium oxalate/calcium phosphate, 8 calcium oxalate/uric acid, 13 calcium phosphate/struvite) were analyzed via LIBS. Elemental intensity ratios (H/Ca, P/Ca, Mg/Ca, Sr/Ca, Na/Ca, K/Ca) were calculated using Ca (396.847 nm) as the reference. Clinical variables (demographics, laboratory and imaging results, recurrence status) were retrospectively collected. A back propagation neural network (BPNN) model was trained using four data strategies: clinical-only, spectral principal components (PCs), combined PCs plus clinical, and merged raw spectral plus clinical data. The performance of these four models was evaluated. Sixteen stone samples from other medical centers served as an external validation set. Mg and Sr were detected in most stones. Significant correlations existed among the P, Mg, Sr, and K ratios. Recurrent patients showed elevated elemental ratios (p < 0.01), higher urine pH (p < 0.01), and lower stone CT density (p = 0.044). The BPNN model with merged spectral plus clinical data achieved optimal classification performance (test set accuracy: 94.37%), significantly outperforming clinical-only models (test set accuracy: 73.37%). External validation indicated good generalization ability. LIBS reveals ubiquitous Mg and Sr in calcium-based stones and elevated elemental ratios in recurrent cases. Integrating elemental profiles with clinical data enables high-accuracy classification of recurrence-associated profiles, providing insights for potential risk stratification in urolithiasis management.
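A minimal sketch of the feature pipeline the abstract implies: peak intensities at nominal emission lines are normalized to the Ca reference line, then merged with clinical covariates and fed to a backpropagation-trained network. Line wavelengths (other than the stated Ca 396.847 nm) and all hyperparameters here are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Nominal emission lines in nm; only Ca 396.847 is stated in the abstract.
LINES = {"H": 656.3, "P": 253.6, "Mg": 285.2, "Sr": 407.8,
         "Na": 589.0, "K": 766.5, "Ca": 396.847}

def peak_intensity(wl, spectrum, center, window=0.3):
    """Max intensity within +/- window nm of a nominal line."""
    return spectrum[np.abs(wl - center) < window].max()

def elemental_ratios(wl, spectrum):
    """Six intensity ratios (H, P, Mg, Sr, Na, K over Ca)."""
    ca = peak_intensity(wl, spectrum, LINES["Ca"])
    return np.array([peak_intensity(wl, spectrum, LINES[e]) / ca
                     for e in ("H", "P", "Mg", "Sr", "Na", "K")])

# scikit-learn's MLPClassifier trains by backpropagation, standing in for
# the paper's BPNN. X would be np.hstack([spectral, clinical]) and y the
# recurrence labels: bpnn.fit(X, y).
bpnn = make_pipeline(StandardScaler(),
                     MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000))
```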

MMRQA: Signal-Enhanced Multimodal Large Language Models for MRI Quality Assessment

Fankai Jia, Daisong Gan, Zhe Zhang, Zhaochi Wen, Chenchen Dan, Dong Liang, Haifeng Wang

arXiv preprint · Sep 29, 2025
Magnetic resonance imaging (MRI) quality assessment is crucial for clinical decision-making, yet remains challenging due to data scarcity and protocol variability. Traditional approaches face fundamental trade-offs: signal-based methods like MRIQC provide quantitative metrics but lack semantic understanding, while deep learning approaches achieve high accuracy but sacrifice interpretability. To address these limitations, we introduce the Multimodal MRI Quality Assessment (MMRQA) framework, pioneering the integration of multimodal large language models (MLLMs) with acquisition-aware signal processing. MMRQA combines three key innovations: robust metric extraction via MRQy augmented with simulated artifacts, structured transformation of metrics into question-answer pairs using Qwen, and parameter-efficient fusion through Low-Rank Adaptation (LoRA) of LLaVA-OneVision. Evaluated on MR-ART, FastMRI, and MyConnectome benchmarks, MMRQA achieves state-of-the-art performance with strong zero-shot generalization, as validated by comprehensive ablation studies. By bridging quantitative analysis with semantic reasoning, our framework generates clinically interpretable outputs that enhance quality control in dynamic medical settings.
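Since the abstract's key efficiency lever is LoRA, here is a generic low-rank adapter around a frozen linear layer, of the kind used when adapting transformer blocks such as LLaVA-OneVision's. Rank, scaling, and initialization are illustrative defaults, not the paper's settings.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base projection plus a trainable low-rank update (B @ A)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # only the adapter trains
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # start at 0
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus scaled low-rank correction x A^T B^T
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```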

MetaChest: Generalized few-shot learning of pathologies from chest X-rays

Berenice Montalvo-Lezama, Gibran Fuentes-Pineda

arXiv preprint · Sep 29, 2025
The limited availability of annotated data presents a major challenge for applying deep learning methods to medical image analysis. Few-shot learning methods aim to recognize new classes from only a small number of labeled examples. These methods are typically studied under the standard few-shot learning setting, where all classes in a task are new. However, medical applications such as pathology classification from chest X-rays often require learning new classes while simultaneously leveraging knowledge of previously known ones, a scenario more closely aligned with generalized few-shot classification. Despite its practical relevance, few-shot learning has scarcely been studied in this setting. In this work, we present MetaChest, a large-scale dataset of 479,215 chest X-rays collected from four public databases. MetaChest includes a meta-set partition specifically designed for standard few-shot classification, as well as an algorithm for generating multi-label episodes. We conduct extensive experiments evaluating both a standard transfer learning approach and an extension of ProtoNet across a wide range of few-shot multi-label classification tasks. Our results demonstrate that increasing the number of classes per episode and the number of training examples per class improves classification performance. Notably, the transfer learning approach consistently outperforms the ProtoNet extension, despite not being tailored for few-shot learning. We also show that higher-resolution images improve accuracy at the cost of additional computation, while efficient model architectures achieve comparable performance to larger models with significantly reduced resource requirements.
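The abstract mentions an algorithm for generating multi-label episodes; the sketch below shows one plausible generic scheme (n-way classes, k-shot support, q queries) in which a single multi-label image may serve several episode classes. This is an assumption-laden illustration, not MetaChest's published algorithm.

```python
import random
from collections import defaultdict

def sample_multilabel_episode(labels, n_way, k_shot, q_query, seed=None):
    """labels: per-image sets of pathology labels. Returns the episode's
    classes plus support and query image indices."""
    rng = random.Random(seed)
    classes = sorted({c for ls in labels for c in ls})
    episode_classes = rng.sample(classes, n_way)
    by_class = defaultdict(list)
    for idx, ls in enumerate(labels):
        for c in ls:
            if c in episode_classes:
                by_class[c].append(idx)
    support, query = [], []
    for c in episode_classes:            # assumes enough images per class
        pool = rng.sample(by_class[c], k_shot + q_query)
        support += pool[:k_shot]
        query += pool[k_shot:]
    return episode_classes, support, query
```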

Novel multi-task learning for Alzheimer's stage classification using hippocampal MRI segmentation, feature fusion, and nomogram modeling.

Hu W, Du Q, Wei L, Wang D, Zhang G

PubMed · Sep 29, 2025
To develop and validate a comprehensive and interpretable framework for multi-class classification of Alzheimer's disease (AD) progression stages based on hippocampal MRI, integrating radiomic, deep, and clinical features. This retrospective multi-center study included 2956 patients across four AD stages (Non-Demented, Very Mild Demented, Mild Demented, Moderate Demented). T1-weighted MRI scans were processed through a standardized pipeline involving hippocampal segmentation using four models (U-Net, nnU-Net, Swin-UNet, MedT). Radiomic features (n = 215) were extracted using the SERA platform, and deep features (n = 256) were learned using an LSTM network with attention applied to hippocampal slices. Fused features were harmonized with ComBat and filtered by ICC (≥ 0.75), followed by LASSO-based feature selection. Classification was performed using five machine learning models, including Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF), Multilayer Perceptron (MLP), and eXtreme Gradient Boosting (XGBoost). Model interpretability was addressed using SHAP, and a nomogram and decision curve analysis (DCA) were developed. Additionally, an end-to-end 3D CNN-LSTM model and two transformer-based benchmarks (Vision Transformer, Swin Transformer) were trained for comparative evaluation. MedT achieved the best hippocampal segmentation (Dice = 92.03% external). Fused features yielded the highest classification performance with XGBoost (external accuracy = 92.8%, AUC = 94.2%). SHAP identified MMSE, hippocampal volume, and APOE ε4 as top contributors. The nomogram accurately predicted early-stage AD with clinical utility confirmed by DCA. The end-to-end model performed acceptably (AUC = 84.0%) but lagged behind the fused pipeline. Statistical tests confirmed significant performance advantages for feature fusion and MedT-based segmentation. This study demonstrates that integrating radiomics, deep learning, and clinical data from hippocampal MRI enables accurate and interpretable classification of AD stages. The proposed framework is robust, generalizable, and clinically actionable, representing a scalable solution for AD diagnostics.
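As a compact illustration of the selection stage described above (harmonization and ICC filtering followed by LASSO), the sketch below keeps features with nonzero cross-validated LASSO coefficients before fitting the downstream classifier. Treating the ordinal stage label as a numeric target for LASSO is a simplifying assumption on our part.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def lasso_select(X, y, seed=0):
    """Indices of features with nonzero LASSO coefficients (CV-chosen alpha)."""
    Xs = StandardScaler().fit_transform(X)
    lasso = LassoCV(cv=5, random_state=seed).fit(Xs, y)
    return np.flatnonzero(lasso.coef_)

# X_fused = np.hstack([radiomic, deep, clinical])  # after ComBat + ICC >= 0.75
# keep = lasso_select(X_fused, y_stage)
# An XGBoost classifier (xgboost.XGBClassifier) would then be fit on
# X_fused[:, keep] for the four-stage prediction.
```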

BALR-SAM: Boundary-Aware Low-Rank Adaptation of SAM for Resource-Efficient Medical Image Segmentation

Zelin Liu, Sicheng Dong, Bocheng Li, Yixuan Yang, Jiacheng Ruan, Chenxu Zhou, Suncheng Xiang

arXiv preprint · Sep 29, 2025
Vision foundation models like the Segment Anything Model (SAM), pretrained on large-scale natural image datasets, often struggle in medical image segmentation due to a lack of domain-specific adaptation. In clinical practice, fine-tuning such models efficiently for medical downstream tasks with minimal resource demands, while maintaining strong performance, is challenging. To address these issues, we propose BALR-SAM, a boundary-aware low-rank adaptation framework that enhances SAM for medical imaging. It combines three tailored components: (1) a Complementary Detail Enhancement Network (CDEN) using depthwise separable convolutions and multi-scale fusion to capture boundary-sensitive features essential for accurate segmentation; (2) low-rank adapters integrated into SAM's Vision Transformer blocks to optimize feature representation and attention for medical contexts, while simultaneously significantly reducing the parameter space; and (3) a low-rank tensor attention mechanism in the mask decoder, cutting memory usage by 75% and boosting inference speed. Experiments on standard medical segmentation datasets show that BALR-SAM, without requiring prompts, outperforms several state-of-the-art (SOTA) methods, including fully fine-tuned MedSAM, while updating just 1.8% (11.7M) of its parameters.
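To ground component (1), here is a generic depthwise-separable convolution block of the kind the CDEN description suggests: a per-channel spatial convolution followed by a 1x1 pointwise mix. The activation and exact layout are assumptions, and the paper's multi-scale fusion is omitted.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise (per-channel) conv followed by a 1x1 pointwise conv."""
    def __init__(self, cin: int, cout: int, k: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(cin, cin, k, padding=k // 2, groups=cin)
        self.pointwise = nn.Conv2d(cin, cout, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))
```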

End-to-end Spatiotemporal Analysis of Color Doppler Echocardiograms: Application for Rheumatic Heart Disease Detection.

Roshanitabrizi P, Nath V, Brown K, Broudy TG, Jiang Z, Parida A, Rwebembera J, Okello E, Beaton A, Roth HR, Sable CA, Linguraru MG

PubMed · Sep 29, 2025
Rheumatic heart disease (RHD) represents a significant global health challenge, disproportionately affecting over 40 million people in low- and middle-income countries. Early detection through color Doppler echocardiography is crucial for treating RHD, but it requires specialized physicians who are often scarce in resource-limited settings. To address this disparity, artificial intelligence (AI)-driven tools for RHD screening can provide scalable, autonomous solutions to improve access to critical healthcare services in underserved regions. This paper introduces RADAR (Rapid AI-Assisted Echocardiography Detection and Analysis of RHD), a novel and generalizable AI approach for end-to-end spatiotemporal analysis of color Doppler echocardiograms, aimed at detecting early RHD in resource-limited settings. RADAR identifies key imaging views and employs convolutional neural networks to analyze diagnostically relevant phases of the cardiac cycle. It also localizes essential anatomical regions and examines blood flow patterns. It then integrates all findings into a cohesive analytical framework. RADAR was trained and validated on 1,022 echocardiogram videos from 511 Ugandan children, acquired using standard portable ultrasound devices. An independent set of 318 cases, acquired using a handheld ultrasound device with diverse imaging characteristics, was also tested. On the validation set, RADAR outperformed existing methods, achieving an average accuracy of 0.92, sensitivity of 0.94, and specificity of 0.90. In independent testing, it maintained high, clinically acceptable performance, with an average accuracy of 0.79, sensitivity of 0.87, and specificity of 0.70. These results highlight RADAR's potential to improve RHD detection and promote health equity for vulnerable children by enhancing timely, accurate diagnoses in underserved regions.

Predicting pathological complete response to chemoradiotherapy using artificial intelligence-based magnetic resonance imaging radiomics in esophageal squamous cell carcinoma.

Hirata A, Hayano K, Tochigi T, Kurata Y, Shiraishi T, Sekino N, Nakano A, Matsumoto Y, Toyozumi T, Uesato M, Ohira G

PubMed · Sep 28, 2025
Advanced esophageal squamous cell carcinoma (ESCC) has an extremely poor prognosis. Preoperative chemoradiotherapy (CRT) can significantly prolong survival, especially in those who achieve pathological complete response (pCR). However, the pretherapeutic prediction of pCR remains challenging. To predict pCR and survival in ESCC patients undergoing CRT using an artificial intelligence (AI)-based diffusion-weighted magnetic resonance imaging (DWI-MRI) radiomics model. We retrospectively analyzed 70 patients with ESCC who underwent curative surgery following CRT. For each patient, pre-treatment tumors were semi-automatically segmented in three dimensions from DWI-MRI images (b = 0 and 1000 s/mm²), and a total of 76 radiomics features were extracted from each segmented tumor. Using these features as explanatory variables and pCR as the objective variable, machine learning models for predicting pCR were developed using AutoGluon, an automated machine learning library, and validated by stratified double cross-validation. pCR was achieved in 15 patients (21.4%). Apparent diffusion coefficient (ADC) skewness demonstrated the highest predictive performance [area under the curve (AUC) = 0.77]. Gray-level co-occurrence matrix (GLCM) entropy (b = 1000 s/mm²) was an independent prognostic factor for relapse-free survival (RFS) (hazard ratio = 0.32, P = 0.009). In Kaplan-Meier analysis, patients with high GLCM entropy showed significantly better RFS (P < 0.001, log-rank). The best-performing machine learning model achieved an AUC of 0.85. The predicted pCR-positive group showed significantly better RFS than the predicted pCR-negative group (P = 0.007, log-rank). AI-based radiomics analysis of DWI-MRI images in ESCC has the potential to accurately predict the effect of CRT before treatment and contribute to constructing optimal treatment strategies.
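For context on the headline feature, the ADC map underlying the skewness statistic follows from the mono-exponential DWI model S(b) = S0 · exp(-b · ADC), and skewness is a first-order statistic over voxels in the tumor mask. The sketch below assumes exactly two b-values, as in the abstract.

```python
import numpy as np

def adc_map(s_b0, s_b1000, b=1000.0, eps=1e-6):
    """ADC in mm^2/s from b = 0 and b = 1000 s/mm^2 images:
    S(b) = S0 * exp(-b * ADC)  =>  ADC = ln(S0 / S_b) / b."""
    return np.log((s_b0 + eps) / (s_b1000 + eps)) / b

def skewness(voxels):
    """Fisher skewness of ADC values inside the tumor mask."""
    v = voxels - voxels.mean()
    return (v ** 3).mean() / (v ** 2).mean() ** 1.5
```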