Chen H, Han J, Huang H, He Q, Ren X, Yu F, Chang C, Ding X, Luo Q

PubMed · Oct 15, 2025
Multiple myeloma (MM) is a heterogeneous malignancy whose prognosis is significantly affected by high-risk cytogenetic abnormalities (HRCAs). Traditional detection using fluorescence in situ hybridisation is invasive and limited in capturing disease heterogeneity. We aimed to develop and validate a radiomics model based on pretreatment [18F]fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) imaging to non-invasively predict HRCAs in newly diagnosed MM patients. Among the 42 candidate models, the Decision Tree classifier utilizing PET active-lesion features demonstrated optimal performance in the validation cohort, exhibiting excellent predictive ability (area under the curve [AUC] = 0.89) and significantly outperforming the PET metrics model (AUC = 0.84) and the clinical model (AUC = 0.74). SHapley Additive exPlanations (SHAP) analysis identified the PET-derived feature as the most important contributor to the model's predictive capacity. The model stratified patients into high-risk and low-risk groups, with the high-risk group exhibiting significantly worse PFS and OS (median PFS: high-risk 24.5 months vs. low-risk 29 months, p = 0.0360; median OS: high-risk 33.5 months vs. low-risk 50 months, p = 0.0023). As a non-invasive imaging biomarker, PET/CT radiomics holds potential for predicting high-risk cytogenetic status and facilitating prognostic stratification. Further large-scale, multi-center prospective validation is essential to confirm its utility for personalized therapeutic decision-making in MM.
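
As a rough illustration of the pipeline described (a decision tree classifier on tabular radiomics features, AUC evaluation, and feature attribution), the sketch below uses synthetic data; the feature matrix, labels, cohort size, and the use of permutation importance as a simple stand-in for the paper's SHAP analysis are all assumptions for demonstration.

```python
# Illustrative sketch only: synthetic stand-ins for PET radiomics features and HRCA labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n_patients, n_features = 200, 30          # hypothetical cohort size / radiomics feature count
X = rng.normal(size=(n_patients, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n_patients) > 0).astype(int)  # HRCA yes/no

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1])
print(f"validation AUC: {auc:.2f}")

# Feature attribution: the paper uses SHAP (e.g., a tree explainer); permutation importance
# is used here only as a dependency-light proxy for ranking contributing features.
imp = permutation_importance(clf, X_val, y_val, scoring="roc_auc", n_repeats=20, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:5]
print("top-ranked feature indices:", top)
```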

Saadeh Z, Demirel N, Horst KK, Iyer VN, Koo CW, Larson NB, McCollough CH, Oo D, Tandon YK, Thorne JE, Zhou Z, Yu L, Hull NC

PubMed · Oct 15, 2025
To determine the feasibility of reduced-dose chest computed tomographic angiography (CTA) with convolutional neural network (CNN) denoising for detecting pulmonary arteriovenous malformations (pAVMs) in children with hereditary hemorrhagic telangiectasia (cwHHT). Fifteen cwHHT underwent a chest CTA (i.e., a controlled "study" dose). Noise was inserted to simulate a quarter-dose (QD) exam. Images were reconstructed using iterative reconstruction (IR) and our self-trained CNN denoising model. For each case, 3 sets of images were created: study dose (SD)+IR, QD+IR, and QD+CNN. Two thoracic radiologists independently scored each set to assess quality, spatial resolution, artifacts, and the presence of pAVMs using 4-level ordinal scales. Quantitative assessments of image quality were performed using contrast-to-noise ratios (CNRs), with comparisons made between the experimental conditions. Thirteen of the 15 recruited patients (mean age: 9.3±4.5 y) were positive for pAVM by transthoracic contrast echocardiography. The sensitivities using QD+CNN were 0.85 and 1.00 for readers 1 and 2, respectively, compared with 0.69 and 0.84 using QD+IR and 0.85 and 0.92 for SD+IR. Inter-reader agreement for pAVM detection using QD+CNN was moderate (kappa=0.59, P=0.012). The subjective assessments for QD+CNN were comparable to those for the SD technique. Regression analysis of reader scores revealed improved quality for QD+CNN versus QD+IR (P=0.001). Similarly, the QD+CNN condition demonstrated the highest CNRs. Reduced-dose chest CTA with CNN denoising provides sensitivity comparable to standard-dose CTA and high CNRs for the detection of pAVMs in cwHHT.
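
For readers unfamiliar with the quantitative metric above, contrast-to-noise ratio is typically computed from signal- and background-ROI statistics. Below is a minimal sketch assuming a background-standard-deviation noise definition and synthetic HU values; the exact ROI placement and noise definition used in the study are not specified here and are assumptions.

```python
# Minimal CNR sketch on a toy CT-like image; ROI masks and noise definition are assumptions.
import numpy as np

def cnr(image: np.ndarray, roi_signal: np.ndarray, roi_background: np.ndarray) -> float:
    """ROI arguments are boolean masks with the same shape as `image` (HU values)."""
    signal = image[roi_signal].mean()
    background = image[roi_background].mean()
    noise = image[roi_background].std()        # background standard deviation as noise
    return abs(signal - background) / noise

rng = np.random.default_rng(0)
img = rng.normal(40.0, 15.0, size=(256, 256))        # background ~40 HU, sigma ~15 HU
yy, xx = np.mgrid[:256, :256]
vessel = (yy - 128) ** 2 + (xx - 128) ** 2 < 10 ** 2  # bright vessel-like disk
img[vessel] += 300.0
background = (yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2
print(f"CNR: {cnr(img, vessel, background):.1f}")
```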

Ismayilov R, Altundag O, Gencoglu EA, Aktas A, Alparslan S, Ozcicek A, Turhanoglu D, Oguz A, Farzaliyeva A, Ramazanoglu MN, Kocak M, Akcali Z

PubMed · Oct 15, 2025
Accurate and timely assessment of immunotherapy response is vital for optimizing lung cancer management. This study evaluates the efficacy of large language models (LLMs) in automating response assessment using positron emission tomography/computed tomography (PET/CT) reports based on the European Organization for Research and Treatment of Cancer (EORTC) criteria. An effective prompting strategy was developed using Google Gemini 2.5 Pro Experimental 03-25, with explicit instructions for applying the EORTC criteria via few-shot prompting. This prompt was then tested with both Gemini 2.5 Pro and OpenAI ChatGPT 4o to assess cross-model performance. Pre- and post-immunotherapy PET/CT reports in text format from 36 lung cancer patients were independently classified by the LLMs and an experienced nuclear medicine specialist. Performance metrics, including precision, recall, F1-score, and support, were calculated for each response category. Inter-rater agreement was assessed using Cohen's Kappa. The nuclear medicine specialist classified 5, 21, 6, and 4 reports as complete metabolic response (CMR), progressive metabolic disease (PMD), partial metabolic response (PMR), and stable metabolic disease (SMD), respectively, while Gemini 2.5 Pro classified 4, 21, 8, and 3 reports into these categories. Gemini achieved an overall accuracy of 94% and demonstrated strong agreement with the expert (overall Cohen's Kappa: 0.907). F1-scores were 0.86 for PMR and SMD, 0.89 for CMR, and 1.00 for PMD, with per-label Kappa scores ranging from 0.824 (PMR) to 1.00 (PMD). In comparison, ChatGPT 4o achieved perfect agreement with the expert across all 36 cases (accuracy = 100%, Cohen's Kappa = 1.000). When guided by a structured and task-specific prompt, both Gemini 2.5 Pro and ChatGPT 4o demonstrated strong capability for automating accurate immunotherapy response assessment in lung cancer using PET/CT reports. These results underscore the potential of LLMs to streamline clinical workflows and improve efficiency. Validation with larger data sets is warranted to support clinical implementation.
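
The agreement analysis described (per-class precision/recall/F1 and Cohen's Kappa over the four EORTC categories) can be reproduced with standard tooling; the sketch below uses placeholder label sequences whose marginal counts match those reported for the expert and Gemini, while the case-level pairing of expert and model labels is an assumption chosen only to illustrate the computation.

```python
# Sketch of expert-vs-LLM agreement metrics over EORTC response categories (synthetic labels).
from sklearn.metrics import classification_report, cohen_kappa_score

categories = ["CMR", "PMD", "PMR", "SMD"]
# Expert: 5 CMR, 21 PMD, 6 PMR, 4 SMD; model: 4 CMR, 21 PMD, 8 PMR, 3 SMD (36 cases total).
expert = ["PMD"] * 21 + ["CMR"] * 5 + ["PMR"] * 6 + ["SMD"] * 4
llm    = ["PMD"] * 21 + ["CMR"] * 4 + ["PMR"] + ["PMR"] * 6 + ["SMD"] * 3 + ["PMR"]

print(classification_report(expert, llm, labels=categories))
print("Cohen's kappa:", round(cohen_kappa_score(expert, llm), 3))
```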

Chunyu H, Chen Y, Xue S, Zhang X, Miao Y, Guo R, Li B, Shi K

PubMed · Oct 15, 2025
Efforts to reduce the radiation burden of PET/CT have driven the increasing development of AI-based CT-less PET imaging techniques. However, comprehensive clinical evaluations of these approaches remain limited. This study aimed to rigorously assess whether deep learning (DL)-based PET reconstruction can eliminate the need for CT-derived attenuation and scatter correction while maintaining image quality sufficient for reliable clinical diagnosis. In this dual-center retrospective analysis, raw PET/CT data from 359 patients were evaluated across 4 scanners and 4 tracers. Each dataset underwent four reconstruction approaches: (1) CT-based attenuation and scatter correction (CT-ASC, reference standard); (2) conventional 2D-DL; (3) conventional 3D-DL; and (4) our novel Decomposition-based DL algorithm. Diagnostic quality of reconstructed images was systematically assessed via visual scoring (5-point Likert scale), diagnostic accuracy (lesion-based false-positive/negative rates), and semi-quantitative metrics (SUVmax consistency). Visual analysis demonstrated the superior performance of Decomposition-based DL compared to conventional 2D-DL and 3D-DL (p < 0.001 for all comparisons). Furthermore, the proposed method exhibited the lowest false-negative and false-positive rates (0.56% false positives with SIEMENS Vision 600; zero rates in other cases). Semi-quantitative analysis showed that although Decomposition-based DL did not consistently yield the lowest mean absolute percentage error values compared to controls, it maintained strong agreement with CT-ASC in most cases. This dual-center study demonstrates that decomposition-based DL CT-free PET imaging outperforms conventional DL methods, achieving diagnostic accuracy comparable to CT-based attenuation correction in most cases. This clinical evaluation provides valuable insights to guide further methodological development and support clinical translation of CT-free PET imaging.
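
For the semi-quantitative endpoint mentioned (SUVmax consistency reported as mean absolute percentage error against the CT-ASC reference), here is a minimal sketch; the per-lesion SUVmax values are synthetic placeholders, and the exact error definition used in the study is an assumption.

```python
# Sketch: mean absolute percentage error (MAPE) of lesion SUVmax, CT-free vs. CT-ASC reference.
import numpy as np

def suvmax_mape(suv_reference: np.ndarray, suv_test: np.ndarray) -> float:
    return float(np.mean(np.abs(suv_test - suv_reference) / suv_reference) * 100.0)

suv_ct_asc = np.array([6.2, 3.8, 12.5, 4.1, 9.7])   # per-lesion SUVmax, reference (synthetic)
suv_ctfree = np.array([6.0, 4.0, 12.1, 4.3, 9.2])   # per-lesion SUVmax, DL reconstruction (synthetic)
print(f"SUVmax MAPE: {suvmax_mape(suv_ct_asc, suv_ctfree):.1f}%")
```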

Sobeh T, Shrot S, Bakon M, Yaniv G, Orion D, Konen E, Hoffmann C

PubMed · Oct 15, 2025
The early identification of intracranial aneurysms (IAs) enables risk stratification and the timely initiation of optimal management. This study aimed to identify patients with missed aneurysms for follow-up and possible treatment, and to evaluate the effectiveness of a commercial deep learning algorithm in retrospectively detecting missed IAs on CTA. All consecutive head CTA studies of adult patients performed at a single referral center between February 18, 2020, and July 31, 2022, were retrospectively collected. A machine learning algorithm using natural language processing (NLP) classified radiology reports as positive or negative for aneurysms, and a convolutional neural network (CNN) algorithm analyzed the imaging data. Concordant results with the original reports were accepted as ground truth, while discordant cases were reviewed by three neuroradiologists, with majority voting determining the reference standard. A total of 2,615 head CTA studies were analyzed. The algorithm flagged 34 suspected missed aneurysms, with 67% (23/34) confirmed as true positives by at least two neuroradiologists. This improved detection by 20.9% (23/110), or 0.88% of all studies. Most missed aneurysms were small (≤ 3 mm). There were 4 false negatives, resulting in a sensitivity of 96.36%, specificity of 99.56%, positive predictive value of 90.6%, and negative predictive value of 99.84%. This study highlights the potential of deep learning systems to detect missed intracranial aneurysms. Although the missed aneurysms in this cohort were predominantly small, follow-up or diagnostic digital subtraction angiography may still be warranted, depending on clinical characteristics and risk factors for aneurysm rupture.
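
The four screening metrics quoted above follow directly from a 2x2 confusion table; the sketch below shows the formulas, with confusion counts that are illustrative placeholders rather than the study's underlying table.

```python
# Sketch of sensitivity, specificity, PPV, and NPV from per-study confusion counts.
def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts only (not the study's actual 2x2 table).
print(screening_metrics(tp=106, fp=11, tn=2494, fn=4))
```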

Li S, Liu X, Qian W, Zhang Y, Lu Q, Liu P

PubMed · Oct 15, 2025
The aim of this study was to explore the relationship between femoral head diameter (FHD) and the degree of subluxation in developmental dysplasia of the hip (DDH) patients, and to develop a machine-learning model for predicting acetabular component size in total hip arthroplasty (THA) from demographic data and FHD. The FHD of 469 DDH patients from the Longwood Valley medical database was measured, after excluding those with severe femoral head destruction, bone grafting, or augments. Its distribution and differences across Crowe and Hartofilakidis classifications were also assessed. Five machine-learning algorithms were developed to predict the size of the acetabular component, and the best model was selected according to the mean squared error (MSE), root mean squared error (RMSE), and R-squared values. The accuracy of the best model's cup size prediction was validated by comparing it with acetate templating and CT-based planning in a consecutive cohort from an independent institution. The FHD gradually decreased with increasing Crowe and Hartofilakidis classifications. The Pearson correlation coefficient between FHD and the size of the acetabular component was 0.60, indicating a moderate correlation. In the test set, the random forest model outperformed the other four models in terms of MSE (0.904), RMSE (0.951), and R-squared (0.919). In external validation, the accuracy of this model was not significantly different from CT-based planning (80.0% vs 87.5%, p > 0.05), but it outperformed acetate templating (80.0% vs 52.5%, p < 0.05), particularly for Crowe Type IV (81.8% vs 27.3%, p < 0.05). The FHD decreases with increasing degree of subluxation in DDH patients. The machine-learning model constructed by combining demographic parameters and FHD demonstrates significantly higher accuracy in acetabular component size planning compared to templating methods, and can serve as an effective auxiliary tool or an alternative when CT is unavailable.
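
A minimal sketch of the best-performing approach described (a random forest regressor on demographics plus FHD, scored with MSE/RMSE/R-squared) is shown below; the feature set, synthetic data, and the linear relation generating the target are assumptions for demonstration only.

```python
# Illustrative sketch: predicting acetabular cup size from demographics + FHD (synthetic data).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "age": rng.integers(30, 80, n),
    "sex": rng.integers(0, 2, n),          # 0 = female, 1 = male (hypothetical encoding)
    "height_cm": rng.normal(165, 8, n),
    "weight_kg": rng.normal(65, 12, n),
    "fhd_mm": rng.normal(42, 5, n),         # femoral head diameter
})
cup_size = 30 + 0.45 * df["fhd_mm"] + 0.03 * df["height_cm"] + rng.normal(0, 1.0, n)

X_tr, X_te, y_tr, y_te = train_test_split(df, cup_size, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

mse = mean_squared_error(y_te, pred)
print(f"MSE={mse:.3f}  RMSE={np.sqrt(mse):.3f}  R^2={r2_score(y_te, pred):.3f}")
```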

Connor Lane, Daniel Z. Kaplan, Tanishq Mathew Abraham, Paul S. Scotti

arXiv preprint · Oct 15, 2025
A key question for adapting modern deep learning architectures to functional MRI (fMRI) is how to represent the data for model input. To bridge the modality gap between fMRI and natural images, we transform the 4D volumetric fMRI data into videos of 2D fMRI activity flat maps. We train Vision Transformers on 2.3K hours of fMRI flat-map videos from the Human Connectome Project using the spatiotemporal masked autoencoder (MAE) framework. We observe that masked fMRI modeling performance improves with dataset size according to a strict power-law scaling. Downstream classification benchmarks show that our model learns rich representations supporting both fine-grained state decoding across subjects and subject-specific trait decoding across changes in brain state. This work is part of an ongoing open science project to build foundation models for fMRI data. Our code and datasets are available at https://github.com/MedARC-AI/fmri-fm.
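
The scaling-law claim is usually checked by fitting a line in log-log space between training-set size and masked-modeling loss and reading off the exponent; the (hours, loss) pairs below are made-up placeholders, not the paper's measurements.

```python
# Sketch: fit log(loss) = a + b * log(hours) to verify power-law scaling (synthetic values).
import numpy as np

hours = np.array([10, 30, 100, 300, 1000, 2300], dtype=float)   # fMRI training hours
loss  = np.array([0.52, 0.44, 0.37, 0.31, 0.26, 0.23])          # masked-modeling loss (synthetic)

b, a = np.polyfit(np.log(hours), np.log(loss), deg=1)
print(f"fitted scaling exponent: {b:.3f} (loss ~ hours^{b:.3f})")
```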

Mikolaj Walczak, Uttej Kallakuri, Edward Humes, Xiaomin Lin, Tinoosh Mohsenin

arXiv preprint · Oct 15, 2025
Vision Transformers (ViTs) have demonstrated strong capabilities in interpreting complex medical imaging data. However, their significant computational and memory demands pose challenges for deployment in real-time, resource-constrained mobile and wearable devices used in clinical environments. We introduce BiTMedViT, a new class of Edge ViTs serving as medical AI assistants that perform structured analysis of medical images directly on the edge. BiTMedViT utilizes ternary-quantized linear layers tailored for medical imaging and combines a training procedure with multi-query attention, preserving stability under ternary weights with low-precision activations. Furthermore, BiTMedViT employs task-aware distillation from a high-capacity teacher to recover accuracy lost due to extreme quantization. We also present a pipeline that maps the ternarized ViTs to a custom CUDA kernel for efficient memory bandwidth utilization and latency reduction on the Jetson Orin Nano. Finally, BiTMedViT achieves 86% diagnostic accuracy (89% SOTA) on MedMNIST across 12 datasets, while reducing model size by 43x, memory traffic by 39x, and enabling 16.8 ms inference at an energy efficiency up to 41x that of SOTA models at 183.62 GOPs/J on the Orin Nano. Our results demonstrate a practical and scientifically grounded route for extreme-precision medical imaging ViTs deployable on the edge, narrowing the gap between algorithmic advances and deployable clinical tools.
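
The core compression idea, ternarizing linear-layer weights to three levels, can be sketched in a few lines; the thresholding rule and per-tensor scale below follow the generic ternary-weight-network heuristic and are not necessarily BiTMedViT's exact procedure.

```python
# Minimal sketch of ternarizing a linear layer's weights to {-alpha, 0, +alpha} (TWN-style heuristic).
import torch

def ternarize(weight: torch.Tensor) -> torch.Tensor:
    delta = 0.7 * weight.abs().mean()                              # sparsity threshold
    mask = (weight.abs() > delta).float()
    alpha = (weight.abs() * mask).sum() / mask.sum().clamp(min=1)  # per-tensor scale
    return alpha * torch.sign(weight) * mask

layer = torch.nn.Linear(768, 768)
with torch.no_grad():
    w_t = ternarize(layer.weight)
print("unique ternary levels:", torch.unique(w_t).numel())          # expect 3: -alpha, 0, +alpha
```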

Hoda Kalabizadeh, Ludovica Griffanti, Pak-Hei Yeung, Ana I. L. Namburete, Nicola K. Dinsdale, Konstantinos Kamnitsas

arXiv preprint · Oct 15, 2025
Deep learning models for medical image segmentation often struggle when deployed across different datasets due to domain shifts - variations in both image appearance, known as style, and population-dependent anatomical characteristics, referred to as content. This paper presents a novel unsupervised domain adaptation framework that directly addresses domain shifts encountered in cross-domain hippocampus segmentation from MRI, with specific emphasis on content variations. Our approach combines efficient style harmonisation through z-normalisation with a bidirectional deformable image registration (DIR) strategy. The DIR network is jointly trained with segmentation and discriminator networks to guide the registration with respect to a region of interest and generate anatomically plausible transformations that align source images to the target domain. We validate our approach through comprehensive evaluations on both a synthetic dataset using Morpho-MNIST (for controlled validation of core principles) and three MRI hippocampus datasets representing populations with varying degrees of atrophy. Across all experiments, our method outperforms existing baselines. For hippocampus segmentation, when transferring from young, healthy populations to clinical dementia patients, our framework achieves up to 15% relative improvement in Dice score compared to standard augmentation methods, with the largest gains observed in scenarios with substantial content shift. These results highlight the efficacy of our approach for accurate hippocampus segmentation across diverse populations.
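
The style-harmonisation step mentioned, intensity z-normalisation, amounts to shifting and scaling each scan to zero mean and unit variance; restricting the statistics to a foreground mask, as in the sketch below, is an assumption about the exact preprocessing.

```python
# Sketch of per-scan intensity z-normalisation for style harmonisation (synthetic MRI volume).
import numpy as np

def z_normalise(volume, mask=None):
    voxels = volume[mask] if mask is not None else volume.ravel()
    return (volume - voxels.mean()) / (voxels.std() + 1e-8)

scan = np.random.default_rng(0).gamma(2.0, 150.0, size=(64, 64, 64))  # toy MRI intensities
foreground = scan > scan.mean()
print(z_normalise(scan, foreground)[foreground].mean().round(6))       # ~0 within the mask
```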

Ana Lawry Aguila, Peirong Liu, Marina Crespo Aguirre, Juan Eugenio Iglesias

arXiv preprint · Oct 15, 2025
Generating healthy counterfactuals from pathological images holds significant promise in medical imaging, e.g., in anomaly detection or for application of analysis tools that are designed for healthy scans. These counterfactuals should represent what a patient's scan would plausibly look like in the absence of pathology, preserving individual anatomical characteristics while modifying only the pathological regions. Denoising diffusion probabilistic models (DDPMs) have become popular methods for generating healthy counterfactuals of pathology data. Typically, this involves training on solely healthy data with the assumption that a partial denoising process will be unable to model disease regions and will instead reconstruct a closely matched healthy counterpart. More recent methods have incorporated synthetic pathological images to better guide the diffusion process. However, it remains challenging to guide the generative process in a way that effectively balances the removal of anomalies with the retention of subject-specific features. To solve this problem, we propose a novel application of denoising diffusion bridge models (DDBMs) - which, unlike DDPMs, condition the diffusion process not only on the initial point (i.e., the healthy image), but also on the final point (i.e., a corresponding synthetically generated pathological image). Treating the pathological image as a structurally informative prior enables us to generate counterfactuals that closely match the patient's anatomy while selectively removing pathology. The results show that our DDBM outperforms previously proposed diffusion models and fully supervised approaches at segmentation and anomaly detection tasks.
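
The distinguishing idea, conditioning the diffusion process on both endpoints, can be illustrated with a toy Brownian-bridge forward process pinned at the healthy image (t=0) and the synthetic pathological image (t=1); this is only a conceptual sketch, and the actual DDBM noise schedule and training objective are more involved.

```python
# Conceptual sketch of a bridge forward process pinned at both endpoints (toy images).
import numpy as np

def bridge_sample(x0: np.ndarray, xT: np.ndarray, t: float, sigma: float = 0.1) -> np.ndarray:
    """Sample x_t from a Brownian bridge between x0 (t=0) and xT (t=1)."""
    mean = (1.0 - t) * x0 + t * xT
    std = sigma * np.sqrt(t * (1.0 - t))
    return mean + std * np.random.default_rng(0).normal(size=x0.shape)

healthy = np.zeros((8, 8))           # placeholder "healthy" image
pathological = healthy.copy()
pathological[3:5, 3:5] = 1.0         # placeholder synthetic lesion
print(bridge_sample(healthy, pathological, t=0.5).round(2))
```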