Page 221 of 3933923 results

Derivation and validation of an artificial intelligence-based plaque burden safety cut-off for long-term acute coronary syndrome from coronary computed tomography angiography.

Bär S, Knuuti J, Saraste A, Klén R, Kero T, Nabeta T, Bax JJ, Danad I, Nurmohamed NS, Jukema RA, Knaapen P, Maaniitty T

PubMed · Jun 30, 2025
Artificial intelligence (AI) has enabled accurate and fast plaque quantification from coronary computed tomography angiography (CCTA). However, AI detects any coronary plaque in up to 97% of patients. To avoid overdiagnosis, a plaque burden safety cut-off for future coronary events is needed. Percent atheroma volume (PAV) was quantified with AI-guided quantitative computed tomography in a blinded fashion. Safety cut-off derivation was performed in the Turku CCTA registry (Finland) and pre-defined as ≥90% sensitivity for acute coronary syndrome (ACS). External validation was performed in the Amsterdam CCTA registry (the Netherlands). In the derivation cohort, 100/2271 (4.4%) patients experienced ACS (median follow-up 6.9 years). A threshold of PAV ≥ 2.6% was derived with 90.0% sensitivity and a negative predictive value (NPV) of 99.0%. In the validation cohort, 27/568 (4.8%) experienced ACS (median follow-up 6.7 years), with PAV ≥ 2.6% showing 92.6% sensitivity and 99.0% NPV for ACS. In the derivation cohort, 45.2% of patients had PAV < 2.6% vs. 4.3% with PAV 0% (no plaque) (P < 0.001) (validation cohort: 34.3% PAV < 2.6% vs. 2.6% PAV 0%; P < 0.001). Patients with PAV ≥ 2.6% had higher adjusted ACS rates in the derivation [hazard ratio (HR) 4.65, 95% confidence interval (CI) 2.33-9.28, P < 0.001] and validation cohorts (HR 7.31, 95% CI 1.62-33.08, P = 0.010), respectively. This study suggests that PAV below 2.6% quantified by AI is associated with low ACS risk in two independent patient cohorts. This cut-off may be helpful for clinical application of AI-guided CCTA analysis, which detects any plaque in up to 96-97% of patients.
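The cut-off metrics reported above (sensitivity and NPV at a fixed PAV threshold) can be illustrated with a minimal sketch. The function name, the patient values, and the outcomes below are hypothetical, not data from the study:

```python
def threshold_metrics(pav, events, cutoff):
    """Sensitivity and NPV when patients with PAV >= cutoff are flagged high risk.

    pav    : per-patient percent atheroma volume values
    events : True where the patient later experienced ACS
    """
    tp = sum(1 for p, e in zip(pav, events) if p >= cutoff and e)
    fn = sum(1 for p, e in zip(pav, events) if p < cutoff and e)
    tn = sum(1 for p, e in zip(pav, events) if p < cutoff and not e)
    sensitivity = tp / (tp + fn)          # events correctly flagged
    npv = tn / (tn + fn)                  # event-free among those below cutoff
    return sensitivity, npv
```

A safety cut-off pre-defined at ≥90% sensitivity, as in the derivation cohort, amounts to choosing the smallest `cutoff` at which `sensitivity` stays at or above 0.90.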

Genetically Optimized Modular Neural Networks for Precision Lung Cancer Diagnosis

Agrawal, V. L., Agrawal, T.

medRxiv preprint · Jun 30, 2025
Lung cancer remains one of the leading causes of cancer mortality, and while low-dose CT screening reduces mortality, radiological detection is challenging due to the growing shortage of radiologists. Artificial intelligence can significantly improve the procedure and decrease the overall workload of the healthcare department. Building on existing applications of genetic algorithms, this study aims to create a novel algorithm for high-precision lung cancer diagnosis. We included a total of 156 CT scans of patients, divided into two databases, followed by feature extraction using image statistics, histograms, and 2D transforms (FFT, DCT, WHT). Optimal feature vectors were formed and organized into Excel-based knowledge bases. Genetically trained classifiers (MLP, GFF-NN, MNN, and SVM) were then optimized by experimenting with different combinations of parameters, activation functions, and data-partitioning percentages. Evaluation metrics included classification accuracy, mean squared error (MSE), area under the receiver operating characteristic (ROC) curve, and computational efficiency. Computer simulations demonstrated that the MNN (Topology II) classifier, specifically when trained with FFT coefficients and a momentum learning rule, consistently achieved 100% average classification accuracy on the cross-validation dataset for both Database I and Database II, outperforming MLP-based classifiers. This genetically optimized and trained MNN (Topology II) classifier is therefore recommended as the optimal solution for lung cancer diagnosis from CT scan images.
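The feature-extraction step described above (image statistics plus 2D transform coefficients) can be sketched as follows. This is an illustrative toy version, not the study's pipeline; the function name and the choice of keeping a 4×4 block of low-frequency FFT magnitudes are assumptions:

```python
import numpy as np

def transform_features(image):
    """Toy feature vector: intensity statistics plus low-frequency 2D FFT magnitudes."""
    img = np.asarray(image, dtype=float)
    stats = [img.mean(), img.std(), img.min(), img.max()]
    mag = np.abs(np.fft.fft2(img))      # magnitude spectrum of the slice
    low = mag[:4, :4].ravel()           # keep a few low-frequency coefficients
    return np.concatenate([stats, low])
```

A DCT or Walsh-Hadamard transform would slot into the same place as `np.fft.fft2`, producing alternative feature vectors for the classifier comparison the abstract describes.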

Multimodal, Multi-Disease Medical Imaging Foundation Model (MerMED-FM)

Yang Zhou, Chrystie Wan Ning Quek, Jun Zhou, Yan Wang, Yang Bai, Yuhe Ke, Jie Yao, Laura Gutierrez, Zhen Ling Teo, Darren Shu Jeng Ting, Brian T. Soetikno, Christopher S. Nielsen, Tobias Elze, Zengxiang Li, Linh Le Dinh, Lionel Tim-Ee Cheng, Tran Nguyen Tuan Anh, Chee Leong Cheng, Tien Yin Wong, Nan Liu, Iain Beehuat Tan, Tony Kiat Hon Lim, Rick Siow Mong Goh, Yong Liu, Daniel Shu Wei Ting

arXiv preprint · Jun 30, 2025
Current artificial intelligence models for medical imaging are predominantly single-modality and single-disease. Attempts to create multimodal and multi-disease models have resulted in inconsistent clinical accuracy. Furthermore, training these models typically requires large, labour-intensive, well-labelled datasets. We developed MerMED-FM, a state-of-the-art multimodal, multi-specialty foundation model trained using self-supervised learning and a memory module. MerMED-FM was trained on 3.3 million medical images from over ten specialties and seven modalities, including computed tomography (CT), chest X-rays (CXR), ultrasound (US), pathology patches, color fundus photography (CFP), optical coherence tomography (OCT), and dermatology images. MerMED-FM was evaluated across multiple diseases and compared against existing foundation models. Strong performance was achieved across all modalities, with AUROCs of 0.988 (OCT), 0.982 (pathology), 0.951 (US), 0.943 (CT), 0.931 (skin), 0.894 (CFP), and 0.858 (CXR). MerMED-FM has the potential to be a highly adaptable, versatile, cross-specialty foundation model that enables robust medical imaging interpretation across diverse medical disciplines.

Development of a deep learning algorithm for detecting significant coronary artery stenosis in whole-heart coronary magnetic resonance angiography.

Takafuji M, Ishida M, Shiomi T, Nakayama R, Fujita M, Yamaguchi S, Washiyama Y, Nagata M, Ichikawa Y, Inoue Katsuhiro RT, Nakamura S, Sakuma H

PubMed · Jun 30, 2025
Whole-heart coronary magnetic resonance angiography (CMRA) enables noninvasive and accurate detection of coronary artery stenosis. Nevertheless, the visual interpretation of CMRA is constrained by the observer's experience, necessitating substantial training. The purposes of this study were to develop a deep learning (DL) algorithm using a deep convolutional neural network to accurately detect significant coronary artery stenosis in CMRA and to investigate the effectiveness of this DL algorithm as a tool for assisting in accurate detection of coronary artery stenosis. Nine hundred and fifty-one coronary segments from 75 patients who underwent both CMRA and invasive coronary angiography (ICA) were studied. Significant stenosis was defined as a reduction in luminal diameter of >50% on quantitative ICA. A DL algorithm was proposed to classify CMRA segments into those with and without significant stenosis. A 4-fold cross-validation method was used to train and test the DL algorithm. An observer study was then conducted using 40 segments with stenosis and 40 segments without stenosis. Three radiology experts and 3 radiology trainees independently rated the likelihood of the presence of stenosis in each coronary segment on a continuous scale from 0 to 1, first without the support of the DL algorithm, then using the DL algorithm. Significant stenosis was observed in 84 (8.8%) of the 951 coronary segments. Using the DL algorithm trained by the 4-fold cross-validation method, the area under the receiver operating characteristic curve (AUC) for the detection of segments with significant coronary artery stenosis was 0.890, with 83.3% sensitivity, 83.6% specificity, and 83.6% accuracy. In the observer study, the average AUC of trainees was significantly improved using the DL algorithm (0.898) compared to that without the algorithm (0.821, p<0.001). The average AUC of experts tended to be higher with the DL algorithm (0.897), but not significantly different from that without the algorithm (0.879, p=0.082). We developed a DL algorithm offering high diagnostic accuracy for detecting significant coronary artery stenosis on CMRA. Our proposed DL algorithm appears to be an effective tool for assisting inexperienced observers to accurately detect coronary artery stenosis in whole-heart CMRA.
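The AUC values compared above have a direct probabilistic reading: the chance that a randomly chosen stenotic segment receives a higher observer rating than a randomly chosen non-stenotic one. A minimal sketch of that rank-based (Mann-Whitney) computation, with a hypothetical function name and toy ratings:

```python
def auc(scores, labels):
    """AUC as the probability a positive case outranks a negative case (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Observer ratings on a continuous 0-1 scale, as used in the study's reading sessions, plug directly into `scores`.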

Automated Finite Element Modeling of the Lumbar Spine: A Biomechanical and Clinical Approach to Spinal Load Distribution and Stress Analysis.

Ahmadi M, Zhang X, Lin M, Tang Y, Engeberg ED, Hashemi J, Vrionis FD

PubMed · Jun 30, 2025
Biomechanical analysis of the lumbar spine is vital for understanding load distribution and stress patterns under physiological conditions. Traditional finite element analysis (FEA) relies on time-consuming manual segmentation and meshing, leading to long runtimes and inconsistent accuracy. Automating this process improves efficiency and reproducibility. This study introduces an automated FEA methodology for lumbar spine biomechanics, integrating deep learning-based segmentation with computational modeling to streamline workflows from imaging to simulation. Medical imaging data were segmented using deep learning frameworks for vertebrae and intervertebral discs. Segmented structures were transformed into optimized surface meshes via Laplacian smoothing and decimation. Using the Gibbon library and FEBio, FEA models incorporated cortical and cancellous bone, nucleus, annulus, cartilage, and ligaments. Ligament attachments used spherical coordinate-based segmentation; vertebral endplates were extracted via principal component analysis (PCA) for cartilage modeling. Simulations assessed stress, strain, and displacement under axial rotation, extension, flexion, and lateral bending. The automated pipeline cut model preparation time by 97.9%, from over 24 hours to 30 minutes and 49.48 seconds. Biomechanical responses aligned with experimental and traditional FEA data, showing high posterior element loads in extension and flexion, consistent ligament forces, and disc deformations. The approach enhanced reproducibility with minimal manual input. This automated methodology provides an efficient, accurate framework for lumbar spine biomechanics, eliminating manual segmentation challenges. It supports clinical diagnostics, implant design, and rehabilitation, advancing computational and patient-specific spinal studies. Rapid simulations enhance implant optimization and early detection of degenerative spinal issues, improving personalized treatment and research.
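The Laplacian smoothing step used to turn segmented structures into FEA-ready surface meshes can be sketched with the classic umbrella operator: each vertex moves a fraction of the way toward the centroid of its neighbors. The function name, damping factor, and adjacency format below are illustrative assumptions, not the study's Gibbon/FEBio implementation:

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, lam=0.5, iterations=10):
    """Umbrella-operator Laplacian smoothing.

    vertices  : (n, d) coordinates
    neighbors : per-vertex lists of adjacent vertex indices
    lam       : step toward the neighbor centroid per iteration (0..1)
    """
    v = np.asarray(vertices, dtype=float).copy()
    for _ in range(iterations):
        centroids = np.array([v[list(nbrs)].mean(axis=0) for nbrs in neighbors])
        v += lam * (centroids - v)      # all vertices updated simultaneously
    return v
```

In a real pipeline this is followed by decimation, and the shrinkage inherent to plain Laplacian smoothing is often countered with a Taubin-style back-step.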

Enhancing weakly supervised data augmentation networks for thyroid nodule assessment using traditional and Doppler ultrasound images.

Keatmanee C, Songsaeng D, Klabwong S, Nakaguro Y, Kunapinun A, Ekpanyapong M, Dailey MN

PubMed · Jun 30, 2025
Thyroid ultrasound (US) is an essential tool for detecting and characterizing thyroid nodules. In this study, we propose an innovative approach to enhance thyroid nodule assessment by integrating Doppler US images with grayscale US images through weakly supervised data augmentation networks (WSDAN). Our method reduces background noise by replacing inefficient augmentation strategies, such as random cropping, with an advanced technique guided by bounding boxes derived from Doppler US images. This targeted augmentation significantly improves model performance in both classification and localization of thyroid nodules. The training dataset comprises 1288 paired grayscale and Doppler US images, with an additional 190 pairs used for three-fold cross-validation. To evaluate the model's efficacy, we tested it on a separate set of 190 grayscale US images. Compared to five state-of-the-art models and the original WSDAN, our Enhanced WSDAN model achieved superior performance. For classification, it reached an accuracy of 91%. For localization, it achieved Dice and Jaccard indices of 75% and 87%, respectively, demonstrating its potential as a valuable clinical tool.
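The Dice and Jaccard indices reported for localization measure overlap between a predicted nodule mask and the reference mask. A minimal sketch over pixel-index sets (the function name and toy masks are illustrative):

```python
def dice_jaccard(pred, truth):
    """Overlap scores between two binary masks given as sets of pixel indices."""
    pred, truth = set(pred), set(truth)
    inter = len(pred & truth)
    dice = 2 * inter / (len(pred) + len(truth))   # 2|A∩B| / (|A|+|B|)
    jaccard = inter / len(pred | truth)           # |A∩B| / |A∪B|
    return dice, jaccard
```

Note that for any pair of masks Dice is always greater than or equal to Jaccard, since Dice = 2J/(1+J).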

Efficient Chest X-Ray Feature Extraction and Feature Fusion for Pneumonia Detection Using Lightweight Pretrained Deep Learning Models

Chandola, Y., Uniyal, V., Bachheti, Y.

medRxiv preprint · Jun 30, 2025
Pneumonia is a respiratory condition characterized by inflammation of the alveolar sacs in the lungs, which disrupts normal oxygen exchange. This disease disproportionately impacts vulnerable populations, including young children (under five years of age) and elderly individuals (over 65 years), primarily due to their compromised immune systems. The mortality rate associated with pneumonia remains alarmingly high, particularly in low-resource settings where healthcare access is limited. Although effective prevention strategies exist, pneumonia continues to claim the lives of approximately one million children each year, earning its reputation as a "silent killer." Globally, an estimated 500 million cases are documented annually, underscoring its widespread public health burden. This study explores the design and evaluation of CNN-based computer-aided diagnostic (CAD) systems for efficient and accurate classification of chest radiographs into binary classes (normal, pneumonia). An augmented Kaggle dataset of 18,200 chest radiographs, split between normal and pneumonia cases, was utilized. A series of experiments evaluated lightweight CNN models (ShuffleNet, NASNet-Mobile, and EfficientNet-b0) using transfer learning, achieving accuracies of 90%, 88%, and 89%, respectively; this motivated deep feature extraction from each network and feature fusion, pairing the fused features with SVM and XGBoost classifiers to achieve accuracies of 97% and 98%, respectively. The proposed research emphasizes the crucial role of CAD systems in advancing radiological diagnostics, delivering effective solutions that aid radiologists in distinguishing between diagnoses by applying feature fusion and feature selection along with various machine learning algorithms and deep learning architectures.
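The feature-fusion step described above (deep features from several pretrained networks combined before a classical classifier) is most simply early fusion: normalize each network's feature matrix, then concatenate along the feature axis. The function name and toy shapes below are illustrative assumptions:

```python
import numpy as np

def fuse(features_a, features_b):
    """Early fusion: z-score each network's deep features, then concatenate."""
    def z(x):
        x = np.asarray(x, dtype=float)
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)  # per-feature z-score
    return np.hstack([z(features_a), z(features_b)])
```

The fused matrix would then be handed to an SVM or gradient-boosting classifier; z-scoring first keeps one network's feature scale from dominating the kernel or split decisions.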

Self-Supervised Multiview Xray Matching

Mohamad Dabboussi, Malo Huard, Yann Gousseau, Pietro Gori

arXiv preprint · Jun 30, 2025
Accurate interpretation of multi-view radiographs is crucial for diagnosing fractures, muscular injuries, and other anomalies. While significant advances have been made in AI-based analysis of single images, current methods often struggle to establish robust correspondences between different X-ray views, an essential capability for precise clinical evaluations. In this work, we present a novel self-supervised pipeline that eliminates the need for manual annotation by automatically generating a many-to-many correspondence matrix between synthetic X-ray views. This is achieved using digitally reconstructed radiographs (DRR), which are automatically derived from unannotated CT volumes. Our approach incorporates a transformer-based training phase to accurately predict correspondences across two or more X-ray views. Furthermore, we demonstrate that learning correspondences among synthetic X-ray views can be leveraged as a pretraining strategy to enhance automatic multi-view fracture detection on real data. Extensive evaluations on both synthetic and real X-ray datasets show that incorporating correspondences improves performance in multi-view fracture classification.
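The many-to-many correspondence matrix described above can be built automatically because each DRR pixel traces back to known 3D anatomy: two pixels in different views correspond when a common 3D point projects onto both. A toy sketch with hypothetical projection callables (point-wise projections standing in for full DRR ray-casting):

```python
import numpy as np

def correspondence_matrix(points, project_a, project_b):
    """Rows index pixels in view A, columns pixels in view B; a cell is 1 when
    some 3D point lands on both pixels, so one pixel may match several (many-to-many)."""
    pix_a = [project_a(p) for p in points]
    pix_b = [project_b(p) for p in points]
    rows = sorted(set(pix_a))
    cols = sorted(set(pix_b))
    M = np.zeros((len(rows), len(cols)), dtype=int)
    for a, b in zip(pix_a, pix_b):
        M[rows.index(a), cols.index(b)] = 1
    return M, rows, cols
```

Two points sharing a pixel in one view but not the other produce a row (or column) with multiple ones, which is exactly the many-to-many supervision signal a correspondence transformer can be trained against.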

$\mu^2$Tokenizer: Differentiable Multi-Scale Multi-Modal Tokenizer for Radiology Report Generation

Siyou Li, Pengyao Qin, Huanan Wu, Dong Nie, Arun J. Thirunavukarasu, Juntao Yu, Le Zhang

arXiv preprint · Jun 30, 2025
Automated radiology report generation (RRG) aims to produce detailed textual reports from clinical imaging, such as computed tomography (CT) scans, to improve the accuracy and efficiency of diagnosis and provision of management advice. RRG is complicated by two key challenges: (1) inherent complexity in extracting relevant information from imaging data under resource constraints, and (2) difficulty in objectively evaluating discrepancies between model-generated and expert-written reports. To address these challenges, we propose $\mu^2$LLM, a $\underline{\textbf{mu}}$ltiscale $\underline{\textbf{mu}}$ltimodal large language model for RRG tasks. The novel $\mu^2$Tokenizer, as an intermediate layer, integrates multi-modal features from the multiscale visual tokenizer and the text tokenizer, then enhances report generation quality through direct preference optimization (DPO), guided by GREEN-RedLlama. Experimental results on four large CT image-report medical datasets demonstrate that our method outperforms existing approaches, highlighting the potential of our fine-tuned $\mu^2$LLMs on limited data for RRG tasks. At the same time, for prompt engineering, we introduce a five-stage, LLM-driven pipeline that converts routine CT reports into paired visual-question-answer triples and citation-linked reasoning narratives, creating a scalable, high-quality supervisory corpus for explainable multimodal radiology LLMs. All code, datasets, and models will be publicly available in our official repository. https://github.com/Siyou-Li/u2Tokenizer
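At its simplest, integrating multiscale visual tokens with text tokens, as the tokenizer layer above does before the language model, amounts to flattening the visual scales into one sequence and tagging each token with its source. This toy sketch is an assumption about the general pattern, not the paper's differentiable tokenizer:

```python
def merge_tokens(visual_scales, text_tokens):
    """Flatten visual tokens coarse-to-fine, then append text tokens,
    tagging each token with its source so modalities stay distinguishable."""
    merged = []
    for level, tokens in enumerate(visual_scales):
        merged.extend((f"vis{level}", t) for t in tokens)
    merged.extend(("txt", t) for t in text_tokens)
    return merged
```

A learned tokenizer replaces the plain flatten with attention-based selection and compression, but the resulting single mixed sequence is what the downstream LLM consumes.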

CMT-FFNet: A CMT-based feature-fusion network for predicting TACE treatment response in hepatocellular carcinoma.

Wang S, Zhao Y, Cai X, Wang N, Zhang Q, Qi S, Yu Z, Liu A, Yao Y

PubMed · Jun 30, 2025
Accurately and preoperatively predicting tumor response to transarterial chemoembolization (TACE) treatment is crucial for individualized treatment decision-making in hepatocellular carcinoma (HCC). In this study, we propose a novel feature fusion network based on the Convolutional Neural Networks Meet Vision Transformers (CMT) architecture, termed CMT-FFNet, to predict TACE efficacy using preoperative multiphase magnetic resonance imaging (MRI) scans. The CMT-FFNet combines local feature extraction with global dependency modeling through attention mechanisms, enabling the extraction of complementary information from multiphase MRI data. Additionally, we introduce an orthogonality loss to optimize the fusion of imaging and clinical features, further enhancing the complementarity of cross-modal features. Moreover, visualization techniques were employed to highlight key regions contributing to model decisions. Extensive experiments were conducted to evaluate the effectiveness of the proposed modules and network architecture. Experimental results demonstrate that our model effectively captures latent correlations among features extracted from multiphase MRI data and multimodal inputs, significantly improving the prediction performance of TACE treatment response in HCC patients.
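An orthogonality loss of the kind mentioned above pushes the imaging and clinical embeddings to encode complementary rather than redundant information. One common form, sketched here as a hypothetical numpy stand-in (the paper's exact formulation is not specified in this abstract), penalizes the squared cosine similarity between paired feature vectors:

```python
import numpy as np

def orthogonality_loss(img_feats, clin_feats):
    """Mean squared cosine similarity between paired imaging and clinical
    feature vectors; zero when the two embeddings are mutually orthogonal."""
    A = np.asarray(img_feats, dtype=float)
    B = np.asarray(clin_feats, dtype=float)
    A = A / (np.linalg.norm(A, axis=1, keepdims=True) + 1e-8)   # unit rows
    B = B / (np.linalg.norm(B, axis=1, keepdims=True) + 1e-8)
    return float(np.mean(np.sum(A * B, axis=1) ** 2))
```

Added to the task loss with a small weight, this term discourages the fusion branch from letting one modality merely re-express the other.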