
Artificial Intelligence-assisted Pixel-level Lung (APL) Scoring for Fast and Accurate Quantification in Ultra-short Echo-time MRI

Bowen Xin, Rohan Hickey, Tamara Blake, Jin Jin, Claire E Wainwright, Thomas Benkert, Alto Stemmer, Peter Sly, David Coman, Jason Dowling

arXiv preprint · Jun 30 2025
Lung magnetic resonance imaging (MRI) with ultrashort echo-time (UTE) represents a recent breakthrough in lung structure imaging, providing image resolution and quality comparable to computed tomography (CT). Due to the absence of ionising radiation, MRI is often preferred over CT in paediatric diseases such as cystic fibrosis (CF), one of the most common genetic disorders in Caucasians. To assess structural lung damage in CF imaging, CT scoring systems provide valuable quantitative insights for disease diagnosis and progression. However, few quantitative scoring systems are available for structural lung MRI (e.g., UTE-MRI). To provide fast and accurate quantification in lung MRI, we investigated the feasibility of a novel Artificial Intelligence-assisted Pixel-level Lung (APL) scoring for CF. APL scoring consists of five stages: 1) image loading, 2) AI lung segmentation, 3) lung-bounded slice sampling, 4) pixel-level annotation, and 5) quantification and reporting. The results show that our APL scoring took 8.2 minutes per subject, more than twice as fast as the previous grid-level scoring. Additionally, our pixel-level scoring was statistically more accurate (p=0.021), while strongly correlating with grid-level scoring (R=0.973, p=5.85e-9). This tool has great potential to streamline the workflow of UTE lung MRI in clinical settings, and to be extended to other structural lung MRI sequences (e.g., BLADE MRI) and other lung diseases (e.g., bronchopulmonary dysplasia).
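The abstract does not include an implementation, but stages 3-5 of APL scoring can be illustrated with a minimal sketch, assuming binary lung and abnormality masks produced by the AI segmentation stage; all function and argument names below are hypothetical.

```python
import numpy as np

def apl_score(lung_mask: np.ndarray, abnormality_mask: np.ndarray, n_slices: int = 10) -> dict:
    """Pixel-level score: fraction of lung pixels marked abnormal, averaged
    over slices sampled within the lung-bounded range (stages 3-5)."""
    # Stage 3: restrict sampling to slices that actually contain lung tissue.
    lung_slices = np.where(lung_mask.any(axis=(1, 2)))[0]
    sampled = np.linspace(lung_slices[0], lung_slices[-1], n_slices).astype(int)

    # Stages 4-5: per-slice abnormal-to-lung pixel ratio, then aggregate.
    per_slice = []
    for z in sampled:
        lung_px = lung_mask[z].sum()
        if lung_px == 0:
            continue
        per_slice.append(float(abnormality_mask[z][lung_mask[z] > 0].sum()) / lung_px)
    return {"per_slice": per_slice, "mean_score": float(np.mean(per_slice))}
```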

MDPG: Multi-domain Diffusion Prior Guidance for MRI Reconstruction

Lingtong Zhang, Mengdie Song, Xiaohan Hao, Huayu Mai, Bensheng Qiu

arXiv preprint · Jun 30 2025
Magnetic Resonance Imaging (MRI) reconstruction is essential in medical diagnostics. Although diffusion models (DMs) are the latest generative models, they struggle to produce high-fidelity images in the image domain due to their stochastic nature. Latent diffusion models (LDMs) yield both compact and detailed prior knowledge in the latent domain, which can effectively guide the model towards learning the original data distribution. Inspired by this, we propose Multi-domain Diffusion Prior Guidance (MDPG), provided by pre-trained LDMs, to enhance data consistency in MRI reconstruction tasks. Specifically, we first construct a Visual-Mamba-based backbone, which enables efficient encoding and reconstruction of under-sampled images. Pre-trained LDMs are then integrated to provide conditional priors in both the latent and image domains. A novel Latent Guided Attention (LGA) module is proposed for efficient fusion in multi-level latent domains. Simultaneously, to effectively utilize priors in both the k-space and image domains, under-sampled images are fused with generated fully-sampled images by the Dual-domain Fusion Branch (DFB) for self-adaptive guidance. Lastly, to further enhance data consistency, we propose a k-space regularization strategy based on the non-auto-calibration signal (NACS) set. Extensive experiments on two public MRI datasets demonstrate the effectiveness of the proposed methodology. The code is available at https://github.com/Zolento/MDPG.
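The NACS-based regularization itself is not detailed in the abstract; the sketch below only illustrates the standard hard data-consistency step that such k-space strategies build on, assuming a single-coil Cartesian acquisition (names are hypothetical; see the linked repository for the actual implementation).

```python
import numpy as np

def kspace_data_consistency(recon_img: np.ndarray, measured_kspace: np.ndarray,
                            sampling_mask: np.ndarray) -> np.ndarray:
    """Keep acquired k-space samples where the mask is 1 and the network's
    prediction elsewhere, then return to the image domain."""
    pred_kspace = np.fft.fft2(recon_img)
    merged = np.where(sampling_mask > 0, measured_kspace, pred_kspace)
    return np.fft.ifft2(merged)
```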

Multimodal, Multi-Disease Medical Imaging Foundation Model (MerMED-FM)

Yang Zhou, Chrystie Wan Ning Quek, Jun Zhou, Yan Wang, Yang Bai, Yuhe Ke, Jie Yao, Laura Gutierrez, Zhen Ling Teo, Darren Shu Jeng Ting, Brian T. Soetikno, Christopher S. Nielsen, Tobias Elze, Zengxiang Li, Linh Le Dinh, Lionel Tim-Ee Cheng, Tran Nguyen Tuan Anh, Chee Leong Cheng, Tien Yin Wong, Nan Liu, Iain Beehuat Tan, Tony Kiat Hon Lim, Rick Siow Mong Goh, Yong Liu, Daniel Shu Wei Ting

arXiv preprint · Jun 30 2025
Current artificial intelligence models for medical imaging are predominantly single modality and single disease. Attempts to create multimodal and multi-disease models have resulted in inconsistent clinical accuracy. Furthermore, training these models typically requires large, labour-intensive, well-labelled datasets. We developed MerMED-FM, a state-of-the-art multimodal, multi-specialty foundation model trained using self-supervised learning and a memory module. MerMED-FM was trained on 3.3 million medical images from over ten specialties and seven modalities, including computed tomography (CT), chest X-rays (CXR), ultrasound (US), pathology patches, color fundus photography (CFP), optical coherence tomography (OCT) and dermatology images. MerMED-FM was evaluated across multiple diseases and compared against existing foundational models. Strong performance was achieved across all modalities, with AUROCs of 0.988 (OCT); 0.982 (pathology); 0.951 (US); 0.943 (CT); 0.931 (skin); 0.894 (CFP); 0.858 (CXR). MerMED-FM has the potential to be a highly adaptable, versatile, cross-specialty foundation model that enables robust medical imaging interpretation across diverse medical disciplines.
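As an illustration of the per-modality evaluation behind the reported AUROCs, a minimal sketch is shown below, assuming a flat list of (modality, label, predicted probability) records; the data layout is hypothetical.

```python
from sklearn.metrics import roc_auc_score

def per_modality_auroc(records):
    """records: iterable of (modality, true_label, predicted_probability) tuples."""
    grouped = {}
    for modality, y_true, y_prob in records:
        labels, probs = grouped.setdefault(modality, ([], []))
        labels.append(y_true)
        probs.append(y_prob)
    # One AUROC per modality (OCT, pathology, US, CT, skin, CFP, CXR, ...).
    return {m: roc_auc_score(y, p) for m, (y, p) in grouped.items()}
```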

Three-dimensional end-to-end deep learning for brain MRI analysis

Radhika Juglan, Marta Ligero, Zunamys I. Carrero, Asier Rabasco, Tim Lenz, Leo Misera, Gregory Patrick Veldhuizen, Paul Kuntke, Hagen H. Kitzler, Sven Nebelung, Daniel Truhn, Jakob Nikolas Kather

arXiv preprint · Jun 30 2025
Deep learning (DL) methods are increasingly outperforming classical approaches in brain imaging, yet their generalizability across diverse imaging cohorts remains inadequately assessed. As age and sex are key neurobiological markers in clinical neuroscience, influencing brain structure and disease risk, this study evaluates three existing three-dimensional architectures, namely the Simple Fully Connected Network (SFCN), DenseNet, and Shifted Window (Swin) Transformers, for age and sex prediction using T1-weighted MRI from four independent cohorts: UK Biobank (UKB, n=47,390), Dallas Lifespan Brain Study (DLBS, n=132), Parkinson's Progression Markers Initiative (PPMI, n=108 healthy controls), and Information eXtraction from Images (IXI, n=319). We found that SFCN consistently outperformed more complex architectures, with an AUC of 1.00 [1.00-1.00] in UKB (internal test set) and 0.85-0.91 in external test sets for sex classification. For the age prediction task, SFCN demonstrated a mean absolute error (MAE) of 2.66 (r=0.89) in UKB and 4.98-5.81 (r=0.55-0.70) across external datasets. Pairwise DeLong and Wilcoxon signed-rank tests with Bonferroni corrections confirmed SFCN's superiority over the Swin Transformer across most cohorts (p<0.017 for three comparisons). Explainability analysis further demonstrates that model attention is regionally consistent across cohorts and specific to each task. Our findings reveal that simpler convolutional networks outperform denser, more complex attention-based DL architectures in brain image analysis, demonstrating better generalizability across different datasets.
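The Bonferroni-corrected threshold of p<0.017 follows from dividing alpha=0.05 by the three pairwise comparisons; a minimal sketch of one such paired Wilcoxon test on per-subject absolute errors is shown below (variable names are hypothetical).

```python
from scipy.stats import wilcoxon

def paired_model_comparison(abs_errors_a, abs_errors_b, n_comparisons=3, alpha=0.05):
    """Paired Wilcoxon signed-rank test between two models' per-subject absolute
    errors, judged against a Bonferroni-adjusted threshold (0.05 / 3 ~ 0.017)."""
    stat, p_value = wilcoxon(abs_errors_a, abs_errors_b)
    return {"statistic": stat, "p_value": p_value,
            "significant": p_value < alpha / n_comparisons}
```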

Deep Learning-Based Semantic Segmentation for Real-Time Kidney Imaging and Measurements with Augmented Reality-Assisted Ultrasound

Gijs Luijten, Roberto Maria Scardigno, Lisle Faray de Paiva, Peter Hoyer, Jens Kleesiek, Domenico Buongiorno, Vitoantonio Bevilacqua, Jan Egger

arXiv preprint · Jun 30 2025
Ultrasound (US) is widely accessible and radiation-free but has a steep learning curve due to its dynamic nature and non-standard imaging planes. Additionally, the constant need to shift focus between the US screen and the patient poses a challenge. To address these issues, we integrate deep learning (DL)-based semantic segmentation for real-time (RT) automated kidney volumetric measurements, which are essential for clinical assessment but are traditionally time-consuming and prone to fatigue. This automation allows clinicians to concentrate on image interpretation rather than manual measurements. Complementing DL, augmented reality (AR) enhances the usability of US by projecting the display directly into the clinician's field of view, improving ergonomics and reducing the cognitive load associated with screen-to-patient transitions. Two AR-DL-assisted US pipelines on HoloLens-2 are proposed: one streams directly via the application programming interface for a wireless setup, while the other supports any US device with video output for broader accessibility. We evaluate RT feasibility and accuracy using the Open Kidney Dataset and open-source segmentation models (nnU-Net, Segmenter, YOLO with MedSAM and LiteMedSAM). Our open-source GitHub pipeline includes model implementations, measurement algorithms, and a Wi-Fi-based streaming solution, enhancing US training and diagnostics, especially in point-of-care settings.
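The measurement algorithms themselves live in the linked repository; as a rough illustration of kidney volumetry from ultrasound measurements, the sketch below uses the common prolate-ellipsoid approximation (the paper's exact method may differ).

```python
import math

def ellipsoid_kidney_volume_ml(length_cm: float, width_cm: float, depth_cm: float) -> float:
    """Prolate-ellipsoid estimate widely used in renal ultrasound:
    V = pi/6 * length * width * depth (cm^3, i.e. mL)."""
    return math.pi / 6.0 * length_cm * width_cm * depth_cm

# Example: a 10.5 x 4.8 x 5.0 cm kidney -> roughly 132 mL
print(round(ellipsoid_kidney_volume_ml(10.5, 4.8, 5.0), 1))
```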

Efficient Chest X-Ray Feature Extraction and Feature Fusion for Pneumonia Detection Using Lightweight Pretrained Deep Learning Models

Chandola, Y., Uniyal, V., Bachheti, Y.

medRxiv preprint · Jun 30 2025
Pneumonia is a respiratory condition characterized by inflammation of the alveolar sacs in the lungs, which disrupts normal oxygen exchange. The disease disproportionately impacts vulnerable populations, including young children (under five years of age) and elderly individuals (over 65 years), primarily due to their compromised immune systems. The mortality rate associated with pneumonia remains alarmingly high, particularly in low-resource settings where healthcare access is limited. Although effective prevention strategies exist, pneumonia continues to claim the lives of approximately one million children each year, earning its reputation as a "silent killer." Globally, an estimated 500 million cases are documented annually, underscoring its widespread public health burden. This study explores the design and evaluation of CNN-based Computer-Aided Diagnostic (CAD) systems aimed at efficient and accurate classification of chest radiographs into binary classes (Normal, Pneumonia). An augmented Kaggle dataset of 18,200 chest radiographs, split between normal and pneumonia cases, was utilized. A series of experiments evaluated lightweight CNN models (ShuffleNet, NASNet-Mobile, and EfficientNet-b0) using transfer learning, achieving accuracies of 90%, 88%, and 89%, respectively. Deep features were then extracted from each network and fused, and the fused representations were paired with SVM and XGBoost classifiers, achieving accuracies of 97% and 98%, respectively. The proposed research emphasizes the crucial role of CAD systems in advancing radiological diagnostics, delivering effective solutions that aid radiologists in distinguishing between diagnoses through feature fusion and feature selection combined with various machine learning algorithms and deep learning architectures.
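The exact fusion and selection pipeline is not spelled out in the abstract; the sketch below shows the simplest variant, assuming pre-extracted deep feature matrices from the three backbones, concatenated and fed to an SVM (names are hypothetical).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fuse_and_train_svm(feat_shufflenet, feat_nasnet, feat_efficientnet, labels):
    """Concatenate per-image deep features from the three backbones (feature
    fusion) and fit an RBF-kernel SVM for Normal vs Pneumonia classification."""
    fused = np.concatenate([feat_shufflenet, feat_nasnet, feat_efficientnet], axis=1)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(fused, labels)
    return clf
```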

ToolCAP: Novel Tools to improve management of paediatric Community-Acquired Pneumonia - a randomized controlled trial - Statistical Analysis Plan

Cicconi, S., Glass, T., Du Toit, J., Bresser, M., Dhalla, F., Faye, P. M., Lal, L., Langet, H., Manji, K., Moser, A., Ndao, M. A., Palmer, M., Tine, J. A. D., Van Hoving, N., Keitel, K.

medRxiv preprint · Jun 30 2025
The ToolCAP cohort study is a prospective, observational, multi-site platform study designed to collect harmonized, high-quality clinical, imaging, and biological data on children with IMCI-defined pneumonia in low- and middle-income countries (LMICs). The primary objective is to inform the development and validation of diagnostic and prognostic tools, including lung ultrasound (LUS), point-of-care biomarkers, and AI-based models, to improve pneumonia diagnosis, management, and antimicrobial stewardship. This statistical analysis plan (SAP) outlines the analytic strategy for describing the study population, assessing the performance of candidate diagnostic tools, and enabling data sharing in support of secondary research questions and AI model development. Children under 12 years presenting with suspected pneumonia are enrolled within 24 hours of presentation and undergo clinical assessment, digital auscultation, LUS, and optional biological sampling. Follow-up occurs on Day 8 and Day 29 to assess outcomes including recovery, treatment response, and complications. The SAP details variable definitions, data management strategies, and pre-specified analyses, including descriptive summaries, sensitivity and specificity of diagnostic tools against clinical reference standards, and exploratory subgroup analyses.
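For the pre-specified diagnostic-accuracy analyses, sensitivity and specificity against the clinical reference standard reduce to simple confusion-matrix ratios; a minimal sketch is shown below (binary 0/1 encoding assumed).

```python
def sensitivity_specificity(y_reference, y_tool):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) of an index test
    against a binary clinical reference standard (1 = pneumonia, 0 = no pneumonia)."""
    tp = sum(1 for r, t in zip(y_reference, y_tool) if r == 1 and t == 1)
    fn = sum(1 for r, t in zip(y_reference, y_tool) if r == 1 and t == 0)
    tn = sum(1 for r, t in zip(y_reference, y_tool) if r == 0 and t == 0)
    fp = sum(1 for r, t in zip(y_reference, y_tool) if r == 0 and t == 1)
    return tp / (tp + fn), tn / (tn + fp)
```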

Brain Tumor Detection through Thermal Imaging and MobileNET

Roham Maiti, Debasmita Bhoumik

arXiv preprint · Jun 30 2025
The brain plays a crucial role in regulating body functions and cognitive processes, with brain tumors posing significant risks to human health. Precise and prompt detection is a key factor in proper treatment and better patient outcomes. Traditional methods for detecting brain tumors, including biopsies, MRI, and CT scans, often face challenges due to their high costs and the need for specialized medical expertise. Recent developments in machine learning (ML) and deep learning (DL) have exhibited strong capabilities in automating the identification and categorization of brain tumors from medical images, especially MRI scans. However, these classical ML models have limitations, such as high computational demands, the need for large datasets, and long training times, which hinder their accessibility and efficiency. Our research uses the MobileNet model for efficient detection of these tumors. The novelty of this project lies in building an accurate tumor detection model that uses fewer computing resources and runs in less time, followed by efficient decision making through the use of image processing techniques for accurate results. The suggested method attained an average accuracy of 98.5%.
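The abstract does not specify the MobileNet variant or classification head; a minimal transfer-learning sketch assuming MobileNetV2 from torchvision with a replaced two-class head is shown below.

```python
import torch.nn as nn
from torchvision import models

def build_mobilenet_classifier(num_classes: int = 2) -> nn.Module:
    """MobileNetV2 backbone pretrained on ImageNet with its final linear layer
    swapped for a tumor / no-tumor classification head."""
    model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
    in_features = model.classifier[1].in_features
    model.classifier[1] = nn.Linear(in_features, num_classes)
    return model
```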

Genetically Optimized Modular Neural Networks for Precision Lung Cancer Diagnosis

Agrawal, V. L., Agrawal, T.

medRxiv preprint · Jun 30 2025
Lung cancer remains one of the leading causes of cancer mortality, and while low-dose CT screening reduces mortality, radiological detection is challenging due to the increasing shortage of radiologists. Artificial intelligence can significantly improve the procedure and also decrease the overall workload of the entire healthcare department. Building upon existing work on the application of genetic algorithms, this study aims to create a novel algorithm for lung cancer diagnosis with utmost precision. We included a total of 156 CT scans of patients divided into two databases, followed by feature extraction using image statistics, histograms, and 2D transforms (FFT, DCT, WHT). Optimal feature vectors were formed and organized into Excel-based knowledge bases. Genetically trained classifiers (MLP, GFF-NN, MNN, and SVM) were then optimized, experimenting with different combinations of parameters, activation functions, and data-partitioning percentages. Evaluation metrics included classification accuracy, Mean Squared Error (MSE), area under the Receiver Operating Characteristic (ROC) curve, and computational efficiency. Computer simulations demonstrated that the MNN (Topology II) classifier, specifically when trained with FFT coefficients and a momentum learning rule, consistently achieved 100% average classification accuracy on the cross-validation dataset for both Database I and Database II, outperforming MLP-based classifiers. This genetically optimized and trained MNN (Topology II) classifier is therefore recommended as the optimal solution for lung cancer diagnosis from CT scan images.
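The precise feature-vector construction is not given in the abstract; the sketch below illustrates one plausible FFT-based variant, combining low-frequency magnitude coefficients with basic intensity statistics (parameter choices are assumptions).

```python
import numpy as np

def fft_feature_vector(ct_slice: np.ndarray, n_coeffs: int = 64) -> np.ndarray:
    """Compact feature vector for one CT slice: a low-frequency block of the
    centred FFT magnitude spectrum plus simple intensity statistics."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(ct_slice)))
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    k = int(np.sqrt(n_coeffs)) // 2
    low_freq = spectrum[cy - k:cy + k, cx - k:cx + k].ravel()
    stats = np.array([ct_slice.mean(), ct_slice.std(), ct_slice.min(), ct_slice.max()])
    return np.concatenate([low_freq, stats])
```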

CMT-FFNet: A CMT-based feature-fusion network for predicting TACE treatment response in hepatocellular carcinoma.

Wang S, Zhao Y, Cai X, Wang N, Zhang Q, Qi S, Yu Z, Liu A, Yao Y

PubMed paper · Jun 30 2025
Accurately and preoperatively predicting tumor response to transarterial chemoembolization (TACE) treatment is crucial for individualized treatment decision-making in hepatocellular carcinoma (HCC). In this study, we propose a novel feature fusion network based on the Convolutional Neural Networks Meet Vision Transformers (CMT) architecture, termed CMT-FFNet, to predict TACE efficacy using preoperative multiphase Magnetic Resonance Imaging (MRI) scans. CMT-FFNet combines local feature extraction with global dependency modeling through attention mechanisms, enabling the extraction of complementary information from multiphase MRI data. Additionally, we introduce an orthogonality loss to optimize the fusion of imaging and clinical features, further enhancing the complementarity of cross-modal features. Moreover, visualization techniques were employed to highlight key regions contributing to model decisions. Extensive experiments were conducted to evaluate the effectiveness of the proposed modules and network architecture. Experimental results demonstrate that our model effectively captures latent correlations among features extracted from multiphase MRI data and multimodal inputs, significantly improving the prediction performance of TACE treatment response in HCC patients.
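The paper's exact orthogonality loss is not reproduced in the abstract; a common formulation that penalizes the cross-correlation between the imaging and clinical embeddings is sketched below as an assumption.

```python
import torch
import torch.nn.functional as F

def orthogonality_loss(imaging_feat: torch.Tensor, clinical_feat: torch.Tensor) -> torch.Tensor:
    """Mean squared entry of the cross-correlation matrix between L2-normalized
    imaging (B x D_img) and clinical (B x D_cli) embeddings; small values mean
    the two feature spaces carry largely non-overlapping (orthogonal) information."""
    img = F.normalize(imaging_feat, dim=1)
    cli = F.normalize(clinical_feat, dim=1)
    cross = img.transpose(0, 1) @ cli  # (D_img, D_cli)
    return (cross ** 2).mean()
```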