
Zhijun Zeng, Youjia Zheng, Chang Su, Qianhang Wu, Hao Hu, Zeyuan Dong, Shan Gao, Yang Lv, Rui Tang, Ligang Cui, Zhiyong Hou, Weijun Lin, Zuoqiang Shi, Yubing Li, He Sun

arXiv preprint · Aug 17 2025
Ultrasound computed tomography (USCT) is a radiation-free, high-resolution modality but remains limited for musculoskeletal imaging due to conventional ray-based reconstructions that neglect strong scattering. We propose a generative neural physics framework that couples generative networks with physics-informed neural simulation for fast, high-fidelity 3D USCT. By learning a compact surrogate of ultrasonic wave propagation from only dozens of cross-modality images, our method merges the accuracy of wave modeling with the efficiency and stability of deep learning. This enables accurate quantitative imaging of in vivo musculoskeletal tissues, producing spatial maps of acoustic properties beyond reflection-mode images. On synthetic and in vivo data (breast, arm, leg), we reconstruct 3D maps of tissue parameters in under ten minutes, with sensitivity to biomechanical properties in muscle and bone and resolution comparable to MRI. By overcoming computational bottlenecks in strongly scattering regimes, this approach advances USCT toward routine clinical assessment of musculoskeletal disease.
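
To make the surrogate idea concrete, here is a deliberately toy sketch: a small network fitted to the outputs of a stand-in "solver" (a fixed blur), not the authors' full-wave simulator; every name and dimension below is our own illustrative assumption.

```python
# Toy sketch of learning a surrogate for an expensive wave solver.
# The "solver" is a placeholder blur, NOT the paper's physics simulator.
import torch
import torch.nn as nn
import torch.nn.functional as F

def toy_wave_solver(c):
    # Stand-in for a full-wave simulation: a fixed smoothing operator.
    kernel = torch.ones(1, 1, 5, 5) / 25.0
    return F.conv2d(c, kernel, padding=2)

class Surrogate(nn.Module):
    """Maps a 2D acoustic-property map to a predicted field."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1), nn.GELU(),
            nn.Conv2d(width, 1, 3, padding=1))
    def forward(self, c):
        return self.net(c)

model = Surrogate()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):                 # fit the surrogate to solver outputs
    c = torch.rand(8, 1, 64, 64) + 1.0  # synthetic sound-speed maps
    loss = F.mse_loss(model(c), toy_wave_solver(c))
    opt.zero_grad(); loss.backward(); opt.step()
```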

Nugraha DJ, Yudistira N, Widodo AW

PubMed paper · Aug 17 2025
Glaucoma is a leading cause of irreversible vision loss in ophthalmology, primarily resulting from damage to the optic nerve. Early detection is crucial but remains challenging due to the inherent class imbalance in glaucoma fundus image datasets. This study addresses this limitation by applying a weighted loss function to Convolutional Neural Networks (CNNs), evaluated on the standardized SMDG-19 dataset, which integrates data from 19 publicly available sources. Key performance metrics, including recall, F1-score, precision, accuracy, and AUC, were analyzed, and interpretability was assessed using Grad-CAM. The results demonstrate that recall increased from 60.3% to 87.3%, a relative improvement of 44.75%, while F1-score improved from 66.5% to 71.4% (+7.25%). Minor trade-offs were observed in precision, which declined from 74.5% to 69.6% (-6.53%), and in accuracy, which dropped from 84.2% to 80.7% (-4.10%). In contrast, AUC rose from 84.2% to 87.4%, a relative gain of 3.21%. Grad-CAM visualizations showed consistent focus on clinically relevant regions of the optic nerve head, underscoring the effectiveness of the weighted loss strategy in improving both the performance and interpretability of CNN-based glaucoma detection systems.
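
One common way to realise such a weighted loss is inverse-frequency class weighting; a minimal PyTorch sketch follows, with illustrative class counts rather than the SMDG-19 statistics.

```python
# Minimal sketch of class-weighted cross-entropy for an imbalanced
# two-class problem (counts below are illustrative, not SMDG-19).
import torch
import torch.nn as nn

counts = torch.tensor([14000.0, 4000.0])         # e.g. normal vs. glaucoma
weights = counts.sum() / (len(counts) * counts)  # inverse-frequency weights
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(16, 2)                      # CNN outputs for a batch
labels = torch.randint(0, 2, (16,))
loss = criterion(logits, labels)                 # minority errors cost more
```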

Gialias P, Wiberg MK, Brehl AK, Bjerner T, Gustafsson H

PubMed paper · Aug 17 2025
Background: Artificial intelligence (AI)-based systems have the potential to increase the efficiency and effectiveness of breast cancer screening programs but need to be carefully validated before clinical implementation.
Purpose: To retrospectively evaluate an AI system to safely reduce the workload of a double-reading breast cancer screening program.
Material and Methods: All digital mammography (DM) screening examinations of women aged 40-74 years between August 2021 and January 2022 in Östergötland, Sweden were included. Analysis of the interval cancers (ICs) was performed in 2024. Each examination was double-read by two breast radiologists and processed by the AI system, which assigned a score of 1-10 to each examination based on increasing likelihood of cancer. In a retrospective simulation, the AI system was used for triaging; low-risk examinations (score 1-7) were selected for single reading and high-risk examinations (score 8-10) for double reading.
Results: A total of 15,468 DMs were included. Using an AI triaging strategy, 10,473 (67.7%) examinations received scores of 1-7, resulting in a 34% workload reduction. Overall, 52/53 screen-detected cancers were assigned a score of 8-10 by the AI system. One cancer was missed by the AI system (score 4) but was detected by the radiologists. In total, 11 cases of IC were found in the 2024 analysis.
Conclusion: Replacing one reader in breast cancer screening with an AI system for low-risk cases could safely reduce workload by 34%. Of the 11 IC cases found in the 2024 analysis, three were identified correctly by the AI system at the 2021-2022 examination.
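
The workload arithmetic is easy to reproduce; a toy simulation under the same triage rule (the scores here are random, not the study data) might look like this:

```python
# Toy re-run of the triage arithmetic: scores 1-7 -> single reading,
# 8-10 -> double reading. Scores are random placeholders.
import random

scores = [random.randint(1, 10) for _ in range(15000)]  # AI score per exam
low_risk = sum(1 for s in scores if s <= 7)

# Baseline double reading costs 2 reads per exam; triage saves one
# read for every low-risk exam.
baseline_reads = 2 * len(scores)
print(f"workload reduction: {low_risk / baseline_reads:.1%}")
```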

Vemu R, Birhiray D, Darwish B, Hollis R, Unnam S, Chilukuri S, Deveza L

PubMed paper · Aug 17 2025
Advances in computer vision and machine learning have augmented the ability to analyze orthopedic radiographs. A critical but underexplored component of this process is the accurate classification of radiographic views and localization of relevant anatomical regions, both of which can impact the performance of downstream diagnostic models. This study presents a deep learning object detection model and mobile application designed to classify distal radius radiographs into standard views (anterior-posterior [AP], lateral [LAT], and oblique [OB]) while localizing the anatomical region most relevant to distal radius fractures. A total of 1593 deidentified radiographs were collected from a single institution between 2021 and 2023 (544 AP, 538 LAT, and 521 OB). Each image was annotated using Labellerr software to draw bounding boxes encompassing the region spanning from the second metacarpophalangeal (MCP) joint to the distal third of the radius, with annotations verified by an experienced orthopedic surgeon. A YOLOv5 object detection model was fine-tuned and trained using a 70/15/15 train/validation/test split. The model achieved an overall accuracy of 97.3%, with class-specific accuracies of 99% for AP, 100% for LAT, and 93% for OB. Overall precision and recall were 96.8% and 97.5%, respectively. Model performance exceeded the expected accuracy from random guessing (p < 0.001, binomial test). A Streamlit-based mobile application was developed to support clinical deployment. This automated view classification step reduces the feature space by isolating only the relevant anatomy. Focusing subsequent models on the targeted region can minimize distraction from irrelevant areas and improve the accuracy of downstream fracture classification models.
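
For context, running inference with a fine-tuned YOLOv5 checkpoint is a few lines via torch.hub; the weights path and image filename below are hypothetical stand-ins, not the authors' released artifacts.

```python
# Hypothetical inference sketch for a fine-tuned YOLOv5 view classifier.
# Requires the ultralytics/yolov5 repo dependencies; paths are assumptions.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="distal_radius_views.pt")  # hypothetical weights
results = model("wrist_xray.jpg")                      # AP / LAT / OB + box
df = results.pandas().xyxy[0]   # boxes with predicted class and confidence
print(df[["name", "confidence", "xmin", "ymin", "xmax", "ymax"]])
```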

Sam Kumar GV, T RK

PubMed paper · Aug 17 2025
Brain tumours are among today's major health threats, yet current systems focus mainly on diagnostic methods and medical imaging to understand them. Here, the Shepard Quantum Dilated Forward Harmonic Net (ShQDFHNet) is developed for brain tumour detection using MRI scans. It starts by enhancing images with high-boost filtering to highlight key features, then uses a Log-Cosh Point-Wise Pyramid Attention Network (Log-Cosh PPANet) for accurate tumour segmentation, guided by a refined Log-Cosh Dice Loss. To capture texture details, features such as the Spatial Grey-Level Dependence Matrix (SGLDM) and Gray-Level Co-occurrence Matrix (GLCM) are extracted. The final detection uses ShQDFHNet, combining a Shepard Convolutional Neural Network (ShCNN) and a Quantum Dilated Convolutional Neural Network (QDCNN), with layers enhanced by a Forward Harmonic Analysis Network. ShQDFHNet achieved strong performance on the Brain Tumour MRI dataset, with 90.69% accuracy, 91.14% True Positive Rate (TPR), and 90.61% True Negative Rate (TNR) under 9-fold cross-validation. The use of high-boost filtering, Log-Cosh PPANet, and texture-based features improves input data quality and enables accurate tumour segmentation in MRI scans. The proposed ShQDFHNet model improves feature learning and achieves strong performance on brain tumour MRI data.
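
GLCM texture features of the kind listed above can be computed with scikit-image; a minimal sketch on a stand-in image follows (SGLDM features rest on the same co-occurrence idea).

```python
# Minimal GLCM texture-feature sketch using scikit-image; the image is
# a random stand-in for an MRI slice.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in slice
glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```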

Yucheng Tang, Pawel Rajwa, Alexander Ng, Yipei Wang, Wen Yan, Natasha Thorley, Aqua Asif, Clare Allen, Louise Dickinson, Francesco Giganti, Shonit Punwani, Daniel C. Alexander, Veeru Kasivisvanathan, Yipeng Hu

arXiv preprint · Aug 16 2025
Foundation models in medical imaging have shown promising label efficiency, achieving high downstream performance with only a fraction of annotated data. Here, we evaluate this in prostate multiparametric MRI using ProFound, a domain-specific vision foundation model pretrained on large-scale prostate MRI datasets. We investigate how variable image quality affects label-efficient finetuning by measuring the generalisability of finetuned models. Experiments systematically vary high-/low-quality image ratios in finetuning and evaluation sets. Our findings indicate that the image quality distribution and its finetune-and-test mismatch significantly affect model performance. In particular: a) varying the ratio of high- to low-quality images between finetuning and test sets leads to notable differences in downstream performance; and b) the presence of sufficient high-quality images in the finetuning set is critical for maintaining strong performance, whilst the importance of a matched finetuning and testing distribution varies between downstream tasks, such as automated radiology reporting and prostate cancer detection. When quality ratios are consistent, finetuning needs far less labeled data than training from scratch, but label efficiency depends on the image quality distribution. Without enough high-quality finetuning data, pretrained models may fail to outperform those trained without pretraining. This highlights the importance of assessing and aligning quality distributions between finetuning and deployment, and the need for quality standards in finetuning data for specific downstream tasks. Using ProFound, we show the value of quantifying image quality in both finetuning and deployment to fully realise the data and compute efficiency benefits of foundation models.
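
The ratio-controlled design can be emulated with simple stratified sampling; a sketch with placeholder scan IDs (not the ProFound datasets) follows.

```python
# Sketch of ratio-controlled finetuning-set construction: sweep the
# fraction of high-quality scans. Scan ID lists are placeholders.
import random

high_q = [f"hq_{i}" for i in range(800)]   # high-quality scan IDs
low_q = [f"lq_{i}" for i in range(800)]    # low-quality scan IDs

def make_finetune_set(n, high_fraction, seed=0):
    rng = random.Random(seed)
    n_high = round(n * high_fraction)
    return rng.sample(high_q, n_high) + rng.sample(low_q, n - n_high)

for frac in (0.0, 0.25, 0.5, 0.75, 1.0):   # sweep the quality mix
    subset = make_finetune_set(400, frac)
    print(f"high-quality fraction {frac}: {len(subset)} scans")
```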

Rakesh Thakur, Yusra Tariq

arXiv preprint · Aug 16 2025
Solving tough clinical questions that require both image and text understanding is still a major challenge in healthcare AI. In this work, we propose Q-FSRU, a new model that combines Frequency Spectrum Representation and Fusion (FSRU) with a method called Quantum Retrieval-Augmented Generation (Quantum RAG) for medical Visual Question Answering (VQA). The model takes in features from medical images and related text, then shifts them into the frequency domain using Fast Fourier Transform (FFT). This helps it focus on more meaningful data and filter out noise or less useful information. To improve accuracy and ensure that answers are based on real knowledge, we add a quantum-inspired retrieval system. It fetches useful medical facts from external sources using quantum-based similarity techniques. These details are then merged with the frequency-based features for stronger reasoning. We evaluated our model using the VQA-RAD dataset, which includes real radiology images and questions. The results showed that Q-FSRU outperforms earlier models, especially on complex cases needing image-text reasoning. The mix of frequency and quantum information improves both performance and explainability. Overall, this approach offers a promising way to build smart, clear, and helpful AI tools for doctors.
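
The FFT step maps fused features into the frequency domain, where magnitudes can be filtered before downstream reasoning; a minimal PyTorch sketch with illustrative dimensions (not the Q-FSRU implementation):

```python
# Minimal frequency-domain feature sketch: FFT over fused features,
# keep low-frequency magnitudes. Dimensions are illustrative.
import torch

fused = torch.randn(8, 512)               # fused image + text features
spectrum = torch.fft.fft(fused, dim=-1)   # shift to the frequency domain
magnitude = spectrum.abs()
low_pass = magnitude[:, :128]             # keep dominant low-frequency bands
print(low_pass.shape)                     # torch.Size([8, 128])
```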

Shapiro DD, Abel EJ, Albiges L, Battle D, Berg SA, Campbell MT, Cella D, Coleman K, Garmezy B, Geynisman DM, Hall T, Henske EP, Jonasch E, Karam JA, La Rosa S, Leibovich BC, Maranchie JK, Master VA, Maughan BL, McGregor BA, Msaouel P, Pal SK, Perez J, Plimack ER, Psutka SP, Riaz IB, Rini BI, Shuch B, Simon MC, Singer EA, Smith A, Staehler M, Tang C, Tannir NM, Vaishampayan U, Voss MH, Zakharia Y, Zhang Q, Zhang T, Carlo MI

PubMed paper · Aug 16 2025
Accurate prognostication and personalized treatment selection remain major challenges in kidney cancer. This consensus initiative aimed to provide actionable expert guidance on the development and clinical integration of prognostic and predictive biomarkers and risk stratification tools to improve patient care and guide future research. A modified Delphi method was employed to develop consensus statements among a multidisciplinary panel of experts in urologic oncology, medical oncology, radiation oncology, pathology, molecular biology, radiology, outcomes research, biostatistics, industry, and patient advocacy. Over three rounds, including an in-person meeting, 20 initial statements were evaluated, refined, and voted on. Consensus was defined a priori as a median Likert score ≥8. Nineteen final consensus statements were endorsed. These span key domains including biomarker prioritization (favoring prognostic biomarkers), rigorous methodology for subgroup and predictive analyses, the development of multi-institutional prospective registries, incorporation of biomarkers into trial design, and improvements in data and biospecimen access. The panel also identified high-priority biomarker types (e.g., AI-based image analysis, ctDNA) for future research. This is the first consensus statement specifically focused on biomarker and risk model development for kidney cancer using a structured Delphi process. The recommendations emphasize the need for rigorous methodology, collaborative infrastructure, prospective data collection, and a focus on clinically translatable biomarkers. The resulting framework is intended to guide researchers, cooperative groups, and stakeholders in advancing personalized care for patients with kidney cancer.
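
The endorsement rule itself is purely arithmetic; a toy check of the median-≥8 criterion with invented panel scores:

```python
# Toy check of the a priori consensus rule (median Likert score >= 8).
# Statements and scores below are invented for illustration.
from statistics import median

statement_scores = {
    "prioritise prognostic biomarkers": [9, 8, 9, 7, 10, 8],
    "require external validation": [6, 7, 8, 5, 7, 6],
}
for statement, scores in statement_scores.items():
    endorsed = median(scores) >= 8
    print(f"{statement}: median={median(scores)}, endorsed={endorsed}")
```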

Zgool T, Antico M, Edwards C, Fontanarosa D

PubMed paper · Aug 16 2025
Abdominal haemorrhage is a life-threatening condition requiring prompt detection to enable timely intervention. Conventional ultrasound (US) is widely used but is highly operator-dependent, limiting its reliability outside clinical settings. In certain anatomical regions, in particular Morison's pouch, US provides higher detection reliability because free fluid preferentially accumulates in dependent areas. Recent advancements in artificial intelligence (AI)-integrated point-of-care US (POCUS) systems show promise for use in emergency, pre-hospital, military, and resource-limited environments. This systematic review evaluates the performance of AI-driven POCUS systems for detecting and estimating abdominal haemorrhage. A systematic search of Scopus, PubMed, EMBASE, and Web of Science (2014-2024) identified seven studies with sample sizes ranging from 94 to 6608 images and patient numbers ranging between 78 and 864 trauma patients. AI models, including YOLOv3, U-Net, and ResNet50, demonstrated high diagnostic accuracy, with sensitivity ranging from 88% to 98% and specificity from 68% to 99%. Most studies utilized 2D US imaging and conducted internal validation, typically employing systems such as the Philips Lumify and Mindray TE7. Model performance was predominantly assessed on internal datasets, wherein training and evaluation were performed on the same dataset. Of particular note, only one study validated its model on an independent dataset obtained from a different clinical setting. This limited use of external validation restricts the ability to evaluate the applicability of AI models across diverse populations and varying imaging conditions. Moreover, the Focused Assessment with Sonography in Trauma (FAST) is a protocol-driven US method for detecting free fluid in the abdominal cavity, primarily in trauma cases. However, while it is commonly used to assess the right upper quadrant, particularly Morison's pouch, which is gravity-dependent and sensitive for early haemorrhage, its application to other abdominal regions, such as the left upper quadrant and pelvis, remains underexplored. This is clinically significant, as fluid may preferentially accumulate in these areas depending on the mechanism of injury, patient positioning, or time since trauma, underscoring the need for broader anatomical coverage in AI applications. Researchers aiming to address the current reliance on 2D imaging and the limited use of external validation should focus future studies on integrating 3D imaging and utilising diverse, multicentre datasets to improve the reliability and generalizability of AI-driven POCUS systems for haemorrhage detection in trauma care.
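
As a worked reminder of the reported metrics, sensitivity and specificity come straight from a confusion matrix; the counts below are invented for illustration.

```python
# Worked example of sensitivity/specificity from a confusion matrix.
# Counts are invented, chosen to land inside the reported ranges.
tp, fn = 88, 12    # haemorrhage cases: detected vs. missed
tn, fp = 80, 20    # fluid-free cases: correctly cleared vs. false alarms

sensitivity = tp / (tp + fn)   # 0.88 -> low end of the reported 88-98%
specificity = tn / (tn + fp)   # 0.80 -> within the reported 68-99%
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```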

Omidi A, Shamaei A, Aktar M, King R, Leijser L, Souza R

PubMed paper · Aug 16 2025
Skull-stripping is an essential preprocessing step in the analysis of brain Magnetic Resonance Imaging (MRI). While deep learning-based methods have shown success with this task, strong domain shifts between adult and newborn brain MR images complicate model transferability. We previously developed unsupervised domain adaptation techniques to address the domain shift between these data without requiring newborn MRI data to be labeled. In this work, we build upon our previous domain adaptation framework by extensively expanding the training and validation datasets using weakly labeled newborn MRI scans from the Developing Human Connectome Project (dHCP), our private newborn dataset, and synthetic data generated by a Gaussian Mixture Model (GMM). While the core model architecture remains similar, we focus on validating the model's generalization across four diverse domains (adult, synthetic, public newborn, and private newborn MRI), demonstrating improved performance and robustness over our prior methods. These results highlight the impact of incorporating broader training data under weak supervision for newborn brain imaging analysis. Experimentally, the proposed approach outperforms our previous work, achieving a Dice coefficient of 0.9509 ± 0.0055 and a Hausdorff distance of 3.0883 ± 0.1833 on newborn MRI data, surpassing state-of-the-art models such as SynthStrip (Dice = 0.9412 ± 0.0063, Hausdorff = 3.1570 ± 0.1389). Including weakly labeled newborn data thus improves model performance and generalization and is useful for newborn brain imaging analysis. Our code is available at: https://github.com/abbasomidi77/Weakly-Supervised-DAUnet.
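
Both reported metrics are standard and easy to compute for binary brain masks; a minimal NumPy/SciPy sketch on toy masks (not the dHCP data):

```python
# Minimal Dice + symmetric Hausdorff sketch for binary masks.
# The two square masks below are toy stand-ins for brain masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

pred = np.zeros((64, 64), dtype=bool); pred[16:48, 16:48] = True
gt = np.zeros((64, 64), dtype=bool);   gt[18:50, 18:50] = True

# Dice = 2|A ∩ B| / (|A| + |B|)
dice = 2 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

# Symmetric Hausdorff distance over foreground pixel coordinates.
p_pts, g_pts = np.argwhere(pred), np.argwhere(gt)
hausdorff = max(directed_hausdorff(p_pts, g_pts)[0],
                directed_hausdorff(g_pts, p_pts)[0])
print(f"Dice={dice:.4f}, Hausdorff={hausdorff:.4f}")
```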