
Advancing 3D Medical Image Segmentation: Unleashing the Potential of Planarian Neural Networks in Artificial Intelligence

Ziyuan Huang, Kevin Huggins, Srikar Bellur

arXiv preprint, May 7 2025
Our study presents PNN-UNet, a method for constructing deep neural networks that replicate the planarian neural network (PNN) structure in the context of 3D medical image data. Planarians typically have a cerebral structure with two nerve cords: the cerebrum acts as a coordinator, while the nerve cords serve slightly different purposes within the organism's nervous system. Accordingly, PNN-UNet comprises a Deep-UNet and a Wide-UNet as the nerve cords, with a densely connected autoencoder performing the role of the brain. This distinct architecture offers advantages over both monolithic (UNet) and modular (Ensemble-UNet) networks. Our results on a 3D MRI hippocampus dataset, with and without data augmentation, demonstrate that PNN-UNet outperforms the baseline UNet and several other UNet variants in image segmentation.
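
For intuition, a minimal PyTorch sketch of the coordination idea described above is given below: two UNet-style branches whose predictions are weighted by a small dense autoencoder acting as the "brain". All class names, widths, and the weighted-fusion rule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): two 3D conv branches coordinated
# by a dense autoencoder, loosely mirroring the "two nerve cords + brain"
# layout described in the abstract. Names and sizes are hypothetical.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv3d(cin, cout, 3, padding=1),
                         nn.BatchNorm3d(cout), nn.ReLU(inplace=True))

class Branch(nn.Module):
    """Stand-in for a Deep-UNet or Wide-UNet 'nerve cord' branch."""
    def __init__(self, width=8, depth=2):
        super().__init__()
        layers, cin = [], 1
        for _ in range(depth):
            layers.append(conv_block(cin, width))
            cin = width
        self.body = nn.Sequential(*layers)
        self.head = nn.Conv3d(width, 1, 1)

    def forward(self, x):
        f = self.body(x)
        return self.head(f), f                    # per-voxel logits + features

class BrainAE(nn.Module):
    """Dense autoencoder that fuses pooled features from both branches."""
    def __init__(self, feat_dim, code_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim, code_dim), nn.ReLU())
        self.dec = nn.Linear(code_dim, 2)         # one mixing weight per branch

    def forward(self, f_deep, f_wide):
        pooled = torch.cat([f_deep.mean(dim=(2, 3, 4)),
                            f_wide.mean(dim=(2, 3, 4))], dim=1)
        return torch.softmax(self.dec(self.enc(pooled)), dim=1)

class PNNLikeSeg(nn.Module):
    def __init__(self):
        super().__init__()
        self.deep = Branch(width=8, depth=3)      # "deep" cord: more layers
        self.wide = Branch(width=16, depth=1)     # "wide" cord: more channels
        self.brain = BrainAE(feat_dim=8 + 16)

    def forward(self, x):
        logit_d, f_d = self.deep(x)
        logit_w, f_w = self.wide(x)
        w = self.brain(f_d, f_w).view(-1, 2, 1, 1, 1, 1)
        stacked = torch.stack([logit_d, logit_w], dim=1)
        return (w * stacked).sum(dim=1)           # brain-weighted fusion

# e.g. PNNLikeSeg()(torch.randn(1, 1, 32, 48, 32)) -> (1, 1, 32, 48, 32) logits
```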

3D Brain MRI Classification for Alzheimer Diagnosis Using CNN with Data Augmentation

Thien Nhan Vo, Bac Nam Ho, Thanh Xuan Truong

arXiv preprint, May 7 2025
A three-dimensional convolutional neural network was developed to classify T1-weighted brain MRI scans as healthy or Alzheimer's disease. The network comprises 3D convolution, pooling, batch normalization, dense ReLU layers, and a sigmoid output. Using stochastic noise injection and five-fold cross-validation, the model achieved a test-set accuracy of 0.912 and an area under the ROC curve of 0.961, an improvement of approximately 0.027 over resizing alone. Sensitivity and specificity both exceeded 0.90. These results align with prior work reporting gains of up to 0.10 via synthetic augmentation. The findings demonstrate the effectiveness of simple augmentation for 3D MRI classification and motivate future exploration of advanced augmentation methods and architectures such as 3D U-Net and vision transformers.
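
As a rough sketch of the evaluation protocol described (noise-injection augmentation plus five-fold cross-validation scored by ROC AUC), the snippet below uses toy data and a placeholder classifier; the noise scale and model choice are assumptions, not the paper's settings.

```python
# Hedged sketch: Gaussian noise injection as augmentation plus stratified
# five-fold cross-validation with ROC AUC. Toy data stands in for 3D volumes.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def augment_with_noise(x, y, sigma=0.05, copies=1, rng=None):
    """Append Gaussian-noise copies of each (flattened) volume to the set."""
    if rng is None:
        rng = np.random.default_rng(0)
    noisy = [x + rng.normal(0.0, sigma, size=x.shape) for _ in range(copies)]
    return np.vstack([x] + noisy), np.concatenate([y] * (copies + 1))

# toy stand-in data: 100 "volumes" flattened to feature vectors
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 512))
y = rng.integers(0, 2, size=100)

aucs = []
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    X_tr, y_tr = augment_with_noise(X[train_idx], y[train_idx])
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    scores = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))

print(f"mean AUC over 5 folds: {np.mean(aucs):.3f}")
```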

Cross-organ all-in-one parallel compressed sensing magnetic resonance imaging

Baoshun Shi, Zheng Liu, Xin Meng, Yan Yang

arXiv preprint, May 7 2025
Recent advances in deep learning-based parallel compressed sensing magnetic resonance imaging (p-CSMRI) have significantly improved reconstruction quality. However, current p-CSMRI methods often require training a separate deep neural network (DNN) for each organ due to anatomical variations, creating a barrier to developing generalized medical image reconstruction systems. To address this, we propose CAPNet (cross-organ all-in-one deep unfolding p-CSMRI network), a unified framework that implements a p-CSMRI iterative algorithm via three specialized modules: an auxiliary variable module, a prior module, and a data consistency module. Recognizing that p-CSMRI systems often employ varying sampling ratios for different organs, resulting in organ-specific artifact patterns, we introduce an artifact generation submodule, which extracts and integrates artifact features into the data consistency module to enhance the discriminative capability of the overall network. For the prior module, we design an organ structure-prompt generation submodule that leverages structural features extracted from the Segment Anything Model (SAM) to create cross-organ prompts. These prompts are strategically incorporated into the prior module through an organ structure-aware Mamba submodule. Comprehensive evaluations on a cross-organ dataset confirm that CAPNet achieves state-of-the-art reconstruction performance across multiple anatomical structures using a single unified model. Our code will be published at https://github.com/shibaoshun/CAPNet.
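
The data consistency module mentioned above follows a pattern common to unfolded CS-MRI networks: after the prior/denoising step, the measured k-space samples are re-imposed at the sampled locations. The sketch below illustrates that generic step only; it is not CAPNet's module.

```python
# Generic k-space data-consistency step used in many unfolded CS-MRI
# networks (a sketch, not CAPNet code): hard-replace the reconstruction's
# k-space values at sampled locations with the measured values.
import numpy as np

def data_consistency(x, y_measured, mask):
    """x: current image estimate (2D, complex)
    y_measured: undersampled k-space measurements (2D, complex)
    mask: boolean sampling mask in k-space."""
    k = np.fft.fft2(x)
    k[mask] = y_measured[mask]             # enforce measured samples
    return np.fft.ifft2(k)

# toy usage: 25% random sampling of a synthetic image
rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64)) + 0j
mask = rng.random((64, 64)) < 0.25
y = np.fft.fft2(img) * mask                # measurements (zero where unsampled)
x0 = np.fft.ifft2(y)                       # zero-filled initial estimate
x1 = data_consistency(x0, y, mask)         # one consistency step
```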

Opinions and preferences regarding artificial intelligence use in healthcare delivery: results from a national multi-site survey of breast imaging patients.

Dontchos BN, Dodelzon K, Bhole S, Edmonds CE, Mullen LA, Parikh JR, Daly CP, Epling JA, Christensen S, Grimm LJ

PubMed paper, May 6 2025
Artificial intelligence (AI) utilization is growing, but patient perceptions of AI are unclear. Our objective was to understand patient perceptions of AI through a multi-site survey of breast imaging patients. A 36-question survey was distributed to eight US practices (6 academic, 2 non-academic) from October 2023 through October 2024. This manuscript analyzes a subset of questions from the survey addressing digital health literacy and attitudes towards AI in medicine and breast imaging specifically. Multivariable analysis compared responses by respondent demographics. A total of 3,532 surveys were collected (response rate: 69.9%, 3,532/5,053). Median respondent age was 55 years (IQR 20). Most respondents were White (73.0%, 2,579/3,532) and had completed college (77.3%, 2,732/3,532). Overall, respondents were undecided (range: 43.2%-50.8%) on questions about general perceptions of AI in healthcare. Respondents with higher electronic health literacy, more education, and younger age were significantly more likely to consider it useful to utilize AI for aiding medical tasks (all p<0.001). In contrast, respondents with lower electronic health literacy and less education were significantly more likely to indicate it was a bad idea for AI to perform medical tasks (p<0.001). Non-White patients were more likely to express concerns that AI will not work as well for some groups as for others (p<0.05). Overall, favorable opinions of AI use for medical tasks were associated with younger age, more education, and higher electronic health literacy. As AI is increasingly implemented into clinical workflows, it is important to educate patients and provide transparency to build patient understanding and trust.

A Deep Learning Approach for Mandibular Condyle Segmentation on Ultrasonography.

Keser G, Yülek H, Öner Talmaç AG, Bayrakdar İŞ, Namdar Pekiner F, Çelik Ö

PubMed paper, May 6 2025
Deep learning techniques have demonstrated potential in various fields, including segmentation, and have recently been applied to medical image processing. This study aims to develop and evaluate computer-based diagnostic software for segmentation of the mandibular condyle in ultrasound images. A total of 668 retrospective ultrasound images of anonymized adult mandibular condyles were analyzed. The CranioCatch labeling program (CranioCatch, Eskişehir, Turkey) was utilized to annotate the mandibular condyle using a polygonal labeling method. These annotations were subsequently reviewed and validated by experts in oral and maxillofacial radiology. All test images were detected and segmented using the YOLOv8 deep learning artificial intelligence (AI) model. In evaluating the model's performance on the test images, it achieved an F1 score of 0.93, a sensitivity of 0.90, and a precision of 0.96. The automatic segmentation of the mandibular condyle from ultrasound images presents a promising application of artificial intelligence. This approach can help surgeons, radiologists, and other specialists save time in the diagnostic process.
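
As a quick consistency check (not from the paper's code), the reported F1 score follows directly from the stated precision and recall:

```python
# F1 = 2PR / (P + R); with the reported precision and recall this gives ~0.93.
precision, recall = 0.96, 0.90
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))   # 0.93
```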

New Targets for Imaging in Nuclear Medicine.

Brink A, Paez D, Estrada Lobato E, Delgado Bolton RC, Knoll P, Korde A, Calapaquí Terán AK, Haidar M, Giammarile F

PubMed paper, May 6 2025
Nuclear medicine is rapidly evolving with new molecular imaging targets and advanced computational tools that promise to enhance diagnostic precision and personalized therapy. Recent years have seen a surge in novel PET and SPECT tracers, such as those targeting prostate-specific membrane antigen (PSMA) in prostate cancer, fibroblast activation protein (FAP) in tumor stroma, and tau protein in neurodegenerative disease. These tracers enable more specific visualization of disease processes compared to traditional agents, fitting into a broader shift toward precision imaging in oncology and neurology. In parallel, artificial intelligence (AI) and machine learning techniques are being integrated into tracer development and image analysis. AI-driven methods can accelerate radiopharmaceutical discovery, optimize pharmacokinetic properties, and assist in interpreting complex imaging datasets. This editorial provides an expanded overview of emerging imaging targets and techniques, including theranostic applications that pair diagnosis with radionuclide therapy, and examines how AI is augmenting nuclear medicine. We discuss the implications of these advancements within the field's historical trajectory and address the regulatory, manufacturing, and clinical challenges that must be navigated. Innovations in molecular targeting and AI are poised to transform nuclear medicine practice, enabling more personalized diagnostics and radiotheranostic strategies in the era of precision healthcare.

Multi-task learning for joint prediction of breast cancer histological indicators in dynamic contrast-enhanced magnetic resonance imaging.

Sun R, Li X, Han B, Xie Y, Nie S

PubMed paper, May 6 2025
Achieving efficient analysis of multiple pathological indicators has great significance for breast cancer prognosis and therapeutic decision-making. In this study, we explore a deep multi-task learning (MTL) framework for collaborative prediction of histological grade and proliferation marker (Ki-67) status in breast cancer using multi-phase dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). In the novel hybrid multi-task architecture (HMT-Net), co-representative features are explicitly distilled using a feature extraction backbone. A customized prediction network is then introduced to perform soft-parameter sharing between the two correlated tasks. Specifically, task-common and task-specific knowledge is transmitted into tower layers for informative interactions. Furthermore, low-level feature maps containing tumor edges and texture details are recaptured by a hard-parameter sharing branch and incorporated into the tower layer for each subtask. Finally, the probabilities of the two histological indicators, predicted from the multi-phase DCE-MRI, are separately fused using a decision-level fusion strategy. Experimental results demonstrate that the proposed HMT-Net achieves superior discriminative performance compared with other recent MTL architectures and deep models based on single image series, with areas under the receiver operating characteristic curve of 0.908 for tumor grade and 0.694 for Ki-67 status. Benefiting from the innovative HMT-Net, our proposed method demonstrates strong robustness and flexibility in the collaborative prediction of breast biomarkers. Multi-phase DCE-MRI is expected to contribute valuable dynamic information for non-invasive breast cancer pathological assessment.
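
The decision-level fusion step can be pictured as combining each task's per-phase probabilities into a single score; the sketch below simply averages them, which is an illustrative assumption rather than the paper's exact fusion rule.

```python
# Minimal sketch of decision-level fusion across DCE-MRI phases: average
# each task's predicted probability over phases (illustrative rule only).
import numpy as np

def fuse_decisions(phase_probs):
    """phase_probs: array of shape (n_phases, n_tasks) holding per-phase
    predicted probabilities for each task (e.g. grade, Ki-67 status)."""
    return np.asarray(phase_probs).mean(axis=0)

# e.g. three phases, two tasks (tumor grade, Ki-67 status)
probs = [[0.81, 0.40],
         [0.76, 0.55],
         [0.88, 0.47]]
print(fuse_decisions(probs))   # -> fused probability per task
```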

Real-time brain tumour diagnoses using a novel lightweight deep learning model.

Alnageeb MHO, M H S

PubMed paper, May 6 2025
Brain tumours continue to be a primary cause of death worldwide, highlighting the critical need for effective and accurate diagnostic tools. This article presents MK-YOLOv8, an innovative lightweight deep learning framework developed for the real-time detection and categorization of brain tumours from MRI images. Based on the YOLOv8 architecture, the proposed model incorporates Ghost Convolution, the C3Ghost module, and the SPPELAN module to improve feature extraction and substantially decrease computational complexity. An x-small object detection layer has been added to support precise detection of small and x-small tumours, which is crucial for early diagnosis. Trained on the Figshare Brain Tumour (FBT) dataset comprising 3,064 MRI images, MK-YOLOv8 achieved a mean Average Precision (mAP) of 99.1% at IoU 0.50 and 88.4% at IoU 0.50-0.95, outperforming YOLOv8 (98% and 78.8%, respectively). Glioma recall improved by 26%, underscoring the enhanced sensitivity to challenging tumour types. With a computational footprint of only 96.9 GFLOPs (37.5% of YOLOv8x's FLOPs) and 12.6 million parameters (18.5% of YOLOv8's), MK-YOLOv8 delivers high efficiency with reduced resource demands. It was also trained on the Br35H dataset (801 images) to confirm the model's robustness and generalization, achieving a mAP of 98.6% at IoU 0.50. The model operates at 62 frames per second (FPS) and is suited to real-time clinical workflows. These developments establish MK-YOLOv8 as an innovative framework that overcomes challenges in tiny tumour identification and provides a generalizable, adaptable, and precise detection approach for brain tumour diagnostics in clinical settings.
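
The reported mAP@0.50 and mAP@0.50-0.95 both rest on the box intersection-over-union criterion; a generic sketch of that computation (not MK-YOLOv8 code) is shown below.

```python
# Box IoU, the overlap criterion behind mAP@0.50 and mAP@0.50-0.95.
# Boxes are (x1, y1, x2, y2) in pixels.
def box_iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

print(box_iou((10, 10, 50, 50), (30, 30, 70, 70)))  # ~0.143
```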

Keypoint localization and parameter measurement in ultrasound biomicroscopy anterior segment images based on deep learning.

Qinghao M, Sheng Z, Jun Y, Xiaochun W, Min Z

PubMed paper, May 6 2025
Accurate measurement of anterior segment parameters is crucial for diagnosing and managing ophthalmic conditions such as glaucoma, cataracts, and refractive errors. However, traditional clinical measurement methods are often time-consuming, labor-intensive, and susceptible to inaccuracies. Given the growing potential of artificial intelligence in ophthalmic diagnostics, this study aims to develop and evaluate a deep learning model capable of automatically extracting keypoints and precisely measuring multiple clinically significant anterior segment parameters from ultrasound biomicroscopy (UBM) images. These parameters include central corneal thickness (CCT), anterior chamber depth (ACD), pupil diameter (PD), angle-to-angle distance (ATA), sulcus-to-sulcus distance (STS), lens thickness (LT), and crystalline lens rise (CLR). A dataset of 716 UBM anterior segment images was collected from Tianjin Medical University Eye Hospital. YOLOv8 was utilized to segment four key anatomical structures (cornea-sclera, anterior chamber, pupil, and iris-ciliary body), thereby enhancing the accuracy of keypoint localization. Only images with an intact posterior lens capsule were selected to create an effective dataset for parameter measurement. Ten keypoints were localized across the dataset, allowing the calculation of seven essential parameters. Control experiments were conducted to evaluate the impact of segmentation on measurement accuracy, with model predictions compared against clinical gold standards. The segmentation model achieved a mean IoU of 0.8836 and an mPA of 0.9795. Following segmentation, the binary classification model attained an mAP of 0.9719, with a precision of 0.9260 and a recall of 0.9615. Keypoint localization exhibited a Euclidean distance error of 58.73 ± 63.04 μm, improving on the pre-segmentation error of 71.57 ± 67.36 μm. Localization mAP was 0.9826, with a precision of 0.9699, a recall of 0.9642, and an FPS of 32.64. In addition, parameter error analysis and Bland-Altman plots demonstrated improved agreement with clinical gold standards after segmentation. This deep learning approach to UBM image segmentation, keypoint localization, and parameter measurement is feasible and enhances clinical diagnostic efficiency for anterior segment parameters.
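
Once the keypoints are localized, each distance-type parameter reduces to a Euclidean distance between two keypoints scaled by the pixel spacing; the sketch below uses a hypothetical spacing and made-up coordinates for illustration.

```python
# Sketch of turning localized keypoints into anterior segment parameters:
# distance-type parameters (CCT, ACD, PD, ATA, STS, LT, ...) are Euclidean
# distances between two keypoints scaled by the pixel spacing.
# The spacing value and keypoint coordinates below are illustrative only.
import math

def param_um(p1, p2, um_per_pixel):
    """Euclidean distance between two (row, col) keypoints, in micrometres."""
    return math.dist(p1, p2) * um_per_pixel

um_per_pixel = 20.0                                     # hypothetical spacing
cct = param_um((102, 256), (130, 256), um_per_pixel)    # anterior/posterior cornea
pd_ = param_um((300, 180), (300, 335), um_per_pixel)    # pupil margin to margin
print(f"CCT ~ {cct:.0f} um, PD ~ {pd_:.0f} um")
```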

Stacking classifiers based on integrated machine learning model: fusion of CT radiomics and clinical biomarkers to predict lymph node metastasis in locally advanced gastric cancer patients after neoadjuvant chemotherapy.

Ling T, Zuo Z, Huang M, Ma J, Wu L

PubMed paper, May 6 2025
The early prediction of lymph node positivity (LN+) after neoadjuvant chemotherapy (NAC) is crucial for optimizing individualized treatment strategies. This study aimed to integrate radiomic features and clinical biomarkers through machine learning (ML) approaches to enhance prediction accuracy by focusing on patients with locally advanced gastric cancer (LAGC). We retrospectively enrolled 277 patients with LAGC and randomly divided them into training (n = 193) and validation (n = 84) sets at a 7:3 ratio. In total, 1,130 radiomics features were extracted from pre-treatment portal venous phase computed tomography scans. These features were linearly combined to develop a radiomics score (rad score) through feature engineering. Then, using the rad score and clinical biomarkers as input features, we applied simple statistical strategies (relying on a single ML model) and integrated statistical strategies (including classification model integration techniques, such as hard voting, soft voting, and stacking) to predict LN+ post-NAC. The diagnostic performance of the model was assessed using receiver operating characteristic curves with corresponding areas under the curve (AUC). Of all ML models, the stacking classifier, an integrated statistical strategy, exhibited the best performance, achieving an AUC of 0.859 for predicting LN+ in patients with LAGC. This predictive model was transformed into a publicly available online risk calculator. We developed a stacking classifier that integrates radiomics and clinical biomarkers to predict LN+ in patients with LAGC undergoing surgical resection, providing personalized treatment insights.
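
A hedged scikit-learn sketch of the stacking idea follows: base learners over the rad score plus clinical biomarkers, combined by a logistic-regression meta-learner. The base/meta model choices and toy data are placeholders, not the paper's exact configuration.

```python
# Hedged sketch of a stacking classifier on a radiomics score plus clinical
# biomarkers; base and meta learners are placeholders, and the data are toy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(277, 4))        # columns: rad score + 3 clinical biomarkers
y = rng.integers(0, 2, size=277)     # LN+ label (toy data)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3,
                                           random_state=0, stratify=y)
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_va, stack.predict_proba(X_va)[:, 1]))
```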