Page 294 of 346 · 3455 results

Artificial intelligence medical image-aided diagnosis system for risk assessment of adjacent segment degeneration after lumbar fusion surgery.

Dai B, Liang X, Dai Y, Ding X

pubmed logopapers · Jun 1 2025
The existing assessment of adjacent segment degeneration (ASD) risk after lumbar fusion surgery focuses on a single type of clinical information or imaging manifestation. In the early stages, obvious degeneration characteristics are difficult to detect, so a patient's true risk cannot be fully revealed. Evaluations based on imaging alone ignore patients' clinical symptoms and changes in quality of life, limiting both the understanding of the natural course of ASD and a comprehensive assessment of its risk factors, and hindering the development of effective prevention strategies. To improve the quality of postoperative management and effectively identify the characteristics of ASD, this paper studies the risk assessment of ASD after lumbar fusion surgery using an artificial intelligence (AI) medical image-aided diagnosis system. First, a collaborative attention mechanism extracts single-modal features and then fuses the multi-modal features of computed tomography (CT) and magnetic resonance imaging (MRI) images. Then, the similarity matrix is weighted to achieve complementarity of multi-modal information, and the stability of feature extraction is improved through a residual network structure. Finally, a fully connected network (FCN) is combined with a multi-task learning framework to provide a more comprehensive assessment of ASD risk. Experimental analysis shows that, compared with three advanced models (three-dimensional convolutional neural networks (3D-CNN), U-Net++, and deep residual networks (DRN)), the accuracy of the proposed model is 3.82%, 6.17%, and 6.68% higher, respectively; the precision is 0.56%, 1.09%, and 4.01% higher; and the recall is 3.41%, 4.85%, and 5.79% higher.
The conclusion is that the AI medical image-aided diagnosis system can help accurately identify the characteristics of ASD and effectively assess risk after lumbar fusion surgery.
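The similarity-weighted fusion of two modalities described in this abstract can be sketched generically. The snippet below is a minimal schematic of co-attention between CT and MRI feature matrices; the function names, shapes, and the additive fusion are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def coattention_fuse(ct_feats, mri_feats):
    """Fuse CT and MRI feature matrices via a weighted similarity matrix.

    ct_feats, mri_feats: arrays of shape (n, d), one feature vector per
    region/patch. Each modality attends to the other through the pairwise
    similarity matrix, so complementary information flows both ways.
    """
    sim = ct_feats @ mri_feats.T                     # (n, n) similarity
    ct_attended = softmax(sim, axis=1) @ mri_feats   # MRI info pulled into CT
    mri_attended = softmax(sim.T, axis=1) @ ct_feats # CT info pulled into MRI
    # residual-style addition, then concatenation of both enriched streams
    return np.concatenate([ct_feats + ct_attended,
                           mri_feats + mri_attended], axis=1)
```

The residual-style addition keeps each modality's own features intact while mixing in the attended information from the other modality.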

Mexican dataset of digital mammograms (MEXBreast) with suspicious clusters of microcalcifications.

Lozoya RSL, Barragán KN, Domínguez HJO, Azuela JHS, Sánchez VGC, Villegas OOV

pubmed logopapers · Jun 1 2025
Breast cancer is one of the most prevalent cancers affecting women worldwide, and early detection and treatment are crucial to significantly reducing mortality rates. Microcalcifications (MCs) are of particular importance among the various breast lesions. These tiny calcium deposits within breast tissue are present in approximately 30% of malignant tumors and can serve as critical indirect indicators of early-stage breast cancer. Three or more MCs within an area of 1 cm² are considered a microcalcification cluster (MCC) and assigned BI-RADS category 4, indicating suspicion of malignancy. Mammography is the most widely used technique for breast cancer detection, and approximately one in two mammograms showing MCCs is confirmed as cancerous through biopsy. MCCs are challenging to detect, even for experienced radiologists, underscoring the need for computer-aided detection tools such as convolutional neural networks (CNNs). CNNs require large amounts of domain-specific data with consistent resolutions for effective training. However, most publicly available mammogram datasets either lack resolution information or are compiled from heterogeneous sources. Additionally, MCCs are often either unlabeled or sparsely represented in these datasets, limiting their utility for training CNNs. Here we present MEXBreast, an annotated Mexican digital mammogram database of MCCs containing images at resolutions of 50, 70, and 100 microns. MEXBreast aims to support the training, validation, and testing of deep learning CNNs.

Network Occlusion Sensitivity Analysis Identifies Regional Contributions to Brain Age Prediction.

He L, Wang S, Chen C, Wang Y, Fan Q, Chu C, Fan L, Xu J

pubmed logopapers · Jun 1 2025
Deep learning frameworks utilizing convolutional neural networks (CNNs) have frequently been used for brain age prediction and have achieved outstanding performance. Nevertheless, deep learning remains a black box, as it is hard to interpret which brain parts contribute significantly to the predictions. To tackle this challenge, we first trained a lightweight, fully convolutional CNN model for brain age estimation on a large sample (N = 3054, age range = [8, 80] years) and tested it on an independent dataset (N = 555, age range = [8, 80] years). We then developed an interpretable scheme combining network occlusion sensitivity analysis (NOSA) with a fine-grained human brain atlas to uncover what the model had learned. Our findings show that the dorsolateral and dorsomedial frontal cortex, anterior cingulate cortex, and thalamus had the highest contributions to age prediction across the lifespan. More interestingly, we observed that different regions showed divergent patterns in their predictions for specific age groups and that the bilateral hemispheres contributed differently to the predictions. Regions in the frontal lobe were essential predictors in both the developmental and aging stages, with the thalamus remaining relatively stable and saliently correlated with other regional changes throughout the lifespan. The lateral and medial temporal brain regions gradually became involved during the aging phase. At the network level, the frontoparietal and default mode networks showed an inverted U-shaped contribution from the developmental to the aging stages. The framework could identify regional contributions to the brain age prediction model, which could help increase model interpretability when the model serves as an aging biomarker.
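The occlusion procedure this abstract describes can be sketched generically: occlude one atlas region at a time and measure how much the model's predicted age shifts. This is a minimal illustration; the `predict_age` callable, the integer-labelled atlas, and the zero-fill baseline are assumptions, not the authors' code:

```python
import numpy as np

def occlusion_sensitivity(volume, predict_age, atlas, baseline=0.0):
    """Score each atlas region by the change it causes in predicted age.

    volume: 3D image array; atlas: integer-labelled array of the same
    shape mapping voxels to region IDs (0 = background);
    predict_age: callable taking a volume and returning a float age.
    """
    base_pred = predict_age(volume)
    scores = {}
    for region in np.unique(atlas):
        if region == 0:                       # skip background
            continue
        occluded = volume.copy()
        occluded[atlas == region] = baseline  # mask out this region only
        scores[region] = abs(predict_age(occluded) - base_pred)
    return scores
```

Regions whose occlusion moves the prediction the most are interpreted as contributing most to the age estimate.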

<i>Radiology: Cardiothoracic Imaging</i> Highlights 2024.

Catania R, Mukherjee A, Chamberlin JH, Calle F, Philomina P, Mastrodicasa D, Allen BD, Suchá D, Abbara S, Hanneman K

pubmed logopapers · Jun 1 2025
<i>Radiology: Cardiothoracic Imaging</i> publishes research, technical developments, and reviews related to cardiac, vascular, and thoracic imaging. The current review article, led by the <i>Radiology: Cardiothoracic Imaging</i> trainee editorial board, highlights the most impactful articles published in the journal between November 2023 and October 2024. The review encompasses various aspects of cardiac, vascular, and thoracic imaging related to coronary artery disease, cardiac MRI, valvular imaging, congenital and inherited heart diseases, thoracic imaging, lung cancer, artificial intelligence, and health services research. Key highlights include the role of CT fractional flow reserve analysis to guide patient management, the role of MRI elastography in identifying age-related myocardial stiffness associated with increased risk of heart failure, review of MRI in patients with cardiovascular implantable electronic devices and fractured or abandoned leads, imaging of mitral annular disjunction, specificity of the Lung Imaging Reporting and Data System version 2022 for detecting malignant airway nodules, and a radiomics-based reinforcement learning model to analyze serial low-dose CT scans in lung cancer screening. Ongoing research and future directions include artificial intelligence tools for applications such as plaque quantification using coronary CT angiography and growing understanding of the interconnectedness of environmental sustainability and cardiovascular imaging. <b>Keywords:</b> CT, MRI, CT-Coronary Angiography, Cardiac, Pulmonary, Coronary Arteries, Heart, Lung, Mediastinum, Mitral Valve, Aortic Valve, Artificial Intelligence © RSNA, 2025.

A Machine Learning Algorithm to Estimate the Probability of a True Scaphoid Fracture After Wrist Trauma.

Bulstra AEJ

pubmed logopapers · Jun 1 2025
To identify predictors of a true scaphoid fracture among patients with radial wrist pain following acute trauma, train 5 machine learning (ML) algorithms to predict scaphoid fracture probability, and design a decision rule to initiate advanced imaging in high-risk patients. Two prospective cohorts comprising 422 patients with radial wrist pain following wrist trauma were combined. There were 117 scaphoid fractures (28%) confirmed on computed tomography, magnetic resonance imaging, or radiographs. Eighteen fractures (15%) were occult. Predictors of a scaphoid fracture were identified among demographics, mechanism of injury, and examination maneuvers. Five ML algorithms were trained to calculate scaphoid fracture probability. The ML algorithms were assessed on their ability to discriminate between patients with and without a fracture (area under the receiver operating characteristic curve), agreement between observed and predicted probabilities (calibration), and overall performance (Brier score). The best performing ML algorithm was incorporated into a probability calculator. A decision rule was proposed to initiate advanced imaging among patients with negative radiographs. Pain over the scaphoid on ulnar deviation, sex, age, and mechanism of injury were most strongly associated with a true scaphoid fracture. The best performing ML algorithm yielded an area under the receiver operating characteristic curve, calibration slope, intercept, and Brier score of 0.77, 0.84, -0.01, and 0.159, respectively. The ML-derived decision rule proposes initiating advanced imaging in patients with radial-sided wrist pain, negative radiographs, and a fracture probability of ≥10%. When applied to our cohort, this would yield 100% sensitivity and 38% specificity and would have reduced the number of patients undergoing advanced imaging by 36% without missing a fracture.
The ML algorithm accurately calculated scaphoid fracture probability based on scaphoid pain on ulnar deviation, sex, age, and mechanism of injury. The ML decision rule may reduce the number of patients undergoing advanced imaging by a third, with a small risk of missing a fracture. External validation is required before implementation. Level of evidence: Diagnostic II.
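The triage logic of the ≥10% decision rule reported above is simple enough to state as code. A minimal sketch, assuming a model-estimated probability as input; the function name and threshold parameter are illustrative, not the authors' calculator:

```python
def needs_advanced_imaging(fracture_probability, radiographs_negative,
                           threshold=0.10):
    """Apply the >=10% probability decision rule for advanced imaging.

    Advanced imaging (CT/MRI) is suggested only when radiographs are
    negative and the estimated fracture probability meets the threshold;
    a positive radiograph already confirms the fracture.
    """
    if not radiographs_negative:
        return False  # fracture visible on radiographs; no triage needed
    return fracture_probability >= threshold
```

In the reported cohort, this rule kept sensitivity at 100% while sparing roughly a third of patients from advanced imaging.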

FeaInfNet: Diagnosis of Medical Images With Feature-Driven Inference and Visual Explanations.

Peng Y, He L, Hu D, Liu Y, Yang L, Shang S

pubmed logopapers · Jun 1 2025
Interpretable deep-learning models have received widespread attention in the field of image recognition. However, owing to the coexistence of medical-image categories and the challenge of identifying subtle decision-making regions, many proposed interpretable deep-learning models suffer from insufficient accuracy and interpretability in diagnosing images of medical diseases. Therefore, this study proposed a feature-driven inference network (FeaInfNet) that incorporates a feature-based network reasoning structure. First, local feature masks (LFMs) were developed to extract feature vectors, providing global information for these vectors and enhancing the expressive ability of FeaInfNet. Second, FeaInfNet compares the similarity of the feature vector corresponding to each subregion image patch with the disease and normal prototype templates that may appear in the region, then combines the comparisons across subregions when making the final diagnosis. This strategy simulates a doctor's diagnostic process, making the model interpretable during reasoning while avoiding misleading results caused by the participation of normal areas. Finally, adaptive dynamic masks (Adaptive-DM) were proposed to interpret feature vectors and prototypes as human-understandable image patches, providing accurate visual interpretation. Extensive experiments on multiple publicly available medical datasets, including RSNA, iChallenge-PM, COVID-19, ChinaCXRSet, MontgomerySet, and CBIS-DDSM, demonstrated that our method achieves state-of-the-art classification accuracy and interpretability compared with baseline methods in the diagnosis of medical images. Additional ablation studies verified the effectiveness of each component.
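The patch-versus-prototype comparison described here can be sketched with plain cosine similarity. This is a schematic stand-in, not FeaInfNet: the feature vectors, prototype sets, and max-pooling of patch evidence are all illustrative assumptions:

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two 1D feature vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def patch_prototype_score(patch_feats, disease_protos, normal_protos):
    """Score an image from per-patch similarities to prototype templates.

    For each patch, take its best similarity to any disease prototype
    minus its best similarity to any normal prototype, so normal-looking
    patches cannot push the diagnosis toward disease. The image score is
    the strongest patch evidence.
    """
    scores = []
    for f in patch_feats:
        d = max(cosine(f, p) for p in disease_protos)
        n = max(cosine(f, p) for p in normal_protos)
        scores.append(d - n)
    return max(scores)
```

Because each patch is compared against both disease and normal templates, the per-patch scores themselves serve as a human-readable explanation of where the evidence came from.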

Significant reduction in manual annotation costs in ultrasound medical image database construction through step-by-step artificial intelligence pre-annotation.

Zheng F, XingMing L, JuYing X, MengYing T, BaoJian Y, Yan S, KeWei Y, ZhiKai L, Cheng H, KeLan Q, XiHao C, WenFei D, Ping H, RunYu W, Ying Y, XiaoHui B

pubmed logopapers · Jun 1 2025
This study investigates the feasibility of reducing manual image annotation costs in medical image database construction through a step-by-step approach in which an artificial intelligence (AI) model trained on a previous batch of data automatically pre-annotates the next batch of images, taking thyroid nodule annotation in ultrasound images as an example. The study used YOLOv8 as the AI model. During training, in addition to conventional image augmentation techniques, augmentation methods specifically tailored to ultrasound images were employed to balance the quantity differences between thyroid nodule classes and enhance training effectiveness. Training the model with augmented data significantly outperformed training with raw images. When the number of original images was only 1,360, with 7 thyroid nodule classifications, pre-annotation using the AI model trained on augmented data could save at least 30% of the manual annotation workload for junior physicians. When the number of original images reached 6,800, the classification accuracy of the AI model trained on augmented data was very close to that of junior physicians, eliminating the need for manual preliminary annotation.
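The step-by-step pre-annotation workflow reads naturally as a loop: the model trained on all previously reviewed batches drafts labels for the next batch, and a physician corrects the drafts. A minimal sketch with hypothetical `train`, `predict`, and `review` hooks (none of these are the study's actual code, which used YOLOv8):

```python
def stepwise_preannotation(batches, train, predict, review):
    """Iterative pre-annotation: each reviewed batch grows the training
    set, and the retrained model pre-labels the next batch.

    train(labelled) -> model; predict(model, batch) -> draft labels;
    review(batch, drafts) -> physician-corrected labels. The first batch
    is annotated fully manually (drafts=None), since no model exists yet.
    """
    labelled = []
    model = None
    for batch in batches:
        drafts = predict(model, batch) if model is not None else None
        labelled.extend(review(batch, drafts))  # physician in the loop
        model = train(labelled)                 # retrain on everything so far
    return model, labelled
```

As the labelled pool grows, the drafts need fewer corrections, which is where the reported 30%+ workload saving comes from.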

Semi-Supervised Gland Segmentation via Feature-Enhanced Contrastive Learning and Dual-Consistency Strategy.

Yu J, Li B, Pan X, Shi Z, Wang H, Lan R, Luo X

pubmed logopapers · Jun 1 2025
In the field of gland segmentation in histopathology, deep-learning methods have made significant progress. However, most existing methods not only require a large amount of high-quality annotated data but also tend to confuse the interior of the gland with the background. To address this challenge, we propose a new semi-supervised method for gland segmentation, named DCCL-Seg, which follows the teacher-student framework. Our approach proceeds in the following steps. First, we design a contrastive learning module to improve the ability of the student model's feature extractor to distinguish between gland and background features. Then, we introduce a signed distance field (SDF) prediction task and employ a dual-consistency strategy (across tasks and models) to better reinforce the learning of gland interiors. Next, we propose a pseudo-label filtering and reweighting mechanism, which filters and reweights the pseudo labels generated by the teacher model based on confidence. However, even after reweighting, the pseudo labels may still be influenced by unreliable pixels. Finally, we design an assistant predictor to learn the reweighted pseudo labels; it does not interfere with the student model's predictor and ensures the reliability of the student model's predictions. Experimental results on the publicly available GlaS and CRAG datasets demonstrate that our method outperforms other semi-supervised medical image segmentation methods.
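Confidence-based filtering and reweighting of teacher pseudo labels, as described above, can be sketched with a simple per-pixel scheme. The thresholds and the linear ramp below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def reweight_pseudo_labels(probs, low=0.6, high=0.9):
    """Filter and reweight teacher pseudo labels by confidence.

    probs: per-pixel maximum softmax probability from the teacher.
    Pixels below `low` are dropped (weight 0), pixels above `high`
    receive full weight 1, and pixels in between are linearly scaled,
    so uncertain regions contribute less to the student's loss.
    """
    weights = np.clip((probs - low) / (high - low), 0.0, 1.0)
    weights[probs < low] = 0.0  # hard filter on very unreliable pixels
    return weights
```

The resulting weight map multiplies the per-pixel loss, which is the standard way such reweighting enters a semi-supervised objective.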

The Pivotal Role of Baseline LDCT for Lung Cancer Screening in the Era of Artificial Intelligence.

De Luca GR, Diciotti S, Mascalchi M

pubmed logopapers · Jun 1 2025
In this narrative review, we address the ongoing challenges of lung cancer (LC) screening using chest low-dose computed tomography (LDCT) and explore the contributions of artificial intelligence (AI) in overcoming them. We focus on evaluating the initial (baseline) LDCT examination, which provides a wealth of information relevant to the screening participant's health. This includes the detection of large prevalent LCs and of small malignant nodules that are typically diagnosed as LCs upon growth in subsequent annual LDCT scans. Additionally, the baseline LDCT examination provides valuable information about smoking-related comorbidities, including cardiovascular disease, chronic obstructive pulmonary disease, and interstitial lung disease (ILD), by identifying relevant markers. Notably, these comorbidities, despite the slow progression of their markers, collectively exceed LC as ultimate causes of death at follow-up in LC screening participants. Computer-assisted diagnosis tools currently improve the reproducibility of radiologic readings and reduce the false-negative rate of LDCT. Deep learning (DL) tools that analyze the radiomic features of lung nodules are being developed to distinguish between benign and malignant nodules. Furthermore, AI tools can predict the risk of LC in the years following a baseline LDCT. AI tools that analyze baseline LDCT examinations can also compute the risk of cardiovascular disease or death, paving the way for personalized screening interventions. Additionally, DL tools are available for assessing osteoporosis and ILD, which help refine the individual's current and future health profile. The primary obstacles to AI integration into the LDCT screening pathway are generalizability of performance and explainability.

Adaptive Weighting Based Metal Artifact Reduction in CT Images.

Wang H, Wu Y, Wang Y, Wei D, Wu X, Ma J, Zheng Y

pubmed logopapers · Jun 1 2025
For the metal artifact reduction (MAR) task in computed tomography (CT) imaging, most existing deep-learning-based approaches select a single Hounsfield unit (HU) window followed by a normalization operation to preprocess CT images. In practical clinical scenarios, however, different body tissues and organs are often inspected under varying window settings for good contrast. Methods trained on a single fixed window therefore remove metal artifacts insufficiently when transferred to other windows. To alleviate this problem, a few works have proposed reconstructing CT images under multiple window configurations. Although they achieve good reconstruction performance across windows, they directly supervise learning for each window with equal weighting on the training set. To improve learning flexibility and model generalizability, in this paper we propose an adaptive weighting algorithm, called AdaW, for multiple-window metal artifact reduction, which can be applied to different deep MAR network backbones. Specifically, we first formulate the multiple-window learning task as a bi-level optimization problem. We then derive an adaptive weighting optimization algorithm in which the learning process for MAR under each window is automatically weighted via a learning-to-learn paradigm based on the training and validation sets. This rationale is substantiated through theoretical analysis. Based on different network backbones, experimental comparisons on five datasets covering different body sites comprehensively validate the effectiveness of AdaW in improving generalization performance, as well as its broad applicability. We will release the code at https://github.com/hongwang01/AdaW.
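The idea of weighting per-window losses from validation feedback can be sketched in a few lines. The softmax-over-validation-losses rule below is a simple stand-in for AdaW's learning-to-learn weighting, chosen so that harder windows (higher validation loss) receive more training weight; it is an assumption for illustration, not the paper's algorithm:

```python
import math

def update_window_weights(val_losses, temperature=1.0):
    """Map per-window validation losses to normalized training weights.

    Windows with higher validation loss get larger weight via a softmax,
    steering the shared model's capacity toward the windows where it
    currently generalizes worst.
    """
    exps = [math.exp(l / temperature) for l in val_losses]
    total = sum(exps)
    return [e / total for e in exps]

def weighted_total_loss(train_losses, weights):
    # the training objective: weighted sum of per-window losses
    return sum(w * l for w, l in zip(weights, train_losses))
```

In a bi-level setup, the weights would be updated on the validation set between training steps rather than fixed up front, which is the key difference from equal weighting.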