Effectiveness of Artificial Intelligence in detecting sinonasal pathology using clinical imaging modalities: a systematic review.

Petsiou DP, Spinos D, Martinos A, Muzaffar J, Garas G, Georgalas C

pubmed · May 19 2025
Sinonasal pathology can be complex and requires a systematic and meticulous approach. Artificial Intelligence (AI) has the potential to improve diagnostic accuracy and efficiency in sinonasal imaging, but its clinical applicability remains an area of ongoing research. This systematic review evaluates the methodologies and clinical relevance of AI in detecting sinonasal pathology through radiological imaging. Key search terms included "artificial intelligence," "deep learning," "machine learning," "neural network," and "paranasal sinuses." Abstract and full-text screening was conducted using predefined inclusion and exclusion criteria. Data were extracted on study design, AI architectures used (e.g., Convolutional Neural Networks (CNN), Machine Learning classifiers), and clinical characteristics, such as imaging modality (e.g., Computed Tomography (CT), Magnetic Resonance Imaging (MRI)). A total of 53 studies were analyzed: 85% were retrospective, 68% single-center, and 92.5% used internal databases. CT was the most common imaging modality (60.4%), and chronic rhinosinusitis without nasal polyposis (CRSsNP) was the most studied condition (34.0%). Forty-one studies employed neural networks, with classification the most frequent AI task (35.8%). Key performance metrics included Area Under the Curve (AUC), accuracy, sensitivity, specificity, precision, and F1-score. Quality assessment based on CONSORT-AI yielded a mean score of 16.0 ± 2. AI shows promise in improving sinonasal imaging interpretation. However, as existing research is predominantly retrospective and single-center, further studies are needed to evaluate AI's generalizability and applicability. More research is also required to explore AI's role in treatment planning and post-treatment prediction to support clinical integration.

Effect of low-dose colchicine on pericoronary inflammation and coronary plaque composition in chronic coronary disease: a subanalysis of the LoDoCo2 trial.

Fiolet ATL, Lin A, Kwiecinski J, Tutein Nolthenius J, McElhinney P, Grodecki K, Kietselaer B, Opstal TS, Cornel JH, Knol RJ, Schaap J, Aarts RAHM, Tutein Nolthenius AMFA, Nidorf SM, Velthuis BK, Dey D, Mosterd A

pubmed · May 19 2025
Low-dose colchicine (0.5 mg once daily) reduces the risk of major cardiovascular events in coronary disease, but its mechanism of action is not yet fully understood. We investigated whether low-dose colchicine is associated with changes in pericoronary inflammation and plaque composition in patients with chronic coronary disease. We performed a cross-sectional, nationwide subanalysis of the Low-Dose Colchicine 2 Trial (LoDoCo2, n=5522). Coronary CT angiography studies were performed in 151 participants randomised to colchicine or placebo after a median treatment duration of 28.2 months. Pericoronary adipose tissue (PCAT) attenuation measurements around proximal coronary artery segments and quantitative plaque analysis for the entire coronary tree were performed using artificial intelligence-enabled plaque analysis software. Median PCAT attenuation was not significantly different between the two groups (-79.5 Hounsfield units (HU) for colchicine versus -78.7 HU for placebo, p=0.236). Participants assigned to colchicine had a higher volume (169.6 mm³ vs 113.1 mm³, p=0.041) and burden (9.6% vs 7.0%, p=0.035) of calcified plaque, and a higher volume of dense calcified plaque (192.8 mm³ vs 144.3 mm³, p=0.048) compared with placebo, independent of statin therapy. Colchicine treatment was associated with a lower burden of low-attenuation plaque in participants on a low-intensity statin, but not in those on a high-intensity statin (p-interaction=0.037). Pericoronary inflammation did not differ between participants who received low-dose colchicine and those who received placebo. Low-dose colchicine was associated with a higher volume of calcified plaque, particularly dense calcified plaque, which is considered a feature of plaque stability.
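The trial's PCAT measurements were produced by dedicated AI-enabled plaque analysis software; as a rough sketch of what a PCAT attenuation readout reduces to, assuming a CT volume in Hounsfield units and a precomputed boolean mask of the pericoronary region (both array names are illustrative):

    import numpy as np

    FAT_WINDOW = (-190.0, -30.0)  # HU range conventionally used for adipose tissue

    def pcat_attenuation(hu: np.ndarray, peri_mask: np.ndarray) -> float:
        """Mean attenuation (HU) of adipose voxels inside a pericoronary mask."""
        fat = (hu >= FAT_WINDOW[0]) & (hu <= FAT_WINDOW[1]) & peri_mask
        return float(hu[fat].mean())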

Functional MRI Analysis of Cortical Regions to Distinguish Lewy Body Dementia From Alzheimer's Disease.

Kashyap B, Hanson LR, Gustafson SK, Sherman SJ, Sughrue ME, Rosenbloom MH

pubmed · May 19 2025
Cortical regions such as parietal area H (PH) and the fundus of the superior temporal sulcus (FST) are involved in higher visual function and may play a role in dementia with Lewy bodies (DLB), which is frequently associated with hallucinations. The authors evaluated functional connectivity between these two regions for distinguishing participants with DLB from those with Alzheimer's disease (AD) or mild cognitive impairment (MCI) and from cognitively normal (CN) individuals, to identify a functional connectivity MRI signature for DLB. Eighteen DLB participants completed cognitive testing and functional MRI scans and were matched to AD or MCI and CN individuals whose data were obtained from the Alzheimer's Disease Neuroimaging Initiative database (https://adni.loni.usc.edu). Images were analyzed alongside data from Human Connectome Project (HCP) comparison individuals using a machine learning-based, subject-specific HCP atlas derived from diffusion tractography. Bihemispheric functional connectivity of the PH to left FST regions was reduced in the DLB group compared with the AD and CN groups (mean±SD connectivity score=0.307±0.009 vs. 0.456±0.006 and 0.433±0.006, respectively). No significant differences were detected among the groups in connectivity within basal ganglia structures, and no significant correlations were observed between neuropsychological testing results and functional connectivity between the PH and FST regions. Performances on clock-drawing and number-cancellation tests were significantly and negatively correlated with connectivity between the right caudate nucleus and right substantia nigra for DLB participants but not for AD or CN participants. The functional connectivity between PH and FST regions is uniquely affected by DLB and may help distinguish this condition from AD.
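The connectivity scores reported above come from atlas-based analysis software; a minimal sketch of the quantity they typically represent, assuming two parcel-averaged BOLD time series (the HCP-atlas parcellation itself is done upstream):

    import numpy as np

    def functional_connectivity(ts_a: np.ndarray, ts_b: np.ndarray) -> float:
        """Pearson correlation between two ROI-averaged BOLD time series."""
        za = (ts_a - ts_a.mean()) / ts_a.std()
        zb = (ts_b - ts_b.mean()) / ts_b.std()
        return float(np.mean(za * zb))

    # Hypothetical parcel-averaged series for PH and left FST, shape (n_timepoints,)
    rng = np.random.default_rng(0)
    ph, fst = rng.standard_normal(200), rng.standard_normal(200)
    print(functional_connectivity(ph, fst))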

Segmentation of temporomandibular joint structures on MRI images using neural networks for diagnosis of pathologies

Maksim I. Ivanov, Olga E. Mendybaeva, Yuri E. Karyakin, Igor N. Glukhikh, Aleksey V. Lebedev

arxiv preprint · May 19 2025
This article explores the use of artificial intelligence for the diagnosis of pathologies of the temporomandibular joint (TMJ), in particular for the segmentation of the articular disc on MRI images. The relevance of the work stems from the high prevalence of TMJ pathologies and the need to improve the accuracy and speed of diagnosis in medical institutions. Existing solutions (Diagnocat, MandSeg) were analyzed and found unsuitable for studying the articular disc because of their orientation towards bone structures. To solve the problem, an original dataset of 94 images was collected with the classes "temporomandibular joint" and "jaw". To increase the amount of data, augmentation methods were used. U-Net, YOLOv8n, YOLOv11n, and Roboflow neural network models were then trained and compared. The evaluation was carried out using the Dice Score, Precision, Sensitivity, Specificity, and Mean Average Precision metrics. The results confirm the potential of the Roboflow model for segmentation of the temporomandibular joint. In the future, it is planned to develop an algorithm for measuring the distance between the jaws and determining the position of the articular disc, which will improve the diagnosis of TMJ pathologies.
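Dice Score, the first metric listed, has a compact definition on binary masks; a minimal sketch (array names are illustrative):

    import numpy as np

    def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
        """Dice = 2*|A & B| / (|A| + |B|) for binary segmentation masks."""
        pred, target = pred.astype(bool), target.astype(bool)
        inter = np.logical_and(pred, target).sum()
        return float((2.0 * inter + eps) / (pred.sum() + target.sum() + eps))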

Improving Deep Learning-Based Grading of Partial-thickness Supraspinatus Tendon Tears with Guided Diffusion Augmentation.

Ni M, Jiesisibieke D, Zhao Y, Wang Q, Gao L, Tian C, Yuan H

pubmed · May 19 2025
To develop and validate a deep learning system with guided diffusion-based data augmentation for grading partial-thickness supraspinatus tendon (SST) tears and to compare its performance with that of experienced radiologists, including external validation. This retrospective study included 1150 patients with arthroscopically confirmed SST tears, divided into a training set (741 patients), validation set (185 patients), and internal test set (185 patients). An independent external test set of 224 patients was used to assess generalizability. To address data imbalance, MRI images were augmented using a guided diffusion model. A ResNet-34 model was employed for Ellman grading of bursal-sided and articular-sided partial-thickness tears across different MRI sequences (oblique coronal [OCOR], oblique sagittal [OSAG], and combined OCOR+OSAG). Performance was evaluated using AUC and precision-recall curves, and compared with three experienced musculoskeletal (MSK) radiologists. The DeLong test was used to compare performance across sequence combinations. A total of 26,020 OCOR images and 26,356 OSAG images were generated using the guided diffusion model. For bursal-sided partial-thickness tears in the internal dataset, the model achieved AUCs of 0.99, 0.98, and 0.97 for OCOR, OSAG, and combined sequences, respectively, while for articular-sided tears, AUCs were 0.99, 0.99, and 0.99. The DeLong test showed no significant differences among sequence combinations (P=0.17, 0.14, 0.07). In the external dataset, AUCs were 0.99, 0.97, and 0.97 for bursal-sided tears and 0.99, 0.95, and 0.95 for articular-sided tears across the same sequence settings. Radiologists demonstrated an ICC of 0.99, but their grading performance was significantly lower than that of the ResNet-34 model (P<0.001). The deep learning system improved grading consistency and significantly reduced evaluation time, while guided diffusion augmentation enhanced model robustness. The proposed deep learning system provides a reliable and efficient method for grading partial-thickness SST tears, achieving radiologist-level accuracy with greater consistency and faster evaluation.
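The backbone is a standard ResNet-34 classifier; a minimal sketch of how such a model might be configured for Ellman grading, assuming single-channel MRI slices and one class per grade (the paper's exact head, preprocessing, and training schedule are not specified here):

    import torch.nn as nn
    from torchvision.models import resnet34

    NUM_GRADES = 3  # assumption: one output class per Ellman grade

    model = resnet34(weights="IMAGENET1K_V1")
    # Adapt the stem to single-channel input (pretrained RGB weights for this
    # layer are discarded) and the head to the grading task.
    model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    model.fc = nn.Linear(model.fc.in_features, NUM_GRADES)
    criterion = nn.CrossEntropyLoss()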

Current trends and emerging themes in utilizing artificial intelligence to enhance anatomical diagnostic accuracy and efficiency in radiotherapy.

Pezzino S, Luca T, Castorina M, Puleo S, Castorina S

pubmed · May 19 2025
Artificial intelligence (AI) incorporation into healthcare has proven revolutionary, especially in radiotherapy, where accuracy is critical. The purpose of this study is to present patterns and developing topics in the application of AI to improve the precision of anatomical diagnosis, delineation of organs, and therapeutic effectiveness in radiotherapy and radiological imaging. We performed a bibliometric analysis of scholarly articles in these fields starting in 2014, examining research output from key contributing nations and institutions, analyzing notable research subjects, and investigating trends in scientific terminology pertaining to AI in radiology and radiotherapy. Furthermore, we examined AI-based software solutions in these domains, with a specific emphasis on extracting anatomical features and recognizing organs for the purpose of treatment planning. Our investigation found a significant surge in papers pertaining to AI in these fields since 2014. The United States and China emerged as the leading research-producing nations, with institutions such as Emory University and Memorial Sloan-Kettering Cancer Center contributing substantially. Key study areas encompassed adaptive radiotherapy informed by anatomical alterations, MR-Linac for enhanced visualization of soft tissues, and multi-organ segmentation for accurate radiotherapy planning. An evident increase in the frequency of phrases such as 'radiomics,' 'radiotherapy segmentation,' and 'dosiomics' was noted. The evaluation of AI-based software revealed a wide range of uses across subdisciplines of radiotherapy and radiology, particularly in improving the identification of anatomical features for treatment planning and identifying organs at risk. The incorporation of AI in anatomical diagnosis in radiological imaging and radiotherapy is progressing rapidly, with substantial capacity to transform the precision of diagnoses and the effectiveness of treatment planning.
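The terminology-trend portion of such a bibliometric analysis reduces to counting tracked phrases per publication year; a minimal sketch, with the tracked terms taken from the abstract and the record format assumed:

    from collections import Counter

    TERMS = ("radiomics", "radiotherapy segmentation", "dosiomics")

    def term_trend(records):
        """Count tracked-term occurrences per year.
        records: iterable of (year, abstract_text) pairs (assumed format)."""
        counts = {}
        for year, text in records:
            per_year = counts.setdefault(year, Counter())
            for term in TERMS:
                per_year[term] += text.lower().count(term)
        return counts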

Expert-Like Reparameterization of Heterogeneous Pyramid Receptive Fields in Efficient CNNs for Fair Medical Image Classification

Xiao Wu, Xiaoqing Zhang, Zunjie Xiao, Lingxi Hu, Risa Higashita, Jiang Liu

arxiv preprint · May 19 2025
Efficient convolutional neural network (CNN) architecture designs have attracted growing research interest. However, they usually apply a single receptive field (RF), small asymmetric RFs, or pyramid RFs to learn different feature representations, and still encounter two significant challenges in medical image classification tasks: 1) they are limited in efficiently capturing diverse lesion characteristics (e.g., tiny, coordinated, small, and salient), each of which plays a unique role in the results, especially in imbalanced medical image classification; 2) the predictions generated by such CNNs are often unfair/biased, posing a high risk when they are deployed in real-world medical diagnosis settings. To tackle these issues, we develop a new concept, Expert-Like Reparameterization of Heterogeneous Pyramid Receptive Fields (ERoHPRF), to simultaneously boost medical image classification performance and fairness. This concept mimics the multi-expert consultation mode by applying well-designed heterogeneous pyramid RF bags to capture different lesion characteristics effectively via convolution operations with multiple heterogeneous kernel sizes. Additionally, ERoHPRF introduces an expert-like structural reparameterization technique that merges its parameters with a two-stage strategy, ensuring computation cost and inference speed competitive with a single RF. To demonstrate the effectiveness and generalization ability of ERoHPRF, we incorporate it into mainstream efficient CNN architectures. Extensive experiments show that our method maintains a better trade-off than state-of-the-art methods in terms of medical image classification performance, fairness, and computation overhead. The code for this paper will be released soon.
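The reparameterization step exploits the linearity of convolution: parallel branches with different kernel sizes (each using padding k//2) can be collapsed into a single conv by zero-padding the smaller kernels to the largest size and summing. A minimal sketch of that merge, assuming a bag of 3x3/5x5/7x7 kernels (the paper's actual RF bags and two-stage strategy are not reproduced):

    import torch
    import torch.nn.functional as F

    def merge_kernels(w3, w5, w7):
        """Zero-pad the smaller kernels to 7x7 and sum: three parallel convs
        collapse into one 7x7 conv at inference."""
        return F.pad(w3, [2, 2, 2, 2]) + F.pad(w5, [1, 1, 1, 1]) + w7

    x = torch.randn(1, 8, 32, 32)
    w3, w5, w7 = (torch.randn(16, 8, k, k) for k in (3, 5, 7))
    branched = sum(F.conv2d(x, w, padding=w.shape[-1] // 2) for w in (w3, w5, w7))
    merged = F.conv2d(x, merge_kernels(w3, w5, w7), padding=3)
    print(torch.allclose(branched, merged, atol=1e-3))  # True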

Feasibility study of a general model for synthetic CT generation in MRI-guided extracranial radiotherapy.

Hsu SH, Han Z, Hu YH, Ferguson D, van Dams R, Mak RH, Leeman JE, Sudhyadhom A

pubmed · May 19 2025
This study investigates the feasibility of a single general model to synthesize CT images across body sites (thorax, abdomen, and pelvis) to support treatment planning for MRI-only radiotherapy. A total of 157 patients who received MRI-guided radiation therapy of the thorax, abdomen, or pelvis on a 0.35T MRIdian Linac were included. A subset of 122 cases was used for model training and the remaining 35 cases for validation. All patient datasets included a semi-paired CT-simulation image and a 0.35T MR image acquired using TrueFISP. A conditional generative adversarial network with a multi-planar method was used to generate synthetic CT images from the 0.35T MR images. The effect of preprocessing (with and without bias field correction) on synthetic CT quality was evaluated and found to be insignificant. The general models trained on all cases performed comparably to the site-specific models trained on individual body sites. For all models, peak signal-to-noise ratios ranged from 31.7 to 34.9 dB and structural similarity index measures ranged from 0.9547 to 0.9758. For the datasets with bias field corrections, the mean absolute errors in HU (general model versus site-specific model) were 49.7 ± 9.4 versus 49.5 ± 8.9, 48.7 ± 7.6 versus 43 ± 7.8, and 32.8 ± 5.5 versus 31.8 ± 5.3 for the thorax, abdomen, and pelvis, respectively. When comparing plans between synthetic CTs and ground truth CTs, the dosimetric difference was on average less than 0.5% (0.2 Gy) for target coverage and less than 2.1% (0.4 Gy) for organ-at-risk metrics for all body sites with either the general or the site-specific models. Synthetic CT plans showed good agreement, with mean gamma pass rates of >94% and >99% for 1%/1 mm and 2%/2 mm, respectively. This study demonstrates the feasibility of using a general model for multiple body sites and the potential of synthetic CT to support an MRI-guided radiotherapy workflow.
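Two of the reported image-quality metrics are easy to state precisely; a minimal sketch, assuming spatially aligned synthetic and ground-truth CT volumes in HU (the PSNR data range is an assumption):

    import numpy as np

    def mae_hu(sct: np.ndarray, ct: np.ndarray) -> float:
        """Mean absolute error in Hounsfield units."""
        return float(np.mean(np.abs(sct - ct)))

    def psnr(sct: np.ndarray, ct: np.ndarray, data_range: float = 4095.0) -> float:
        """Peak signal-to-noise ratio in dB; data_range is an assumed HU span."""
        mse = np.mean((sct - ct) ** 2)
        return float(10.0 * np.log10(data_range**2 / mse))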

A Skull-Adaptive Framework for AI-Based 3D Transcranial Focused Ultrasound Simulation

Vinkle Srivastav, Juliette Puel, Jonathan Vappou, Elijah Van Houten, Paolo Cabras, Nicolas Padoy

arxiv preprint · May 19 2025
Transcranial focused ultrasound (tFUS) is an emerging modality for non-invasive brain stimulation and therapeutic intervention, offering millimeter-scale spatial precision and the ability to target deep brain structures. However, the heterogeneous and anisotropic nature of the human skull introduces significant distortions to the propagating ultrasound wavefront, which require time-consuming patient-specific planning and corrections using numerical solvers for accurate targeting. To enable data-driven approaches in this domain, we introduce TFUScapes, the first large-scale, high-resolution dataset of tFUS simulations through anatomically realistic human skulls derived from T1-weighted MRI images. We have developed a scalable simulation engine pipeline using the k-Wave pseudo-spectral solver, where each simulation returns a steady-state pressure field generated by a focused ultrasound transducer placed at realistic scalp locations. In addition to the dataset, we present DeepTFUS, a deep learning model that estimates normalized pressure fields directly from input 3D CT volumes and transducer position. The model extends a U-Net backbone with transducer-aware conditioning, incorporating Fourier-encoded position embeddings and MLP layers to create global transducer embeddings. These embeddings are fused with U-Net encoder features via feature-wise modulation, dynamic convolutions, and cross-attention mechanisms. The model is trained using a combination of spatially weighted and gradient-sensitive loss functions, enabling it to approximate high-fidelity wavefields. The TFUScapes dataset is publicly released to accelerate research at the intersection of computational acoustics, neurotechnology, and deep learning. The project page is available at https://github.com/CAMMA-public/TFUScapes.
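The transducer-aware conditioning pairs Fourier-encoded positions with feature-wise modulation; a minimal sketch of that fusion path alone, with illustrative sizes (the dynamic convolutions and cross-attention mentioned above are omitted):

    import torch
    import torch.nn as nn

    class TransducerFiLM(nn.Module):
        """Fourier-encode a 3D transducer position and modulate encoder
        features channel-wise (gamma * feat + beta)."""
        def __init__(self, channels: int, n_freqs: int = 8):
            super().__init__()
            self.register_buffer("freqs", 2.0 ** torch.arange(n_freqs) * torch.pi)
            self.mlp = nn.Sequential(
                nn.Linear(3 * 2 * n_freqs, 128), nn.ReLU(),
                nn.Linear(128, 2 * channels),
            )

        def forward(self, feat: torch.Tensor, pos: torch.Tensor) -> torch.Tensor:
            enc = pos[:, :, None] * self.freqs               # (B, 3, n_freqs)
            enc = torch.cat([enc.sin(), enc.cos()], -1).flatten(1)
            gamma, beta = self.mlp(enc).chunk(2, dim=-1)
            return gamma[:, :, None, None, None] * feat + beta[:, :, None, None, None]

    feat = torch.randn(2, 32, 8, 8, 8)  # 3D U-Net encoder features (B, C, D, H, W)
    pos = torch.rand(2, 3)              # normalized transducer position
    print(TransducerFiLM(32)(feat, pos).shape)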

GuidedMorph: Two-Stage Deformable Registration for Breast MRI

Yaqian Chen, Hanxue Gu, Haoyu Dong, Qihang Li, Yuwen Chen, Nicholas Konz, Lin Li, Maciej A. Mazurowski

arxiv preprint · May 19 2025
Accurately registering breast MR images from different time points enables the alignment of anatomical structures and tracking of tumor progression, supporting more effective breast cancer detection, diagnosis, and treatment planning. However, the complexity of dense tissue and its highly non-rigid nature pose challenges for conventional registration methods, which primarily focus on aligning general structures while overlooking intricate internal details. To address this, we propose GuidedMorph, a novel two-stage registration framework designed to better align dense tissue. In addition to a single-scale network for global structure alignment, we introduce a framework that utilizes dense tissue information to track breast movement. The learned transformation fields are fused by introducing the Dual Spatial Transformer Network (DSTN), improving overall alignment accuracy. A novel warping method based on the Euclidean distance transform (EDT) is also proposed to accurately warp the registered dense tissue and breast masks, preserving fine structural details during deformation. The framework supports both paradigms: one that requires external segmentation models and one that operates on image data only. It also works with the VoxelMorph and TransMorph backbones, offering a versatile solution for breast registration. We validate our method on ISPY2 and an internal dataset, demonstrating superior performance in dense tissue alignment, overall breast alignment, and breast structural similarity index measure (SSIM), with improvements of over 13.01% in dense tissue Dice, 3.13% in breast Dice, and 1.21% in breast SSIM compared with the best learning-based baseline.
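The EDT-based warping idea can be sketched independently of the rest of the pipeline: resample a signed distance transform of the mask with linear interpolation, then re-threshold, which preserves boundary detail better than nearest-neighbour warping of the raw binary mask. A minimal sketch under that reading (the paper's exact formulation may differ):

    import numpy as np
    from scipy.ndimage import distance_transform_edt, map_coordinates

    def warp_mask_via_edt(mask: np.ndarray, coords: np.ndarray) -> np.ndarray:
        """Warp a binary mask by resampling its signed EDT (positive inside,
        negative outside) at the sampling grid `coords`, then re-thresholding."""
        mask = mask.astype(bool)
        signed_edt = distance_transform_edt(mask) - distance_transform_edt(~mask)
        warped = map_coordinates(signed_edt, coords, order=1, mode="nearest")
        return warped > 0

    # coords has shape (mask.ndim, *output_shape) and comes from the dense
    # displacement field produced by the registration network (assumed upstream).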