Page 170 of 3973969 results

Impact of polymer source variations on hydrogel structure and product performance in dexamethasone-loaded ophthalmic inserts.

VandenBerg MA, Zaman RU, Plavchak CL, Smith WC, Nejad HB, Beringhs AO, Wang Y, Xu X

Jul 9, 2025
Localized drug delivery can enhance therapeutic efficacy while minimizing systemic side effects, making sustained-release ophthalmic inserts an attractive alternative to traditional eye drops. Such inserts offer improved patient compliance through prolonged therapeutic effects and a reduced need for frequent administration. This study focuses on dexamethasone-containing ophthalmic inserts. These inserts utilize a key excipient, polyethylene glycol (PEG), which forms a hydrogel upon contact with tear fluid. Developing generic equivalents of PEG-based inserts is challenging due to difficulties in characterizing inactive ingredients and the absence of standardized physicochemical characterization methods to demonstrate similarity. To address this gap, a suite of analytical approaches was applied to both PEG precursor materials sourced from different vendors and manufactured inserts. ¹H NMR, FTIR, MALDI, and SEC revealed variations in end-group functionalization, impurity content, and molecular weight distribution of the excipient. These differences led to changes in the finished insert network properties such as porosity, pore size and structure, gel mechanical strength, and crystallinity, which were corroborated by X-ray microscopy, AI-based image analysis, thermal, mechanical, and density measurements. In vitro release testing revealed distinct drug release profiles across formulations, with swelling rate correlated to release rate (i.e., faster release with rapid swelling). The use of non-micronized and micronized dexamethasone also contributed to release profile differences. Through comprehensive characterization of these PEG-based dexamethasone inserts, correlations between polymer quality, hydrogel microstructure, and release kinetics were established. The study highlights how excipient differences can alter product performance, emphasizing the importance of thorough analysis in developing generic equivalents of complex drug products.
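The reported link between swelling rate and release rate can be quantified with a simple Pearson correlation over paired measurements. A minimal stdlib sketch; the numbers below are invented for illustration and are not study data:

```python
# Hypothetical illustration of the swelling-rate vs. release-rate correlation
# the abstract describes. All values are made up for demonstration.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: swelling rate (%/min) vs. fraction of drug released at 1 h
swelling = [0.8, 1.1, 1.9, 2.4, 3.0]
release = [0.22, 0.30, 0.45, 0.55, 0.68]
r = pearson_r(swelling, release)  # close to +1 for this near-linear toy data
```

A correlation near +1, as in this toy set, corresponds to the "faster release with rapid swelling" trend the authors report.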

CTV-MIND: A cortical thickness-volume integrated individualized morphological network model to explore disease progression in temporal lobe epilepsy.

Liu X, Han J, Zhang X, Wei B, Xu L, Zhou Q, Wang Y, Lin Y, Zhang J

Jul 9, 2025
Temporal lobe epilepsy (TLE) is a progressive brain network disorder. Elucidating network reorganization and identifying disease progression-associated biomarkers are crucial for understanding pathological mechanisms, quantifying disease burden, and optimizing clinical strategies. This study aimed to investigate progressive changes in TLE by constructing a novel individualized morphological brain network based on T1-weighted structural magnetic resonance imaging (MRI). MRI data were collected from 34 postoperative seizure-free TLE patients and 28 age- and sex-matched healthy controls (HC), with patients divided into LONG-TERM and SHORT-TERM groups. Individualized morphological networks were constructed using the Morphometric INverse Divergence (MIND) framework by integrating cortical thickness and volume features (CTV-MIND). Network properties were then calculated and compared across groups to identify features potentially associated with disease progression. Results revealed progressive hub-node reorganization in CTV-MIND networks, with the LONG-TERM group showing increased connectivity in the lesion-side temporal lobe compared to SHORT-TERM and HC groups. The altered network node properties showed a significant correlation with local cortical atrophy. Incorporating identified network features into a machine learning-based brain age prediction model further revealed significantly elevated brain age in TLE. Notably, duration-related brain regions exerted a more significant and specific impact on premature brain aging in TLE than other regional combinations. Thus, prolonged duration may serve as an important contributor to the pathological aging observed in TLE. Our findings could help clinicians better identify abnormal brain trajectories in TLE and have the potential to facilitate the optimization of personalized treatment strategies.
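The brain-age analysis described above rests on a simple idea: fit a model mapping imaging features to chronological age in healthy controls, then measure how much "older" a patient's brain appears. A minimal single-feature sketch, with invented feature values and ages (the study's actual model and features are not reproduced here):

```python
# Toy illustration of the brain-age-gap concept: ordinary least squares on
# controls, then predicted-minus-actual age in a patient. Data are invented.

def fit_line(xs, ys):
    """OLS fit of y = a*x + b with a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return a, my - a * mx

# Healthy controls: (hypothetical network node-strength feature, age in years)
hc_feat = [0.50, 0.55, 0.60, 0.65, 0.70]
hc_age = [30.0, 35.0, 40.0, 45.0, 50.0]
a, b = fit_line(hc_feat, hc_age)

def brain_age_gap(feature, true_age):
    """Positive gap = brain predicted older than chronological age."""
    return (a * feature + b) - true_age

# A patient whose network feature suggests an "older" brain than their age
gap = brain_age_gap(0.70, 42.0)  # predicted 50, actual 42 -> gap = +8
```

An elevated gap in long-duration patients is the pattern the study interprets as pathological premature aging.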

A novel segmentation-based deep learning model for enhanced scaphoid fracture detection.

Bützow A, Anttila TT, Haapamäki V, Ryhänen J

Jul 9, 2025
To develop a deep learning model to detect apparent and occult scaphoid fractures from plain wrist radiographs and to compare the model's diagnostic performance with that of a group of experts. A dataset comprising 408 patients, 410 wrists, and 1011 radiographs was collected. 718 of these radiographs contained a scaphoid fracture, verified by magnetic resonance imaging or computed tomography scans. 58 of these fractures were occult. The images were divided into training, test, and occult fracture test sets. The images were annotated by marking the scaphoid bone and the possible fracture area. The performance of the developed DL model was compared with the ground truth and the assessments of three clinical experts. The DL model achieved a sensitivity of 0.86 (95 % CI: 0.75-0.93) and a specificity of 0.83 (0.64-0.94). The model's accuracy was 0.85 (0.76-0.92), and the area under the receiver operating characteristics curve was 0.92 (0.86-0.97). The clinical experts' sensitivity ranged from 0.77 to 0.89, and specificity from 0.83 to 0.97. The DL model detected 24 of 58 (41 %) occult fractures, compared to 10.3 %, 13.7 %, and 6.8 % by the clinical experts. Detecting scaphoid fractures using a segmentation-based DL model is feasible and comparable to previously developed DL models. The model performed similarly to a group of experts in identifying apparent scaphoid fractures and demonstrated higher diagnostic accuracy in detecting occult fractures. The improvement in occult fracture detection could enhance patient care.
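The sensitivity, specificity, and accuracy figures above all derive from a 2x2 confusion matrix. A small stdlib sketch; the counts are invented to reproduce round numbers, not the study's raw tallies:

```python
# Standard diagnostic metrics from confusion-matrix counts. The counts below
# are illustrative only (chosen to echo the abstract's headline figures).

def diagnostic_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)          # fraction of fractures flagged
    specificity = tn / (tn + fp)          # fraction of normals cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Example: 86 of 100 fracture wrists flagged, 83 of 100 normal wrists cleared
sens, spec, acc = diagnostic_metrics(tp=86, fn=14, tn=83, fp=17)
# sens = 0.86, spec = 0.83, acc = 0.845
```

Confidence intervals like the quoted 95% CIs would then be computed over these proportions (e.g., with an exact binomial method).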

Artificial intelligence in cardiac sarcoidosis: ECG, Echo, CPET and MRI.

Umeojiako WI, Lüscher T, Sharma R

Jul 8, 2025
Cardiac sarcoidosis is a form of inflammatory cardiomyopathy that varies in its clinical presentation. It is associated with significant clinical complications such as high-degree atrioventricular block, ventricular tachycardia, heart failure, and sudden cardiac death. It is challenging to diagnose clinically, and its increasing detection rate may reflect growing clinician awareness of the disease as well as a rising incidence. Prompt diagnosis and risk stratification reduce morbidity and mortality from cardiac sarcoidosis. Noninvasive diagnostic modalities such as ECG, echocardiography, PET/computed tomography (PET/CT), and cardiac MRI (cMRI) are playing increasingly important roles in cardiac sarcoidosis diagnosis. Artificial intelligence-driven applications are increasingly being applied to these diagnostic modalities to improve the detection of cardiac sarcoidosis. A review of the recent literature suggests that artificial intelligence-based algorithms for PET/CT and cMRI can predict cardiac sarcoidosis as accurately as trained experts; however, few studies have been published on artificial intelligence-based algorithms for ECG and echocardiography. The impressive advances in artificial intelligence have the potential to transform patient screening in cardiac sarcoidosis, aid prompt diagnosis and appropriate risk stratification, and change clinical practice.

Vision Transformers-Based Deep Feature Generation Framework for Hydatid Cyst Classification in Computed Tomography Images.

Sagik M, Gumus A

Jul 8, 2025
Hydatid cysts, caused by Echinococcus granulosus, form progressively enlarging fluid-filled cysts in organs like the liver and lungs, posing significant public health risks through severe complications or death. This study presents a novel deep feature generation framework utilizing vision transformer models (ViT-DFG) to enhance the classification accuracy of hydatid cyst types. The proposed framework consists of four phases: image preprocessing, feature extraction using vision transformer models, feature selection through iterative neighborhood component analysis, and classification, where the performance of the ViT-DFG model was evaluated and compared across different classifiers such as k-nearest neighbor and multi-layer perceptron (MLP). Both classifiers were evaluated independently to assess classification performance from different approaches. The dataset, comprising five cyst types, was analyzed for both five-class and three-class classification by grouping the cyst types into active, transition, and inactive categories. Experimental results showed that the proposed ViT-DFG method achieves higher accuracy than existing methods. Specifically, the ViT-DFG framework attained an overall classification accuracy of 98.10% for the three-class and 95.12% for the five-class classification using 5-fold cross-validation. Statistical analysis through one-way analysis of variance (ANOVA), conducted to evaluate significant differences between models, confirmed significant differences between the proposed framework and individual vision transformer models (p < 0.05). These results highlight the effectiveness of combining multiple vision transformer architectures with advanced feature selection techniques in improving classification performance. The findings underscore the ViT-DFG framework's potential to advance medical image analysis, particularly in hydatid cyst classification, while offering clinical promise through automated diagnostics and improved decision-making.
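The one-way ANOVA the abstract cites compares mean accuracy across models. A stdlib sketch of the F statistic over per-fold accuracies; the fold values below are invented for illustration, not the paper's results:

```python
# One-way ANOVA F statistic for k groups of observations (e.g., per-fold
# accuracies of competing models). Group values are hypothetical.

def one_way_anova_F(groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups
    )
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical 5-fold accuracies for three models
f = one_way_anova_F([
    [0.97, 0.98, 0.99, 0.98, 0.98],  # combined framework
    [0.93, 0.94, 0.95, 0.94, 0.94],  # single ViT backbone A
    [0.92, 0.93, 0.93, 0.94, 0.93],  # single ViT backbone B
])
```

A large F (compared against the F distribution with k-1 and n-k degrees of freedom) yields the p < 0.05 significance the authors report.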

Development of a deep learning model for predicting skeletal muscle density from ultrasound data: a proof-of-concept study.

Pistoia F, Macciò M, Picasso R, Zaottini F, Marcenaro G, Rinaldi S, Bianco D, Rossi G, Tovt L, Pansecchi M, Sanguinetti S, Hamedani M, Schenone A, Martinoli C

Jul 8, 2025
Reduced muscle mass and function are associated with increased morbidity and mortality. Ultrasound, despite being cost-effective and portable, is still underutilized in muscle trophism assessment due to its reliance on operator expertise and measurement variability. This proof-of-concept study aimed to overcome these limitations by developing a deep learning model that predicts muscle density, as assessed by CT, from ultrasound data, exploring the feasibility of a novel ultrasound-based parameter for muscle trophism. A sample of adult participants undergoing CT examination in our institution's emergency department between May 2022 and March 2023 was enrolled in this single-center study. Ultrasound examinations were performed with an L11-3 MHz probe. The rectus abdominis muscles, selected as target muscles, were scanned in the transverse plane, recording one ultrasound image per side. For each participant, the same operator calculated the average target muscle density in Hounsfield units from an axial CT slice closely matching the ultrasound scanning plane. The final dataset included 1090 ultrasound images from 551 participants (mean age 67 ± 17, 323 males). A deep learning model was developed to classify ultrasound images into three muscle-density classes based on CT values. The model achieved promising performance, with a categorical accuracy of 70% and AUC values of 0.89, 0.79, and 0.90 across the three classes. This observational study introduces an innovative approach to automated muscle trophism assessment using ultrasound imaging. Future efforts should focus on external validation in diverse populations and clinical settings, as well as expanding its application to other muscles.
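The CT-side labels the model learns come from binning mean muscle density in Hounsfield units into three classes. A minimal sketch; the cut-offs of -29 and +29 HU are an assumption borrowed from common low-density-muscle conventions, since the study's actual boundaries are not stated in the abstract:

```python
# Illustrative HU-based three-class labeling of mean muscle density.
# The thresholds are assumptions for demonstration, not the study's values.

def density_class(mean_hu, low_cut=-29.0, high_cut=29.0):
    if mean_hu < low_cut:
        return "low"           # largely fat-replaced muscle
    if mean_hu <= high_cut:
        return "intermediate"  # reduced-density muscle
    return "normal"            # normal-density muscle

labels = [density_class(hu) for hu in (-40.0, 10.0, 45.0)]
# -> ["low", "intermediate", "normal"]
```

The deep learning model then predicts these discrete classes directly from the paired ultrasound image.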

MTMedFormer: multi-task vision transformer for medical imaging with federated learning.

Nath A, Shukla S, Gupta P

Jul 8, 2025
Deep learning has revolutionized medical imaging, improving tasks like image segmentation, detection, and classification, often surpassing human accuracy. However, the training of effective diagnostic models is hindered by two major challenges: the need for large datasets for each task and privacy laws restricting the sharing of medical data. Multi-task learning (MTL) addresses the first challenge by enabling a single model to perform multiple tasks, though convolution-based MTL models struggle with contextualizing global features. Federated learning (FL) helps overcome the second challenge by allowing models to train collaboratively without sharing data, but traditional methods struggle to aggregate stable feature maps due to the permutation-invariant nature of neural networks. To tackle these issues, we propose MTMedFormer, a transformer-based multi-task medical imaging model. We leverage the transformers' ability to learn task-agnostic features using a shared encoder and utilize task-specific decoders for robust feature extraction. By combining MTL with a hybrid loss function, MTMedFormer learns distinct diagnostic tasks in a synergistic manner. Additionally, we introduce a novel Bayesian federation method for aggregating multi-task imaging models. Our results show that MTMedFormer outperforms traditional single-task and MTL models on mammogram and pneumonia datasets, while our Bayesian federation method surpasses traditional methods in image segmentation.
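For context on the aggregation problem the paper tackles, the classic federated baseline (FedAvg) simply takes a data-size-weighted average of client parameters. The Bayesian method proposed above is more involved; this stdlib sketch shows only the baseline it improves on, with invented toy weights:

```python
# Classic FedAvg aggregation: average flat parameter vectors across clients,
# weighted by local dataset size. Values are toy illustrations.

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients with unequal data volumes
merged = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[100, 300])
# weighted average -> [2.5, 3.5]
```

The permutation-invariance problem the abstract mentions arises because naive coordinate-wise averaging like this can blend mismatched neurons across clients.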

Deep learning 3D super-resolution radiomics model based on Gd-enhanced MRI for improving preoperative prediction of HCC pathological grading.

Jia F, Wu B, Wang Z, Jiang J, Liu J, Liu Y, Zhou Y, Zhao X, Yang W, Xiong Y, Jiang Y, Zhang J

Jul 8, 2025
The histological grade of hepatocellular carcinoma (HCC) is an important factor associated with early tumor recurrence and prognosis after surgery. Developing a valuable tool to assess this grade is essential for treatment. This study aimed to evaluate the feasibility and efficacy of a deep learning-based three-dimensional super-resolution (SR) magnetic resonance imaging radiomics model for predicting the pathological grade of HCC. A total of 197 HCC patients were included and divided into a training cohort (n = 157) and a testing cohort (n = 40). Three-dimensional SR technology based on deep learning was used to obtain SR hepatobiliary phase (HBP) images from normal-resolution (NR) HBP images. High-dimensional quantitative features were extracted from manually segmented volumes of interest in NRHBP and SRHBP images. The gradient boosting, light gradient boosting machine, and support vector machine were used to develop three-class (well-differentiated vs. moderately differentiated vs. poorly differentiated) and binary radiomics (well-differentiated vs. moderately and poorly differentiated) models, and the predictive performance of these models was evaluated using several measures. All the three-class models using SRHBP images had higher area under the curve (AUC) values than those using NRHBP images. The binary classification models developed with SRHBP images also outperformed those with NRHBP images in distinguishing moderately and poorly differentiated HCC from well-differentiated HCC (AUC = 0.849, sensitivity = 77.8%, specificity = 76.9%, accuracy = 77.5% vs. AUC = 0.603, sensitivity = 48.1%, specificity = 76.9%, accuracy = 57.5%; p = 0.039). Decision curve analysis revealed the clinical value of the models. Deep learning-based three-dimensional SR technology may improve the performance of radiomics models using HBP images for predicting the preoperative pathological grade of HCC.
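The AUC values above summarize ranking ability: AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney formulation). A stdlib sketch with invented model scores:

```python
# AUC via the Mann-Whitney pairwise-comparison identity. Scores are invented.

def auc(pos_scores, neg_scores):
    """Fraction of (positive, negative) pairs ranked correctly; ties count 0.5."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical outputs for higher-grade (positive) vs. well-differentiated cases
a = auc([0.9, 0.8, 0.7, 0.6], [0.5, 0.4, 0.65, 0.3])  # 15 of 16 pairs correct
```

By this measure, the super-resolution model's AUC of 0.849 versus 0.603 for normal resolution reflects substantially better case ranking.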

A fully automated deep learning framework for age estimation in adults using periapical radiographs of canine teeth.

Upalananda W, Phisutphithayakun C, Assawasuksant P, Tanwattana P, Prasatkaew P

Jul 8, 2025
Determining age from dental remains is vital in forensic investigations, aiding in victim identification and anthropological research. Our framework uses a two-step pipeline: tooth detection followed by age estimation, based on either canine tooth images alone or combined with sex information. The dataset included 2,587 radiographs from 1,004 patients (691 females, 313 males) aged 13.42-85.45 years. The YOLOv8-Nano model achieved exceptional performance in detecting canine teeth, with an F1 score of 0.994, a 98.94% detection success rate, and accurate numbering of all detected teeth. For age estimation, we implemented four convolutional neural network architectures: ResNet-18, DenseNet-121, EfficientNet-B0, and MobileNetV3. Each model was trained to estimate age based on one of the four individual canine teeth (13, 23, 33, and 43). The models achieved median absolute errors ranging from 3.55 to 5.18 years. Incorporating sex as an additional input feature did not improve performance. Moreover, no significant differences in predictive accuracy were observed among the individual teeth. In conclusion, the proposed framework demonstrates potential as a robust and practical tool for forensic age estimation across diverse forensic contexts.
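The median absolute error reported above is a robust summary of age-estimation accuracy, insensitive to occasional large misses. A stdlib sketch with invented predictions:

```python
# Median absolute error (MedAE) between chronological and predicted ages.
# The ages below are invented for illustration.
from statistics import median

def median_abs_error(y_true, y_pred):
    return median(abs(t - p) for t, p in zip(y_true, y_pred))

# Hypothetical chronological vs. predicted ages (years)
medae = median_abs_error(
    [25.0, 40.0, 55.0, 70.0, 33.0],
    [29.0, 36.0, 58.0, 80.0, 34.0],
)
# errors 4, 4, 3, 10, 1 -> MedAE = 4.0 years
```

The study's per-tooth models landed in a 3.55 to 5.18 year MedAE range by this measure.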

A novel UNet-SegNet and vision transformer architectures for efficient segmentation and classification in medical imaging.

Tongbram S, Shimray BA, Singh LS

Jul 8, 2025
Medical imaging has become an essential tool in the diagnosis and treatment of various diseases, and provides critical insights through ultrasound, MRI, and X-ray modalities. Despite its importance, challenges remain in the accurate segmentation and classification of complex structures owing to factors such as low contrast, noise, and irregular anatomical shapes. This study addresses these challenges by proposing a novel hybrid deep learning model that integrates the strengths of Convolutional Autoencoders (CAE), UNet, and SegNet architectures. In the preprocessing phase, a Convolutional Autoencoder is used to effectively reduce noise while preserving essential image details, ensuring that the images used for segmentation and classification are of high quality. The ability of CAE to denoise images while retaining critical features enhances the accuracy of the subsequent analysis. The developed model employs UNet for multiscale feature extraction and SegNet for precise boundary reconstruction, with Dynamic Feature Fusion integrated at each skip connection to dynamically weight and combine the feature maps from the encoder and decoder. This ensures that both global and local features are effectively captured, while emphasizing the critical regions for segmentation. To further enhance the model's performance, the Hybrid Emperor Penguin Optimizer (HEPO) was employed for feature selection, while the Hybrid Vision Transformer with Convolutional Embedding (HyViT-CE) was used for the classification task. This hybrid approach allows the model to maintain high accuracy across different medical imaging tasks. The model was evaluated using three major datasets: brain tumor MRI, breast ultrasound, and chest X-rays. The results demonstrate exceptional performance, achieving an accuracy of 99.92% for brain tumor segmentation, 99.67% for breast cancer detection, and 99.93% for chest X-ray classification. These outcomes highlight the ability of the model to deliver reliable and accurate diagnostics across various medical contexts, underscoring its potential as a valuable tool in clinical settings. The findings of this study will contribute to advancing deep learning applications in medical imaging, addressing existing research gaps, and offering a robust solution for improved patient care.
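Segmentation quality in pipelines like this one is conventionally scored with the Dice coefficient, 2|A∩B| / (|A| + |B|), between predicted and ground-truth masks. A stdlib sketch over flat binary masks with toy data:

```python
# Dice similarity coefficient between two binary masks (flattened to lists).
# The masks below are toy examples.

def dice(pred, truth):
    """2 * |intersection| / (|pred| + |truth|); returns 1.0 for two empty masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

score = dice([1, 1, 0, 1, 0, 0], [1, 0, 0, 1, 1, 0])
# overlap 2, sizes 3 + 3 -> Dice = 2/3
```

Accuracy figures like those quoted above measure per-pixel or per-image correctness, while Dice specifically rewards spatial overlap of the segmented region.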
