Page 42 of 1411405 results

Scale-Aware Super-Resolution Network With Dual Affinity Learning for Lesion Segmentation From Medical Images.

Luo L, Li Y, Chai Z, Lin H, Heng PA, Chen H

Jun 1 2025
Convolutional neural networks (CNNs) have shown remarkable progress in medical image segmentation. However, lesion segmentation remains a challenge for state-of-the-art CNN-based algorithms due to variation in lesion scales and shapes. On the one hand, tiny lesions are hard to delineate precisely from medical images, which are often of low resolution. On the other hand, segmenting large lesions requires large receptive fields, which exacerbates the first challenge. In this article, we present a scale-aware super-resolution (SR) network to adaptively segment lesions of various sizes from low-resolution (LR) medical images. Our proposed network contains dual branches to simultaneously conduct lesion mask SR (LMSR) and lesion image SR (LISR). Meanwhile, we introduce scale-aware dilated convolution (SDC) blocks into the multitask decoders to adaptively adjust the receptive fields of the convolutional kernels according to lesion size. To guide the segmentation branch to learn from richer high-resolution (HR) features, we propose a feature affinity (FA) module and a scale affinity (SA) module to enhance the multitask learning of the dual branches. On multiple challenging lesion segmentation datasets, our proposed network achieved consistent improvements over other state-of-the-art methods. Code will be available at: https://github.com/poiuohke/SASR_Net.
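
The core SDC idea, selecting a dilation rate per position according to a predicted lesion scale, can be sketched in NumPy. This is a hypothetical 1-D illustration only; the paper's actual blocks operate on 2-D feature maps inside the multitask decoders, and the function names here are invented for the sketch:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """1-D convolution with a given dilation rate (zero padding, centered)."""
    k = len(kernel)
    pad = dilation * (k // 2)
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        for j in range(k):
            out[i] += kernel[j] * xp[i + j * dilation]
    return out

def scale_aware_conv1d(x, kernel, scale_map, rates=(1, 2, 4)):
    """Pick a dilation rate per position from a predicted scale map.

    scale_map[i] in [0, 1): larger values (bigger lesions) select larger
    dilation rates, enlarging the receptive field adaptively.
    """
    responses = np.stack([dilated_conv1d(x, kernel, r) for r in rates])
    idx = np.minimum((scale_map * len(rates)).astype(int), len(rates) - 1)
    return responses[idx, np.arange(len(x))]

signal = np.array([0., 0., 1., 1., 1., 0., 0., 0.])
kernel = np.array([1., 1., 1.]) / 3.0
scales = np.array([0.0] * 4 + [0.9] * 4)  # small lesion left, large right
out = scale_aware_conv1d(signal, kernel, scales)
```

Positions flagged as small-scale use the tight dilation rate 1 kernel, while large-scale positions aggregate over a span of 9 samples with rate 4, mirroring how the SDC block widens receptive fields for large lesions.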

Classification of differentially activated groups of fibroblasts using morphodynamic and motile features.

Kang M, Min C, Devarasou S, Shin JH

Jun 1 2025
Fibroblasts play essential roles in cancer progression, exhibiting activation states that can either promote or inhibit tumor growth. Understanding these differential activation states is critical for targeting the tumor microenvironment (TME) in cancer therapy. However, traditional molecular markers used to identify cancer-associated fibroblasts are limited by their co-expression across multiple fibroblast subtypes, making it difficult to distinguish specific activation states. Morphological and motility characteristics of fibroblasts reflect their underlying gene expression patterns and activation states, making these features valuable descriptors of fibroblast behavior. This study proposes an artificial intelligence-based classification framework to identify and characterize differentially activated fibroblasts by analyzing their morphodynamic and motile features. We extract these features from label-free live-cell imaging data of fibroblasts co-cultured with breast cancer cell lines using deep learning and machine learning algorithms. Our findings show that morphodynamic and motile features offer robust insights into fibroblast activation states, complementing molecular markers and overcoming their limitations. This biophysical state-based cellular classification framework provides a novel, comprehensive approach for characterizing fibroblast activation, with significant potential for advancing our understanding of the TME and informing targeted cancer therapies.
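
As an illustration of the kind of morphological descriptors such a framework might compute, the NumPy sketch below derives area, elongation, and a boundary-pixel count from a binary cell mask. This is a hypothetical toy example; the study extracts its morphodynamic and motile features from label-free live-cell imaging with deep learning, not from these hand-crafted measures:

```python
import numpy as np

def shape_features(mask):
    """Simple morphological descriptors of a binary cell mask."""
    ys, xs = np.nonzero(mask)
    area = float(len(ys))
    # Bounding-box aspect ratio as an elongation proxy
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    aspect = max(h, w) / min(h, w)
    # Boundary pixels: foreground pixels with at least one background 4-neighbour
    padded = np.pad(mask, 1)
    core = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
            & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = float(mask.sum() - core.sum())
    return {"area": area, "aspect": aspect, "perimeter": perimeter}

round_cell = np.zeros((9, 9), dtype=int)
round_cell[2:7, 2:7] = 1          # compact, roughly isotropic cell
long_cell = np.zeros((9, 9), dtype=int)
long_cell[4, 1:8] = 1             # elongated, spindle-like cell
f_round, f_long = shape_features(round_cell), shape_features(long_cell)
```

The elongated mask yields a much higher aspect ratio than the compact one, the sort of separation that lets shape-based features discriminate activation states.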

Neuroimaging and machine learning in eating disorders: a systematic review.

Monaco F, Vignapiano A, Di Gruttola B, Landi S, Panarello E, Malvone R, Palermo S, Marenna A, Collantoni E, Celia G, Di Stefano V, Meneguzzo P, D'Angelo M, Corrivetti G, Steardo L

Jun 1 2025
Eating disorders (EDs), including anorexia nervosa (AN), bulimia nervosa (BN), and binge eating disorder (BED), are complex psychiatric conditions with high morbidity and mortality. Neuroimaging and machine learning (ML) represent promising approaches to improve diagnosis, understand pathophysiological mechanisms, and predict treatment response. This systematic review aimed to evaluate the application of ML techniques to neuroimaging data in EDs. Following PRISMA guidelines (PROSPERO registration: CRD42024628157), we systematically searched PubMed and APA PsycINFO for studies published between 2014 and 2024. Inclusion criteria encompassed human studies using neuroimaging and ML methods applied to AN, BN, or BED. Data extraction focused on study design, imaging modalities, ML techniques, and performance metrics. Quality was assessed using the GRADE framework and the ROBINS-I tool. Out of 185 records screened, 5 studies met the inclusion criteria. Most applied support vector machines (SVMs) or other supervised ML models to structural MRI or diffusion tensor imaging data. Cortical thickness alterations in AN and diffusion-based metrics effectively distinguished ED subtypes. However, all studies were observational, heterogeneous, and at moderate to serious risk of bias. Sample sizes were small, and external validation was lacking. ML applied to neuroimaging shows potential for improving ED characterization and outcome prediction. Nevertheless, methodological limitations restrict generalizability. Future research should focus on larger, multicenter, and multimodal studies to enhance clinical applicability. Level IV, multiple observational studies with methodological heterogeneity and moderate to serious risk of bias.

Deep Learning to Localize Photoacoustic Sources in Three Dimensions: Theory and Implementation.

Gubbi MR, Bell MAL

Jun 1 2025
Surgical tool tip localization and tracking are essential components of surgical and interventional procedures. The cross sections of tool tips can be considered as acoustic point sources, enabling these tasks with deep learning applied to photoacoustic channel data. However, source localization was previously limited to the lateral and axial dimensions of an ultrasound transducer. In this article, we developed a novel deep learning-based 3-D photoacoustic point source localization system using an object detection-based approach extended from our previous work. In addition, we derived theoretical relationships among point source locations, sound speeds, and waveform shapes in raw photoacoustic channel data frames. We then used this theory to develop a novel deep learning instance segmentation-based 3-D point source localization system. When tested with 4000 simulated, 993 phantom, and 1983 ex vivo channel data frames, the two systems achieved F1 scores as high as 99.82%, 93.05%, and 98.20%, respectively, and Euclidean localization errors (mean ± one standard deviation) as low as 1.46 ± 1.11 mm, 1.58 ± 1.30 mm, and 1.55 ± 0.86 mm, respectively. In addition, the instance segmentation-based system simultaneously estimated sound speeds with absolute errors (mean ± one standard deviation) of 19.22 ± 26.26 m/s in simulated data and standard deviations ranging from 14.6 to 32.3 m/s in experimental data. These results demonstrate the potential of the proposed photoacoustic imaging-based methods to localize and track tool tips in three dimensions during surgical and interventional procedures.
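
The geometric relationship underlying the derived theory is that a point source at lateral position x_s and depth z_s produces an arrival time at each transducer element of t_i = sqrt((x_i - x_s)^2 + z_s^2) / c, tracing the hyperbolic wavefront shape seen in raw channel data, with curvature governed by depth and sound speed. A minimal NumPy sketch of this time-of-flight model (array geometry and speeds chosen for illustration, not taken from the paper):

```python
import numpy as np

def arrival_times(x_src, z_src, c, element_x):
    """Time of flight from a point source to each transducer element.

    Plotted against element position, these times trace the hyperbolic
    wavefront observed in raw photoacoustic channel data; its curvature
    encodes source depth and sound speed.
    """
    return np.sqrt((element_x - x_src) ** 2 + z_src ** 2) / c

elements = np.linspace(-0.019, 0.019, 128)           # 128-element array, metres
t_fast = arrival_times(0.0, 0.03, 1540.0, elements)  # nominal soft-tissue speed
t_slow = arrival_times(0.0, 0.03, 1450.0, elements)  # slower medium: later, more curved arrivals
```

Because a slower medium stretches every arrival time, the waveform shape itself carries sound-speed information, which is what lets the instance segmentation-based system estimate sound speed alongside source location.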

GAN Inversion for Data Augmentation to Improve Colonoscopy Lesion Classification.

Golhar MV, Bobrow TL, Ngamruengphong S, Durr NJ

Jun 1 2025
A major challenge in applying deep learning to medical imaging is the paucity of annotated data. This study explores the use of synthetic images for data augmentation to address the challenge of limited annotated data in colonoscopy lesion classification. We demonstrate that synthetic colonoscopy images generated by Generative Adversarial Network (GAN) inversion can be used as training data to improve polyp classification performance by deep learning models. We invert pairs of images with the same label to a semantically rich and disentangled latent space and manipulate latent representations to produce new synthetic images. These synthetic images maintain the same label as the input pairs. We perform image modality translation (style transfer) between white light and narrow-band imaging (NBI). We also generate realistic synthetic lesion images by interpolating between original training images to increase the variety of lesion shapes in the training dataset. Our experiments show that GAN inversion can produce multiple colonoscopy data augmentations that improve the downstream polyp classification performance by 2.7% in F1-score and 4.9% in sensitivity over other methods, including state-of-the-art data augmentation. Testing on unseen out-of-domain data also showed an improvement of 2.9% in F1-score and 2.7% in sensitivity. This approach outperforms other colonoscopy data augmentation techniques and does not require re-training multiple generative models. It also effectively uses information from diverse public datasets, even those not specifically designed for the targeted downstream task, resulting in strong domain generalizability. Project code and model: https://github.com/DurrLab/GAN-Inversion.
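
The label-preserving interpolation step can be sketched as linear blending between the inverted latent codes of two same-label images. This is a hypothetical illustration: the latent dimensionality and code layout here are assumptions, and decoding each interpolated code back to an image requires the frozen GAN generator, which is not shown:

```python
import numpy as np

def interpolate_latents(w1, w2, alphas):
    """Linear interpolation between two inverted latent codes.

    Each interpolated code decodes (via the frozen GAN generator) to a
    new synthetic image that keeps the label shared by the input pair.
    """
    return [(1 - a) * w1 + a * w2 for a in alphas]

rng = np.random.default_rng(0)
w_a = rng.normal(size=512)   # stand-in latent of polyp image A
w_b = rng.normal(size=512)   # stand-in latent of polyp image B (same label)
new_codes = interpolate_latents(w_a, w_b, alphas=[0.25, 0.5, 0.75])
```

Because both endpoints carry the same lesion label and the latent space is disentangled, points along the path plausibly stay within that label's image manifold, which is what makes the interpolants usable as extra training data.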

Multimodal Neuroimaging Based Alzheimer's Disease Diagnosis Using Evolutionary RVFL Classifier.

Goel T, Sharma R, Tanveer M, Suganthan PN, Maji K, Pilli R

Jun 1 2025
Alzheimer's disease (AD) is one of the best-known causes of dementia and is characterized by continuous deterioration in the cognitive skills of elderly people. It is an irreversible disorder that can be managed effectively only if detected early, at the stage known as mild cognitive impairment (MCI). The most common biomarkers for diagnosing AD are structural atrophy and the accumulation of plaques and tangles, which can be detected using magnetic resonance imaging (MRI) and positron emission tomography (PET) scans. Therefore, the present paper proposes wavelet transform-based multimodality fusion of MRI and PET scans to incorporate structural and metabolic information for the early detection of this life-threatening neurodegenerative disease. Further, a deep learning model, ResNet-50, extracts the fused images' features. A random vector functional link (RVFL) network with a single hidden layer is used to classify the extracted features. The weights and biases of the original RVFL network are optimized using an evolutionary algorithm to maximize accuracy. All experiments and comparisons are performed on the publicly available Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset to demonstrate the suggested algorithm's efficacy.
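
The RVFL classifier at the end of this pipeline has a simple closed form: a random hidden layer plus direct input-to-output links, with output weights solved by least squares. A minimal NumPy sketch on toy data (here the hidden weights are fixed random draws rather than evolutionarily optimized as in the paper, and the toy features merely stand in for fused MRI/PET features):

```python
import numpy as np

def rvfl_fit(X, y, n_hidden=50, seed=0):
    """Random vector functional link: random tanh hidden layer plus
    direct links; output weights solved in closed form by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    D = np.hstack([X, H, np.ones((X.shape[0], 1))])  # direct links + bias
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    H = np.tanh(X @ W + b)
    D = np.hstack([X, H, np.ones((X.shape[0], 1))])
    return D @ beta

# Toy binary problem standing in for fused MRI/PET features
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
W, b, beta = rvfl_fit(X, y)
acc = np.mean((rvfl_predict(X, W, b, beta) > 0.5) == (y > 0.5))
```

Because only the output weights are trained, and in closed form, the network is cheap to refit inside an evolutionary loop that searches over the random hidden weights and biases.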

[Capabilities and Advances of Transrectal Ultrasound in 2025].

Kaufmann S, Kruck S

Jun 1 2025
Transrectal ultrasound, particularly when high-frequency ultrasound is combined with MR-TRUS fusion technologies, provides a highly precise and effective method for the correlation and targeted biopsy of suspicious intraprostatic lesions detected by MRI. Advances in imaging technology, driven by 29 MHz micro-ultrasound transducers, robotic-assisted systems, and the integration of AI-based analyses, promise further improvements in diagnostic accuracy and a reduction in unnecessary biopsies. Further technological advancements and improved TRUS training could contribute to a decentralized and cost-effective diagnostic evaluation of prostate cancer in the future.

Deep Learning in Knee MRI: A Prospective Study to Enhance Efficiency, Diagnostic Confidence and Sustainability.

Reschke P, Gotta J, Gruenewald LD, Bachir AA, Strecker R, Nickel D, Booz C, Martin SS, Scholtz JE, D'Angelo T, Dahm D, Solim LA, Konrad P, Mahmoudi S, Bernatz S, Al-Saleh S, Hong QAL, Sommer CM, Eichler K, Vogl TJ, Haberkorn SM, Koch V

Jun 1 2025
The objective of this study was to evaluate a combination of deep learning (DL)-reconstructed parallel acquisition technique (PAT) and simultaneous multislice (SMS) acceleration imaging in comparison to conventional knee imaging. Adults undergoing knee magnetic resonance imaging (MRI) with DL-enhanced acquisitions were prospectively analyzed from December 2023 to April 2024. Participants received T1-weighted sequences without fat saturation and fat-suppressed PD-weighted TSE pulse sequences using conventional two-fold PAT (P2) and either DL-enhanced four-fold PAT (P4) or a combination of DL-enhanced four-fold PAT with two-fold SMS acceleration (P4S2). Three independent readers assessed image quality, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and radiomics features. A total of 34 participants (mean age 45 ± 17 years; 14 women) underwent P4S2, P4, and P2 imaging. Both P4S2 and P4 demonstrated higher CNR and SNR values compared to P2 (P<.001). P4 was diagnostically inferior to P2 only in the visualization of cartilage damage (P<.005), while P4S2 consistently outperformed P2 in anatomical delineation across all evaluated structures and raters (P<.05). Radiomics analysis revealed significant differences in contrast and gray-level characteristics among P2, P4, and P4S2 (P<.05). P4 reduced acquisition time by 31% and P4S2 by 41% compared to P2 (P<.05). P4S2 DL acceleration offers significant advancements over P4 and P2 in knee MRI, combining superior image quality and improved anatomical delineation with a substantial reduction in acquisition time. Its improvements in anatomical delineation, energy consumption, and workforce optimization make P4S2 a significant step forward.
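
The SNR and CNR figures compared across P2, P4, and P4S2 follow standard ROI-based definitions. A hypothetical NumPy sketch (the study's exact ROI placement and noise estimation may differ, and the tissue intensities below are invented for illustration):

```python
import numpy as np

def snr(signal_roi):
    """Signal-to-noise ratio: mean ROI intensity over its standard deviation."""
    return signal_roi.mean() / signal_roi.std()

def cnr(roi_a, roi_b):
    """Contrast-to-noise ratio between two tissue ROIs,
    e.g. cartilage versus joint fluid."""
    noise = np.sqrt((roi_a.var() + roi_b.var()) / 2)
    return abs(roi_a.mean() - roi_b.mean()) / noise

rng = np.random.default_rng(0)
cartilage = rng.normal(120, 10, size=1000)  # stand-in pixel intensities
fluid = rng.normal(200, 10, size=1000)
```

Under these definitions, DL reconstruction that suppresses noise without shifting tissue means raises both SNR and CNR, which is the pattern reported for P4 and P4S2 relative to P2.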

Expanded AI learning: AI as a Tool for Human Learning.

Faghani S, Tiegs-Heiden CA, Moassefi M, Powell GM, Ringler MD, Erickson BJ, Rhodes NG

Jun 1 2025
To demonstrate that a deep learning (DL) model can be employed as a teaching tool to improve radiologists' ability to perform a subsequent imaging task without additional artificial intelligence (AI) assistance at the time of image interpretation. Three human readers were tasked with categorizing 50 frontal knee radiographs by male and female sex before and after reviewing data derived from our DL model. The model's high accuracy in performing this task was revealed to the human subjects, who were also supplied the DL model's resultant occlusion interpretation maps ("heat maps") to serve as a teaching tool for study before final testing. Two weeks later, the three human readers performed the same task with a new set of 50 radiographs. The average accuracy of the three human readers was initially 0.59 (95% CI: 0.59-0.65), not statistically different from chance given our sample skew. The DL model categorized sex with 0.96 accuracy. After studying the AI-derived "heat maps" and associated radiographs, the average accuracy of the human readers, without the direct help of AI, on the new set of radiographs increased to 0.80 (95% CI: 0.73-0.86), a significant improvement (p=0.0270). AI-derived data can be used as a teaching tool to improve radiologists' own ability to perform an imaging task, an idea that, to our knowledge, has not previously been advanced in the radiology literature. AI can be used as a teaching tool to improve the intrinsic accuracy of radiologists, even without the concurrent use of AI.
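
Occlusion interpretation maps of the kind the readers studied are produced by sliding a blanking patch over the image and recording the drop in the model's score at each position. A hypothetical NumPy sketch with a stand-in scoring function (a real map would call the trained DL classifier's probability output):

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4):
    """Score drop when each patch is blanked: high values mark regions
    the model relies on (the 'heat map' shown to the readers)."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Stand-in "model": scores the mean intensity of the top-left quadrant,
# so only occlusions there should register on the map
score = lambda im: im[:8, :8].mean()
img = np.ones((16, 16))
heat = occlusion_map(img, score)
```

The map lights up only where blanking hurts the score, revealing which image regions drive the model's decision and thereby giving human readers a concrete anatomical cue to study.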

Diagnosis of carpal tunnel syndrome using deep learning with comparative guidance.

Sim J, Lee S, Kim S, Jeong SH, Yoon J, Baek S

Jun 1 2025
This study aims to develop a deep learning model for robust diagnosis of carpal tunnel syndrome (CTS) based on comparative classification leveraging ultrasound images of the thenar and hypothenar muscles. We recruited 152 participants, including patients with varying severities of CTS and healthy individuals. The enrolled patients underwent ultrasonography, which provided ultrasound image data of the thenar and hypothenar muscles, innervated by the median and ulnar nerves, respectively. These images were used to train a deep learning model. We compared the performance of our model with previous comparative methods using echo intensity ratio or machine learning, and with non-comparative methods based on deep learning. During training, comparative guidance based on cosine similarity was used so that the model learns to automatically identify abnormal differences in echotexture between the ultrasound images of the thenar and hypothenar muscles. The proposed deep learning model with comparative guidance showed the highest performance. Comparison of receiver operating characteristic (ROC) curves between models demonstrated that comparative guidance was effective in autonomously identifying complex features within the CTS dataset. The proposed model automatically identified important features for CTS diagnosis from the ultrasound images, and the comparative approach was robust to traditional problems in ultrasound image analysis such as variable cut-off values and anatomical variation among patients. The proposed deep learning methodology facilitates accurate and efficient diagnosis of CTS from ultrasound images.
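
The comparative guidance described above hinges on cosine similarity between the embeddings of the thenar and hypothenar images: healthy pairs should embed similarly, while CTS-altered thenar echotexture should push the pair apart. A hypothetical NumPy sketch of such a guidance term (the paper's actual loss formulation and embedding network are not specified here, so the exact form below is an assumption):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def comparative_loss(feat_thenar, feat_hypothenar, is_cts):
    """Encourage similar embeddings for healthy pairs and dissimilar
    embeddings when CTS alters the thenar echotexture."""
    sim = cosine_similarity(feat_thenar, feat_hypothenar)
    return 1.0 - sim if not is_cts else max(0.0, sim)

# Toy 2-D embeddings: a nearly aligned healthy pair, an orthogonal CTS pair
healthy = comparative_loss(np.array([1.0, 0.0]), np.array([1.0, 0.1]), False)
cts = comparative_loss(np.array([1.0, 0.0]), np.array([0.0, 1.0]), True)
```

Both toy cases yield near-zero loss because each pair already has the relationship the guidance rewards; during training, the gradient of this term pulls healthy pairs together and mismatched CTS pairs apart.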
