Page 76 of 6346332 results

Sardanelli F, Scaperrotta G

PubMed · Oct 8 2025
Over the past decade, artificial intelligence (AI) has entered the medical field, particularly radiology. Fifty years after the transition from "silver to silicon", we are experiencing a second digital revolution fueled by AI and radiomics. Breast imaging, and mammography in particular, plays a special role in this transformation due to the availability of large screening datasets. The advantages that AI tools bring to mammography interpretation are substantial: they can increase cancer detection rates by over 25% and reduce reading workload by more than 40%. In addition, AI can aid in quality control of mammograms and, importantly, in stratifying breast cancer risk, enabling personalized screening strategies. However, enhancing screening programs alone is not sufficient for breast cancer prevention. True prevention aims to stop cancer before it starts. As defined by the Oxford Learner's Dictionary of Academic English, prevention is "the act of stopping something bad from happening." Screening represents secondary prevention and, on its own, is not enough. As physicians, and not merely image analysts, we must also focus on primary (true) prevention by promoting a healthier lifestyle: regular physical exercise, a balanced diet, weight control, and smoking cessation. Pharmacological prevention should be considered when indicated, such as for women who have been diagnosed with selected B3 lesions or have a personal history of breast cancer. Web-based tools and mobile apps, especially when combined with wearable devices, can support these efforts. AI has the potential to help with both primary and secondary breast cancer prevention, and much research is expected in this area. KEY POINTS: AI can enhance screening mammography by increasing detection rates (by over 25%) and reducing the reading workload (by over 40%), as well as through deep learning-based risk stratification.
Breast radiologists should prioritize primary (true) prevention by promoting healthy lifestyle choices, while remaining open to integrating web-based tools, mobile apps, and wearable devices. AI holds promise for supporting both primary and secondary breast cancer prevention, with radiologists playing a pivotal role in its implementation.

Abinaya B, Malleswaran M, Muthupriya V

PubMed · Oct 8 2025
Late gadolinium enhancement cardiac magnetic resonance (LGE-CMR) imaging plays a crucial role in assessing myocardial scar tissues, aiding in the diagnosis and prognosis of cardiovascular diseases. However, accurately classifying scar tissue severity into mild and severe remains a challenge due to low contrast, noise interference, and inter-patient variability in LGE-CMR images. Existing methods often rely on manual assessment or traditional deep learning models that struggle with precise myocardium localization and discriminative feature extraction from scarred regions. To overcome these challenges, we propose a novel framework incorporating ScarYOLO, an optimized YOLOv8-based myocardium detection model, followed by contrastive myocardial scar learning (CMSL) for severity classification. ScarYOLO enhances myocardium localization accuracy, ensuring precise detection of scarred tissue. The detected myocardium is then processed using CMSL, which employs a fine-tuned Xception-based encoder trained on a labeled LGE-CMR dataset. CMSL leverages contrastive self-supervised learning to enhance feature representation and improve class separability between mild and severe scar regions. Additional dense layers and a classification head are appended to the encoder for final severity prediction. The proposed approach enhances myocardial scar detection accuracy while improving robustness in low-contrast LGE-CMR images. By leveraging ScarYOLO for precise segmentation and CMSL for effective classification, our model outperforms conventional deep learning methods in classifying scar tissue severity. Experimental evaluations demonstrate significant improvements in detection precision, classification accuracy, and model generalization, making it a reliable tool for automated myocardial scar assessment in clinical settings.
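The contrastive self-supervised objective at the heart of CMSL can be illustrated with a minimal InfoNCE-style loss that pulls an anchor embedding toward a positive and pushes it away from negatives. The sketch below is a generic pure-Python illustration; the function names and temperature value are our assumptions, not details from the paper.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors (plain lists).
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: low when anchor aligns with the positive,
    high when it aligns with a negative instead."""
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))
```

In practice the embeddings would come from the Xception-based encoder, with mild and severe scar crops forming the positive/negative structure.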

Thakare B, Chaudhari B, Patil M, Kamble S

PubMed · Oct 8 2025
Computed Tomography (CT) has gained recognition as the leading imaging method for diagnosing spinal cord injuries, and the reliance on CT imaging for acute care in patients with Spinal Cord Injury (SCI) has expanded rapidly. However, characterizing the initial clinical injury accurately enough to predict functional outcomes remains a difficult task for both clinicians and radiologists. To address this issue, an efficient SCI detection model is proposed, named Shepard Parallel Convolutional Forward Harmonic Net (ShPCFHNet). The first step improves the CT image by applying logarithmic transformations in the enhancement phase. Spinal cord segmentation is then performed with the proposed Dual-branch UNet, whose loss function is adapted using Sensitivity-Specificity Loss (SSL). Following this, disc localization is carried out using an active contour model, and feature extraction is subsequently performed. The final step detects SCI using ShPCFHNet, which combines the Shepard Convolutional Neural Network (ShCNN) and Parallel Convolutional Neural Network (PCNN) with harmonic analysis. The proposed model achieved 91.397% accuracy, a 92.684% True Positive Rate (TPR), and a 90.366% True Negative Rate (TNR).
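The logarithmic enhancement step corresponds to the classic log intensity transform s = c · log(1 + r), which expands dark CT values while compressing bright ones. The implementation below is a generic sketch for 8-bit pixel values, not the paper's exact preprocessing.

```python
import math

def log_enhance(pixels, max_val=255):
    """Logarithmic intensity transform s = c * log(1 + r).

    The constant c is chosen so the maximum input maps back to max_val,
    which boosts low-intensity (dark) regions relative to bright ones.
    """
    c = max_val / math.log(1 + max_val)
    return [round(c * math.log(1 + p)) for p in pixels]
```

For example, a dark pixel value of 10 maps to roughly 110, while 0 and 255 stay fixed at the ends of the range.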

Shiye Su, Yuhui Zhang, Linqi Zhou, Rajesh Ranganath, Serena Yeung-Levy

arXiv preprint · Oct 8 2025
Modeling transformations between arbitrary data distributions is a fundamental scientific challenge, arising in applications like drug discovery and evolutionary simulation. While flow matching offers a natural framework for this task, its use has thus far primarily focused on the noise-to-data setting, while its application in the general distribution-to-distribution setting is underexplored. We find that in the latter case, where the source is also a data distribution to be learned from limited samples, standard flow matching fails due to sparse supervision. To address this, we propose a simple and computationally efficient method that injects stochasticity into the training process by perturbing source samples and flow interpolants. On five diverse imaging tasks spanning biology, radiology, and astronomy, our method significantly improves generation quality, outperforming existing baselines by an average of 9 FID points. Our approach also reduces the transport cost between input and generated samples to better highlight the true effect of the transformation, making flow matching a more practical tool for simulating the diverse distribution transformations that arise in science.
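The core idea of injecting stochasticity by perturbing source samples and flow interpolants can be sketched for a linear interpolation path. The noise scales and the velocity target below are illustrative assumptions on our part, not the authors' exact formulation.

```python
import random

def perturbed_interpolant(x0, x1, sigma=0.1, rng=random):
    """Sample one flow-matching training point for a source/target pair.

    x0 is a source-distribution sample, x1 a target sample (flat lists).
    Gaussian noise perturbs the source and the interpolant, a sketch of
    compensating for sparse supervision when the source is itself learned
    from limited data. Returns (time t, noisy interpolant x_t, velocity target).
    """
    t = rng.random()
    x0_noisy = [a + rng.gauss(0, sigma) for a in x0]
    x_t = [(1 - t) * a + t * b + rng.gauss(0, sigma * 0.1)
           for a, b in zip(x0_noisy, x1)]
    target = [b - a for a, b in zip(x0_noisy, x1)]  # velocity of linear path
    return t, x_t, target
```

With sigma set to 0 this reduces to standard linear-path flow matching, which makes the role of the added noise easy to isolate in ablations.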

Raju KG, S R

PubMed · Oct 8 2025
Cervical spine fractures present considerable challenges in both diagnosis and treatment. With the increasing incidence of such injuries and the limitations of conventional diagnostic tools, there is a pressing demand for more precise and effective detection methods. This study proposes a robust Multi-class Classification model for Cervical Spine Fractures (MC-CSF) using Computed Tomography (CT) images to enable the precise identification of fracture types. The process of MC-CSF starts with preprocessing input images using an Enhanced Wiener Filtering (EWF) technique to minimize noise while retaining critical structural features. Following this, a Modified Residual Block-assisted ResUNet (MRB-RUNet) model is utilized for segmentation to precisely isolate the cervical spine area. Once segmented, feature extraction combines both deep learning approaches and texture-based analysis, in which deep features are extracted from established models like VGG16 and Residual Network (ResNet), while Local Gabor Transitional Pattern (LGTrP) captures subtle local texture variations. These features are then processed by an ensemble of sophisticated classifiers, including Enhanced LeNet (E-LNet), ShuffleNet, and a deep convolutional neural network (DCNN), each tasked with distinguishing between different fracture types. To enhance overall classification accuracy, a soft voting approach is applied, where the probabilistic outputs of multiple classifiers are aggregated. This strategy leverages the complementary strengths of individual models, resulting in a more robust and reliable prediction of cervical spine fracture categories. The ensemble model consistently outperforms traditional approaches, achieving a peak accuracy of 0.954, precision of 0.813, and negative predictive value (NPV) of 0.974.
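The soft-voting step described above can be sketched as a weighted average of the per-class probabilities emitted by each classifier, followed by an argmax. The sketch below is a generic illustration with equal weights by default; nothing here is taken from the paper's implementation.

```python
def soft_vote(prob_lists, weights=None):
    """Soft voting: average class probabilities from several classifiers.

    prob_lists holds one probability vector per classifier (same class
    order). Returns (predicted class index, averaged probability vector).
    """
    n = len(prob_lists)
    weights = weights or [1.0 / n] * n
    n_classes = len(prob_lists[0])
    avg = [sum(w * p[c] for w, p in zip(weights, prob_lists))
           for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg
```

Averaging probabilities (rather than hard labels) lets a confident classifier outvote two lukewarm ones, which is where the complementary-strengths argument comes from.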

Teng Wang, Haojun Jiang, Yuxuan Wang, Zhenguo Sun, Shiji Song, Gao Huang

arXiv preprint · Oct 8 2025
Echocardiography is a critical tool for detecting heart diseases. Recently, ultrasound foundation models have demonstrated remarkable capabilities in cardiac ultrasound image analysis. However, obtaining high-quality ultrasound images is a prerequisite for accurate diagnosis. Due to the exceptionally high operational difficulty of cardiac ultrasound, there is a shortage of highly skilled personnel, which hinders patients from receiving timely examination services. In this paper, we aim to adapt the medical knowledge learned by foundation models from vast datasets to the probe guidance task, which is designed to provide real-time operational recommendations for junior sonographers to acquire high-quality ultrasound images. Moreover, inspired by the practice where experts optimize action decisions based on past explorations, we meticulously design a parameter-efficient Vision-Action Adapter (VA-Adapter) to enable the foundation model's image encoder to encode vision-action sequences, thereby enhancing guidance performance. With built-in sequential reasoning capabilities in a compact design, the VA-Adapter enables a pre-trained ultrasound foundation model to learn precise probe adjustment strategies by fine-tuning only a small subset of parameters. Extensive experiments demonstrate that the VA-Adapter can surpass strong probe guidance models. Our code will be released after acceptance.

Jan Fiszer, Dominika Ciupek, Maciej Malawski

arXiv preprint · Oct 8 2025
Deep learning (DL) has been increasingly applied in medical imaging; however, it requires large amounts of data, which raises many challenges related to data privacy, storage, and transfer. Federated learning (FL) is a training paradigm that overcomes these issues, though its effectiveness may be reduced when dealing with non-independent and identically distributed (non-IID) data. This study simulates non-IID conditions by applying different MRI intensity normalization techniques to separate data subsets, reflecting a common cause of heterogeneity. These subsets are then used for training and testing models for brain tumor segmentation. The findings provide insights into the influence of the MRI intensity normalization methods on segmentation models, in both training and inference. Notably, the FL methods demonstrated resilience to inconsistently normalized data across clients, achieving a 3D Dice score of 92%, which is comparable to a centralized model (trained using all data). These results indicate that FL is a solution to effectively train high-performing models without violating data privacy, a crucial concern in medical applications. The code is available at: https://github.com/SanoScience/fl-varying-normalization.
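Federated training of this kind typically aggregates client updates with FedAvg-style weighted averaging, where each client's parameters count in proportion to its local data size. The sketch below illustrates that aggregation step on flat parameter lists; it is a generic illustration, not code from the linked repository.

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average client model parameters, weighted by
    the number of local training samples each client holds.

    client_weights: one flat parameter list per client (same length).
    client_sizes: local dataset size per client.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[i] * s / total for w, s in zip(client_weights, client_sizes))
            for i in range(n_params)]
```

In the non-IID setup studied here, each client would hold differently normalized MRI volumes, so the robustness question is whether this simple averaging still converges to a strong global segmentation model.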

Bouthaina Slika, Fadi Dornaika, Fares Bougourzi, Karim Hammoudi

arXiv preprint · Oct 8 2025
Lung infections, particularly pneumonia, pose serious health risks that can escalate rapidly, especially during pandemics. Accurate AI-based severity prediction from medical imaging is essential to support timely clinical decisions and optimize patient outcomes. In this work, we present a novel method applicable to both CT scans and chest X-rays for assessing lung infection severity. Our contributions are twofold: (i) QCross-Att-PVT, a Transformer-based architecture that integrates parallel encoders, a cross-gated attention mechanism, and a feature aggregator to capture rich multi-scale features; and (ii) Conditional Online TransMix, a custom data augmentation strategy designed to address dataset imbalance by generating mixed-label image patches during training. Evaluated on two benchmark datasets, RALO CXR and Per-COVID-19 CT, our method consistently outperforms several state-of-the-art deep learning models. The results emphasize the critical role of data augmentation and gated attention in improving both robustness and predictive accuracy. This approach offers a reliable, adaptable tool to support clinical diagnosis, disease monitoring, and personalized treatment planning. The source code of this work is available at https://github.com/bouthainas/QCross-Att-PVT.
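Conditional Online TransMix is not specified in the abstract, but the general mixed-label patch idea resembles CutMix: paste a patch of one image into another and mix the labels in proportion to the patch area. The following is a generic CutMix-style sketch; all names, the patch sampling scheme, and the area-proportional mixing are our assumptions, not the paper's method.

```python
import random

def mixlabel_patch(img_a, img_b, label_a, label_b, rng=random):
    """CutMix-style augmentation sketch: copy a random rectangular patch
    of img_b into img_a and mix the soft labels by patch area.

    Images are 2D lists of equal shape; labels are probability vectors.
    """
    h, w = len(img_a), len(img_a[0])
    ph, pw = rng.randint(1, h), rng.randint(1, w)
    top, left = rng.randint(0, h - ph), rng.randint(0, w - pw)
    mixed = [row[:] for row in img_a]
    for i in range(top, top + ph):
        for j in range(left, left + pw):
            mixed[i][j] = img_b[i][j]
    lam = 1 - (ph * pw) / (h * w)  # fraction of img_a pixels remaining
    label = [lam * a + (1 - lam) * b for a, b in zip(label_a, label_b)]
    return mixed, label
```

Generating such mixed-label samples online during training is one way to soften class imbalance, since rare-class patches can be injected into majority-class images on the fly.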

Gatoula P, Diamantis DE, Koulaouzidis A, Carretero C, Chetcuti-Zammit S, Valdivia PC, González-Suárez B, Mussetto A, Plevris J, Robertson A, Rosa B, Toth E, Iakovidis DK

PubMed · Oct 8 2025
Synthetic Data Generation (SDG) based on Artificial Intelligence (AI) can transform the way clinical medicine is delivered by overcoming privacy barriers that currently render clinical data sharing difficult. This is the key to accelerating the development of digital tools contributing to enhanced patient safety. Such tools include robust data-driven clinical decision support systems, and example-based digital training tools that will enable healthcare professionals to improve their diagnostic performance for enhanced patient safety. This study focuses on the clinical evaluation of medical SDG, with a proof-of-concept investigation on diagnosing Inflammatory Bowel Disease (IBD) using Wireless Capsule Endoscopy (WCE) images. Its scientific contributions include (a) a novel protocol for the systematic Clinical Evaluation of Medical Image Synthesis (CEMIS); (b) a novel variational autoencoder-based model, named TIDE-II, which enhances its predecessor model, TIDE (This Intestine Does not Exist), for the generation of high-resolution synthetic WCE images; and (c) a comprehensive evaluation of the synthetic images using the CEMIS protocol by 10 international WCE specialists, in terms of image quality, diversity, and realism, as well as their utility for clinical decision-making. The results show that TIDE-II generates clinically plausible, highly realistic WCE images of improved quality compared to relevant state-of-the-art generative models. In conclusion, CEMIS can serve as a reference for future research on medical image-generation techniques, while adapting or extending the TIDE-II architecture to other imaging domains is a promising direction.

Jiasong Chen, Linchen Qian, Ruonan Gong, Christina Sun, Tongran Qin, Thuy Pham, Caitlin Martin, Mohammad Zafar, John Elefteriades, Wei Sun, Liang Liang

arXiv preprint · Oct 8 2025
Aortic aneurysm disease consistently ranks among the top 20 causes of death in the U.S. population. Thoracic aortic aneurysm (TAA) manifests as an abnormal bulging of the thoracic aortic wall and is a leading cause of death in adults. From the perspective of biomechanics, rupture occurs when the stress acting on the aortic wall exceeds the wall strength. Wall stress distribution can be obtained by computational biomechanical analyses, especially structural Finite Element Analysis (FEA). For risk assessment, the probabilistic rupture risk of TAA can be calculated by comparing stress with material strength using a material failure model. Although these engineering tools are available for patient-specific TAA rupture risk assessment, clinical adoption has been limited by two major barriers: (1) labor-intensive 3D reconstruction, as current patient-specific anatomical modeling still relies on manual segmentation, making it time-consuming and difficult to scale to a large patient population; and (2) computational burden, as traditional FEA simulations are resource-intensive and incompatible with time-sensitive clinical workflows. The second barrier was successfully overcome by our team through the development of the PyTorch-FEA library and the FEA-DNN integration framework. By incorporating the FEA functionalities within PyTorch-FEA and applying the principle of static determinacy, we reduced the FEA-based stress computation time to approximately three minutes per case. Moreover, by integrating DNN and FEA through the PyTorch-FEA library, our approach further decreases the computation time to only a few seconds per case. This work focuses on overcoming the first barrier through the development of an end-to-end deep neural network capable of generating patient-specific finite element meshes of the aorta directly from 3D CT images.
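The "stress versus strength" rupture criterion can be sketched as a probability P(strength < stress) under some statistical model of wall strength. The Gaussian strength assumption below is purely our illustration of a material failure model, not the formulation used by the authors.

```python
import math

def rupture_probability(stress, strength_mean, strength_std):
    """Probabilistic rupture risk: P(strength < stress) when wall strength
    is modeled as Gaussian (an illustrative assumption). Computed via the
    standard normal CDF using math.erf."""
    z = (stress - strength_mean) / strength_std
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))
```

For example, a wall stress equal to the mean strength gives a 50% risk, while a stress five standard deviations below the mean gives an essentially negligible one; in practice stress would come from the FEA (or DNN surrogate) and the strength distribution from tissue testing data.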