
The Future of Urodynamics: Innovations, Challenges, and Possibilities.

Chew LE, Hannick JH, Woo LL, Weaver JK, Damaser MS

PubMed · May 14, 2025
Urodynamic studies (UDS) are essential for evaluating lower urinary tract function but are limited by patient discomfort, lack of standardization, and diagnostic variability. Advances in technology aim to address these challenges and improve diagnostic accuracy and patient comfort. Ambulatory urodynamic monitoring (AUM) offers physiological assessment by allowing natural bladder filling and monitoring during daily activities. Compared with conventional UDS, AUM demonstrates higher sensitivity for detecting detrusor overactivity and underlying pathophysiology. However, it faces challenges such as motion artifacts, catheter-related discomfort, and difficulty measuring continuous bladder volume. Emerging devices such as the Urodynamics Monitor and UroSound offer more patient-friendly alternatives. These tools have the potential to improve diagnostic accuracy for bladder pressure and voiding metrics but remain limited and require further validation and testing. Ultrasound-based modalities, including dynamic ultrasonography and shear wave elastography, provide real-time, noninvasive assessment of bladder structure and function. These modalities are promising but will require further development of standardized protocols. AI and machine learning models enhance diagnostic accuracy and reduce variability in UDS interpretation. Applications include detecting detrusor overactivity and distinguishing bladder outlet obstruction from detrusor underactivity. However, further validation is required before clinical adoption. Advances in AUM, wearable technologies, ultrasonography, and AI demonstrate potential for transforming UDS into a more accurate, patient-centered tool. Despite significant progress, challenges such as technical complexity, standardization, and cost-effectiveness must be addressed before these innovations can be integrated into routine practice. Nonetheless, these technologies point toward a future of improved diagnosis and treatment of lower urinary tract dysfunction.

Synthetic Data-Enhanced Classification of Prevalent Osteoporotic Fractures Using Dual-Energy X-Ray Absorptiometry-Based Geometric and Material Parameters.

Quagliato L, Seo J, Hong J, Lee T, Chung YS

PubMed · May 14, 2025
Bone fracture risk assessment for osteoporotic patients is essential for implementing early countermeasures and preventing discomfort and hospitalization. Current methodologies, such as the Fracture Risk Assessment Tool (FRAX), provide a risk assessment over a 5- to 10-year period rather than evaluating the bone's current health status. The database was collected by Ajou University Medical Center from 2017 to 2021. It included 9,260 patients, aged 55 to 99, comprising 242 femur fracture (FX) cases and 9,018 non-fracture (NFX) cases. To model the association of the bone's current health status with prevalent FXs, three prediction algorithms were trained on two-dimensional dual-energy X-ray absorptiometry (2D-DXA) analysis results and benchmarked: extreme gradient boosting (XGB), support vector machine, and multilayer perceptron. The XGB classifier, which proved most effective, was then further refined using synthetic data generated by the adaptive synthetic oversampler to balance the FX and NFX classes and sharpen the decision boundary for better classification accuracy. The XGB model trained on raw data demonstrated good prediction capability, with an area under the curve (AUC) of 0.78 and an F1 score of 0.71 on test cases. The inclusion of synthetic data improved classification in terms of both specificity and sensitivity, resulting in an AUC of 0.99 and an F1 score of 0.98. The proposed methodology demonstrates that current bone health can be assessed through post-processed results from 2D-DXA analysis. Moreover, it was also shown that synthetic data can help stabilize imbalanced databases by balancing majority and minority classes, thereby significantly improving classification performance.
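The class-rebalancing step described here maps onto standard tooling. Below is a minimal sketch, assuming the adaptive synthetic oversampler is ADASYN as implemented in imbalanced-learn; synthetic data stands in for the 2D-DXA parameters, and the XGBoost hyperparameters are illustrative, not the authors' settings.

```python
# Hedged sketch: ADASYN oversampling of the minority FX class, then XGBoost.
from imblearn.over_sampling import ADASYN
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Stand-in for the 9,260-patient DXA table: ~2.6% positive (FX) class.
X, y = make_classification(n_samples=9260, n_features=20, weights=[0.974], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.2, random_state=42)

# Oversample only the training split; the test set stays untouched.
X_bal, y_bal = ADASYN(random_state=42).fit_resample(X_tr, y_tr)

clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05, eval_metric="logloss")
clf.fit(X_bal, y_bal)

proba = clf.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, proba))
print("F1 :", f1_score(y_te, proba > 0.5))
```

Keeping the oversampler on the training split only avoids leaking synthetic neighbors of test samples into training, which matters when reporting metrics this high.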

Multi-Task Deep Learning for Predicting Metabolic Syndrome from Retinal Fundus Images in a Japanese Health Checkup Dataset

Itoh, T., Nishitsuka, K., Fukuma, Y., Wada, S.

medRxiv preprint · May 14, 2025
Background: Retinal fundus images provide a noninvasive window into systemic health, offering opportunities for early detection of metabolic disorders such as metabolic syndrome (METS). Objective: This study aimed to develop a deep learning model to predict METS from fundus images obtained during routine health checkups, leveraging a multi-task learning approach. Methods: We retrospectively analyzed 5,000 fundus images from Japanese health checkup participants. Convolutional neural network (CNN) models were trained to classify METS status, incorporating fundus-specific data augmentation strategies and auxiliary regression tasks targeting clinical parameters such as abdominal circumference (AC). Model performance was evaluated using validation accuracy, test accuracy, and the area under the receiver operating characteristic curve (AUC). Results: Models employing fundus-specific augmentation demonstrated more stable convergence and superior validation accuracy compared to general-purpose augmentation. Incorporating AC as an auxiliary task further enhanced performance across architectures. The final ensemble model with test-time augmentation achieved a test accuracy of 0.696 and an AUC of 0.73178. Conclusion: Combining multi-task learning, fundus-specific data augmentation, and ensemble prediction substantially improves deep learning-based METS classification from fundus images. This approach may offer a practical, noninvasive screening tool for metabolic syndrome in general health checkup settings.
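The multi-task design (a shared backbone, a METS classification head, and an auxiliary AC regression head) can be sketched as follows. The ResNet-18 backbone, the loss weight, and the input size are assumptions for illustration; the abstract does not specify the authors' exact architecture.

```python
# Minimal multi-task sketch: one CNN trunk, two heads, weighted joint loss.
import torch
import torch.nn as nn
from torchvision import models

class MetsMultiTask(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)   # assumed backbone
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # keep the conv trunk only
        self.backbone = backbone
        self.cls_head = nn.Linear(feat_dim, 1)     # METS logit
        self.reg_head = nn.Linear(feat_dim, 1)     # auxiliary AC (cm)

    def forward(self, x):
        z = self.backbone(x)
        return self.cls_head(z), self.reg_head(z)

model = MetsMultiTask()
bce, mse, aux_w = nn.BCEWithLogitsLoss(), nn.MSELoss(), 0.3  # aux_w is a guess

imgs = torch.randn(8, 3, 224, 224)                 # stand-in fundus batch
y_cls = torch.randint(0, 2, (8, 1)).float()        # METS labels
y_ac = torch.rand(8, 1) * 40 + 70                  # fake AC values, 70-110 cm

logit, ac_pred = model(imgs)
loss = bce(logit, y_cls) + aux_w * mse(ac_pred, y_ac)
loss.backward()
```

The auxiliary regression acts as a regularizer: gradients from the AC target shape the shared features even when the binary METS signal alone is weak.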

A fully automatic radiomics pipeline for postoperative facial nerve function prediction of vestibular schwannoma.

Song G, Li K, Wang Z, Liu W, Xue Q, Liang J, Zhou Y, Geng H, Liu D

PubMed · May 14, 2025
Vestibular schwannoma (VS) is the most prevalent intracranial schwannoma. Surgery is one option for treating VS, with preservation of facial nerve (FN) function being the primary objective. Postoperative FN function prediction is therefore essential, but automating such a method remains a challenge. In this study, we proposed a fully automatic deep learning approach based on multi-sequence magnetic resonance imaging (MRI) to predict FN function after surgery in VS patients. We first developed a segmentation network, 2.5D Trans-UNet, which combines Transformer and U-Net to optimize contour segmentation for radiomic feature extraction. Next, we built a deep learning network integrating a 1D Convolutional Neural Network (1DCNN) and a Gated Recurrent Unit (GRU) to predict postoperative FN function from the extracted features. We trained and tested the 2.5D Trans-UNet segmentation network on public and private datasets, achieving accuracies of 89.51% and 90.66%, respectively, confirming the model's strong performance. Feature extraction and selection were then performed on the private dataset's segmentation results from 2.5D Trans-UNet, and the selected features were used to train the 1DCNN-GRU network for classification. The results showed that our proposed fully automatic radiomics pipeline outperformed the traditional radiomics pipeline on the test set, achieving an accuracy of 88.64% and demonstrating its effectiveness in predicting postoperative FN function in VS patients. The proposed automatic method has the potential to become a valuable decision-making tool in neurosurgery, assisting neurosurgeons in making more informed decisions regarding surgical interventions and improving the treatment of VS patients.
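The classification stage pairs a 1D convolution with a recurrent unit over the selected radiomic features. A hedged sketch follows; treating the feature vector as a one-channel sequence, and all layer sizes, are assumptions, since the abstract does not give the exact configuration.

```python
# Hedged sketch of a 1DCNN + GRU classifier over a radiomic feature vector.
import torch
import torch.nn as nn

class CNN1DGRU(nn.Module):
    def __init__(self, n_features=64, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(input_size=16, hidden_size=32, batch_first=True)
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):              # x: (batch, n_features)
        x = x.unsqueeze(1)             # -> (batch, 1, n_features)
        x = self.conv(x)               # -> (batch, 16, n_features // 2)
        x = x.transpose(1, 2)          # GRU expects (batch, seq, feat)
        _, h = self.gru(x)             # h: (1, batch, 32), last hidden state
        return self.fc(h.squeeze(0))   # FN-function class logits

feats = torch.randn(4, 64)             # 4 patients, 64 selected features
print(CNN1DGRU()(feats).shape)         # torch.Size([4, 2])
```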

Explainability Through Human-Centric Design for XAI in Lung Cancer Detection

Amy Rafferty, Rishi Ramaesh, Ajitha Rajan

arXiv preprint · May 14, 2025
Deep learning models have shown promise in lung pathology detection from chest X-rays, but widespread clinical adoption remains limited due to opaque model decision-making. In prior work, we introduced ClinicXAI, a human-centric, expert-guided concept bottleneck model (CBM) designed for interpretable lung cancer diagnosis. We now extend that approach and present XpertXAI, a generalizable expert-driven model that preserves human-interpretable clinical concepts while scaling to detect multiple lung pathologies. Using a high-performing InceptionV3-based classifier and a public dataset of chest X-rays with radiology reports, we compare XpertXAI against leading post-hoc explainability methods and an unsupervised CBM, XCBs. We assess explanations through comparison with expert radiologist annotations and medical ground truth. Although XpertXAI is trained for multiple pathologies, our expert validation focuses on lung cancer. We find that existing techniques frequently fail to produce clinically meaningful explanations, omitting key diagnostic features and disagreeing with radiologist judgments. XpertXAI not only outperforms these baselines in predictive accuracy but also delivers concept-level explanations that better align with expert reasoning. While our focus remains on explainability in lung cancer detection, this work illustrates how human-centric model design can be effectively extended to broader diagnostic contexts - offering a scalable path toward clinically meaningful explainable AI in medical diagnostics.
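For readers unfamiliar with concept bottleneck models, the core mechanism is that the final label is predicted only from a small vector of human-interpretable concept scores. The sketch below is illustrative, not XpertXAI's actual design: the concept list is hypothetical, and a ResNet-18 stands in for the paper's InceptionV3 backbone.

```python
# Illustrative concept bottleneck: image -> concept scores -> label.
import torch
import torch.nn as nn
from torchvision import models

CONCEPTS = ["nodule", "mass", "effusion", "consolidation"]  # hypothetical set

class ConceptBottleneck(nn.Module):
    def __init__(self, n_labels=2):
        super().__init__()
        backbone = models.resnet18(weights=None)       # paper uses InceptionV3
        dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.backbone = backbone
        self.concept_head = nn.Linear(dim, len(CONCEPTS))     # concept logits
        self.label_head = nn.Linear(len(CONCEPTS), n_labels)  # sees concepts only

    def forward(self, x):
        c = torch.sigmoid(self.concept_head(self.backbone(x)))  # interpretable scores
        return c, self.label_head(c)

x = torch.randn(2, 3, 224, 224)
concepts, logits = ConceptBottleneck()(x)
print(concepts.shape, logits.shape)   # (2, 4) concept scores, (2, 2) label logits
```

Because the label head sees nothing but the concept scores, each prediction decomposes into a weighted combination of clinically named findings, which is what enables the expert-alignment comparison described above.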

Recognizing artery segments on carotid ultrasonography using embedding concatenation of deep image and vision-language models.

Lo CM, Sung SF

PubMed · May 14, 2025
Evaluating large artery atherosclerosis is critical for predicting and preventing ischemic strokes. Ultrasonographic assessment of the carotid arteries is the preferred first-line examination owing to its ease of use, noninvasiveness, and absence of radiation exposure. This study proposed an automated classification model for the common carotid artery (CCA), carotid bulb, internal carotid artery (ICA), and external carotid artery (ECA) to enhance the quantification of carotid artery examinations. Approach: A total of 2,943 B-mode ultrasound images (CCA: 1,563; bulb: 611; ICA: 476; ECA: 293) from 288 patients were collected. Three distinct sets of embedding features were extracted from artificial intelligence networks, including pre-trained DenseNet201, Vision Transformer (ViT), and echo contrastive language-image pre-training (EchoCLIP) models, using deep learning architectures for pattern recognition. These features were then combined in a support vector machine (SVM) classifier to interpret the anatomical structures in B-mode images. Main results: After ten-fold cross-validation, the model achieved an accuracy of 82.3%, significantly better than any individual feature set (p < 0.001). Significance: The proposed model could make carotid artery examinations more accurate and consistent. The source code is available at https://github.com/buddykeywordw/Artery-Segments-Recognition
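The embedding-concatenation step can be reproduced with off-the-shelf components. The sketch below pulls a 1920-dimensional embedding from DenseNet201 and concatenates it with a placeholder for the ViT/EchoCLIP embeddings before a 10-fold-validated SVM; the placeholder features and toy labels are assumptions, not the paper's data.

```python
# Sketch: concatenated deep embeddings -> SVM with 10-fold cross-validation.
import numpy as np
import torch
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from torchvision import models

densenet = models.densenet201(weights="IMAGENET1K_V1")  # downloads ImageNet weights
densenet.classifier = torch.nn.Identity()               # expose the 1920-d embedding
densenet.eval()

@torch.no_grad()
def embed(batch):                       # batch: (N, 3, 224, 224)
    return densenet(batch).numpy()

imgs = torch.randn(40, 3, 224, 224)     # stand-in B-mode frames
y = np.repeat(np.arange(4), 10)         # CCA / bulb / ICA / ECA, 10 each

feat_dn = embed(imgs)
feat_other = np.random.randn(40, 512)   # placeholder for ViT/EchoCLIP embeddings
X = np.concatenate([feat_dn, feat_other], axis=1)

print(cross_val_score(SVC(kernel="rbf"), X, y, cv=10).mean())
```

In practice the embeddings from each encoder would come from the same preprocessed frames, and standardizing the concatenated features before the SVM usually helps the RBF kernel.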

Early detection of Alzheimer's disease progression stages using hybrid of CNN and transformer encoder models.

Almalki H, Khadidos AO, Alhebaishi N, Senan EM

PubMed · May 14, 2025
Alzheimer's disease (AD) is a neurodegenerative disorder that affects memory and cognitive functions. Manual diagnosis is prone to human error, often leading to misdiagnosis or delayed detection. MRI techniques help visualize the fine tissues of the brain, indicating the stage of disease progression. Artificial intelligence techniques analyze MRI with high accuracy and extract subtle features that are difficult to assess manually. In this study, a methodology was designed that combines the power of CNN models (ResNet101 and GoogLeNet) to extract local deep features with the power of Vision Transformer (ViT) models to extract global features and find relationships between image patches. First, the MRI images of the Open Access Series of Imaging Studies (OASIS) dataset were enhanced by two filters: the adaptive median filter (AMF) and the Laplacian filter. The ResNet101 and GoogLeNet models were modified to suit the feature extraction task and reduce computational cost. The ViT architecture was modified to reduce computational cost while increasing the number of attention heads to better capture global features and relationships between image patches. The enhanced images were fed to the modified ResNet101 and GoogLeNet models to extract deep feature maps, which were then fed into the modified ViT model. The deep feature maps were partitioned into 32 feature maps for ResNet101 and 16 feature maps for GoogLeNet, each of size 64 features. The feature maps were position-encoded to preserve the spatial arrangement of, and relationships between, patches, helping the self-attention layers distinguish patches by position. They were fed to the transformer encoder, which consisted of six blocks with multiple attention heads to focus on different patterns or regions simultaneously. Finally, MLP classification layers classify each image into one of the four dataset classes. The ResNet101-ViT hybrid outperformed the GoogLeNet-ViT hybrid, achieving 98.7% accuracy, 95.05% AUC, 96.45% precision, 99.68% sensitivity, and 97.78% specificity.
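The key architectural move, feeding CNN feature maps to a transformer encoder as position-encoded tokens, can be sketched as below. The token geometry (32 maps of 64 features, six encoder blocks) follows the abstract; the head count, pooling, and learned positional embedding are assumptions.

```python
# Hedged sketch: ResNet101 feature maps as tokens for a 6-block encoder.
import torch
import torch.nn as nn

class HybridCNNViT(nn.Module):
    def __init__(self, n_tokens=32, dim=64, n_classes=4):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))  # learned positions
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        self.mlp = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, n_classes))

    def forward(self, feats):               # feats: (batch, 32, 64) feature maps
        z = self.encoder(feats + self.pos)  # position-encoded self-attention
        return self.mlp(z.mean(dim=1))      # pool tokens, classify into 4 stages

maps = torch.randn(2, 32, 64)               # stand-in ResNet101 feature maps
print(HybridCNNViT()(maps).shape)            # torch.Size([2, 4])
```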

CT-based AI framework leveraging multi-scale features for predicting pathological grade and Ki67 index in clear cell renal cell carcinoma: a multicenter study.

Yang H, Zhang Y, Li F, Liu W, Zeng H, Yuan H, Ye Z, Huang Z, Yuan Y, Xiang Y, Wu K, Liu H

PubMed · May 14, 2025
To explore whether a CT-based AI framework, leveraging multi-scale features, can offer a non-invasive approach to accurately predicting pathological grade and Ki67 index in clear cell renal cell carcinoma (ccRCC). In this multicenter retrospective study, a total of 1073 pathologically confirmed ccRCC patients from seven cohorts were split into internal cohorts (training and validation sets) and an external test set. The AI framework comprised an image processor, a 3D kidney and tumor segmentation model based on 3D-UNet, a multi-scale feature extractor built upon unsupervised learning, and a multi-task classifier utilizing XGBoost. A quantitative model interpretation technique, SHapley Additive exPlanations (SHAP), was employed to explore the contribution of the multi-scale features. The 3D-UNet model showed excellent performance in segmenting both the kidney and tumor regions, with Dice coefficients exceeding 0.92. The proposed multi-scale features model exhibited strong predictive capability for pathological grading and Ki67 index, with AUROC values of 0.84 and 0.87, respectively, in the internal validation set, and 0.82 and 0.82, respectively, in the external test set. The SHAP results demonstrated that features from radiomics, the 3D auto-encoder, and dimensionality reduction all made significant contributions to both prediction tasks. The proposed AI framework thus accurately predicts the pathological grade and Ki67 index of ccRCC preoperatively, offering a promising avenue for non-invasive assessment. Non-invasively determining pathological grade and Ki67 index in ccRCC could guide treatment decisions; the framework integrates segmentation, classification, and model interpretation, enabling fully automated analysis, and permits non-invasive preoperative detection of high-risk tumors to assist clinical decision-making.
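The SHAP step is standard tooling for tree ensembles. A minimal sketch follows, with synthetic features standing in for the radiomics, auto-encoder, and dimensionality-reduction groups named in the abstract; the feature grouping and model settings are illustrative assumptions.

```python
# Sketch: TreeExplainer attributions for an XGBoost classifier.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=30, random_state=0)
feature_names = ([f"radiomics_{i}" for i in range(10)]       # hypothetical grouping
                 + [f"autoencoder_{i}" for i in range(10)]
                 + [f"reduced_{i}" for i in range(10)])

model = xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # (500, 30) per-sample attributions

# Mean |SHAP| per feature gives the global importance ranking.
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(feature_names, importance), key=lambda t: -t[1])[:5]:
    print(f"{name}: {imp:.4f}")
```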

Optimizing breast lesions diagnosis and decision-making with a deep learning fusion model integrating ultrasound and mammography: a dual-center retrospective study.

Xu Z, Zhong S, Gao Y, Huo J, Xu W, Huang W, Huang X, Zhang C, Zhou J, Dan Q, Li L, Jiang Z, Lang T, Xu S, Lu J, Wen G, Zhang Y, Li Y

PubMed · May 14, 2025
This study aimed to develop a BI-RADS network (DL-UM) integrating ultrasound (US) and mammography (MG) images and to explore its performance in improving breast lesion diagnosis and management when collaborating with radiologists, particularly in cases with discordant US and MG Breast Imaging Reporting and Data System (BI-RADS) classifications. We retrospectively collected image data from 1283 women with breast lesions who underwent both US and MG within one month at two medical centres and categorised them into concordant and discordant BI-RADS classification subgroups. We developed a DL-UM network integrating US and MG images, as well as DL networks using US (DL-U) or MG (DL-M) alone. The performance of the DL-UM network for breast lesion diagnosis was evaluated using ROC curves and compared to the DL-U and DL-M networks in the external testing dataset. The diagnostic performance of radiologists with different levels of experience assisted by the DL-UM network was also evaluated. In the external testing dataset, DL-UM outperformed DL-M in sensitivity (0.962 vs. 0.833, P = 0.016) and DL-U in specificity (0.667 vs. 0.526, P = 0.030). In the discordant BI-RADS classification subgroup, DL-UM achieved an AUC of 0.910. The diagnostic performance of four radiologists improved when collaborating with the DL-UM network: AUCs increased from 0.674-0.772 to 0.889-0.910, specificities from 52.1-75.0% to 81.3-87.5%, and unnecessary biopsies were reduced by 16.1-24.6%, particularly for junior radiologists. Meanwhile, DL-UM outputs and heatmaps enhanced radiologists' trust and improved interobserver agreement between US and MG, with weighted kappa increasing from 0.048 to 0.713 (P < 0.05). The DL-UM network, integrating complementary US and MG features, assisted radiologists in improving breast lesion diagnosis and management, potentially reducing unnecessary biopsies.
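A plausible skeleton for such a fusion network is two modality-specific encoders whose embeddings are concatenated before a shared classifier. This is a sketch under assumptions: the abstract does not state DL-UM's backbones or fusion strategy, so ResNet-18 branches and late fusion by concatenation are stand-ins.

```python
# Hedged sketch: two-branch US + MG fusion with late concatenation.
import torch
import torch.nn as nn
from torchvision import models

class FusionUM(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        def encoder():
            m = models.resnet18(weights=None)   # assumed branch backbone
            dim = m.fc.in_features
            m.fc = nn.Identity()
            return m, dim
        self.us_enc, d_us = encoder()
        self.mg_enc, d_mg = encoder()
        self.head = nn.Linear(d_us + d_mg, n_classes)  # benign vs. malignant

    def forward(self, us, mg):
        z = torch.cat([self.us_enc(us), self.mg_enc(mg)], dim=1)
        return self.head(z)

us = torch.randn(2, 3, 224, 224)   # stand-in ultrasound images
mg = torch.randn(2, 3, 224, 224)   # stand-in mammograms
print(FusionUM()(us, mg).shape)    # torch.Size([2, 2])
```

Late fusion keeps each branch usable on its own, which matches the paper's DL-U and DL-M single-modality comparisons.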

Clinical utility of ultrasound and MRI in rheumatoid arthritis: An expert review.

Kellner DA, Morris NT, Lee SM, Baker JF, Chu P, Ranganath VK, Kaeley GS, Yang HH

PubMed · May 14, 2025
Musculoskeletal ultrasound (MSUS) and magnetic resonance imaging (MRI) are advanced imaging techniques that are increasingly important in the diagnosis and management of rheumatoid arthritis (RA) and have significantly enhanced the rheumatologist's ability to assess RA disease activity and progression. This review serves as a five-year update to our previous publication on the contemporary role of imaging in RA, emphasizing the continued importance of MSUS and MRI in clinical practice and their expanding utility. The review examines the role of MSUS in diagnosing RA, differentiating RA from mimickers, scoring systems and quality control measures, novel longitudinal approaches to disease monitoring, and patient populations that may benefit most from MSUS. It also examines the role of MRI in diagnosing pre-clinical and early RA, disease activity monitoring, research and clinical trials, and the development of alternative scoring approaches utilizing artificial intelligence. Finally, the roles of MSUS and MRI in RA diagnosis and management are summarized, and selected practice points offer key tips for integrating both modalities into clinical practice.