Page 32 of 2982975 results

Advances in renal cancer: diagnosis, treatment, and emerging technologies.

Saida T, Iima M, Ito R, Ueda D, Nishioka K, Kurokawa R, Kawamura M, Hirata K, Honda M, Takumi K, Ide S, Sugawara S, Watabe T, Sakata A, Yanagawa M, Sofue K, Oda S, Naganawa S

PubMed · Aug 2 2025
This review provides a comprehensive overview of current practices and recent advancements in the diagnosis and treatment of renal cancer. It introduces updates in histological classification and explains the imaging characteristics of each tumour based on these changes. The review highlights state-of-the-art imaging modalities, including magnetic resonance imaging, computed tomography, positron emission tomography, and ultrasound, emphasising their crucial role in tumour characterisation and optimising treatment planning. Emerging technologies, such as radiomics and artificial intelligence, are also discussed for their transformative impact on enhancing diagnostic precision, prognostic prediction, and personalised patient management. Furthermore, the review explores current treatment options, including minimally invasive techniques such as cryoablation, radiofrequency ablation, and stereotactic body radiation therapy, as well as systemic therapies such as immune checkpoint inhibitors and targeted therapies.

Temporal consistency-aware network for renal artery segmentation in X-ray angiography.

Yang B, Li C, Fezzi S, Fan Z, Wei R, Chen Y, Tavella D, Ribichini FL, Zhang S, Sharif F, Tu S

PubMed · Aug 2 2025
Accurate segmentation of renal arteries from X-ray angiography videos is crucial for evaluating renal sympathetic denervation (RDN) procedures but remains challenging due to dynamic changes in contrast concentration and vessel morphology across frames. The purpose of this study is to propose TCA-Net, a deep learning model that improves segmentation consistency by leveraging local and global contextual information in angiography videos. Our approach utilizes a novel deep learning framework that incorporates two key modules: a local temporal window vessel enhancement module and a global vessel refinement module (GVR). The local module fuses multi-scale temporal-spatial features to improve the semantic representation of vessels in the current frame, while the GVR module integrates decoupled attention strategies (video-level and object-level attention) and gating mechanisms to refine global vessel information and eliminate redundancy. To further improve segmentation consistency, a temporal perception consistency loss function is introduced during training. We evaluated our model using 195 renal artery angiography sequences for development and tested it on an external dataset from 44 patients. The results demonstrate that TCA-Net achieves an F1-score of 0.8678 for segmenting renal arteries, outperforming existing state-of-the-art segmentation methods. We present TCA-Net, a deep learning-based model that significantly improves segmentation consistency for renal artery angiography videos. By effectively leveraging both local and global temporal contextual information, TCA-Net outperforms current methods and provides a reliable tool for assessing RDN procedures.
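The abstract names a "temporal perception consistency loss" but does not define it. As a rough, hypothetical sketch of the underlying idea only — penalising disagreement between the predicted vessel probability maps of consecutive frames — the loss term might look like the following (the function name and the flat per-pixel probability lists are illustrative, not the paper's implementation):

```python
def temporal_consistency_loss(prev_probs, curr_probs):
    """Mean squared frame-to-frame difference of predicted vessel
    probabilities; a temporally stable segmentation keeps this low."""
    assert len(prev_probs) == len(curr_probs)
    n = len(prev_probs)
    return sum((p - c) ** 2 for p, c in zip(prev_probs, curr_probs)) / n
```

In training, a term like this would be added to the per-frame segmentation loss so the network is rewarded for producing consistent masks across the angiography sequence.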

Evaluating the Efficacy of Various Deep Learning Architectures for Automated Preprocessing and Identification of Impacted Maxillary Canines in Panoramic Radiographs.

Alenezi O, Bhattacharjee T, Alseed HA, Tosun YI, Chaudhry J, Prasad S

PubMed · Aug 2 2025
Previously, automated cropping and reasonable classification accuracy for distinguishing impacted and non-impacted canines were demonstrated. This study evaluates multiple convolutional neural network (CNN) architectures for improving accuracy, as a step towards fully automated software for identifying impacted maxillary canines (IMCs) in panoramic radiographs (PRs). Eight CNNs (SqueezeNet, GoogLeNet, NASNet-Mobile, ShuffleNet, VGG-16, ResNet-50, DenseNet-201, and Inception V3) were compared in terms of their ability to classify two groups of PRs (impacted maxillary canines: n = 91; non-impacted: n = 91) before preprocessing and after automated cropping. For the PRs with impacted and non-impacted maxillary canines, GoogLeNet achieved the highest classification performance among the tested CNN architectures. Area under the curve (AUC) values from receiver operating characteristic (ROC) analysis without and with preprocessing were 0.90 and 0.99, respectively, compared with 0.84 and 0.96 for SqueezeNet. Among the tested CNN architectures, GoogLeNet achieved the highest performance on this dataset for the automated identification of impacted maxillary canines on both cropped and uncropped PRs.
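The AUC values reported above summarise how well each CNN's output scores separate the two classes. Independent of the paper's tooling, AUC can be computed directly from labels and scores via the Mann-Whitney rank formulation; a minimal sketch (with made-up scores, not the study's data):

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney formulation: the probability that a
    randomly chosen positive case scores above a random negative one,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means every impacted-canine radiograph received a higher score than every non-impacted one; 0.5 is chance level.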

Artificial Intelligence in Abdominal, Gynecological, Obstetric, Musculoskeletal, Vascular and Interventional Ultrasound.

Graumann O, Cui Xin W, Goudie A, Blaivas M, Braden B, Campbell Westerway S, Chammas MC, Dong Y, Gilja OH, Hsieh PC, Jiang Tian A, Liang P, Möller K, Nolsøe CP, Săftoiu A, Dietrich CF

PubMed · Aug 2 2025
Artificial intelligence (AI) encompasses the theory and systematic development of computational models designed to execute tasks that traditionally require human cognition. In medical imaging, AI is applied across modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound, and across pathologies in multiple organ systems. However, integrating AI into medical ultrasound presents unique challenges compared to modalities like CT and MRI due to its operator-dependent nature and inherent variability in the image acquisition process. Applying AI to ultrasound holds the potential to mitigate these sources of variability, improve interpretative consistency, and uncover diagnostic patterns that may be difficult for humans to detect. Progress has led to significant innovation in medical ultrasound-based AI applications, facilitating their adoption in various clinical settings and for multiple diseases. This manuscript primarily aims to provide a concise yet comprehensive exploration of current and emerging AI applications in abdominal, musculoskeletal, obstetric and gynecological, and interventional medical ultrasound. The secondary aim is to discuss present limitations and potential challenges such technological implementations may encounter.

[Tips and tricks for the cytological management of cysts].

Lacoste-Collin L, Fabre M

PubMed · Aug 2 2025
Fine-needle aspiration is a well-known procedure for the diagnosis and management of solid lesions. Its application to cystic lesions is becoming a popular diagnostic tool due to the increased availability of high-quality cross-sectional imaging such as computed tomography and of ultrasound-guided procedures like endoscopic ultrasound. Cystic lesions are closed cavities containing liquid, sometimes partially solid, with various internal neoplastic and non-neoplastic components. The most frequently punctured cysts are in the neck (thyroid and salivary glands), mediastinum, breast, and abdomen (pancreas and liver). The diagnostic accuracy of cytological cyst sampling is highly dependent on how the material is handled in the laboratory. This review highlights how to approach the main features of superficial and deep organ cysts using basic cytological techniques (direct smears, cytocentrifugation), liquid-based cytology, and cell block. We show the role of a multimodal approach that can lead to wider implementation of ancillary tests (biochemical, immunocytochemical, and molecular) to improve diagnostic accuracy and the clinical management of patients with cystic lesions. In the near future, artificial intelligence models will offer detection, classification, and prediction capabilities for various cystic lesions. Two examples, in pancreatic and thyroid cytopathology, are developed in detail.

Brain Age Prediction: Deep Models Need a Hand to Generalize.

Rajabli R, Soltaninejad M, Fonov VS, Bzdok D, Collins DL

PubMed · Aug 1 2025
Predicting brain age from T1-weighted MRI is a promising marker for understanding brain aging and its associated conditions. While deep learning models have shown success in reducing the mean absolute error (MAE) of predicted brain age, concerns about robust and accurate generalization to new data limit their clinical applicability. The large number of trainable parameters, combined with limited medical imaging training data, contributes to this challenge, often resulting in a generalization gap: a significant discrepancy between model performance on training data and on unseen data. In this study, we assess a deep model, SFCN-reg, based on the VGG-16 architecture, and address the generalization gap through comprehensive preprocessing, extensive data augmentation, and model regularization. Using training data from the UK Biobank, we demonstrate substantial improvements in model performance. Specifically, our approach reduces the generalization MAE by 47% (from 5.25 to 2.79 years) in the Alzheimer's Disease Neuroimaging Initiative dataset and by 12% (from 4.35 to 3.75 years) in the Australian Imaging, Biomarker and Lifestyle dataset. Furthermore, we achieve up to a 13% reduction in scan-rescan error (from 0.80 to 0.70 years) while enhancing the model's robustness to registration errors. Feature importance maps highlight the anatomical regions used to predict age. These results highlight the critical role of high-quality preprocessing and robust training techniques in improving accuracy and narrowing the generalization gap, both necessary steps toward the clinical use of brain age prediction models. Our study makes valuable contributions to neuroimaging research by offering a potential pathway to improve the clinical applicability of deep learning models.
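The headline numbers above are MAE values and relative reductions. As a small sketch of how such figures are derived (the toy ages below are illustrative, not study data):

```python
def mae(y_true, y_pred):
    """Mean absolute error between true and predicted brain ages (years)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def pct_reduction(before, after):
    """Relative improvement in MAE, expressed as a percentage."""
    return 100.0 * (before - after) / before
```

For example, the reported ADNI improvement from 5.25 to 2.79 years corresponds to a relative reduction of about 47%.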

Anatomical Considerations for Achieving Optimized Outcomes in Individualized Cochlear Implantation.

Timm ME, Avallone E, Timm M, Salcher RB, Rudnik N, Lenarz T, Schurzig D

PubMed · Aug 1 2025
Machine learning models can assist with the selection of electrode arrays required for optimal insertion angles. Cochlear implantation is a successful therapy in patients with severe to profound hearing loss. The effectiveness of a cochlear implant depends on precise insertion and positioning of the electrode array within the cochlea, which is known for its variability in shape and size. Preoperative imaging like CT or MRI plays a significant role in evaluating cochlear anatomy and planning the surgical approach to optimize outcomes. In this study, preoperative and postoperative CT and CBCT data of 558 cochlear implant patients were analyzed in terms of the influence of anatomical factors and insertion depth on the resulting insertion angle. Machine learning models can predict the insertion depths needed for optimal insertion angles, with performance improving when cochlear dimensions are included in the models. A simple linear regression using just the insertion depth explained 88% of the variability, whereas adding cochlear length, or diameter and width, improved the explained variance to up to 94%.
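The "88% of variability" figure above is an R² from a simple linear regression of insertion angle on insertion depth. As an illustrative sketch of that baseline model (the depth/angle numbers below are made up to be exactly linear, not the study's measurements):

```python
def fit_line(x, y):
    """Ordinary least-squares fit y ≈ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

def r_squared(x, y, a, b):
    """Fraction of variance in y explained by the line a*x + b."""
    my = sum(y) / len(y)
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot
```

Adding cochlear length, diameter, and width, as the study did, turns this into a multiple regression with additional predictors, which raised the explained variance toward 94%.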

Establishing a Deep Learning Model That Integrates Pretreatment and Midtreatment Computed Tomography to Predict Treatment Response in Non-Small Cell Lung Cancer.

Chen X, Meng F, Zhang P, Wang L, Yao S, An C, Li H, Zhang D, Li H, Li J, Wang L, Liu Y

PubMed · Aug 1 2025
Patients with identical stages or similar tumor volumes can vary significantly in their responses to radiation therapy (RT) due to individual characteristics, making personalized RT for non-small cell lung cancer (NSCLC) challenging. This study aimed to develop a deep learning model that integrates pretreatment and midtreatment computed tomography (CT) to predict treatment response in NSCLC patients. We retrospectively collected data from 168 NSCLC patients across 3 hospitals. Data from Shanghai General Hospital (SGH, 35 patients) and Shanxi Cancer Hospital (SCH, 93 patients) were used for model training and internal validation, while data from Linfen Central Hospital (LCH, 40 patients) were used for external validation. Deep learning, radiomics, and clinical features were extracted to establish a varying-time-interval long short-term memory network for response prediction. Furthermore, we derived a model-deduced personalized dose escalation (DE) for patients predicted to have suboptimal gross tumor volume regression. The area under the receiver operating characteristic curve (AUC) and predicted absolute error were used to evaluate the predicted Response Evaluation Criteria in Solid Tumors classification and the predicted proportion of residual gross tumor volume. DE was calculated as the biological equivalent dose using an α/β ratio of 10 Gy. The model using only pretreatment CT achieved AUCs of 0.762 and 0.687 in internal and external validation, respectively, whereas the model integrating both pretreatment and midtreatment CT achieved AUCs of 0.869 and 0.798, with predicted absolute errors of 0.137 and 0.185, respectively. We performed personalized DE for 29 patients. Their original biological equivalent dose was approximately 72 Gy, within the range of 71.6 Gy to 75 Gy. DE ranged from 77.7 to 120 Gy for the 29 patients, with 17 patients exceeding 100 Gy and 8 patients reaching the model's preset upper limit of 120 Gy.
Combining pretreatment and midtreatment CT enhances prediction performance for RT response and offers a promising approach for personalized DE in NSCLC.
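The dose figures above are expressed as biologically effective dose with an α/β ratio of 10 Gy, i.e. BED = n·d·(1 + d/(α/β)) for n fractions of d Gy. A minimal sketch of that standard formula (the fractionation numbers in the usage line are an assumption chosen to reproduce the abstract's ~72 Gy baseline, not stated in the paper):

```python
def bed(n_fractions, dose_per_fraction_gy, alpha_beta_gy=10.0):
    """Biologically effective dose: BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction_gy * (
        1.0 + dose_per_fraction_gy / alpha_beta_gy)
```

For instance, a conventional 60 Gy in 30 fractions of 2 Gy with α/β = 10 Gy gives a BED of 72 Gy, consistent with the "approximately 72 Gy" baseline reported above.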

Moving Beyond CT Body Composition Analysis: Using Style Transfer for Bringing CT-Based Fully-Automated Body Composition Analysis to T2-Weighted MRI Sequences.

Haubold J, Pollok OB, Holtkamp M, Salhöfer L, Schmidt CS, Bojahr C, Straus J, Schaarschmidt BM, Borys K, Kohnke J, Wen Y, Opitz M, Umutlu L, Forsting M, Friedrich CM, Nensa F, Hosch R

PubMed · Aug 1 2025
Deep learning for body composition analysis (BCA) is gaining traction in clinical research, offering rapid and automated ways to measure body features like muscle or fat volume. However, most current methods prioritize computed tomography (CT) over magnetic resonance imaging (MRI). This study presents a deep learning approach for automatic BCA using MR T2-weighted sequences. Initial BCA segmentations (10 body regions and 4 body parts) were generated by mapping CT segmentations from the body and organ analysis (BOA) model to synthetic MR images created using an in-house trained CycleGAN. In total, 30 synthetic data pairs were used to train an initial nnU-Net V2 in 3D, and this preliminary model was then applied to segment 120 real T2-weighted MRI sequences from 120 patients (46% female) with a median age of 56 (interquartile range, 17.75), generating early segmentation proposals. These proposals were refined by human annotators, and nnU-Net V2 2D and 3D models were trained using 5-fold cross-validation on this optimized dataset of real MR images. Performance was evaluated using Sørensen-Dice, Surface Dice, and Hausdorff Distance metrics, including 95% confidence intervals for cross-validation and ensemble models. The 3D ensemble segmentation model achieved the highest Dice scores for the body region classes: bone 0.926 (95% confidence interval [CI], 0.914-0.937), muscle 0.968 (95% CI, 0.961-0.975), subcutaneous fat 0.98 (95% CI, 0.971-0.986), nervous system 0.973 (95% CI, 0.965-0.98), thoracic cavity 0.978 (95% CI, 0.969-0.984), abdominal cavity 0.989 (95% CI, 0.986-0.991), mediastinum 0.92 (95% CI, 0.901-0.936), pericardium 0.945 (95% CI, 0.924-0.96), brain 0.966 (95% CI, 0.927-0.989), and glands 0.905 (95% CI, 0.886-0.921). Furthermore, the body part 2D ensemble model reached the highest Dice scores for all labels: arms 0.952 (95% CI, 0.937-0.965), head + neck 0.965 (95% CI, 0.953-0.976), legs 0.978 (95% CI, 0.968-0.988), and torso 0.99 (95% CI, 0.988-0.991).
The overall average Dice of the ensemble models across body parts (2D = 0.971, 3D = 0.969, P = ns) and body regions (2D = 0.935, 3D = 0.955, P < 0.001) indicates stable performance across all classes. The presented approach facilitates efficient and automated extraction of BCA parameters from T2-weighted MRI sequences, providing precise and detailed body composition information across various regions and body parts.
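All of the scores above are Sørensen-Dice coefficients, which measure the overlap between a predicted and a reference segmentation mask. A minimal sketch on flat binary masks (real evaluations operate on 3D voxel volumes; the tiny masks here are illustrative):

```python
def dice(mask_a, mask_b):
    """Sørensen-Dice overlap between two binary masks (flat 0/1 lists):
    2*|A ∩ B| / (|A| + |B|), with 1.0 for two empty masks."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0
```

A Dice of 0.99 for the torso label therefore means the predicted and reference masks agree on nearly every voxel.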