
Machine learning driven diagnostic pathway for clinically significant prostate cancer: the role of micro-ultrasound.

Saitta C, Buffi N, Avolio P, Beatrici E, Paciotti M, Lazzeri M, Fasulo V, Cella L, Garofano G, Piccolini A, Contieri R, Nazzani S, Silvani C, Catanzaro M, Nicolai N, Hurle R, Casale P, Saita A, Lughezzani G

PubMed | Aug 18, 2025
Detecting clinically significant prostate cancer (csPCa) remains a top priority in delivering high-quality care, yet consensus on an optimal diagnostic pathway is constantly evolving. In this study, we present an innovative diagnostic approach, leveraging a machine learning model tailored to the emerging role of prostate micro-ultrasound (micro-US) in the setting of csPCa diagnosis. We queried our prospective database for patients who underwent micro-US for a clinical suspicion of prostate cancer. CsPCa was defined as any Gleason grade group > 1. The primary outcome was the development of a diagnostic pathway that integrates clinical and radiological findings using a machine learning algorithm. The dataset was divided into training (70%) and testing subsets. The Boruta algorithm was used for variable selection; then, based on the importance coefficients, a multivariable logistic regression (MLR) model was fitted to predict csPCa. A Classification and Regression Tree (CART) model was fitted to create the decision tree. Model accuracy was tested with receiver operating characteristic (ROC) curve analysis using the estimated area under the curve (AUC). Overall, 1422 patients were analysed. Multivariable logistic regression identified PRI-MUS score ≥ 3 (OR 4.37, p < 0.001), PI-RADS score ≥ 3 (OR 2.01, p < 0.001), PSA density ≥ 0.15 (OR 2.44, p < 0.001), DRE (OR 1.93, p < 0.001), anterior lesions (OR 1.49, p = 0.004), family history of prostate cancer (OR 1.54, p = 0.005) and increasing age (OR 1.031, p < 0.001) as the best predictors of csPCa; in the validation cohort the model demonstrated an AUC of 83%, with 78% sensitivity, 72.1% specificity and an 81% negative predictive value. CART analysis identified an elevated PRI-MUS score as the main node for stratifying the cohort. By integrating clinical features, serum biomarkers, and imaging findings, we have developed a point-of-care model that accurately predicts the presence of csPCa. Our findings support a paradigm shift towards adopting micro-US as a first-level diagnostic tool for csPCa detection, potentially optimizing clinical decision-making. This approach could improve the identification of patients at higher risk for csPCa and guide the selection of the most appropriate diagnostic exams. External validation is essential to confirm these results.
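The modeling steps described above (70/30 split, multivariable logistic regression on the selected predictors, a CART tree, and ROC-AUC evaluation) can be approximated as follows. This is a minimal illustrative sketch, not the authors' code; the feature names and the synthetic data are assumptions.

```python
# Illustrative sketch of the described workflow; synthetic data, assumed feature names.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1422  # cohort size reported in the abstract
X = pd.DataFrame({
    "pri_mus_ge3": rng.integers(0, 2, n),        # PRI-MUS score >= 3
    "pi_rads_ge3": rng.integers(0, 2, n),        # PI-RADS score >= 3
    "psa_density_ge_015": rng.integers(0, 2, n), # PSA density >= 0.15
    "dre": rng.integers(0, 2, n),                # digital rectal examination finding
    "anterior_lesion": rng.integers(0, 2, n),
    "family_history": rng.integers(0, 2, n),
    "age": rng.normal(66, 8, n),
})
y = rng.integers(0, 2, n)  # placeholder csPCa labels (Gleason grade group > 1)

# 70/30 training/testing split, as in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)

# Multivariable logistic regression on the (pre-selected) predictors
mlr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_te, mlr.predict_proba(X_te)[:, 1]))

# CART decision tree to derive an interpretable diagnostic pathway
cart = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
```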

Modeling the MRI gradient system with a temporal convolutional network: Improved reconstruction by prediction of readout gradient errors.

Martin JB, Alderson HE, Gore JC, Does MD, Harkins KD

PubMed | Aug 18, 2025
Our objective is to develop a general, nonlinear gradient system model that can accurately predict gradient distortions using convolutional networks. A set of training gradient waveforms was measured on a small animal imaging system and used to train a temporal convolutional network to predict the gradient waveforms produced by the imaging system. The trained network was able to accurately predict nonlinear distortions produced by the gradient system. Network prediction of gradient waveforms was incorporated into the image reconstruction pipeline and provided improvements in image quality and diffusion parameter mapping compared to both the nominal gradient waveform and the gradient impulse response function. Temporal convolutional networks can model gradient system behavior more accurately than existing linear methods and may be used to retrospectively correct gradient errors.
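As a rough illustration of the idea (not the authors' architecture), a causal temporal convolutional network mapping a nominal gradient waveform to the waveform actually played out by the gradient system might look like the following; the channel count, depth, and kernel size are assumptions.

```python
# Illustrative causal TCN for gradient waveform prediction; all sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """1D convolution with left padding so each output depends only on past samples."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):
        return self.conv(F.pad(x, (self.pad, 0)))

class GradientTCN(nn.Module):
    def __init__(self, channels=32, num_layers=6, kernel_size=3):
        super().__init__()
        layers, in_ch = [], 1
        for i in range(num_layers):
            layers += [CausalConv1d(in_ch, channels, kernel_size, dilation=2 ** i),
                       nn.ReLU()]
            in_ch = channels
        layers.append(nn.Conv1d(channels, 1, kernel_size=1))  # predicted waveform
        self.net = nn.Sequential(*layers)

    def forward(self, nominal):
        # nominal: (batch, 1, time) nominal gradient waveform
        return self.net(nominal)

model = GradientTCN()
predicted = model(torch.randn(4, 1, 2048))  # four waveforms of 2048 samples each
```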

Advancing deep learning-based segmentation for multiple lung cancer lesions in real-world multicenter CT scans.

Rafael-Palou X, Jimenez-Pastor A, Martí-Bonmatí L, Muñoz-Nuñez CF, Laudazi M, Alberich-Bayarri Á

PubMed | Aug 18, 2025
Accurate segmentation of lung cancer lesions in computed tomography (CT) is essential for precise diagnosis, personalized therapy planning, and treatment response assessment. While automatic segmentation of the primary lung lesion has been widely studied, the ability to segment multiple lesions per patient remains underexplored. In this study, we address this gap by introducing a novel, automated approach for multi-instance segmentation of lung cancer lesions, leveraging a heterogeneous cohort with real-world multicenter data. We analyzed 1,081 retrospectively collected CT scans with 5,322 annotated lesions (4.92 ± 13.05 lesions per scan). The cohort was stratified into training (n = 868) and testing (n = 213) subsets. We developed an automated three-step pipeline, including thoracic bounding box extraction, multi-instance lesion segmentation, and false positive reduction via a novel multiscale cascade classifier to filter spurious and non-lesion candidates. On the independent test set, our method achieved a Dice similarity coefficient of 76% for segmentation and a lesion detection sensitivity of 85%. When evaluated on an external dataset of 188 real-world cases, it achieved a Dice similarity coefficient of 73% and a lesion detection sensitivity of 85%. Our approach accurately detected and segmented multiple lung cancer lesions per patient on CT scans, demonstrating robustness across an independent test set and an external real-world dataset. AI-driven segmentation comprehensively captures lesion burden, enhancing lung cancer assessment and disease monitoring. KEY POINTS: Automatic multi-instance lung cancer lesion segmentation is underexplored yet crucial for disease assessment. We developed a deep learning-based segmentation pipeline trained on multicenter real-world data, which reached 85% sensitivity at external validation. Thoracic bounding box extraction and false positive reduction techniques improved the pipeline's segmentation performance.
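For reference, the two reported metrics (Dice similarity coefficient and per-lesion detection sensitivity) can be computed from binary masks roughly as below. This is a generic sketch, not the authors' evaluation code, and the "any overlap counts as detected" criterion is an assumption.

```python
# Generic Dice and per-lesion detection sensitivity on binary masks (illustrative).
import numpy as np
from scipy import ndimage

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def lesion_detection_sensitivity(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of annotated lesions (connected components) overlapped by the prediction."""
    labels, n_lesions = ndimage.label(gt)
    if n_lesions == 0:
        return float("nan")
    detected = sum(np.logical_and(pred, labels == i).any()
                   for i in range(1, n_lesions + 1))
    return detected / n_lesions
```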

Development of a lung perfusion automated quantitative model based on dual-energy CT pulmonary angiography in patients with chronic pulmonary thromboembolism.

Xi L, Wang J, Liu A, Ni Y, Du J, Huang Q, Li Y, Wen J, Wang H, Zhang S, Zhang Y, Zhang Z, Wang D, Xie W, Gao Q, Cheng Y, Zhai Z, Liu M

PubMed | Aug 18, 2025
To develop PerAIDE, an AI-driven system for automated analysis of pulmonary perfusion blood volume (PBV) using dual-energy computed tomography pulmonary angiography (DE-CTPA) in patients with chronic pulmonary thromboembolism (CPE). In this prospective observational study, 32 patients with chronic thromboembolic pulmonary disease (CTEPD) and 151 patients with chronic thromboembolic pulmonary hypertension (CTEPH) were enrolled between January 2022 and July 2024. PerAIDE was developed to automatically quantify three distinct perfusion patterns-normal, reduced, and defective-on DE-CTPA images. Two radiologists independently assessed PBV scores. Follow-up imaging was conducted 3 months after balloon pulmonary angioplasty (BPA). PerAIDE demonstrated high agreement with the radiologists (intraclass correlation coefficient = 0.778) and reduced analysis time significantly (31 ± 3 s vs. 15 ± 4 min, p < 0.001). CTEPH patients had greater perfusion defects than CTEPD patients (0.35 vs. 0.29, p < 0.001), while reduced perfusion was more prevalent in CTEPD (0.36 vs. 0.30, p < 0.001). Perfusion defects correlated positively with pulmonary vascular resistance (ρ = 0.534) and mean pulmonary artery pressure (ρ = 0.482), and negatively with the oxygenation index (ρ = -0.441). PerAIDE effectively differentiated CTEPH from CTEPD (AUC = 0.809, 95% CI: 0.745-0.863). At 3 months post-BPA, a significant reduction in perfusion defects was observed (0.36 vs. 0.33, p < 0.01). CTEPD and CTEPH exhibit distinct perfusion phenotypes on DE-CTPA. PerAIDE reliably quantifies perfusion abnormalities and correlates strongly with clinical and hemodynamic markers of CPE severity. ClinicalTrials.gov, NCT06526468. Registered 28 August 2024 (retrospectively registered): https://clinicaltrials.gov/study/NCT06526468?cond=NCT06526468&rank=1 . PerAIDE is a dual-energy computed tomography pulmonary angiography (DE-CTPA) AI-driven system that rapidly and accurately assesses perfusion blood volume in patients with chronic pulmonary thromboembolism, effectively distinguishing between CTEPD and CTEPH phenotypes and correlating with disease severity and therapeutic response. Right heart catheterization for definitive diagnosis of chronic pulmonary thromboembolism (CPE) is invasive. PerAIDE-derived perfusion defects correlate with disease severity, aiding assessment of CPE treatment. CTEPH demonstrates severe perfusion defects, while CTEPD displays predominantly reduced perfusion. PerAIDE employs a U-Net-based adaptive threshold method, which agrees with manual evaluation while processing images substantially faster.
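Conceptually, the quantification step partitions the lung into defective, reduced, and normal perfusion fractions from the PBV map. The sketch below is only illustrative: PerAIDE uses a U-Net-based adaptive threshold, whereas the fixed thresholds here are placeholder assumptions.

```python
# Illustrative perfusion-fraction computation; thresholds are placeholder assumptions.
import numpy as np

def perfusion_fractions(pbv: np.ndarray, lung_mask: np.ndarray,
                        defect_thr: float = 10.0, reduced_thr: float = 30.0) -> dict:
    """Fractions of defective / reduced / normal perfusion within the lung mask."""
    vox = pbv[lung_mask > 0]
    defect = float((vox < defect_thr).mean())
    reduced = float(((vox >= defect_thr) & (vox < reduced_thr)).mean())
    return {"defect": defect, "reduced": reduced, "normal": 1.0 - defect - reduced}
```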

Craniocaudal Mammograms Generation using Image-to-Image Translation Techniques.

Piras V, Bonatti AF, De Maria C, Cignoni P, Banterle F

PubMed | Aug 18, 2025
Breast cancer is the leading cause of cancer death in women worldwide, emphasizing the need for prevention and early detection. Mammography screening plays a crucial role in secondary prevention, but large datasets of referred mammograms from hospital databases are hard to access due to privacy concerns, and publicly available datasets are often unreliable and unbalanced. We propose a novel workflow using a statistical generative model based on generative adversarial networks to generate high-resolution synthetic mammograms. Utilizing a unique 2D parametric model of the compressed breast in craniocaudal projection and image-to-image translation techniques, our approach allows full and precise control over breast features and the generation of both normal and tumor cases. Quality assessment was conducted through visual analysis and statistical analysis using the first five statistical moments. Additionally, a questionnaire was administered to 45 medical experts (radiologists and radiology residents). The results showed that the features of the real mammograms were accurately replicated in the synthetic ones, the image statistics corresponded reasonably well overall, and the two groups of images were statistically indistinguishable in almost all cases according to the experts. The proposed workflow generates realistic synthetic mammograms with fine-tuned features. Synthetic mammograms are powerful tools for creating new datasets or balancing existing ones, allowing for the training of machine learning and deep learning algorithms. These algorithms can then assist radiologists in tasks like classification and segmentation, improving diagnostic performance. The code and dataset are available at: https://github.com/cnr-isti-vclab/CC-Mammograms-Generation_GUI.
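The statistical comparison mentioned above relies on the first five moments of the pixel intensity distribution; a generic way to compute them for one image is sketched below (standard definitions, not the authors' code).

```python
# First five statistical moments of an image's intensity distribution (illustrative).
import numpy as np
from scipy import stats

def first_five_moments(image: np.ndarray):
    x = image.astype(np.float64).ravel()
    mu, sigma = x.mean(), x.std()
    return (
        mu,                                  # 1st: mean
        sigma ** 2,                          # 2nd: variance
        stats.skew(x),                       # 3rd: skewness
        stats.kurtosis(x, fisher=False),     # 4th: kurtosis
        np.mean(((x - mu) / sigma) ** 5),    # 5th: standardized fifth moment
    )
```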

PAINT: Prior-aided Alternate Iterative NeTwork for Ultra-low-dose CT Imaging Using Diffusion Model-restored Sinogram.

Chen K, Zhang W, Deng Z, Zhou Y, Zhao J

PubMed | Aug 18, 2025
Obtaining multiple CT scans from the same patient is required in many clinical scenarios, such as lung nodule screening and image-guided radiation therapy. Repeated scans would expose patients to a higher radiation dose and increase the risk of cancer. In this study, we aim to achieve ultra-low-dose imaging for subsequent scans by collecting an extremely undersampled sinogram via regional few-view scanning, while preserving image quality by utilizing the preceding fully sampled scan as a prior. To fully exploit the prior information, we propose a two-stage framework consisting of diffusion model-based sinogram restoration and deep learning-based unrolled iterative reconstruction. Specifically, the undersampled sinogram is first restored by a conditional diffusion model with sinogram-domain prior guidance. Then, we formulate the undersampled data reconstruction problem as an optimization problem combining fidelity terms for both the undersampled and restored data, along with a regularization term based on the image-domain prior. Next, we propose the Prior-aided Alternate Iterative NeTwork (PAINT) to solve the optimization problem. PAINT alternately updates the undersampled or restored data fidelity term, and unrolls the iterations to integrate neural network-based prior regularization. With a 112 mm field of view in simulated-data experiments, our proposed framework achieved superior performance in terms of CT value accuracy and image detail preservation. Clinical data experiments also demonstrated that our proposed framework outperformed the comparison methods in artifact reduction and structure recovery.
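The abstract describes the reconstruction objective only in words; a plausible formulation (our reading, with forward operators A_u and A_r, measured/restored data y_u and y_r, weights λ and μ, and prior image x_prior as assumed notation) is:

```latex
\hat{x} \;=\; \arg\min_{x}\;
\underbrace{\lVert A_u x - y_u \rVert_2^2}_{\text{undersampled-data fidelity}}
\;+\; \lambda\, \underbrace{\lVert A_r x - y_r \rVert_2^2}_{\text{restored-data fidelity}}
\;+\; \mu\, \underbrace{R\!\left(x;\, x_{\mathrm{prior}}\right)}_{\text{image-domain prior regularization}}
```

Under this reading, PAINT alternates updates driven by the two fidelity terms and unrolls the iterations, with the regularizer R realized by a neural network.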

Interactive AI annotation of medical images in a virtual reality environment.

Orsmaa L, Saukkoriipi M, Kangas J, Rasouli N, Järnstedt J, Mehtonen H, Sahlsten J, Jaskari J, Kaski K, Raisamo R

PubMed | Aug 18, 2025
Artificial intelligence (AI) achieves high-quality annotations of radiological images, yet often lacks the robustness required in clinical practice. Interactive annotation starts with an AI-generated delineation, allowing radiologists to refine it with feedback, potentially improving precision and reliability. These techniques have been explored in two-dimensional desktop environments, but they have not been validated by radiologists or integrated with immersive visualization technologies. We used a virtual reality (VR) system to determine (1) whether annotation quality improves when radiologists can edit the AI annotation and (2) whether the extra work required by editing is worthwhile. We evaluated the clinical feasibility of an interactive VR approach to annotate mandibular and mental foramina on segmented 3D mandibular models. Three experienced dentomaxillofacial radiologists reviewed AI-generated annotations and, when needed, refined them at the voxel level in 3D space through click-based interactions until clinical standards were met. Our results indicate that integrating expert feedback within an immersive VR environment enhances annotation accuracy, improves clinical usability, and offers valuable insights for developing medical image analysis systems incorporating radiologist input. This study is the first to compare the quality of original and interactive AI annotation and to use radiologists' opinions as the measure. More research is needed to assess generalization.

MCBL-UNet: A Hybrid Mamba-CNN Boundary Enhanced Light-weight UNet for Placenta Ultrasound Image Segmentation.

Jiang C, Zhu C, Guo H, Tan G, Liu C, Li K

PubMed | Aug 18, 2025
The shape and size of the placenta are closely related to fetal development in the second and third trimesters of pregnancy. Accurately segmenting the placental contour in ultrasound images is challenging due to image noise, fuzzy boundaries, and limited clinical resources. To address these issues, we propose MCBL-UNet, a novel lightweight segmentation framework that combines the long-range modeling capabilities of Mamba and the local feature extraction advantages of convolutional neural networks (CNNs) to achieve efficient segmentation through multi-information fusion. Based on a compact 6-layer U-Net architecture, MCBL-UNet introduces several key modules: a boundary enhancement module (BEM) to extract fine-grained edge and texture features; a multi-dimensional global context module (MGCM) to capture global semantics and edge information in the deep stages of the encoder and decoder; and a parallel channel spatial attention module (PCSAM) to suppress redundant information in skip connections while enhancing spatial and channel correlations. To further improve feature reconstruction and edge preservation capabilities, we introduce an attention downsampling module (ADM) and a content-aware upsampling module (CUM). MCBL-UNet achieves excellent segmentation performance on multiple medical ultrasound datasets (placenta, gestational sac, thyroid nodules). Using only 1.31M parameters and 1.26G FLOPs, the model outperforms 13 existing mainstream methods on key indicators such as the Dice coefficient and mIoU, showing a strong balance between high accuracy and low computational cost. This model is not only suitable for resource-constrained clinical environments, but also offers a new approach for introducing the Mamba structure into medical image segmentation.

Toward ICE-XRF fusion: real-time pose estimation of the intracardiac echo probe in 2D X-ray using deep learning.

Severens A, Meijs M, Pai Raikar V, Lopata R

PubMed | Aug 18, 2025
Valvular heart disease affects 2.5% of the general population and 10% of people aged over 75, with many patients untreated due to high surgical risks. Transcatheter valve therapies offer a safer, less invasive alternative but rely on ultrasound and X-ray image guidance. The current ultrasound technique for valve interventions, transesophageal echocardiography (TEE), requires general anesthesia and has poor visibility of the right side of the heart. Intracardiac echocardiography (ICE) provides improved 3D imaging without the need for general anesthesia but faces challenges in adoption due to device handling and operator training. To facilitate the use of ICE in the clinic, the fusion of ultrasound and X-ray is proposed. This study introduces a two-stage detection algorithm using deep learning to support ICE-XRF fusion. Initially, the ICE probe is coarsely detected using an object detection network. This is followed by 5-degree-of-freedom (DoF) pose estimation of the ICE probe using a regression network. Model validation using synthetic data and seven clinical cases showed that the framework provides accurate probe detection and 5-DoF pose estimation. For object detection, an F1 score of 1.00 was achieved on synthetic data, with high precision (0.97) and recall (0.83) on clinical cases. For the 5-DoF pose estimation, median position errors were under 0.5 mm and median rotation errors below 7.2°. This real-time detection method supports image fusion of ICE and XRF during clinical procedures and facilitates the use of ICE in valve therapy.
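A minimal sketch of the second stage (not the authors' network): a small CNN regressing five pose parameters from the probe region cropped by the first-stage detector. The architecture, crop size, and pose parameterization are assumptions.

```python
# Illustrative 5-DoF pose regression head operating on a detector-cropped probe region.
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 5)  # assumed: x, y, z translation + two rotation angles

    def forward(self, crop):
        return self.head(self.backbone(crop))

# Stage 1 (a standard object detector) would supply the crop; here it is simulated.
regressor = PoseRegressor()
crop = torch.randn(1, 1, 128, 128)   # cropped X-ray patch around the detected probe
pose = regressor(crop)               # shape (1, 5): the 5-DoF pose estimate
```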

Reproducible meningioma grading across multi-center MRI protocols via hybrid radiomic and deep learning features.

Saadh MJ, Albadr RJ, Sur D, Yadav A, Roopashree R, Sangwan G, Krithiga T, Aminov Z, Taher WM, Alwan M, Jawad MJ, Al-Nuaimi AMA, Farhood B

PubMed | Aug 18, 2025
This study aimed to create a reliable method for preoperative grading of meningiomas by combining radiomic features and deep learning-based features extracted using a 3D autoencoder. The goal was to utilize the strengths of both handcrafted radiomic features and deep learning features to improve accuracy and reproducibility across different MRI protocols. The study included 3,523 patients with histologically confirmed meningiomas, consisting of 1,900 low-grade (Grade I) and 1,623 high-grade (Grades II and III) cases. Radiomic features were extracted from T1-contrast-enhanced and T2-weighted MRI scans using the Standardized Environment for Radiomics Analysis (SERA). Deep learning features were obtained from the bottleneck layer of a 3D autoencoder integrated with attention mechanisms. Feature selection was performed using Principal Component Analysis (PCA) and Analysis of Variance (ANOVA). Classification was done using machine learning models like XGBoost, CatBoost, and stacking ensembles. Reproducibility was evaluated using the Intraclass Correlation Coefficient (ICC), and batch effects were harmonized with the ComBat method. Performance was assessed based on accuracy, sensitivity, and the area under the receiver operating characteristic curve (AUC). For T1-contrast-enhanced images, combining radiomic and deep learning features provided the highest AUC of 95.85% and accuracy of 95.18%, outperforming models using either feature type alone. T2-weighted images showed slightly lower performance, with the best AUC of 94.12% and accuracy of 93.14%. Deep learning features performed better than radiomic features alone, demonstrating their strength in capturing complex spatial patterns. The end-to-end 3D autoencoder with T1-contrast images achieved an AUC of 92.15%, accuracy of 91.14%, and sensitivity of 92.48%, surpassing T2-weighted imaging models. Reproducibility analysis showed high reliability (ICC > 0.75) for 127 out of 215 features, ensuring consistent performance across multi-center datasets. The proposed framework effectively integrates radiomic and deep learning features to provide a robust, non-invasive, and reproducible approach for meningioma grading. Future research should validate this framework in real-world clinical settings and explore adding clinical parameters to enhance its prognostic value.
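The fusion-and-classification step described above can be sketched roughly as follows, assuming the radiomic and autoencoder features have already been extracted. The feature dimensions, hyperparameters, and the specific stacking configuration with a logistic regression meta-learner are illustrative assumptions, not the study's exact setup.

```python
# Illustrative hybrid-feature fusion and stacking classifier; all settings are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
radiomic = rng.normal(size=(200, 120))   # handcrafted radiomic features (e.g. from SERA)
deep = rng.normal(size=(200, 256))       # 3D autoencoder bottleneck features
X = np.hstack([radiomic, deep])          # hybrid feature vector per patient
y = rng.integers(0, 2, 200)              # 0 = low grade, 1 = high grade (placeholder labels)

clf = make_pipeline(
    PCA(n_components=50),                # dimensionality reduction before classification
    StackingClassifier(
        estimators=[
            ("xgb", XGBClassifier(n_estimators=200, eval_metric="logloss")),
            ("cat", CatBoostClassifier(iterations=200, verbose=0)),
        ],
        final_estimator=LogisticRegression(max_iter=1000),
    ),
)
clf.fit(X, y)
```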