Dominic LaBella, Keshav Jha, Jared Robbins, Esther Yu

arXiv preprint · Aug 18, 2025
Cone-beam computed tomography (CBCT) has become an invaluable imaging modality in dentistry, enabling 3D visualization of teeth and surrounding structures for diagnosis and treatment planning. Automated segmentation of dental structures in CBCT can efficiently assist in identifying pathology (e.g., pulpal or periapical lesions) and facilitate radiation therapy planning in head and neck cancer patients. We describe the DLaBella29 team's approach for the MICCAI 2025 ToothFairy3 Challenge, which involves a deep learning pipeline for multi-class tooth segmentation. We utilized the MONAI Auto3DSeg framework with a 3D SegResNet architecture, trained on a subset of the ToothFairy3 dataset (63 CBCT scans) with 5-fold cross-validation. Key preprocessing steps included image resampling to 0.6 mm isotropic resolution and intensity clipping. We applied an ensemble fusion using Multi-Label STAPLE on the 5-fold predictions to infer a Phase 1 segmentation and then conducted tight cropping around the easily segmented Phase 1 mandible to perform Phase 2 segmentation on the smaller nerve structures. Our method achieved an average Dice of 0.87 on the ToothFairy3 challenge out-of-sample validation set. This paper details the clinical context, data preparation, model development, and results of our approach, and discusses the relevance of automated dental segmentation for improving patient care in radiation oncology.
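
As a rough illustration of the kind of setup this abstract describes (not the authors' Auto3DSeg configuration), the sketch below wires up a MONAI 3D SegResNet with the stated preprocessing: 0.6 mm isotropic resampling and intensity clipping. The clip window and the class count are placeholders, not values from the paper.

```python
# Minimal sketch, assuming illustrative channel counts and clip values.
import torch
from monai.networks.nets import SegResNet
from monai.transforms import Compose, Spacingd, ScaleIntensityRanged

preprocess = Compose([
    # Resample image and label to 0.6 mm isotropic resolution, as in the paper.
    Spacingd(keys=["image", "label"], pixdim=(0.6, 0.6, 0.6),
             mode=("bilinear", "nearest")),
    # Clip intensities to a plausible CBCT window (window values are assumptions).
    ScaleIntensityRanged(keys=["image"], a_min=-1000, a_max=3000,
                         b_min=0.0, b_max=1.0, clip=True),
])

model = SegResNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=43,   # placeholder for the number of ToothFairy3 classes
    init_filters=32,
)

x = torch.randn(1, 1, 96, 96, 96)   # dummy CBCT patch
logits = model(x)                    # shape: (1, 43, 96, 96, 96)
```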

Yizhou Liu, Jingwei Wei, Zizhi Chen, Minghao Han, Xukun Zhang, Keliang Liu, Lihua Zhang

arXiv preprint · Aug 18, 2025
Reinforcement learning (RL) with rule-based rewards has demonstrated strong potential in enhancing the reasoning and generalization capabilities of vision-language models (VLMs) and large language models (LLMs), while reducing computational overhead. However, its application in medical imaging remains underexplored. Existing reinforcement fine-tuning (RFT) approaches in this domain primarily target closed-ended visual question answering (VQA), limiting their applicability to real-world clinical reasoning. In contrast, open-ended medical VQA better reflects clinical practice but has received limited attention. While some efforts have sought to unify both formats via semantically guided RL, we observe that model-based semantic rewards often suffer from reward collapse, where responses with significant semantic differences receive similar scores. To address this, we propose ARMed (Adaptive Reinforcement for Medical Reasoning), a novel RL framework for open-ended medical VQA. ARMed first incorporates domain knowledge through supervised fine-tuning (SFT) on chain-of-thought data, then applies reinforcement learning with textual correctness and adaptive semantic rewards to enhance reasoning quality. We evaluate ARMed on six challenging medical VQA benchmarks. Results show that ARMed consistently boosts both accuracy and generalization, achieving a 32.64% improvement on in-domain tasks and an 11.65% gain on out-of-domain benchmarks. These results highlight the critical role of reward discriminability in medical RL and the promise of semantically guided rewards for enabling robust and clinically meaningful multimodal reasoning.
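
The abstract motivates an "adaptive semantic reward" that counters reward collapse, but does not give the formula. The sketch below shows one plausible reading: combine a textual-correctness term with per-batch rescaled semantic similarities so that nearly collapsed scores become distinguishable. Every function name and the rescaling rule are assumptions, not ARMed's actual method.

```python
# Hypothetical sketch of an adaptive semantic reward; not ARMed's formulation.
import numpy as np

def textual_correctness(pred: str, ref: str) -> float:
    """Crude token-overlap proxy for an exact-answer correctness reward."""
    p, r = set(pred.lower().split()), set(ref.lower().split())
    return len(p & r) / max(len(r), 1)

def adaptive_semantic_reward(similarities: np.ndarray) -> np.ndarray:
    """Min-max stretch of raw model-based similarity scores within a batch,
    so that responses with different semantics get well-separated rewards."""
    lo, hi = similarities.min(), similarities.max()
    if hi - lo < 1e-6:          # fully collapsed batch: fall back to raw scores
        return similarities
    return (similarities - lo) / (hi - lo)

# Example: raw semantic scores that are nearly collapsed (0.80-0.84)
raw = np.array([0.80, 0.81, 0.83, 0.84])
print(adaptive_semantic_reward(raw))   # spread over [0, 1]
```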

Jiayi Wang, Hadrien Reynaud, Franciskus Xaverius Erick, Bernhard Kainz

arXiv preprint · Aug 18, 2025
Generative modelling of entire CT volumes conditioned on clinical reports has the potential to accelerate research through data augmentation, privacy-preserving synthesis, and reduced regulatory constraints on patient data, while preserving diagnostic signals. With the recent release of CT-RATE, a large-scale collection of 3D CT volumes paired with their respective clinical reports, training large text-conditioned CT volume generation models has become achievable. In this work, we introduce CTFlow, a 0.5B latent flow matching transformer model conditioned on clinical reports. We leverage the A-VAE from FLUX to define our latent space, and rely on the CT-Clip text encoder to encode the clinical reports. To generate consistent whole CT volumes while keeping memory requirements tractable, we rely on a custom autoregressive approach: the model predicts the first sequence of slices of the volume from text alone, then relies on the previously generated sequence of slices, together with the text, to predict the following sequence. We evaluate our results against a state-of-the-art generative CT model and demonstrate the superiority of our approach in terms of temporal coherence, image diversity, and text-image alignment, as measured by FID, FVD, IS, and CLIP scores.
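
A minimal sketch of the autoregressive chunking strategy described above: generate the first block of latent slices from text alone, then condition each subsequent block on the previous one. The model interface, chunk size, and tensor layout are assumptions for illustration, not CTFlow's actual API.

```python
# Sketch of text-conditioned, chunk-wise autoregressive volume generation.
import torch

def generate_volume(model, text_emb: torch.Tensor,
                    n_chunks: int = 8, chunk: int = 16) -> torch.Tensor:
    """Assumes model(text_emb, prev_chunk) -> next latent chunk (B, chunk, C, H, W)."""
    chunks, prev = [], None
    for _ in range(n_chunks):
        # First step conditions on text only (prev is None); later steps also
        # see the previously generated slice block, keeping memory bounded.
        nxt = model(text_emb, prev)
        chunks.append(nxt)
        prev = nxt
    return torch.cat(chunks, dim=1)   # (B, n_chunks * chunk, C, H, W)
```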

Saitta C, Buffi N, Avolio P, Beatrici E, Paciotti M, Lazzeri M, Fasulo V, Cella L, Garofano G, Piccolini A, Contieri R, Nazzani S, Silvani C, Catanzaro M, Nicolai N, Hurle R, Casale P, Saita A, Lughezzani G

PubMed paper · Aug 18, 2025
Detecting clinically significant prostate cancer (csPCa) remains a top priority in delivering high-quality care, yet consensus on an optimal diagnostic pathway is constantly evolving. In this study, we present an innovative diagnostic approach, leveraging a machine learning model tailored to the emerging role of prostate micro-ultrasound (micro-US) in csPCa diagnosis. We queried our prospective database for patients who underwent micro-US for clinical suspicion of prostate cancer. CsPCa was defined as any Gleason grade group > 1. The primary outcome was the development of a diagnostic pathway that combines clinical and radiological findings using a machine learning algorithm. The dataset was divided into training (70%) and testing subsets. The Boruta algorithm was used for variable selection; then, based on the importance coefficients, a multivariable logistic regression (MLR) model was fitted to predict csPCa. A Classification and Regression Tree (CART) model was fitted to create the decision tree. Accuracy of the model was tested using receiver operating characteristic (ROC) curve analysis with the estimated area under the curve (AUC). Overall, 1422 patients were analysed. Multivariable LR revealed PRI-MUS score ≥ 3 (OR 4.37, p < 0.001), PI-RADS score ≥ 3 (OR 2.01, p < 0.001), PSA density ≥ 0.15 (OR 2.44, p < 0.001), DRE (OR 1.93, p < 0.001), anterior lesions (OR 1.49, p = 0.004), family history of prostate cancer (OR 1.54, p = 0.005), and increasing age (OR 1.031, p < 0.001) as the best predictors of csPCa, with an AUC of 83% in the validation cohort, 78% sensitivity, 72.1% specificity, and 81% negative predictive value. CART analysis identified an elevated PRI-MUS score as the main node for stratifying our cohort. By integrating clinical features, serum biomarkers, and imaging findings, we have developed a point-of-care model that accurately predicts the presence of csPCa. Our findings support a paradigm shift towards adopting micro-US as a first-level diagnostic tool for csPCa detection, potentially optimizing clinical decision making. This approach could improve the identification of patients at higher risk for csPCa and guide the selection of the most appropriate diagnostic exams. External validation is essential to confirm these results.
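
For readers unfamiliar with the modelling steps reported here, the sketch below re-implements them schematically with scikit-learn on synthetic data: a logistic regression on the selected predictors, a CART tree, and ROC-AUC evaluation. Feature names follow the abstract; the data and the coefficients (loosely derived from the reported odds ratios) are fabricated for illustration only.

```python
# Schematic sketch of the reported MLR + CART pipeline on synthetic data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "primus_ge3": rng.integers(0, 2, n),     # PRI-MUS score >= 3
    "pirads_ge3": rng.integers(0, 2, n),     # PI-RADS score >= 3
    "psad_ge015": rng.integers(0, 2, n),     # PSA density >= 0.15
    "dre_positive": rng.integers(0, 2, n),
    "anterior_lesion": rng.integers(0, 2, n),
    "family_history": rng.integers(0, 2, n),
    "age": rng.normal(67, 7, n),
})
# Synthetic outcome weighted with log-odds roughly matching the reported ORs.
logit = (1.47 * X.primus_ge3 + 0.70 * X.pirads_ge3 + 0.89 * X.psad_ge015
         + 0.66 * X.dre_positive + 0.03 * (X.age - 67) - 1.5)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
mlr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
cart = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)   # decision tree
print("MLR AUC:", roc_auc_score(y_te, mlr.predict_proba(X_te)[:, 1]))
```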

Martin JB, Alderson HE, Gore JC, Does MD, Harkins KD

PubMed paper · Aug 18, 2025
Our objective is to develop a general, nonlinear gradient system model that can accurately predict gradient distortions using convolutional networks. A set of training gradient waveforms was measured on a small animal imaging system and used to train a temporal convolutional network to predict the gradient waveforms produced by the imaging system. The trained network was able to accurately predict nonlinear distortions produced by the gradient system. Network prediction of gradient waveforms was incorporated into the image reconstruction pipeline and provided improvements in image quality and diffusion parameter mapping compared to both the nominal gradient waveform and the gradient impulse response function. Temporal convolutional networks can more accurately model gradient system behavior than existing linear methods and may be used to retrospectively correct gradient errors.
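
A minimal causal temporal convolutional network of the kind described here, mapping a nominal gradient waveform to the waveform the system actually plays out. The layer sizes, dilations, and residual formulation are illustrative assumptions, not the authors' architecture.

```python
# Sketch of a causal TCN for gradient waveform prediction (assumed sizes).
import torch
import torch.nn as nn

class CausalConv1d(nn.Conv1d):
    """1D convolution padded on the left only, so outputs never see the future."""
    def __init__(self, c_in, c_out, k, dilation=1):
        super().__init__(c_in, c_out, k, dilation=dilation)
        self.left_pad = (k - 1) * dilation

    def forward(self, x):
        return super().forward(nn.functional.pad(x, (self.left_pad, 0)))

class GradientTCN(nn.Module):
    def __init__(self, channels=32, kernel=5, dilations=(1, 2, 4, 8)):
        super().__init__()
        layers, c_in = [], 1
        for d in dilations:
            layers += [CausalConv1d(c_in, channels, kernel, d), nn.ReLU()]
            c_in = channels
        layers.append(nn.Conv1d(channels, 1, kernel_size=1))
        self.net = nn.Sequential(*layers)

    def forward(self, nominal):          # (B, 1, T) nominal waveform
        # Predict the distortion and add it to the nominal input (residual form).
        return nominal + self.net(nominal)

model = GradientTCN()
print(model(torch.randn(2, 1, 512)).shape)   # torch.Size([2, 1, 512])
```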

Rafael-Palou X, Jimenez-Pastor A, Martí-Bonmatí L, Muñoz-Nuñez CF, Laudazi M, Alberich-Bayarri Á

PubMed paper · Aug 18, 2025
Accurate segmentation of lung cancer lesions in computed tomography (CT) is essential for precise diagnosis, personalized therapy planning, and treatment response assessment. While automatic segmentation of the primary lung lesion has been widely studied, the ability to segment multiple lesions per patient remains underexplored. In this study, we address this gap by introducing a novel, automated approach for multi-instance segmentation of lung cancer lesions, leveraging a heterogeneous cohort with real-world multicenter data. We analyzed 1,081 retrospectively collected CT scans with 5,322 annotated lesions (4.92 ± 13.05 lesions per scan). The cohort was stratified into training (n = 868) and testing (n = 213) subsets. We developed an automated three-step pipeline, including thoracic bounding box extraction, multi-instance lesion segmentation, and false positive reduction via a novel multiscale cascade classifier to filter spurious and non-lesion candidates. On the independent test set, our method achieved a Dice similarity coefficient of 76% for segmentation and a lesion detection sensitivity of 85%. When evaluated on an external dataset of 188 real-world cases, it achieved a Dice similarity coefficient of 73% and a lesion detection sensitivity of 85%. Our approach accurately detected and segmented multiple lung cancer lesions per patient on CT scans, demonstrating robustness across an independent test set and an external real-world dataset. AI-driven segmentation comprehensively captures lesion burden, enhancing lung cancer assessment and disease monitoring. KEY POINTS: Automatic multi-instance lung cancer lesion segmentation is underexplored yet crucial for disease assessment. We developed a deep learning-based segmentation pipeline trained on multicenter real-world data, which reached 85% sensitivity at external validation. Thoracic bounding box extraction and false positive reduction techniques improved the pipeline's segmentation performance.
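
A skeleton of the three-step pipeline outlined above: thoracic bounding box extraction, multi-instance segmentation, and false-positive reduction. The stage interfaces (`bbox_model`, `seg_model`, `fp_classifier`) are assumed placeholders that would wrap trained models, not the authors' code.

```python
# Sketch of the three-step pipeline; all stage interfaces are assumptions.
import numpy as np

def run_pipeline(ct: np.ndarray, bbox_model, seg_model, fp_classifier,
                 fp_threshold: float = 0.5):
    # Step 1: crop to the thorax to constrain the search region.
    z0, z1, y0, y1, x0, x1 = bbox_model(ct)
    crop = ct[z0:z1, y0:y1, x0:x1]

    # Step 2: instance segmentation -> list of (mask, score) lesion candidates.
    candidates = seg_model(crop)

    # Step 3: a cascade classifier rejects spurious / non-lesion candidates.
    lesions = []
    for mask, score in candidates:
        p_lesion = fp_classifier(crop, mask)   # probability the candidate is real
        if p_lesion >= fp_threshold:
            lesions.append((mask, score * p_lesion))
    return lesions
```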

Xi L, Wang J, Liu A, Ni Y, Du J, Huang Q, Li Y, Wen J, Wang H, Zhang S, Zhang Y, Zhang Z, Wang D, Xie W, Gao Q, Cheng Y, Zhai Z, Liu M

PubMed paper · Aug 18, 2025
To develop PerAIDE, an AI-driven system for automated analysis of pulmonary perfusion blood volume (PBV) using dual-energy computed tomography pulmonary angiography (DE-CTPA) in patients with chronic pulmonary thromboembolism (CPE). In this prospective observational study, 32 patients with chronic thromboembolic pulmonary disease (CTEPD) and 151 patients with chronic thromboembolic pulmonary hypertension (CTEPH) were enrolled between January 2022 and July 2024. PerAIDE was developed to automatically quantify three distinct perfusion patterns-normal, reduced, and defective-on DE-CTPA images. Two radiologists independently assessed PBV scores. Follow-up imaging was conducted 3 months after balloon pulmonary angioplasty (BPA). PerAIDE demonstrated high agreement with the radiologists (intraclass correlation coefficient = 0.778) and reduced analysis time significantly (31 ± 3 s vs. 15 ± 4 min, p < 0.001). CTEPH patients had greater perfusion defects than CTEPD patients (0.35 vs. 0.29, p < 0.001), while reduced perfusion was more prevalent in CTEPD (0.36 vs. 0.30, p < 0.001). Perfusion defects correlated positively with pulmonary vascular resistance (ρ = 0.534) and mean pulmonary artery pressure (ρ = 0.482), and negatively with oxygenation index (ρ = -0.441). PerAIDE effectively differentiated CTEPH from CTEPD (AUC = 0.809, 95% CI: 0.745-0.863). At 3 months post-BPA, a significant reduction in perfusion defects was observed (0.36 vs. 0.33, p < 0.01). CTEPD and CTEPH exhibit distinct perfusion phenotypes on DE-CTPA. PerAIDE reliably quantifies perfusion abnormalities and correlates strongly with clinical and hemodynamic markers of CPE severity. ClinicalTrials.gov, NCT06526468. Registered 28 August 2024 (retrospectively registered): https://clinicaltrials.gov/study/NCT06526468?cond=NCT06526468&rank=1 . PerAIDE is a dual-energy computed tomography pulmonary angiography (DE-CTPA) AI-driven system that rapidly and accurately assesses perfusion blood volume in patients with chronic pulmonary thromboembolism, effectively distinguishing between CTEPD and CTEPH phenotypes and correlating with disease severity and therapeutic response. Right heart catheterization for definitive diagnosis of chronic pulmonary thromboembolism (CPE) is invasive. PerAIDE-based perfusion defects correlated with disease severity to aid CPE treatment assessment. CTEPH demonstrates severe perfusion defects, while CTEPD displays predominantly reduced perfusion. PerAIDE employs a U-Net-based adaptive threshold method, which achieves close agreement with, and much faster processing than, manual evaluation.
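
To make the three-pattern quantification concrete, the sketch below classifies lung voxels on a PBV map as defective, reduced, or normal using fixed thresholds and reports each pattern's volume fraction. The thresholds here are arbitrary assumptions; PerAIDE itself uses a U-Net-based adaptive threshold rather than fixed cutoffs.

```python
# Illustrative fixed-threshold perfusion-pattern scoring (not PerAIDE's method).
import numpy as np

def perfusion_scores(pbv: np.ndarray, lung_mask: np.ndarray,
                     defect_thr: float = 10.0, reduced_thr: float = 30.0):
    """pbv: perfusion blood volume map; lung_mask: boolean lung segmentation."""
    vals = pbv[lung_mask]
    n = vals.size
    defect = (vals < defect_thr).sum() / n
    reduced = ((vals >= defect_thr) & (vals < reduced_thr)).sum() / n
    normal = 1.0 - defect - reduced
    return {"defect": defect, "reduced": reduced, "normal": normal}

pbv = np.random.gamma(shape=2.0, scale=15.0, size=(64, 64, 64))  # dummy map
mask = np.ones_like(pbv, dtype=bool)
print(perfusion_scores(pbv, mask))
```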

Piras V, Bonatti AF, De Maria C, Cignoni P, Banterle F

PubMed paper · Aug 18, 2025
Breast cancer is the leading cause of cancer death in women worldwide, emphasizing the need for prevention and early detection. Mammography screening plays a crucial role in secondary prevention, but large datasets of referred mammograms from hospital databases are hard to access due to privacy concerns, and publicly available datasets are often unreliable and unbalanced. We propose a novel workflow using a statistical generative model based on generative adversarial networks to generate high-resolution synthetic mammograms. Utilizing a unique 2D parametric model of the compressed breast in craniocaudal projection and image-to-image translation techniques, our approach allows full and precise control over breast features and the generation of both normal and tumor cases. Quality assessment was conducted through visual analysis and statistical analysis based on the first five statistical moments. Additionally, a questionnaire was administered to 45 medical experts (radiologists and radiology residents). The results showed that the features of the real mammograms were accurately replicated in the synthetic ones, the image statistics overall correspond reasonably well, and the two groups of images were statistically indistinguishable in almost all cases according to the experts. The proposed workflow generates realistic synthetic mammograms with fine-tuned features. Synthetic mammograms are powerful tools that can create new or balance existing datasets, allowing for the training of machine learning and deep learning algorithms. These algorithms can then assist radiologists in tasks like classification and segmentation, improving diagnostic performance. The code and dataset are available at: https://github.com/cnr-isti-vclab/CC-Mammograms-Generation_GUI.
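
A small sketch of the moment-based comparison mentioned above: real and synthetic images are compared via their first five statistical moments. The abstract does not state which moment definitions were used; mean, variance, skewness, kurtosis, and the fifth central moment via SciPy are one reasonable choice.

```python
# Sketch of a first-five-moments comparison (moment definitions assumed).
import numpy as np
from scipy import stats

def first_five_moments(img: np.ndarray) -> np.ndarray:
    x = img.ravel().astype(np.float64)
    return np.array([
        x.mean(),                  # 1st: mean
        x.var(),                   # 2nd: variance
        stats.skew(x),             # 3rd: skewness
        stats.kurtosis(x),         # 4th: (excess) kurtosis
        stats.moment(x, moment=5), # 5th central moment
    ])

real = np.random.rand(256, 256)        # stand-ins for real/synthetic images
synthetic = np.random.rand(256, 256)
print(np.abs(first_five_moments(real) - first_five_moments(synthetic)))
```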

Chen K, Zhang W, Deng Z, Zhou Y, Zhao J

PubMed paper · Aug 18, 2025
Obtaining multiple CT scans from the same patient is required in many clinical scenarios, such as lung nodule screening and image-guided radiation therapy. Repeated scans would expose patients to higher radiation dose and increase the risk of cancer. In this study, we aim to achieve ultra-low-dose imaging for subsequent scans by collecting an extremely undersampled sinogram via regional few-view scanning, and to preserve image quality by using the preceding fully sampled scan as a prior. To fully exploit prior information, we propose a two-stage framework consisting of diffusion model-based sinogram restoration and deep learning-based unrolled iterative reconstruction. Specifically, the undersampled sinogram is first restored by a conditional diffusion model with sinogram-domain prior guidance. Then, we formulate the undersampled data reconstruction problem as an optimization problem combining fidelity terms for both undersampled and restored data, along with a regularization term based on an image-domain prior. Next, we propose the Prior-aided Alternate Iterative NeTwork (PAINT) to solve the optimization problem. PAINT alternately updates the undersampled or restored data fidelity term, and unrolls the iterations to integrate neural network-based prior regularization. In simulated data experiments with a 112 mm field of view, our proposed framework achieved superior performance in terms of CT value accuracy and image detail preservation. Clinical data experiments also demonstrated that our proposed framework outperformed the comparison methods in artifact reduction and structure recovery.
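
A toy unrolled alternation in the spirit of the optimization described above: alternate gradient steps on the undersampled-data and restored-data fidelity terms, with a learned prior applied between them. The operators here are random matrices standing in for a CT forward projector, and the prior is a no-op; this is a sketch of the alternation pattern, not PAINT itself.

```python
# Toy alternating unrolled iteration; operators and prior are stand-ins.
import torch

def paint_like_unroll(A_us, y_us, A_full, y_rest, prior_net,
                      n_iters: int = 10, step: float = 0.1):
    x = torch.zeros(A_us.shape[1])
    for k in range(n_iters):
        if k % 2 == 0:   # fidelity step on the measured undersampled views
            grad = A_us.T @ (A_us @ x - y_us)
        else:            # fidelity step on the diffusion-restored sinogram
            grad = A_full.T @ (A_full @ x - y_rest)
        x = x - step * grad
        x = prior_net(x)   # image-domain prior (a learned network in PAINT)
    return x

# Tiny runnable example with random linear operators and an identity prior.
torch.manual_seed(0)
A_full = torch.randn(64, 32) / 8
A_us = A_full[::4]                    # keep every 4th row: "few-view" sampling
x_true = torch.randn(32)
x_hat = paint_like_unroll(A_us, A_us @ x_true, A_full, A_full @ x_true,
                          prior_net=lambda x: x)
print(torch.norm(x_hat - x_true))     # residual shrinks with more iterations
```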

Orsmaa L, Saukkoriipi M, Kangas J, Rasouli N, Järnstedt J, Mehtonen H, Sahlsten J, Jaskari J, Kaski K, Raisamo R

PubMed paper · Aug 18, 2025
Artificial intelligence (AI) achieves high-quality annotations of radiological images, yet often lacks the robustness required in clinical practice. Interactive annotation starts with an AI-generated delineation, allowing radiologists to refine it with feedback, potentially improving precision and reliability. These techniques have been explored in two-dimensional desktop environments, but are not validated by radiologists or integrated with immersive visualization technologies. We used a Virtual Reality (VR) system to determine whether (1) the annotation quality improves when radiologists can edit the AI annotation and (2) whether the extra work done by editing is worthwhile. We evaluated the clinical feasibility of an interactive VR approach to annotate mandibular and mental foramina on segmented 3D mandibular models. Three experienced dentomaxillofacial radiologists reviewed AI-generated annotations and, when needed, refined them at the voxel level in 3D space through click-based interactions until clinical standards were met. Our results indicate that integrating expert feedback within an immersive VR environment enhances annotation accuracy, improves clinical usability, and offers valuable insights for developing medical image analysis systems incorporating radiologist input. This study is the first to compare the quality of original and interactive AI annotation and to use radiologists' opinions as the measure. More research is needed for generalization.
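
A minimal sketch of the voxel-level, click-based refinement this study describes: the radiologist clicks a 3D point and a small spherical brush adds or removes the foramen label around it. The spherical brush shape and radius are assumptions for illustration.

```python
# Sketch of click-based voxel editing on a 3D label volume (assumed brush model).
import numpy as np

def apply_click(labels: np.ndarray, click_zyx: tuple, radius: int = 3,
                label: int = 1, erase: bool = False) -> np.ndarray:
    """Set (or clear) `label` within a sphere around the clicked voxel."""
    z, y, x = np.ogrid[:labels.shape[0], :labels.shape[1], :labels.shape[2]]
    cz, cy, cx = click_zyx
    brush = (z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2
    labels[brush] = 0 if erase else label
    return labels

vol = np.zeros((32, 32, 32), dtype=np.uint8)   # AI annotation would go here
vol = apply_click(vol, (16, 16, 16))           # radiologist adds a missed region
print(vol.sum())                                # number of labeled voxels
```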