Page 10 of 1331322 results

TractoTransformer: Diffusion MRI Streamline Tractography using CNN and Transformer Networks

Itzik Waizman, Yakov Gusakov, Itay Benou, Tammy Riklin Raviv

arxiv logopreprintSep 19 2025
White matter tractography is an advanced neuroimaging technique that reconstructs the 3D white matter pathways of the brain from diffusion MRI data. It can be framed as a pathfinding problem aiming to infer neural fiber trajectories from noisy and ambiguous measurements, facing challenges such as crossing, merging, and fanning white-matter configurations. In this paper, we propose a novel tractography method that leverages Transformers to model the sequential nature of white matter streamlines, enabling the prediction of fiber directions by integrating both the trajectory context and current diffusion MRI measurements. To incorporate spatial information, we utilize CNNs that extract microstructural features from local neighborhoods around each voxel. By combining these complementary sources of information, our approach improves the precision and completeness of neural pathway mapping compared to traditional tractography models. We evaluate our method with the Tractometer toolkit, achieving competitive performance against state-of-the-art approaches, and present qualitative results on the TractoInferno dataset, demonstrating strong generalization to real-world data.
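The sequential tracking loop the abstract frames as a pathfinding problem can be sketched in a few lines: a streamline grows step by step, with a learned model predicting each next unit direction from the trajectory so far. This is a toy illustration, not the authors' implementation; `predict_direction` is a hypothetical stand-in for the paper's CNN+Transformer predictor, and the step size and angle threshold are illustrative defaults.

```python
import numpy as np

def track_streamline(seed, predict_direction, step_size=0.5,
                     max_steps=100, angle_thresh_deg=60.0):
    """Grow a streamline by iterated direction prediction.

    `predict_direction(points)` maps the trajectory-so-far (an (N, 3) array)
    to the next direction; a real tracker would also condition on local
    diffusion MRI features. Tracking stops on implausibly sharp turns.
    """
    points = [np.asarray(seed, dtype=float)]
    prev_dir = None
    for _ in range(max_steps):
        d = predict_direction(np.stack(points))
        d = d / (np.linalg.norm(d) + 1e-8)  # enforce unit step direction
        if prev_dir is not None and float(np.dot(d, prev_dir)) < np.cos(np.deg2rad(angle_thresh_deg)):
            break  # sharp turn: terminate this streamline
        points.append(points[-1] + step_size * d)
        prev_dir = d
    return np.stack(points)

# Toy predictor that always steps along +x, just to exercise the loop.
straight = track_streamline([0.0, 0.0, 0.0],
                            lambda pts: np.array([1.0, 0.0, 0.0]),
                            step_size=0.5, max_steps=10)
```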

Hybrid-MedNet: a hybrid CNN-transformer network with multi-dimensional feature fusion for medical image segmentation.

Memon Y, Zeng F

pubmed logopapersSep 19 2025
Twin-to-Twin Transfusion Syndrome (TTTS) is a complex prenatal condition in which monochorionic twins experience an imbalance in blood flow due to abnormal vascular connections in the shared placenta. Fetoscopic Laser Photocoagulation (FLP) is the first-line treatment for TTTS, aimed at coagulating these abnormal connections. However, the procedure is complicated by a limited field of view, occlusions, poor-quality endoscopic images, and distortions caused by artifacts. To optimize the visualization of placental vessels during surgical procedures, we propose Hybrid-MedNet, a novel hybrid CNN-transformer network that incorporates multi-dimensional deep feature learning techniques. The network introduces a BiPath Tokenization module that enhances vessel boundary detection by capturing both channel dependencies and spatial features through parallel attention mechanisms. A Context-Aware Transformer block addresses the weak inductive bias problem in traditional transformers while preserving spatial relationships crucial for accurate vessel identification in distorted fetoscopic images. Furthermore, we develop a Multi-Scale Trifusion Module that integrates multi-dimensional features to capture rich vascular representations from the encoder and facilitate precise vessel information transfer to the decoder for improved segmentation accuracy. Experimental results show that our approach achieves a Dice score of 95.40% on fetoscopic images, outperforming 10 state-of-the-art segmentation methods. The consistent superior performance across four segmentation tasks and ten distinct datasets confirms the robustness and effectiveness of our method for diverse and complex medical imaging applications.

AI-driven innovations for dental implant treatment planning: A systematic review.

Zaww K, Abbas H, Vanegas Sáenz JR, Hong G

pubmed logopapersSep 19 2025
This systematic review evaluates the effectiveness of artificial intelligence (AI) models in dental implant treatment planning, focusing on: 1) identification, detection, and segmentation of anatomical structures; 2) technical assistance during treatment planning; and 3) additional relevant applications. A literature search of PubMed/MEDLINE, Scopus, and Web of Science was conducted for studies published in English until July 31, 2024. The included studies explored AI applications in implant treatment planning, excluding expert opinions, guidelines, and protocols. Three reviewers independently assessed study quality using the Joanna Briggs Institute (JBI) Critical Appraisal Checklist for Quasi-Experimental Studies, resolving disagreements by consensus. Of the 28 included studies, four were high, four were medium, and 20 were low quality according to the JBI scale. Eighteen studies on anatomical segmentation have demonstrated AI models with accuracy rates ranging from 66.4% to 99.1%. Eight studies examined AI's role in technical assistance for surgical planning, demonstrating its potential in predicting jawbone mineral density, optimizing drilling protocols, and classifying plans for maxillary sinus augmentation. One study indicated a learning curve for AI in implant planning, recommending at least 50 images for over 70% predictive accuracy. Another study reported 83% accuracy in localizing stent markers for implant sites, suggesting additional imaging planes to address a 17% miss rate and 2.8% false positives. AI models exhibit potential for automating dental implant planning with high accuracy in anatomical segmentation and insightful technical assistance. However, further well-designed studies with standardized evaluation parameters are required for pragmatic integration into clinical settings.

Transplant-Ready? Evaluating AI Lung Segmentation Models in Candidates with Severe Lung Disease

Jisoo Lee, Michael R. Harowicz, Yuwen Chen, Hanxue Gu, Isaac S. Alderete, Lin Li, Maciej A. Mazurowski, Matthew G. Hartwig

arxiv logopreprintSep 18 2025
This study evaluates publicly available deep-learning-based lung segmentation models in transplant-eligible patients to determine their performance across disease severity levels, pathology categories, and lung sides, and to identify limitations impacting their use in preoperative planning for lung transplantation. This retrospective study included 32 patients who underwent chest CT scans at Duke University Health System between 2017 and 2019 (a total of 3,645 2D axial slices). Patients with standard axial CT scans were selected based on the presence of two or more lung pathologies of varying severity. Lung segmentation was performed using three previously developed deep learning models: Unet-R231, TotalSegmentator, and MedSAM. Performance was assessed using quantitative metrics (volumetric similarity, Dice similarity coefficient, Hausdorff distance) and a qualitative measure (four-point clinical acceptability scale). Unet-R231 consistently outperformed TotalSegmentator and MedSAM overall and across severity levels and pathology categories (p<0.05). All models showed significant performance declines from mild to moderate-to-severe cases, particularly in volumetric similarity (p<0.05), without significant differences among lung sides or pathology types. Unet-R231 provided the most accurate automated lung segmentation among the evaluated models, with TotalSegmentator a close second; however, performance declined significantly in moderate-to-severe cases, emphasizing the need for specialized model fine-tuning in severe pathology contexts.
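Two of the quantitative metrics used above have simple closed forms for binary masks. A minimal sketch (not the study's evaluation code) of the Dice similarity coefficient and volumetric similarity:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def volumetric_similarity(a, b):
    """VS = 1 - |V_a - V_b| / (V_a + V_b): agreement in total volume,
    insensitive to where the masks overlap."""
    va, vb = int(a.astype(bool).sum()), int(b.astype(bool).sum())
    denom = va + vb
    return 1.0 - abs(va - vb) / denom if denom else 1.0
```

Note that two masks of equal size have volumetric similarity 1.0 even with partial overlap, which is why the study reports both metrics.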

Fully Automated Image-Based Multiplexing of Serial PET/CT Imaging for Facilitating Comprehensive Disease Phenotyping.

Shiyam Sundar LK, Gutschmayer S, Pires M, Ferrara D, Nguyen T, Abdelhafez YG, Spencer B, Cherry SR, Badawi RD, Kersting D, Fendler WP, Kim MS, Lassen ML, Hasbak P, Schmidt F, Linder P, Mu X, Jiang Z, Abenavoli EM, Sciagrà R, Frille A, Wirtz H, Hesse S, Sabri O, Bailey D, Chan D, Callahan J, Hicks RJ, Beyer T

pubmed logopapersSep 18 2025
Combined PET/CT imaging provides critical insights into both anatomic and molecular processes, yet traditional single-tracer approaches limit multidimensional disease phenotyping; to address this, we developed the PET Unified Multitracer Alignment (PUMA) framework-an open-source, postprocessing tool that multiplexes serial PET/CT scans for comprehensive voxelwise tissue characterization. <b>Methods:</b> PUMA utilizes artificial intelligence-based CT segmentation from multiorgan objective segmentation to generate multilabel maps of 24 body regions, guiding a 2-step registration: affine alignment followed by symmetric diffeomorphic registration. Tracer images are then normalized and assigned to red-green-blue channels for simultaneous visualization of up to 3 tracers. The framework was evaluated on longitudinal PET/CT scans from 114 subjects across multiple centers and vendors. Rigid, affine, and deformable registration methods were compared for optimal coregistration. Performance was assessed using the Dice similarity coefficient for organ alignment and absolute percentage differences in organ intensity and tumor SUV<sub>mean</sub>. <b>Results:</b> Deformable registration consistently achieved superior alignment, with Dice similarity coefficient values exceeding 0.90 in 60% of organs while maintaining organ intensity differences below 3%; similarly, SUV<sub>mean</sub> differences for tumors were minimal at 1.6% ± 0.9%, confirming that PUMA preserves quantitative PET data while enabling robust spatial multiplexing. <b>Conclusion:</b> PUMA provides a vendor-independent solution for postacquisition multiplexing of serial PET/CT images, integrating complementary tracer data voxelwise into a composite image without modifying clinical protocols. This enhances multidimensional disease phenotyping and supports better diagnostic and therapeutic decisions using serial multitracer PET/CT imaging.
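PUMA's final multiplexing step, normalizing each co-registered tracer volume and assigning it to one red-green-blue channel, is compact enough to illustrate directly. This is a hedged sketch of the idea, not the released PUMA code; the function name and the min-max normalization choice are assumptions for illustration:

```python
import numpy as np

def multiplex_rgb(vol_r, vol_g, vol_b):
    """Stack three co-registered tracer volumes into an RGB composite,
    one channel per tracer, after per-volume min-max normalization to [0, 1]."""
    def norm(v):
        v = v.astype(float)
        lo, hi = v.min(), v.max()
        return (v - lo) / (hi - lo) if hi > lo else np.zeros_like(v)
    return np.stack([norm(vol_r), norm(vol_g), norm(vol_b)], axis=-1)
```

Each voxel's color then encodes the relative uptake of the three tracers at that location, which is what makes voxelwise multi-tracer phenotyping readable in a single image.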

No Modality Left Behind: Adapting to Missing Modalities via Knowledge Distillation for Brain Tumor Segmentation

Shenghao Zhu, Yifei Chen, Weihong Chen, Shuo Jiang, Guanyu Zhou, Yuanhan Wang, Feiwei Qin, Changmiao Wang, Qiyuan Tian

arxiv logopreprintSep 18 2025
Accurate brain tumor segmentation is essential for preoperative evaluation and personalized treatment. Multi-modal MRI is widely used due to its ability to capture complementary tumor features across different sequences. However, in clinical practice, missing modalities are common, limiting the robustness and generalizability of existing deep learning methods that rely on complete inputs, especially under non-dominant modality combinations. To address this, we propose AdaMM, a multi-modal brain tumor segmentation framework tailored for missing-modality scenarios, centered on knowledge distillation and composed of three synergistic modules. The Graph-guided Adaptive Refinement Module explicitly models semantic associations between generalizable and modality-specific features, enhancing adaptability to modality absence. The Bi-Bottleneck Distillation Module transfers structural and textural knowledge from teacher to student models via global style matching and adversarial feature alignment. The Lesion-Presence-Guided Reliability Module predicts prior probabilities of lesion types through an auxiliary classification task, effectively suppressing false positives under incomplete inputs. Extensive experiments on the BraTS 2018 and 2024 datasets demonstrate that AdaMM consistently outperforms existing methods, exhibiting superior segmentation accuracy and robustness, particularly in single-modality and weak-modality configurations. In addition, we conduct a systematic evaluation of six categories of missing-modality strategies, confirming the superiority of knowledge distillation and offering practical guidance for method selection and future research. Our source code is available at https://github.com/Quanato607/AdaMM.
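The teacher-student transfer at the core of AdaMM builds on standard knowledge distillation: matching temperature-softened teacher and student output distributions. A generic sketch of that baseline objective follows; AdaMM's actual Bi-Bottleneck losses (global style matching and adversarial feature alignment) are richer and are not reproduced here.

```python
import numpy as np

def softmax(x, t=1.0):
    """Temperature-scaled softmax over the last axis (numerically stable)."""
    z = x / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Mean KL divergence KL(teacher || student) on softened distributions.
    The full-modality teacher supervises the missing-modality student."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float((p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1).mean())
```

The temperature exposes the teacher's "dark knowledge" (relative probabilities of non-argmax classes), which is exactly the structural information a student deprived of some input modalities cannot recover on its own.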

Visionerves: Automatic and Reproducible Hybrid AI for Peripheral Nervous System Recognition Applied to Endometriosis Cases

Giammarco La Barbera, Enzo Bonnot, Thomas Isla, Juan Pablo de la Plata, Joy-Rose Dunoyer de Segonzac, Jennifer Attali, Cécile Lozach, Alexandre Bellucci, Louis Marcellin, Laure Fournier, Sabine Sarnacki, Pietro Gori, Isabelle Bloch

arxiv logopreprintSep 18 2025
Endometriosis often leads to chronic pelvic pain and possible nerve involvement, yet imaging the peripheral nerves remains a challenge. We introduce Visionerves, a novel hybrid AI framework for peripheral nervous system recognition from multi-gradient DWI and morphological MRI data. Unlike conventional tractography, Visionerves encodes anatomical knowledge through fuzzy spatial relationships, removing the need for manual ROI selection. The pipeline comprises two phases: (A) automatic segmentation of anatomical structures using a deep learning model, and (B) tractography and nerve recognition by symbolic spatial reasoning. Applied to the lumbosacral plexus in 10 women with (confirmed or suspected) endometriosis, Visionerves demonstrated substantial improvements over standard tractography, with Dice score improvements of up to 25% and spatial errors reduced to less than 5 mm. This automatic and reproducible approach enables detailed nerve analysis and paves the way for non-invasive diagnosis of endometriosis-related neuropathy, as well as other conditions with nerve involvement.

Bridging the quality gap: Robust colon wall segmentation in noisy transabdominal ultrasound.

Gago L, González MAF, Engelmann J, Remeseiro B, Igual L

pubmed logopapersSep 18 2025
Colon wall segmentation in transabdominal ultrasound is challenging due to variations in image quality, speckle noise, and ambiguous boundaries. Existing methods struggle with low-quality images due to their inability to adapt to varying noise levels, poor boundary definition, and reduced contrast in ultrasound imaging, resulting in inconsistent segmentation performance. We present a novel quality-aware segmentation framework that simultaneously predicts image quality and adapts the segmentation process accordingly. Our approach uses a U-Net architecture with a ConvNeXt encoder backbone, enhanced with a parallel quality prediction branch that serves as a regularization mechanism. Our model learns robust features by explicitly modeling image quality during training. We evaluate our method on the C-TRUS dataset and demonstrate superior performance compared to state-of-the-art approaches, particularly on challenging low-quality images. Our method achieves Dice scores of 0.7780, 0.7025, and 0.5970 for high, medium, and low-quality images, respectively. The proposed quality-aware segmentation framework represents a significant step toward clinically viable automated colon wall segmentation systems.
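The quality-aware idea, a segmentation loss plus a parallel quality-prediction term acting as a regularizer on the shared encoder, can be written as a simple multi-task objective. The weighting `lam` and the use of mean-squared error for the quality head are assumptions for illustration; the abstract does not specify the exact loss.

```python
import numpy as np

def quality_aware_loss(seg_prob, seg_target, quality_pred, quality_target, lam=0.1):
    """Joint objective: pixelwise binary cross-entropy for segmentation plus a
    quality-regression penalty. Training the quality head forces the shared
    encoder to represent image quality explicitly, regularizing segmentation."""
    eps = 1e-7
    p = np.clip(np.asarray(seg_prob, dtype=float), eps, 1 - eps)
    t = np.asarray(seg_target, dtype=float)
    bce = float(-(t * np.log(p) + (1 - t) * np.log(1 - p)).mean())
    quality_mse = float((quality_pred - quality_target) ** 2)
    return bce + lam * quality_mse
```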

Integrating artificial intelligence with Gamma Knife radiosurgery in treating meningiomas and schwannomas: a review.

Alhosanie TN, Hammo B, Klaib AF, Alshudifat A

pubmed logopapersSep 18 2025
Meningiomas and schwannomas are benign tumors that affect the central nervous system, comprising up to one-third of intracranial neoplasms. Gamma Knife radiosurgery (GKRS), or stereotactic radiosurgery (SRS), is a form of radiation therapy. Although referred to as "surgery," GKRS does not involve incisions. The GK medical device effectively utilizes highly focused gamma rays to treat lesions or tumors, primarily in the brain. In radiation oncology, machine learning (ML) has been used in various aspects, including outcome prediction, quality control, treatment planning, and image segmentation. This review will showcase the advantages of integrating artificial intelligence with Gamma Knife technology in treating schwannomas and meningiomas. This review adheres to PRISMA guidelines. We searched the PubMed, Scopus, and IEEE databases to identify studies published between 2021 and March 2025 that met our inclusion and exclusion criteria. The focus was on AI algorithms applied to patients with vestibular schwannoma and meningioma treated with GKRS. Two reviewers participated in the data extraction and quality assessment process. A total of nine studies were reviewed in this analysis. One distinguished deep learning (DL) model is a dual-pathway convolutional neural network (CNN) that integrates T1-weighted (T1W) and T2-weighted (T2W) MRI scans. This model was tested on 861 patients who underwent GKRS, achieving a Dice Similarity Coefficient (DSC) of 0.90. ML-based radiomics models have also demonstrated that certain radiomic features can predict the response of vestibular schwannomas and meningiomas to radiosurgery. Among these, the neural network model exhibited the best performance. AI models were also employed to predict complications following GKRS, such as peritumoral edema. A Random Survival Forest (RSF) model was developed using clinical, semantic, and radiomics variables, achieving C-index scores of 0.861 and 0.780. This model enables the classification of patients into high-risk and low-risk categories for developing post-GKRS edema. AI and ML models show great potential in tumor segmentation, volumetric assessment, and predicting treatment outcomes for vestibular schwannomas and meningiomas treated with GKRS. However, their successful clinical implementation relies on overcoming challenges related to external validation, standardization, and computational demands. Future research should focus on large-scale, multi-institutional validation studies, integrating multimodal data, and developing cost-effective strategies for deploying AI technologies.
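The C-index reported for the RSF edema model measures how often the model ranks pairs of patients correctly by event time. A minimal sketch of Harrell's concordance index for right-censored data (not the review's or the original study's code; ties in risk count as 0.5, as in the standard definition):

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index: among comparable pairs (the earlier time is an
    observed event), the fraction where the earlier-event subject was
    assigned the higher risk score."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:  # i's event precedes j's follow-up
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else 0.5
```

A C-index of 0.5 is chance-level ranking and 1.0 is perfect ranking, so the reported 0.861 indicates strong discrimination between patients who do and do not develop post-GKRS edema.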

Deep Learning for Automated Measures of SUV and Molecular Tumor Volume in [<sup>68</sup>Ga]PSMA-11 or [<sup>18</sup>F]DCFPyL, [<sup>18</sup>F]FDG, and [<sup>177</sup>Lu]Lu-PSMA-617 Imaging with Global Threshold Regional Consensus Network.

Jackson P, Buteau JP, McIntosh L, Sun Y, Kashyap R, Casanueva S, Ravi Kumar AS, Sandhu S, Azad AA, Alipour R, Saghebi J, Kong G, Jewell K, Eifer M, Bollampally N, Hofman MS

pubmed logopapersSep 18 2025
Metastatic castration-resistant prostate cancer has a high rate of mortality with a limited number of effective treatments after hormone therapy. Radiopharmaceutical therapy with [<sup>177</sup>Lu]Lu-prostate-specific membrane antigen-617 (LuPSMA) is one treatment option; however, response varies and is partly predicted by PSMA expression and metabolic activity, assessed on [<sup>68</sup>Ga]PSMA-11 or [<sup>18</sup>F]DCFPyL and [<sup>18</sup>F]FDG PET, respectively. Automated methods to measure these on PET imaging have previously yielded modest accuracy. Refining computational workflows and standardizing approaches may improve patient selection and prognostication for LuPSMA therapy. <b>Methods:</b> PET/CT and quantitative SPECT/CT images from an institutional cohort of patients staged for LuPSMA therapy were annotated for total disease burden. In total, 676 [<sup>68</sup>Ga]PSMA-11 or [<sup>18</sup>F]DCFPyL PET, 390 [<sup>18</sup>F]FDG PET, and 477 LuPSMA SPECT images were used for development of automated workflow and tested on 56 cases with externally referred PET/CT staging. A segmentation framework, the Global Threshold Regional Consensus Network, was developed based on nnU-Net, with processing refinements to improve boundary definition and overall label accuracy. <b>Results:</b> Using the model to contour disease extent, the mean volumetric Dice similarity coefficient for [<sup>68</sup>Ga]PSMA-11 or [<sup>18</sup>F]DCFPyL PET was 0.94, for [<sup>18</sup>F]FDG PET was 0.84, and for LuPSMA SPECT was 0.97. On external test cases, Dice accuracy was 0.95 and 0.84 on PSMA and FDG PET, respectively. The refined models yielded consistent improvements compared with nnU-Net, with an increase of 3%-5% in Dice accuracy and 10%-17% in surface agreement. 
Quantitative biomarkers were compared with a human-defined ground truth using the Pearson coefficient, with scores for [<sup>68</sup>Ga]PSMA-11 or [<sup>18</sup>F]DCFPyL, [<sup>18</sup>F]FDG, and LuPSMA, respectively, of 0.98, 0.94, and 0.99 for disease volume; 0.98, 0.88, and 0.99 for SUV<sub>mean</sub>; 0.96, 0.91, and 0.99 for SUV<sub>max</sub>; and 0.97, 0.96, and 0.99 for volume intensity product. <b>Conclusion:</b> Delineation of disease extent and tracer avidity can be performed with a high degree of accuracy using automated deep learning methods. By incorporating threshold-based postprocessing, the tools can closely match the output of manual workflows. Pretrained models and scripts to adapt to institutional data are provided for open use.
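The quantitative biomarkers compared above (disease volume, SUV<sub>mean</sub>, SUV<sub>max</sub>, and volume intensity product) all derive directly from an SUV image and a binary disease mask. A minimal sketch of that extraction step, assuming the volume intensity product is volume × SUV<sub>mean</sub>; the function name and output layout are illustrative, not from the paper:

```python
import numpy as np

def suv_biomarkers(suv_volume, mask, voxel_volume_ml=1.0):
    """Compute mask-level PET biomarkers from an SUV image and a binary mask."""
    vals = suv_volume[mask.astype(bool)]  # SUV values inside the disease mask
    if vals.size == 0:
        return {"volume_ml": 0.0, "suv_mean": 0.0, "suv_max": 0.0, "vip": 0.0}
    volume = float(vals.size * voxel_volume_ml)  # molecular tumor volume
    suv_mean = float(vals.mean())
    return {"volume_ml": volume,
            "suv_mean": suv_mean,
            "suv_max": float(vals.max()),
            "vip": volume * suv_mean}  # volume intensity product
```

Because every biomarker is a deterministic function of the mask, the Pearson correlations reported in the paper reduce to how faithfully the automated contours reproduce the human-defined ones.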
