Page 86 of 134 (1332 results)

Advances and Integrations of Computer-Assisted Planning, Artificial Intelligence, and Predictive Modeling Tools for Laser Interstitial Thermal Therapy in Neurosurgical Oncology.

Warman A, Moorthy D, Gensler R, Horowitz MA, Ellis J, Tomasovic L, Srinivasan E, Ahmed K, Azad TD, Anderson WS, Rincon-Torroella J, Bettegowda C

PubMed · Jun 24 2025
Laser interstitial thermal therapy (LiTT) has emerged as a minimally invasive, MRI-guided treatment for brain tumors that are otherwise considered inoperable because of their location or the patient's poor surgical candidacy. By directing thermal energy at neoplastic lesions while minimizing damage to surrounding healthy tissue, LiTT offers promising therapeutic outcomes for both newly diagnosed and recurrent tumors. However, challenges such as postprocedural edema and unpredictable real-time heat diffusion near blood vessels and ventricles underscore the need for improved planning and monitoring. Incorporating artificial intelligence (AI) presents a viable solution to many of these obstacles. AI has already demonstrated effectiveness in optimizing surgical trajectories, predicting seizure-free outcomes in epilepsy cases, and generating heat distribution maps to guide real-time ablation. This technology could be similarly deployed in neurosurgical oncology to identify patients most likely to benefit from LiTT, refine trajectory planning, and predict tissue-specific heat responses. Despite promising initial studies, further research is needed to establish the robust data sets and clinical trials necessary to develop and validate AI-driven LiTT protocols. Such advancements have the potential to bolster LiTT's efficacy, minimize complications, and ultimately transform the neurosurgical management of primary and metastatic brain tumors.

Prompt learning with bounding box constraints for medical image segmentation.

Gaillochet M, Noori M, Dastani S, Desrosiers C, Lombaert H

PubMed · Jun 24 2025
Pixel-wise annotations are notoriously laborious and costly to obtain in the medical domain. To mitigate this burden, weakly supervised approaches based on bounding box annotations, which are much easier to acquire, offer a practical alternative. Vision foundation models have recently shown noteworthy segmentation performance when provided with prompts such as points or bounding boxes. Prompt learning exploits these models by adapting them to downstream tasks and automating segmentation, thereby reducing user intervention. However, existing prompt learning approaches depend on fully annotated segmentation masks. This paper proposes a novel framework that combines the representational power of foundation models with the annotation efficiency of weakly supervised segmentation. More specifically, our approach automates prompt generation for foundation models using only bounding box annotations. Our proposed optimization scheme integrates multiple constraints derived from box annotations with pseudo-labels generated by the prompted foundation model. Extensive experiments across multi-modal datasets reveal that our weakly supervised method achieves an average Dice score of 84.90% in a limited data setting, outperforming existing fully-supervised and weakly-supervised approaches. The code will be available upon acceptance.
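The abstract above combines constraints derived from box annotations with pseudo-labels from a prompted foundation model. As a loose illustration only (hypothetical helper names, not the paper's actual optimization scheme), here is a minimal NumPy sketch of two signals a bounding box provides: suppressing predicted foreground outside the box, and measuring how far a pseudo-label falls short of a "tight" box that should touch the object on every side:

```python
import numpy as np

def apply_box_constraint(prob_map, box):
    """Zero out predicted foreground outside a bounding box.

    prob_map: (H, W) array of foreground probabilities.
    box: (x_min, y_min, x_max, y_max) in pixel coordinates (inclusive).
    """
    x0, y0, x1, y1 = box
    constrained = np.zeros_like(prob_map)
    constrained[y0:y1 + 1, x0:x1 + 1] = prob_map[y0:y1 + 1, x0:x1 + 1]
    return constrained

def box_tightness_violation(binary_mask, box):
    """Fraction of rows/columns inside the box containing no foreground.

    A tight box touches the object on every side, so each row and column
    crossing the box is expected to intersect the mask at least once.
    """
    x0, y0, x1, y1 = box
    crop = binary_mask[y0:y1 + 1, x0:x1 + 1]
    empty_rows = np.sum(crop.sum(axis=1) == 0)
    empty_cols = np.sum(crop.sum(axis=0) == 0)
    return (empty_rows + empty_cols) / (crop.shape[0] + crop.shape[1])
```

In a real weakly supervised pipeline, quantities like these would enter a differentiable loss during training rather than being applied as a hard post-hoc mask.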

Refining cardiac segmentation from MRI volumes with CT labels for fine anatomy of the ascending aorta.

Oda H, Wakamori M, Akita T

PubMed · Jun 24 2025
Magnetic resonance imaging (MRI) is time-consuming, which makes it challenging to capture clear images of moving organs such as the heart, including complex structures like the Valsalva sinus. This study evaluates a computed tomography (CT)-guided refinement approach for cardiac segmentation from MRI volumes, focused on preserving the detailed shape of the Valsalva sinus. Owing to the low spatial contrast around the Valsalva sinus in MRI, labels from separate CT volumes of other patients are used to refine the segmentation. Deep learning techniques are employed to obtain an initial segmentation from MRI volumes, followed by detection of the ascending aorta's proximal point. The detected proximal point is then used to select the most similar label from the CT volumes, and non-rigid registration is applied to refine the segmentation. Experiments conducted on 20 MRI volumes with labels from 20 CT volumes exhibited a slight decrease in quantitative segmentation accuracy: the CT-guided method achieved a precision of 0.908, recall of 0.746, and Dice score of 0.804 for the ascending aorta, compared with 0.903, 0.770, and 0.816, respectively, for nnU-Net alone. Although some outputs showed bulge-like structures near the Valsalva sinus, an improvement in quantitative segmentation accuracy could not be validated.
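The precision, recall, and Dice scores compared above are standard overlap metrics for binary segmentation masks. For reference, a minimal NumPy version (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Precision, recall, and Dice score for two binary masks."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    tp = np.logical_and(pred, gt).sum()          # true-positive voxels
    n_pred, n_gt = pred.sum(), gt.sum()
    precision = tp / n_pred if n_pred else 0.0   # TP / predicted positives
    recall = tp / n_gt if n_gt else 0.0          # TP / ground-truth positives
    dice = 2 * tp / (n_pred + n_gt) if (n_pred + n_gt) else 0.0
    return precision, recall, dice
```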

Ultrasound Displacement Tracking Techniques for Post-Stroke Myofascial Shear Strain Quantification.

Ashikuzzaman M, Huang J, Bonwit S, Etemadimanesh A, Ghasemi A, Debs P, Nickl R, Enslein J, Fayad LM, Raghavan P, Bell MAL

PubMed · Jun 24 2025
Ultrasound shear strain is a potential biomarker of myofascial dysfunction. However, the quality of estimated shear strains can be impacted by differences in ultrasound displacement tracking techniques, potentially altering clinical conclusions surrounding myofascial pain. This work assesses the reliability of four displacement estimation algorithms under a novel clinical hypothesis that the shear strain between muscles on a stroke-affected (paretic) shoulder with myofascial pain is lower than that on the non-paretic side of the same patient. After initial validation with simulations, four approaches were evaluated with in vivo data acquired from ten research participants with myofascial post-stroke shoulder pain: (1) Search is a common window-based method that determines displacements by searching for maximum normalized cross-correlations within windowed data, whereas (2) OVERWIND-Search, (3) SOUL-Search, and (4) L1-SOUL-Search fine-tune the Search initial estimates by optimizing cost functions comprising data and regularization terms, utilizing L1-norm-based first-order regularization, L2-norm-based first- and second-order regularization, and L1-norm-based first- and second-order regularization, respectively. SOUL-Search and L1-SOUL-Search most accurately and reliably estimate shear strain relative to our clinical hypothesis, when validated with visual inspection of ultrasound cine loops and quantitative T1ρ magnetic resonance imaging. In addition, L1-SOUL-Search produced the most reliable displacement tracking performance by generating lateral displacement images with smooth displacement gradients (measured as the mean and variance of displacement derivatives) and sharp edges (which enable distinction of shoulder muscle layers). Among the four investigated methods, L1-SOUL-Search emerged as the most suitable option to investigate myofascial pain and dysfunction, despite the drawback of slow runtimes, which can potentially be resolved with a deep learning solution. This work advances musculoskeletal health, ultrasound shear strain imaging, and related applications by establishing the foundation required to develop reliable image-based biomarkers for accurate diagnoses and treatments.
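The Search baseline described above estimates displacements by maximizing normalized cross-correlation over windowed data. A simplified 1-D sketch of that idea follows (hypothetical function names, integer shifts only; the paper's methods operate on 2-D ultrasound data and add the regularization terms summarized above):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-length windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def window_search_displacement(pre, post, center, half_win=8, max_shift=16):
    """Estimate the shift of one window by exhaustive NCC search.

    pre, post: 1-D signal lines before and after deformation.
    center: sample index of the window centre in `pre`.
    Returns the integer shift (in samples) maximizing NCC.
    """
    ref = pre[center - half_win:center + half_win + 1]
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        lo = center + s - half_win
        hi = center + s + half_win + 1
        if lo < 0 or hi > len(post):
            continue  # candidate window falls outside the signal
        score = ncc(ref, post[lo:hi])
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift
```

Repeating this search for every window yields a displacement field, whose spatial derivatives give the strain; the regularized variants (OVERWIND, SOUL, L1-SOUL) refine such initial estimates rather than use them directly.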

Systematic Review of Pituitary Gland and Pituitary Adenoma Automatic Segmentation Techniques in Magnetic Resonance Imaging

Mubaraq Yakubu, Navodini Wijethilake, Jonathan Shapey, Andrew King, Alexander Hammers

arXiv preprint · Jun 24 2025
Purpose: Accurate segmentation of both the pituitary gland and adenomas from magnetic resonance imaging (MRI) is essential for the diagnosis and treatment of pituitary adenomas. This systematic review evaluates automatic segmentation methods for improving the accuracy and efficiency of MRI-based segmentation of pituitary adenomas and the gland itself. Methods: We reviewed 34 studies that employed automatic and semi-automatic segmentation methods. We extracted and synthesized data on segmentation techniques and performance metrics (such as Dice overlap scores). Results: The majority of reviewed studies utilized deep learning approaches, with U-Net-based models being the most prevalent. Automatic methods yielded Dice scores of 0.19-89.00% for pituitary gland and 4.60-96.41% for adenoma segmentation. Semi-automatic methods reported 80.00-92.10% for pituitary gland and 75.90-88.36% for adenoma segmentation. Conclusion: Most studies did not report important characteristics such as MR field strength, patient age, and adenoma size. Automated segmentation techniques such as U-Net-based models show promise, especially for adenoma segmentation, but further improvements are needed to achieve consistently good performance in small structures like the normal pituitary gland. Continued innovation and larger, diverse datasets are likely critical to enhancing clinical applicability.

Development and validation of a SOTA-based system for biliopancreatic segmentation and station recognition system in EUS.

Zhang J, Zhang J, Chen H, Tian F, Zhang Y, Zhou Y, Jiang Z

PubMed · Jun 23 2025
Endoscopic ultrasound (EUS) is a vital tool for diagnosing biliopancreatic disease, offering detailed imaging to identify key abnormalities. Its interpretation demands expertise, which limits its accessibility for less trained practitioners. Thus, the creation of tools or systems to assist in interpreting EUS images is crucial for improving diagnostic accuracy and efficiency. This study aimed to develop an AI-assisted EUS system for accurate pancreatic and biliopancreatic duct segmentation, and to evaluate its impact on endoscopists' ability to identify biliopancreatic diseases during segmentation and anatomical localization. The EUS-AI system was designed to perform station positioning and anatomical structure segmentation. A total of 45,737 EUS images from 1852 patients were used for model training; of these, 2881 images were reserved for internal testing, and 2747 images from 208 patients were used for external validation. An additional 340 images formed a man-machine competition test set. Several newer state-of-the-art (SOTA) deep learning algorithms were also compared. For the station recognition (classification) task, the Mean Teacher algorithm achieved the highest accuracy compared with ResNet-50 and YOLOv8-CLS, averaging 95.60% (92.07%-99.12%) on the internal test set and 92.72% (88.30%-97.15%) on the external test set. For segmentation, the U-Net v2 algorithm was optimal compared with UNet++ and YOLOv8. The EUS-AI system was then constructed from the optimal model for each task, and a man-machine competition experiment was conducted. The results demonstrated that the EUS-AI system significantly outperformed mid-level endoscopists in both station recognition (p < 0.001) and pancreas and biliopancreatic duct segmentation (p < 0.001, p = 0.004). The EUS-AI system is expected to significantly shorten the learning curve of pancreatic EUS examination and enhance procedural standardization.

Intelligent Virtual Dental Implant Placement via 3D Segmentation Strategy.

Cai G, Wen B, Gong Z, Lin Y, Liu H, Zeng P, Shi M, Wang R, Chen Z

PubMed · Jun 23 2025
Virtual dental implant placement in cone-beam computed tomography (CBCT) is a prerequisite for digital implant surgery and carries clinical significance. However, manual placement is a complex process that must meet essential clinical requirements of restoration orientation, bone adaptation, and anatomical safety. This complexity makes it challenging to balance multiple considerations comprehensively and to automate the entire workflow efficiently. This study aims to achieve intelligent virtual dental implant placement through a 3-dimensional (3D) segmentation strategy. Focusing on missing mandibular first molars, we developed a segmentation module based on nnU-Net to generate the virtual implant from the edentulous region of CBCT and employed an approximation module for mathematical optimization. The generated virtual implant was integrated with the original CBCT to meet clinical requirements. A total of 190 CBCT scans from 4 centers were collected for model development and testing. The tool segmented the virtual implant with surface Dice coefficients (sDice) of 0.903 and 0.884 on the internal and external testing sets, respectively. Compared to the ground truth, the average deviations of the implant platform, implant apex, and angle were 0.850 ± 0.554 mm, 1.442 ± 0.539 mm, and 4.927 ± 3.804° on the internal testing set and 0.822 ± 0.353 mm, 1.467 ± 0.560 mm, and 5.517 ± 2.850° on the external testing set, respectively. The 3D segmentation-based artificial intelligence tool demonstrated good performance in predicting both the dimension and position of virtual implants, showing significant clinical application potential in implant planning.
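The platform, apex, and angular deviations reported above can be computed once each implant is represented by two 3-D points defining its axis. A small illustrative NumPy helper (a hypothetical sketch, not the authors' evaluation code):

```python
import numpy as np

def implant_deviation(pred_platform, pred_apex, gt_platform, gt_apex):
    """Platform/apex distances (same units as input, e.g. mm) and
    axis angle (degrees) between predicted and ground-truth implants,
    each given by its platform and apex points in 3-D."""
    pred_platform = np.asarray(pred_platform, float)
    pred_apex = np.asarray(pred_apex, float)
    gt_platform = np.asarray(gt_platform, float)
    gt_apex = np.asarray(gt_apex, float)

    d_platform = np.linalg.norm(pred_platform - gt_platform)
    d_apex = np.linalg.norm(pred_apex - gt_apex)

    # Angle between the two implant axis vectors
    v_pred = pred_apex - pred_platform
    v_gt = gt_apex - gt_platform
    cos_a = np.dot(v_pred, v_gt) / (np.linalg.norm(v_pred) * np.linalg.norm(v_gt))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return d_platform, d_apex, angle
```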

SafeClick: Error-Tolerant Interactive Segmentation of Any Medical Volumes via Hierarchical Expert Consensus

Yifan Gao, Jiaxi Sheng, Wenbin Wu, Haoyue Li, Yaoxian Dong, Chaoyang Ge, Feng Yuan, Xin Gao

arXiv preprint · Jun 23 2025
Foundation models for volumetric medical image segmentation have emerged as powerful tools in clinical workflows, enabling radiologists to delineate regions of interest through intuitive clicks. While these models demonstrate promising capabilities in segmenting previously unseen anatomical structures, their performance is strongly influenced by prompt quality. In clinical settings, radiologists often provide suboptimal prompts, which affects segmentation reliability and accuracy. To address this limitation, we present SafeClick, an error-tolerant interactive segmentation approach for medical volumes based on hierarchical expert consensus. SafeClick operates as a plug-and-play module compatible with foundation models including SAM 2 and MedSAM 2. The framework consists of two key components: a collaborative expert layer (CEL) that generates diverse feature representations through specialized transformer modules, and a consensus reasoning layer (CRL) that performs cross-referencing and adaptive integration of these features. This architecture transforms the segmentation process from a prompt-dependent operation to a robust framework capable of producing accurate results despite imperfect user inputs. Extensive experiments across 15 public datasets demonstrate that our plug-and-play approach consistently improves the performance of base foundation models, with particularly significant gains when working with imperfect prompts. The source code is available at https://github.com/yifangao112/SafeClick.

MedSeg-R: Medical Image Segmentation with Clinical Reasoning

Hao Shao, Qibin Hou

arXiv preprint · Jun 23 2025
Medical image segmentation is challenging due to overlapping anatomies with ambiguous boundaries and a severe imbalance between foreground and background classes, which particularly affects the delineation of small lesions. Existing methods, including encoder-decoder networks and prompt-driven variants of the Segment Anything Model (SAM), rely heavily on local cues or user prompts and lack integrated semantic priors, thus failing to generalize well to low-contrast or overlapping targets. To address these issues, we propose MedSeg-R, a lightweight, dual-stage framework inspired by clinical reasoning. Its cognitive stage interprets the medical report into structured semantic priors (location, texture, shape), which are fused via a transformer block. In the perceptual stage, these priors modulate the SAM backbone: spatial attention highlights likely lesion regions, dynamic convolution adapts feature filters to expected textures, and deformable sampling refines spatial support. By embedding this fine-grained guidance early, MedSeg-R disentangles inter-class confusion and amplifies minority-class cues, greatly improving sensitivity to small lesions. On challenging benchmarks, MedSeg-R produces large Dice improvements for overlapping and ambiguous structures, demonstrating plug-and-play compatibility with SAM-based systems.

Towards a comprehensive characterization of arteries and veins in retinal imaging.

Andreini P, Bonechi S

PubMed · Jun 23 2025
Retinal fundus imaging is crucial for diagnosing and monitoring eye diseases, which are often linked to systemic health conditions such as diabetes and hypertension. Current deep learning techniques often narrowly focus on segmenting retinal blood vessels, lacking a more comprehensive analysis and characterization of the retinal vascular system. This study fills this gap by proposing a novel, integrated approach that leverages multiple stages to accurately determine vessel paths and extract informative features from them. The segmentation of veins and arteries, achieved through a deep semantic segmentation network, is used by a newly designed algorithm to reconstruct individual vessel paths. The reconstruction process begins at the optic disc, identified by a localization network, and uses a recurrent neural network to predict the vessel paths at various junctions. The different stages of the proposed approach are validated both qualitatively and quantitatively, demonstrating robust performance. The proposed approach enables the extraction of critical features at the individual vessel level, such as vessel tortuosity and diameter. This work lays the foundation for a comprehensive retinal image evaluation, going beyond isolated tasks like vessel segmentation, with significant potential for clinical diagnosis.
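Among the vessel-level features mentioned above, tortuosity is commonly defined as the ratio of a vessel's arc length to its chord length. A minimal sketch under that common definition (the paper may use a different formulation):

```python
import numpy as np

def tortuosity(path):
    """Arc-chord tortuosity of a vessel centreline.

    path: (N, 2) array of ordered (x, y) points along the vessel.
    Returns arc length divided by straight-line (chord) length:
    1.0 for a perfectly straight vessel, larger when more tortuous.
    """
    path = np.asarray(path, float)
    arc = np.linalg.norm(np.diff(path, axis=0), axis=1).sum()
    chord = np.linalg.norm(path[-1] - path[0])
    return arc / chord if chord > 0 else np.inf
```

Given reconstructed vessel paths such as those produced by the pipeline described above, this ratio (together with diameter estimates) summarizes each vessel in a single clinically interpretable number.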
