
petBrain: a new pipeline for amyloid, Tau tangles and neurodegeneration quantification using PET and MRI.

Coupé P, Mansencal B, Morandat F, Morell-Ortega S, Villain N, Manjón JV, Planche V

PubMed · Sep 30 2025
Quantification of amyloid plaques (A), neurofibrillary tangles (T2), and neurodegeneration (N) using PET and MRI is critical for Alzheimer's disease (AD) diagnosis and prognosis. Existing pipelines face limitations in processing time, tracer variability handling, and multimodal integration. We developed petBrain, a novel end-to-end processing pipeline for amyloid-PET, tau-PET, and structural MRI. It leverages deep learning-based segmentation, standardized biomarker quantification (Centiloid, CenTauR, HAVAs), and simultaneous estimation of A, T2, and N biomarkers. It is implemented as a web-based application, requiring no local computational infrastructure or software expertise. petBrain provides reliable, rapid quantification with results comparable to existing pipelines for A and T2, showing strong concordance with data processed in the ADNI database. Staging and quantification of A/T2/N by petBrain showed good agreement with CSF/plasma biomarkers, clinical status and cognitive performance. petBrain represents a powerful open platform for standardized AD biomarker analysis, facilitating clinical research applications.
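The Centiloid scale mentioned above anchors tracer-specific SUVR values to a common 0-100 scale (0 = mean of young controls, 100 = mean of typical AD patients). A minimal sketch of that linear transformation, with placeholder anchor values rather than petBrain's calibrated ones:

```python
# Minimal sketch of the Centiloid linear transformation (Klunk et al., 2015).
# Anchor SUVR values are tracer- and pipeline-specific; the numbers below are
# illustrative placeholders, not the values used by petBrain.

def suvr_to_centiloid(suvr: float,
                      suvr_young_control: float = 1.0,   # hypothetical 0-CL anchor
                      suvr_typical_ad: float = 2.0) -> float:  # hypothetical 100-CL anchor
    """Map a tracer SUVR onto the 0-100 Centiloid scale by linear scaling
    between the young-control mean (0 CL) and the typical-AD mean (100 CL)."""
    return 100.0 * (suvr - suvr_young_control) / (suvr_typical_ad - suvr_young_control)

print(suvr_to_centiloid(1.4))  # -> 40.0 under the placeholder anchors
```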

Automated contouring of gross tumor volume lymph nodes in lung cancer by deep learning.

Huang Y, Yuan X, Xu L, Jian J, Gong C, Zhang Y, Zheng W

PubMed · Sep 30 2025
The precise contouring of gross tumor volume lymph nodes (GTVnd) is an essential step in clinical target volume delineation. This study proposes and evaluates a deep learning model for segmenting the GTVnd in lung cancer, one of the first investigations into automated GTVnd segmentation for this disease. Ninety computed tomography (CT) scans of patients with stage III-IV small cell lung cancer (SCLC) were collected; 75 patients were assembled into a training dataset and 15 into a testing dataset. A new segmentation model, ECENet, was constructed to enable automatic and accurate delineation of the GTVnd in lung cancer. The model integrates a contextual cue enhancement module, which enforces the consistency of the contextual cues encoded in the deepest feature, and an edge-guided feature enhancement decoder, which yields edge-aware and edge-preserving segmentation predictions. The model was quantitatively evaluated using the three-dimensional Dice Similarity Coefficient (3D DSC) and the 95th-percentile Hausdorff Distance (95HD). Additionally, treatment plans derived from the auto-contoured GTVnd were compared with established clinical plans. ECENet achieved a mean 3D DSC of 0.72 ± 0.09 and a 95HD of 6.39 ± 4.59 mm, a significant improvement over UNet (DSC 0.46 ± 0.19; 95HD 12.24 ± 13.36 mm) and nnUNet (DSC 0.52 ± 0.18; 95HD 9.92 ± 6.49 mm). Its performance fell between that of mid-level physicians (DSC 0.81 ± 0.06) and junior physicians (DSC 0.68 ± 0.10). The dosimetric analysis demonstrated excellent agreement between predicted and clinical plans, with average relative deviations of < 0.17% for PTV D2/D50/D98, < 3.5% for lung V30/V20/V10/V5/Dmean, and < 6.1% for heart V40/V30/Dmean. Furthermore, TCP (66.99% ± 0.55 vs. 66.88% ± 0.45) and NTCP (3.13% ± 1.33 vs. 3.25% ± 1.42) analyses revealed strong concordance between predicted and clinical outcomes, confirming the clinical applicability of the proposed method. The model achieved automatic delineation of the GTVnd in the thoracic region in lung cancer, making it a potential choice for automated GTVnd contouring, particularly for young radiation oncologists.
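For readers unfamiliar with the two reported metrics, a minimal NumPy/SciPy sketch of the 3D Dice similarity coefficient and the 95th-percentile Hausdorff distance on binary masks (surface extraction here uses a simple erosion-based border, not the paper's exact implementation):

```python
import numpy as np
from scipy import ndimage

def dice_3d(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    a, b = a.astype(bool), b.astype(bool)
    # Surface voxels = mask minus its erosion.
    surf_a = a & ~ndimage.binary_erosion(a)
    surf_b = b & ~ndimage.binary_erosion(b)
    # Distance from every voxel to the nearest surface voxel of the other mask,
    # in physical units via the voxel spacing.
    dt_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dt_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    d_ab = dt_b[surf_a]  # distances from A's surface to B's surface
    d_ba = dt_a[surf_b]  # and vice versa (symmetric variant)
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))
```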

Deep transfer learning based feature fusion model with Bonobo optimization algorithm for enhanced brain tumor segmentation and classification through biomedical imaging.

Gurunathan P, Srinivasan PS, S R

PubMed · Sep 30 2025
Brain tumours (BTs) are among the most aggressive diseases and lead to very short life expectancy, so early and prompt treatment is the main route to enhancing patients' quality of life. Biomedical imaging permits the non-invasive evaluation of disease based on visual assessment, leading to better prognostic expectations and therapeutic planning. Numerous imaging techniques, such as computed tomography (CT) and magnetic resonance imaging (MRI), are employed for evaluating cancer in the brain. The detection, segmentation and extraction of diseased tumour regions from biomedical images is a primary concern, but it is a tedious and time-consuming task performed by clinical specialists, and its outcome depends solely on their experience. The use of computer-aided technologies is therefore essential to overcoming these limitations. Recently, artificial intelligence (AI) models have proven very effective in improving the performance of medical image diagnosis. This paper proposes an Enhanced Brain Tumour Segmentation through Biomedical Imaging and Feature Model Fusion with Bonobo Optimiser (EBTS-BIFMFBO) model. The main aim of the EBTS-BIFMFBO model is to enhance the segmentation and classification of BTs using advanced models. Initially, the EBTS-BIFMFBO technique applies bilateral filter (BF)-based noise elimination and CLAHE-based contrast enhancement. The proposed model then performs segmentation with the DeepLabV3+ model to identify tumour regions for accurate diagnosis. For feature extraction, the fusion models InceptionResNetV2, MobileNet, and DenseNet201 are employed. Additionally, the convolutional sparse autoencoder (CSAE) method is implemented for BT classification, and the hyperparameters of the CSAE are selected by the bonobo optimizer (BO) method. Extensive experiments on the Figshare BT dataset highlight the performance of the EBTS-BIFMFBO approach, which achieved a superior accuracy of 99.16% over existing models.
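The preprocessing named in the abstract is standard: bilateral-filter denoising followed by CLAHE contrast enhancement. A minimal OpenCV sketch, with illustrative parameters rather than the paper's tuned values:

```python
import cv2

def preprocess(path: str):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Bilateral filter: smooths noise while preserving edges.
    denoised = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
    # CLAHE: contrast-limited adaptive histogram equalization on local tiles.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)
```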

Automating prostate volume acquisition using abdominal ultrasound scans for prostate-specific antigen density calculations.

Bennett RD, Barrett T, Sushentsev N, Sanmugalingam N, Lee KL, Gnanapragasam VJ, Tse ZTH

PubMed · Sep 30 2025
Proposed methods for prostate cancer screening are currently prohibitively expensive (owing to the high costs of imaging equipment such as magnetic resonance imaging and traditional ultrasound systems), inadequate in their detection rates, dependent on highly trained specialists, and/or invasive, resulting in patient discomfort. These limitations make population-wide screening for prostate cancer challenging. Machine learning techniques applied to abdominal ultrasound scanning may help alleviate some of these disadvantages. Abdominal ultrasound scans are comparatively low cost and cause minimal patient discomfort, and machine learning can mitigate the high operator-dependent variability of ultrasound scanning. In this study, a state-of-the-art machine learning model was compared with an expert radiologist and trainee radiology registrars of varying experience in estimating prostate volume from abdominal ultrasound images, a crucial step in detecting prostate cancer using prostate-specific antigen density. The machine learning model calculated prostatic volume by marking out the dimensions of the prolate ellipsoid formula on two orthogonal images of the prostate acquired with abdominal ultrasound scans (which could be conducted by operators with minimal experience in a primary care setting). While both the algorithm and the registrars showed high correlation with the expert ([Formula: see text]), the model outperformed the trainees in both accuracy (lowest average volume error of [Formula: see text]) and consistency (lowest IQR of [Formula: see text] and lowest average volume standard deviation of [Formula: see text]). The results are promising for the future development of an automated prostate cancer screening workflow using machine learning and abdominal ultrasound scans.
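The underlying arithmetic is the prolate-ellipsoid volume formula, V = (π/6) × length × width × height, followed by PSA density = PSA / volume. A minimal sketch with illustrative inputs:

```python
import math

def prostate_volume_ml(length_cm: float, width_cm: float, height_cm: float) -> float:
    """Prolate-ellipsoid approximation from three orthogonal dimensions;
    1 cm^3 == 1 mL."""
    return math.pi / 6.0 * length_cm * width_cm * height_cm

def psa_density(psa_ng_ml: float, volume_ml: float) -> float:
    """PSA density in ng/mL per mL of prostate volume."""
    return psa_ng_ml / volume_ml

vol = prostate_volume_ml(4.5, 4.0, 3.5)   # ~33 mL (illustrative dimensions)
print(round(psa_density(4.0, vol), 3))    # PSAd for an illustrative PSA of 4.0 ng/mL
```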

Attention-enhanced hybrid U-Net for prostate cancer grading and explainability.

Zaheer AN, Farhan M, Min G, Alotaibi FA, Alnfiai MM

PubMed · Sep 30 2025
Prostate cancer remains a leading cause of mortality, necessitating precise histopathological segmentation for accurate Gleason grade assessment. However, existing deep learning-based segmentation models lack contextual awareness and explainability, leading to inconsistent performance across heterogeneous tissue structures. Conventional U-Net architectures and CNN-based approaches struggle to capture long-range dependencies and fine-grained histopathological patterns, resulting in suboptimal boundary delineation and limited generalizability. To address these limitations, we propose a transformer-attention hybrid U-Net (TAH U-Net), integrating hybrid CNN-transformer encoding, attention-guided skip connections, and a multi-stage guided loss mechanism for enhanced segmentation accuracy and model interpretability. ResNet50-based convolutional layers efficiently capture local spatial features, while Vision Transformer (ViT) blocks model global contextual dependencies, improving segmentation consistency. Attention mechanisms incorporated into the skip connections and decoder pathways refine feature propagation by suppressing irrelevant tissue noise while enhancing diagnostically significant regions. A novel hierarchical guided loss function optimizes segmentation masks at multiple decoder stages, improving boundary refinement and gradient stability. Additionally, Explainable AI (XAI) techniques such as LIME, occlusion sensitivity, and partial dependence plots (PDP) validate the model's decision-making transparency, ensuring clinical reliability. Experimental evaluation on the SICAPv2 dataset demonstrates state-of-the-art performance, surpassing traditional U-Net architectures with a 4.6% increase in Dice score and a 5.1% gain in IoU, along with notable improvements in precision (+4.2%) and recall (+3.8%). This research advances AI-driven prostate cancer diagnostics by providing an interpretable and highly accurate segmentation framework, enhancing clinical trust in histopathology-based grading within medical imaging and computational pathology.
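As an illustration of attention-guided skip connections in general (not the exact TAH U-Net module), a minimal PyTorch sketch of the additive attention gate popularized by Attention U-Net (Oktay et al.):

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: decoder features gate the encoder skip,
    suppressing irrelevant regions before concatenation."""
    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.w_skip = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.w_gate = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # The gate comes from a deeper decoder stage; upsample to the skip's size.
        gate = nn.functional.interpolate(gate, size=skip.shape[2:],
                                         mode="bilinear", align_corners=False)
        attn = self.psi(torch.relu(self.w_skip(skip) + self.w_gate(gate)))
        return skip * attn  # attenuate irrelevant tissue, keep salient regions

x = torch.randn(1, 64, 56, 56)   # encoder skip features
g = torch.randn(1, 128, 28, 28)  # decoder gating features
print(AttentionGate(64, 128, 32)(x, g).shape)  # torch.Size([1, 64, 56, 56])
```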

Dynamic computed tomography assessment of patellofemoral and tibiofemoral kinematics before and after total knee arthroplasty: A pilot study.

Boot MR, van de Groes SAW, Tanck E, Janssen D

PubMed · Sep 29 2025
To develop and evaluate the clinical feasibility of a dynamic computed tomography (CT) protocol for assessing patellofemoral (PF) and tibiofemoral (TF) kinematics before and after total knee arthroplasty (TKA), and to quantify postoperative kinematic changes in a pilot study. In this prospective single-centre study, patients with primary osteoarthritis scheduled for cemented TKA underwent dynamic CT scans preoperatively and at 1-year follow-up during active flexion-extension-flexion. Preoperatively, the femur, tibia and patella were segmented using a neural network. Postoperatively, computer-aided design (CAD) implant models were aligned to the CT data to determine the relative implant-bone orientation. Owing to metal artefacts, preoperative patella meshes were manually aligned to the postoperative scans by four raters and averaged for analysis. Anatomical coordinate systems were applied to quantify patellar flexion, tilt, proximal tip rotation, mediolateral translation and femoral condyle anterior-posterior translation. Descriptive statistics were reported, and interoperator agreement for patellar registration was assessed using intraclass correlation coefficients (ICCs). Ten patients (mean age, 65 ± 8 years; 6 men) were analysed across a shared flexion range of 14°-55°. Postoperatively, the patella showed increased flexion (median difference: 0.9°-3.9°), medial proximal tip rotation (median difference: 1.5°-6.0°), lateral tilt (median difference: 2.7°-5.5°), and lateral shift (median difference: -1.5 to -2.8 mm). The medial and lateral femoral condyles translated 2-4 mm in the anterior-posterior direction during knee flexion. Interoperator agreement for patellar registration ranged from good to excellent across all parameters (ICC = 0.85-1.00). This pilot study demonstrates that dynamic CT enables in vivo assessment of PF and TF kinematics before and after TKA. The protocol quantified postoperative kinematic changes and demonstrated potential as a research tool. Further automation is needed to investigate relationships between these kinematic patterns and patient outcomes in larger-scale studies. Level III.
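The interoperator-agreement statistic used here is the intraclass correlation coefficient. A minimal sketch of such an analysis using pingouin, with hypothetical rater values rather than study data:

```python
import pandas as pd
import pingouin as pg

# Hypothetical example: four raters measuring patellar tilt on three patients.
df = pd.DataFrame({
    "patient":  [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "rater":    ["r1", "r2", "r3", "r4"] * 3,
    "tilt_deg": [5.1, 5.3, 4.9, 5.2,      # hypothetical measurements
                 7.8, 8.1, 7.9, 8.0,
                 3.2, 3.0, 3.3, 3.1],
})
icc = pg.intraclass_corr(data=df, targets="patient", raters="rater",
                         ratings="tilt_deg")
print(icc[["Type", "ICC"]])  # ICC variants (ICC1, ICC2, ICC3, ...)
```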

Enhancing Spinal Cord and Canal Segmentation in Degenerative Cervical Myelopathy: The Role of Interactive Learning Models with Manual Clicks.

Han S, Oh JK, Cho W, Kim TJ, Hong N, Park SB

PubMed · Sep 29 2025
We aimed to develop an interactive segmentation model that offers accuracy and reliability for segmentation of the irregularly shaped spinal cord and canal in degenerative cervical myelopathy (DCM) through manual clicks and model refinement. A dataset of 1444 frames from 294 magnetic resonance imaging records of DCM patients was used, and we developed two segmentation models for comparison: an auto-segmentation model and an interactive segmentation model. The former was based on U-Net and utilized a pretrained ConvNeXt-tiny as its encoder. For the latter, we employed an interactive segmentation model structured on SimpleClick, a large model that utilizes a vision transformer as its backbone, together with simple fine-tuning. The segmentation performance of the two models was compared in terms of Dice score, mean intersection over union (mIoU), average precision and Hausdorff distance. The efficiency of the interactive segmentation model was evaluated by the number of clicks required to achieve a target mIoU. Our model achieved better scores across all four evaluation metrics, with improvements of +6.4%, +1.8%, +3.7%, and -53.0% for canal segmentation, and +11.7%, +6.0%, +18.2%, and -70.9% for cord segmentation with 15 clicks, respectively. The number of clicks required for the interactive segmentation model to achieve a 90% mIoU for spinal canal with cord cases and an 80% mIoU for spinal cord cases was 11.71 and 11.99, respectively. The interactive segmentation model significantly outperformed the auto-segmentation model. By incorporating simple manual inputs, the interactive model effectively identified regions of interest, particularly in the complex and irregular shapes of the spinal cord, demonstrating both enhanced accuracy and adaptability.
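The clicks-to-target-mIoU efficiency measure follows the standard interactive-segmentation evaluation protocol: simulate clicks at the largest remaining error until the prediction reaches the threshold. A minimal sketch, with `predict_fn` as a stand-in for a SimpleClick-style interface rather than its actual API:

```python
import numpy as np
from scipy import ndimage

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def next_click(pred: np.ndarray, gt: np.ndarray):
    """Click the interior-most pixel of the larger error region:
    a positive click on a missed region, a negative click on a false positive."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    fn, fp = gt & ~pred, pred & ~gt
    err, positive = (fn, True) if fn.sum() >= fp.sum() else (fp, False)
    dt = ndimage.distance_transform_edt(err)
    y, x = np.unravel_index(np.argmax(dt), err.shape)
    return (int(y), int(x), positive)

def clicks_to_target(predict_fn, image, gt, target_iou=0.90, max_clicks=20):
    """`predict_fn(image, clicks)` is a hypothetical interactive-model call;
    the real SimpleClick interface will differ."""
    clicks, pred = [], np.zeros_like(gt, dtype=bool)
    for n in range(1, max_clicks + 1):
        clicks.append(next_click(pred, gt))
        pred = predict_fn(image, clicks)
        if iou(pred, gt) >= target_iou:
            return n
    return max_clicks
```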

Geometric, dosimetric and psychometric evaluation of three commercial AI software solutions for OAR auto-segmentation in head and neck radiotherapy.

Podobnik G, Borg C, Debono CJ, Mercieca S, Vrtovec T

PubMed · Sep 29 2025
Contouring organs-at-risk (OARs) is a critical yet time-consuming step in head and neck (HaN) radiotherapy planning. Auto-segmentation methods have been widely studied, and commercial solutions are increasingly entering clinical use; however, their adoption warrants a comprehensive, multi-perspective evaluation. The purpose of this study is to compare three commercial artificial intelligence (AI) software solutions (Limbus, MIM and MVision) for HaN OAR auto-segmentation on a cohort of 10 computed tomography images with reference contours from the public HaN-Seg dataset, from both observational (descriptive and empirical) and analytical (geometric, dosimetric and psychometric) perspectives. The observational evaluation included vendor questionnaires on technical specifications and radiographer feedback on usability. The analytical evaluation covered geometric (Dice similarity coefficient, DSC, and 95th-percentile Hausdorff distance, HD95), dosimetric (dose constraint compliance, OAR priority-based analysis), and psychometric (5-point Likert scale) assessments. All software solutions covered a broad range of OARs. Overall geometric performance differences were relatively small (Limbus: 69.7% DSC, 5.0 mm HD95; MIM: 69.2% DSC, 5.6 mm HD95; MVision: 66.7% DSC, 5.3 mm HD95); however, statistically significant differences were observed for smaller structures such as the cochleae, optic chiasm, and pituitary and thyroid glands. Differences in dosimetric compliance were overall minor, with the lowest compliance observed for the oral cavity and submandibular glands. In the qualitative assessment, radiographers gave the highest average Likert rating to Limbus (3.9), followed by MVision (3.7) and MIM (3.5). With few exceptions, the software solutions produced good-quality AI-generated contours (Likert ratings ≥ 3), yet some editing is still needed to reach clinical acceptability. Notable discrepancies were seen for the optic chiasm and in cases affected by mouth bites or dental artifacts. Importantly, no clear relationship emerged between the geometric, dosimetric, and psychometric metrics, underscoring the need for a multi-perspective evaluation without shortcuts.
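Dose-constraint compliance checks of the kind reported here reduce to comparing per-OAR dose metrics against protocol limits. A minimal sketch, with hypothetical OAR limits and plan values rather than the study's protocol:

```python
# Hypothetical plan values and constraint limits, for illustration only.
dose_metrics = {"spinal_cord":  {"Dmax_Gy": 42.1},
                "parotid_left": {"Dmean_Gy": 27.3}}
constraints = {"spinal_cord":  ("Dmax_Gy", 45.0),
               "parotid_left": ("Dmean_Gy", 26.0)}

for oar, (metric, limit) in constraints.items():
    value = dose_metrics[oar][metric]
    status = "PASS" if value <= limit else "FAIL"
    print(f"{oar:>14} {metric} = {value:5.1f} Gy (limit {limit:5.1f}) -> {status}")
```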

Clinical application of deep learning for enhanced multistage caries detection in panoramic radiographs.

Pornprasertsuk-Damrongsri S, Vachmanus S, Papasratorn D, Kitisubkanchana J, Chaikantha S, Arayasantiparb R, Mongkolwat P

PubMed · Sep 29 2025
Dental caries is often overlooked on panoramic radiographs. This study leverages deep learning to identify multistage caries on panoramic radiographs. The panoramic radiographs were confirmed against gold-standard bitewing radiographs to create a reliable ground truth. A dataset of 500 panoramic radiographs with corresponding bitewing confirmations was labelled by an experienced, calibrated radiologist, yielding 1,792 carious lesions across 14,997 teeth. The annotations were stored using the Annotation and Image Markup standard to ensure consistency and reliability. The deep learning system employed a two-model approach: YOLOv5 for tooth detection and Attention U-Net for segmenting caries. The system achieved strong agreement with dentists for both caries counts and classifications (enamel, dentine, and pulp). However, some discrepancies exist, particularly an underestimation of enamel caries. While the model occasionally overpredicts caries in healthy teeth (false positives), it prioritizes minimizing missed lesions (false negatives), achieving a high recall of 0.96. Overall performance surpasses previously reported values, with an F1-score of 0.85 and an accuracy of 0.93 for caries segmentation in posterior teeth. This deep learning approach demonstrates promising potential to aid dentists in caries diagnosis, treatment planning, and dental education.
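The two-model design, a detector that localizes each tooth feeding a segmenter run on the cropped patch, can be sketched as follows; the torch.hub YOLOv5 call is real but loads generic weights, and `caries_unet` is a hypothetical stand-in for the trained Attention U-Net:

```python
import torch

# Generic pretrained YOLOv5 weights, not the study's tooth detector.
detector = torch.hub.load("ultralytics/yolov5", "yolov5s")

def segment_caries(panoramic_rgb, caries_unet):
    """Detect teeth, then run the segmenter on each cropped tooth patch."""
    results = detector(panoramic_rgb)
    masks = []
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        crop = panoramic_rgb[int(y1):int(y2), int(x1):int(x2)]
        # (origin, mask) pairs; `caries_unet(crop)` is a hypothetical call.
        masks.append(((int(x1), int(y1)), caries_unet(crop)))
    return masks
```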

A deep learning algorithm for automatic 3D segmentation and quantification of hamstrings musculotendon injury from MRI.

Riem L, DuCharme O, Coggins A, Kenney A, Cousins M, Feng X, Hein R, Buford M, Lee K, Opar D, Heiderscheit B, Blemker SS

PubMed · Sep 29 2025
In high-velocity sports, hamstring strain injuries are a common cause of missed play and have high rates of reinjury. Evaluating the severity and location of a hamstring strain injury, currently graded by a clinician using a semiqualitative muscle injury classification score (e.g., the British Athletics Muscle Injury Classification, BAMIC) that describes edema presence and location, aids in guiding athlete recovery. In this study, automated artificial intelligence (AI) models were developed and deployed to automatically segment edema and hamstring muscle and tendon structures from T2-weighted and T1-weighted magnetic resonance images (MRI), respectively. MR scans were collected from collegiate football athletes at the time of hamstring injury and at return to sport. Volume, length, and cross-sectional area (CSA) measurements were performed on all structures and subregions (i.e., free tendon and aponeurosis). The edema and hamstring muscle/tendon AI models compared favorably with ground-truth segmentations. AI volumetric output correlated with ground truth for edema (R = 0.97), hamstring muscle (R ≥ 0.99), and hamstring tendon (R ≥ 0.42) structures. Edema volume and the percentage of muscle impacted by edema increased significantly with clinical BAMIC grade (p < 0.05). Taken together, these results demonstrate a promising new approach for AI-based quantification of edema that reflects differing levels of injury severity and supports clinical validity.
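The volumetric measurements described (volume, per-slice cross-sectional area, and length along the slice axis from a binary 3D mask) reduce to simple voxel arithmetic. A minimal NumPy sketch:

```python
import numpy as np

def muscle_metrics(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)):
    """mask: (slices, rows, cols) binary array; spacing: (dz, dy, dx) in mm.
    Returns volume in mL, per-slice CSA in mm^2, and length in mm."""
    dz, dy, dx = spacing_mm
    voxel_ml = dz * dy * dx / 1000.0            # mm^3 -> mL
    volume_ml = mask.sum() * voxel_ml
    csa_mm2 = mask.sum(axis=(1, 2)) * dy * dx   # area per axial slice
    occupied = np.flatnonzero(csa_mm2)          # slices containing the structure
    length_mm = (occupied[-1] - occupied[0] + 1) * dz if occupied.size else 0.0
    return volume_ml, csa_mm2, length_mm
```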
