Page 27 of 31302 results

Automated field-in-field planning for tangential breast radiation therapy based on digitally reconstructed radiograph.

Srikornkan P, Khamfongkhruea C, Intanin P, Thongsawad S

PubMed · May 12, 2025
The tangential field-in-field (FIF) technique is a widely used method in breast radiation therapy, known for its efficiency and the reduced number of fields required in treatment planning. However, it is labor-intensive, requiring manual shaping of the multileaf collimator (MLC) to minimize hot spots. This study aims to develop a novel automated FIF planning approach for tangential breast radiation therapy using Digitally Reconstructed Radiograph (DRR) images. A total of 78 patients were selected to train and test a fluence map prediction model based on the U-Net architecture. DRR images were used as input data to predict the fluence maps. The predicted fluence maps for each treatment plan were then converted into MLC positions and exported as Digital Imaging and Communications in Medicine (DICOM) files. These files were used to recalculate the dose distribution and assess dosimetric parameters for both the planning target volume (PTV) and organs at risk (OARs). The mean absolute error (MAE) between the predicted and original fluence maps was 0.007 ± 0.002. Gamma analysis indicated strong agreement between the predicted and original fluence maps, with passing rates of 95.47 ± 4.27% for the 3%/3 mm criterion, 94.65 ± 4.32% for 3%/2 mm, and 83.4 ± 12.14% for 2%/2 mm. Plan quality, in terms of tumor coverage and doses to OARs, showed no significant differences between the automated FIF and original plans. The automated plans yielded promising results, with plan quality comparable to the original.
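The MAE metric reported above can be illustrated with a toy computation (hypothetical 64×64 arrays standing in for fluence maps; the crude dose-difference rate below is not a full gamma analysis, which also searches a distance-to-agreement neighborhood):

```python
import numpy as np

# Toy stand-ins for a reference and a U-Net-predicted fluence map
# (hypothetical 64x64 grids; the study's maps come from DRR inputs).
rng = np.random.default_rng(0)
reference = rng.random((64, 64))
predicted = reference + rng.normal(0.0, 0.005, size=(64, 64))

# Mean absolute error, the agreement metric quoted in the abstract.
mae = float(np.mean(np.abs(predicted - reference)))

# A crude 2%-of-max dose-difference pass rate -- NOT a full gamma
# analysis, which additionally searches over spatial neighborhoods.
pass_rate = float(np.mean(np.abs(predicted - reference)
                          <= 0.02 * reference.max()))
```

With near-identical maps, the MAE is small and the pass rate is close to 1, mirroring the high agreement the study reports.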

Application of improved graph convolutional network for cortical surface parcellation.

Tan J, Ren X, Chen Y, Yuan X, Chang F, Yang R, Ma C, Chen X, Tian M, Chen W, Wang Z

PubMed · May 12, 2025
Accurate cortical surface parcellation is essential for elucidating brain organizational principles, functional mechanisms, and the neural substrates underlying higher cognitive and emotional processes. However, the cortical surface is a highly folded, complex geometry, and large regional variations make the analysis of surface data challenging. Current methods rely on geometric simplification, such as spherical expansion, which takes hours for spherical mapping and registration, a popular but costly process that does not take full advantage of the inherent structural information. In this study, we propose an Attention-guided Deep Graph Convolutional network (ADGCN) for end-to-end parcellation on primitive cortical surface manifolds. ADGCN consists of a deep graph convolutional layer with a symmetrical U-shaped structure, which enables it to effectively transmit detailed information from the original brain map and learn the complex graph structure, helping the network enhance its feature extraction capability. Moreover, we introduce a Squeeze-and-Excitation (SE) module, which enables the network to better capture key features, suppress unimportant ones, and significantly improve parcellation performance at a small computational cost. We evaluated the model on a public dataset of 100 manually labeled brain surfaces. Compared with other methods, the proposed network achieves a Dice coefficient of 88.53% and an accuracy of 90.27%. The network can segment the cortex directly in the original domain and has the advantages of high efficiency, simple operation, and strong interpretability. This approach facilitates the investigation of cortical changes during development, aging, and disease progression, with the potential to enhance the accuracy of neurological disease diagnosis and the objectivity of treatment efficacy evaluation.
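A minimal sketch of the Squeeze-and-Excitation idea, here applied to graph-node features in plain NumPy (shapes and random weights are illustrative assumptions, not the ADGCN implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(features, w1, w2):
    """Squeeze-and-Excitation over node features of shape (nodes, channels).

    Squeeze: global average over nodes -> one descriptor per channel.
    Excite:  two small dense layers + sigmoid -> per-channel gates in (0, 1).
    Scale:   reweight every channel of every node by its gate.
    """
    squeezed = features.mean(axis=0)                       # (channels,)
    gates = sigmoid(w2 @ np.maximum(w1 @ squeezed, 0.0))   # (channels,)
    return features * gates                                # broadcast over nodes

rng = np.random.default_rng(1)
x = rng.random((10, 8))        # 10 graph nodes, 8 feature channels
w1 = rng.normal(size=(2, 8))   # reduction layer (ratio 4)
w2 = rng.normal(size=(8, 2))   # expansion layer
out = squeeze_excite(x, w1, w2)
```

The gating is cheap (two tiny dense layers), which is why SE-style modules add little computation relative to the gain in feature selectivity.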

Automatic CTA analysis for blood vessels and aneurysm features extraction in EVAR planning.

Robbi E, Ravanelli D, Allievi S, Raunig I, Bonvini S, Passerini A, Trianni A

PubMed · May 12, 2025
Endovascular Aneurysm Repair (EVAR) is a minimally invasive procedure crucial for treating abdominal aortic aneurysms (AAA), where precise pre-operative planning is essential. Current clinical methods rely on manual measurements, which are time-consuming and prone to errors. Although AI solutions are increasingly being developed to automate aspects of these processes, most existing approaches primarily focus on computing volumes and diameters, falling short of delivering a fully automated pre-operative analysis. This work presents BRAVE (Blood Vessels Recognition and Aneurysms Visualization Enhancement), the first comprehensive AI-driven solution for vascular segmentation and AAA analysis using pre-operative CTA scans. BRAVE offers exhaustive segmentation, identifying both the primary abdominal aorta and secondary vessels, often overlooked by existing methods, providing a complete view of the vascular structure. The pipeline performs advanced volumetric analysis of the aneurysm sac, quantifying thrombotic tissue and calcifications, and automatically identifies the proximal and distal sealing zones, critical for successful EVAR procedures. BRAVE enables fully automated processing, reducing manual intervention and improving clinical workflow efficiency. Trained on a multi-center open-access dataset, it demonstrates generalizability across different CTA protocols and patient populations, ensuring robustness in diverse clinical settings. This solution saves time, ensures precision, and standardizes the process, enhancing vascular surgeons' decision-making.
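Once a segmentation exists, the sac volumetrics step reduces to counting labeled voxels and scaling by voxel volume; a sketch with a made-up label mask (BRAVE's actual masks come from its trained network):

```python
import numpy as np

# Hypothetical label mask of an aneurysm sac (1 = lumen, 2 = thrombotic
# tissue, 3 = calcification) on a CTA grid with 0.8 x 0.8 x 1.0 mm voxels.
voxel_volume_ml = (0.8 * 0.8 * 1.0) / 1000.0   # mm^3 -> mL
mask = np.zeros((40, 40, 40), dtype=np.uint8)
mask[10:30, 10:30, 10:30] = 1                  # lumen
mask[12:20, 12:20, 12:20] = 2                  # thrombotic tissue
mask[14:16, 14:16, 14:16] = 3                  # calcifications

# Volume per tissue class = voxel count * voxel volume.
volumes_ml = {label: float((mask == label).sum() * voxel_volume_ml)
              for label in (1, 2, 3)}
```

The same count-and-scale logic underlies any volumetric quantification of thrombus and calcification burden from a labeled CTA.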

Insights into radiomics: a comprehensive review for beginners.

Mariotti F, Agostini A, Borgheresi A, Marchegiani M, Zannotti A, Giacomelli G, Pierpaoli L, Tola E, Galiffa E, Giovagnoni A

PubMed · May 12, 2025
Radiomics and artificial intelligence (AI) are rapidly evolving, significantly transforming the field of medical imaging. Despite their growing adoption, these technologies remain challenging to approach due to their technical complexity. This review serves as a practical guide for early-career radiologists and researchers seeking to integrate radiomics into their studies. It provides practical insights for clinical and research applications, addressing common challenges, limitations, and future directions in the field. This work offers a structured overview of the essential steps in the radiomics workflow, focusing on concrete aspects of each step with practical, illustrative examples. It covers the main steps: dataset definition, image acquisition and preprocessing, segmentation, feature extraction and selection, and AI model training and validation. Different methods to be considered are discussed, accompanied by summary diagrams. This review equips readers with the knowledge necessary to approach radiomics and AI in medical imaging from a hands-on research perspective.
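As a flavor of the feature-extraction step described above, a minimal subset of first-order features can be computed directly from a segmented ROI (illustrative only; toolkits such as PyRadiomics compute many more feature classes, including shape and texture):

```python
import numpy as np

# Hypothetical CT slice and a square ROI standing in for a segmentation.
rng = np.random.default_rng(2)
image = rng.normal(100.0, 15.0, size=(32, 32))
roi = np.zeros((32, 32), dtype=bool)
roi[8:24, 8:24] = True

voxels = image[roi]                      # intensities inside the ROI only
hist, _ = np.histogram(voxels, bins=32)
p = hist[hist > 0] / hist.sum()          # discrete intensity distribution

features = {
    "mean": float(voxels.mean()),
    "std": float(voxels.std()),
    "entropy": float(-(p * np.log2(p)).sum()),  # histogram entropy
}
```

Feature selection and model training then operate on tables of such per-lesion values across the dataset.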

Fully volumetric body composition analysis for prognostic overall survival stratification in melanoma patients.

Borys K, Lodde G, Livingstone E, Weishaupt C, Römer C, Künnemann MD, Helfen A, Zimmer L, Galetzka W, Haubold J, Friedrich CM, Umutlu L, Heindel W, Schadendorf D, Hosch R, Nensa F

PubMed · May 12, 2025
Accurate assessment of expected survival in melanoma patients is crucial for treatment decisions. This study explores deep learning-based body composition analysis to predict overall survival (OS) using baseline Computed Tomography (CT) scans and identify fully volumetric, prognostic body composition features. A deep learning network segmented baseline abdomen and thorax CTs from a cohort of 495 patients. The Sarcopenia Index (SI), Myosteatosis Fat Index (MFI), and Visceral Fat Index (VFI) were derived and statistically assessed as prognostic factors for OS. External validation was performed with 428 patients. SI was significantly associated with OS on both CT regions: abdomen (P ≤ 0.0001, HR: 0.36) and thorax (P ≤ 0.0001, HR: 0.27), with lower SI associated with prolonged survival. MFI was also associated with OS on abdomen (P ≤ 0.0001, HR: 1.16) and thorax CTs (P ≤ 0.0001, HR: 1.08), where higher MFI was linked to worse outcomes. Lastly, VFI was associated with OS on abdomen CTs (P ≤ 0.001, HR: 1.90), with higher VFI linked to poor outcomes. External validation replicated these results. SI, MFI, and VFI showed substantial potential as prognostic factors for OS in malignant melanoma patients. This approach leveraged existing CT scans without additional procedural or financial burdens, highlighting the seamless integration of deep learning-based body composition analysis into standard oncologic staging routines.
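The abstract does not spell out how SI, MFI, and VFI are computed; assuming the common convention of normalizing a tissue volume by height squared (an assumption, not the authors' definition), an index calculation looks like:

```python
# Height-normalized body-composition index. NOTE: this definition is an
# assumption for illustration -- the study's exact SI/MFI/VFI formulas
# are not given in the abstract.
def tissue_index(tissue_volume_cm3: float, height_m: float) -> float:
    """Tissue volume (cm^3) divided by height squared (m^2)."""
    return tissue_volume_cm3 / height_m ** 2

# Hypothetical skeletal-muscle volume for a 1.75 m patient.
si = tissue_index(1350.0, 1.75)
```

Such indices are then entered into Cox models alongside clinical covariates to test their independent prognostic value.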

Inference-specific learning for improved medical image segmentation.

Chen Y, Liu S, Li M, Han B, Xing L

PubMed · May 12, 2025
Deep learning networks map input data to output predictions by fitting network parameters using training data. However, applying a trained network to new, unseen inference data resembles an interpolation process, which may lead to inaccurate predictions if the training and inference data distributions differ significantly. This study aims to improve the prediction accuracy of deep learning networks on each inference case by bridging the gap between training and inference data. We propose an inference-specific learning strategy to enhance the network learning process without modifying the network structure. By aligning training data to closely match the specific inference data, we generate an inference-specific training dataset, enhancing network optimization around the inference data point for more accurate predictions. Taking medical image auto-segmentation as an example, we develop an inference-specific auto-segmentation framework consisting of initial segmentation learning, inference-specific training data deformation, and inference-specific segmentation refinement. The framework is evaluated on public abdominal, head-neck, and pancreas CT datasets comprising 30, 42, and 210 cases, respectively. Experimental results show that our method improves the organ-averaged mean Dice by 6.2% (p-value = 0.001), 1.5% (p-value = 0.003), and 3.7% (p-value < 0.001) on the three datasets, respectively, with a more notable increase for difficult-to-segment organs (such as a 21.7% increase for the gallbladder [p-value = 0.004]). By incorporating organ mask-based weak supervision into the training data alignment learning, inference-specific auto-segmentation accuracy is generally improved compared with image intensity-based alignment. In addition, a moving-average calculation of the inference organ mask during the learning process strengthens both the robustness and accuracy of the final inference segmentation.
By leveraging inference data during training, the proposed inference-specific learning strategy consistently improves auto-segmentation accuracy and holds the potential to be broadly applied for enhanced deep learning decision-making.
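One way to read the data-alignment idea (an interpretation of the strategy, not the authors' code): rank training cases by similarity to the inference case and build an inference-specific subset before refining the model on it:

```python
import numpy as np

# Sketch of inference-specific training-set construction. The abstract
# describes deforming/aligning training data to the inference case; here
# we show only the simpler selection-by-similarity step.
rng = np.random.default_rng(3)
train_images = rng.random((50, 16, 16))   # 50 hypothetical training cases
inference_image = rng.random((16, 16))    # the new case to segment

# Image-intensity-based similarity (the paper also describes an
# organ-mask-based, weakly supervised alternative).
distances = np.array([np.mean((t - inference_image) ** 2)
                      for t in train_images])
closest = np.argsort(distances)[:10]       # indices of best matches
inference_specific_set = train_images[closest]
```

Fine-tuning on this focused subset concentrates the network's capacity around the inference point, which is the mechanism behind the reported Dice gains.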

AI-based volumetric six-tissue body composition quantification from CT cardiac attenuation scans for mortality prediction: a multicentre study.

Yi J, Marcinkiewicz AM, Shanbhag A, Miller RJH, Geers J, Zhang W, Killekar A, Manral N, Lemley M, Buchwald M, Kwiecinski J, Zhou J, Kavanagh PB, Liang JX, Builoff V, Ruddy TD, Einstein AJ, Feher A, Miller EJ, Sinusas AJ, Berman DS, Dey D, Slomka PJ

PubMed · May 12, 2025
CT attenuation correction (CTAC) scans are routinely obtained during cardiac perfusion imaging, but currently only used for attenuation correction and visual calcium estimation. We aimed to develop a novel artificial intelligence (AI)-based approach to obtain volumetric measurements of chest body composition from CTAC scans and to evaluate these measures for all-cause mortality risk stratification. We applied AI-based segmentation and image-processing techniques on CTAC scans from a large international image-based registry at four sites (Yale University, University of Calgary, Columbia University, and University of Ottawa), to define the chest rib cage and multiple tissues. Volumetric measures of bone, skeletal muscle, subcutaneous adipose tissue, intramuscular adipose tissue (IMAT), visceral adipose tissue (VAT), and epicardial adipose tissue (EAT) were quantified between automatically identified T5 and T11 vertebrae. The independent prognostic value of volumetric attenuation and indexed volumes were evaluated for predicting all-cause mortality, adjusting for established risk factors and 18 other body composition measures via Cox regression models and Kaplan-Meier curves. The end-to-end processing time was less than 2 min per scan with no user interaction. Between 2009 and 2021, we included 11 305 participants from four sites participating in the REFINE SPECT registry, who underwent single-photon emission computed tomography cardiac scans. After excluding patients who had incomplete T5-T11 scan coverage, missing clinical data, or who had been used for EAT model training, the final study group comprised 9918 patients. 5451 (55%) of 9918 participants were male and 4467 (45%) of 9918 participants were female. Median follow-up time was 2·48 years (IQR 1·46-3·65), during which 610 (6%) patients died. 
High VAT, EAT, and IMAT attenuation were associated with an increased all-cause mortality risk (adjusted hazard ratio 2·39, 95% CI 1·92-2·96; p<0·0001, 1·55, 1·26-1·90; p<0·0001, and 1·30, 1·06-1·60; p=0·012, respectively). Patients with high bone attenuation were at reduced risk of death (0·77, 0·62-0·95; p=0·016). Likewise, high skeletal muscle volume index was associated with a reduced risk of death (0·56, 0·44-0·71; p<0·0001). CTAC scans obtained routinely during cardiac perfusion imaging contain important volumetric body composition biomarkers that can be automatically measured and offer important additional prognostic value. The National Heart, Lung, and Blood Institute, National Institutes of Health.
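The Kaplan-Meier curves used for risk stratification follow the standard product-limit estimator; a self-contained sketch with hypothetical follow-up data:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.

    times:  follow-up durations (e.g., years).
    events: 1 = death observed at that time, 0 = censored.
    Returns a list of (event_time, survival_probability) pairs.
    """
    order = np.argsort(times)
    times = np.asarray(times, dtype=float)[order]
    events = np.asarray(events)[order]
    curve, s = [], 1.0
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)                   # still under follow-up
        deaths = np.sum((times == t) & (events == 1))
        s *= 1.0 - deaths / at_risk                    # product-limit update
        curve.append((float(t), s))
    return curve

# Hypothetical follow-up data (years) and death indicators.
curve = kaplan_meier([0.5, 1.2, 2.0, 2.5, 3.1, 3.6], [1, 0, 1, 0, 1, 0])
```

Comparing such curves between high- and low-index groups (with Cox models for covariate adjustment) is the standard route to the hazard ratios quoted above.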

Automatic Quantification of Ki-67 Labeling Index in Pediatric Brain Tumors Using QuPath

Spyretos, C., Pardo Ladino, J. M., Blomstrand, H., Nyman, P., Snodahl, O., Shamikh, A., Elander, N. O., Haj-Hosseini, N.

medRxiv preprint · May 12, 2025
The quantification of the Ki-67 labeling index (LI) is critical for assessing tumor proliferation and prognosis, yet manual scoring remains common practice. This study presents an automated workflow for Ki-67 scoring in whole slide images (WSIs) using an Apache Groovy script for QuPath, complemented by a Python-based post-processing script that produces cell density maps and summary tables. Tissue and cell segmentation are performed using StarDist, a deep learning model, with adaptive thresholding to classify Ki-67-positive and -negative nuclei. The pipeline was applied to a cohort of 632 pediatric brain tumor cases with 734 Ki-67-stained WSIs from the Children's Brain Tumor Network. Medulloblastoma showed the highest Ki-67 LI (median: 19.84), followed by atypical teratoid rhabdoid tumor (median: 19.36). Moderate values were observed in brainstem glioma-diffuse intrinsic pontine glioma (median: 11.50), high-grade glioma (grades 3 & 4) (median: 9.50), and ependymoma (median: 5.88). Lower indices were found in meningioma (median: 1.84), while the lowest were seen in low-grade glioma (grades 1 & 2) (median: 0.85), dysembryoplastic neuroepithelial tumor (median: 0.63), and ganglioglioma (median: 0.50). The results aligned with oncological consensus, showing a significant correlation in Ki-67 LI across most tumor families/types, with highly malignant tumors showing the highest proliferation indices and lower-malignancy tumors exhibiting lower Ki-67 LI. The automated approach facilitates the assessment of large numbers of Ki-67 WSIs in research settings.
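The labeling index itself is simply the positive-nucleus fraction; a toy sketch (random stand-in intensities and a fixed threshold in place of QuPath/StarDist's per-nucleus measurements and the study's adaptive threshold):

```python
import numpy as np

# Hypothetical per-nucleus DAB stain intensities; in the real pipeline
# these come from StarDist nucleus detections measured in QuPath.
rng = np.random.default_rng(4)
dab_intensity = rng.random(500)

# Fixed cutoff standing in for the adaptive threshold described above.
threshold = 0.85
positive = dab_intensity > threshold

# Ki-67 labeling index: percentage of positive nuclei.
ki67_li = 100.0 * positive.mean()
```

Computed per tile, the same fraction yields the cell density maps the workflow exports alongside its summary tables.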

JSover: Joint Spectrum Estimation and Multi-Material Decomposition from Single-Energy CT Projections

Qing Wu, Hongjiang Wei, Jingyi Yu, S. Kevin Zhou, Yuyao Zhang

arXiv preprint · May 12, 2025
Multi-material decomposition (MMD) enables quantitative reconstruction of tissue compositions in the human body, supporting a wide range of clinical applications. However, traditional MMD typically requires spectral CT scanners and pre-measured X-ray energy spectra, significantly limiting clinical applicability. To address this, various methods have been developed to perform MMD using conventional (i.e., single-energy, SE) CT systems, commonly referred to as SEMMD. Despite promising progress, most SEMMD methods follow a two-step image decomposition pipeline, which first reconstructs monochromatic CT images using algorithms such as filtered back projection (FBP), and then performs decomposition on these images. The initial reconstruction step, however, neglects the energy-dependent attenuation of human tissues, introducing severe nonlinear beam-hardening artifacts and noise into the subsequent decomposition. This paper proposes JSover, a fundamentally reformulated one-step SEMMD framework that jointly reconstructs multi-material compositions and estimates the energy spectrum directly from SECT projections. By explicitly incorporating physics-informed spectral priors into the SEMMD process, JSover accurately simulates a virtual spectral CT system from SE acquisitions, thereby improving the reliability and accuracy of decomposition. Furthermore, we introduce implicit neural representation (INR) as an unsupervised deep learning solver for representing the underlying material maps. The inductive bias of INR toward continuous image patterns constrains the solution space and further enhances estimation quality. Extensive experiments on both simulated and real CT datasets show that JSover outperforms state-of-the-art SEMMD methods in accuracy and computational efficiency.
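The beam-hardening problem that motivates the one-step formulation can be seen in a toy polychromatic forward model (two made-up spectral bins): the measured projection is a nonlinear function of path length, so a linear monochromatic model overestimates it by a bias that grows with thickness:

```python
import numpy as np

# Toy two-bin spectrum; weights and attenuation coefficients are made up.
weights = np.array([0.6, 0.4])          # spectral weights, sum to 1
mu = np.array([0.25, 0.15])             # attenuation per cm at each energy
lengths = np.linspace(0.0, 10.0, 6)     # path lengths through material, cm

# Polychromatic (measured) projection: -log of the spectrum-weighted
# transmitted intensity, per Beer-Lambert summed over energy bins.
poly = -np.log(weights @ np.exp(-np.outer(mu, lengths)))

# What a linear, monochromatic model (as assumed by FBP) would predict.
mono = (weights @ mu) * lengths

hardening_bias = mono - poly            # nonnegative, grows with length
```

This widening gap is the nonlinearity a two-step pipeline bakes into its images, and what a joint spectrum-plus-decomposition model can account for explicitly.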

Promptable segmentation of CT lung lesions based on improved U-Net and Segment Anything model (SAM).

Yan W, Xu Y, Yan S

PubMed · May 11, 2025
Background: Computed tomography (CT) is widely used in the clinical diagnosis of lung diseases. Automatic segmentation of lesions in CT images aids the development of intelligent lung disease diagnosis. Objective: This study aims to address imprecise segmentation in CT images caused by the blurred detail features of lesions, which are easily confused with surrounding tissues. Methods: We proposed a promptable segmentation method based on an improved U-Net and the Segment Anything Model (SAM) to improve the segmentation accuracy of lung lesions in CT images. The improved U-Net incorporates a multi-scale attention module based on the ECA (Efficient Channel Attention) channel attention mechanism to improve recognition of detailed feature information at lesion edges, and a promptable clipping module to incorporate physicians' prior knowledge into the model to reduce background interference. SAM is strong at recognizing lesions, pulmonary atelectasis, and organs; we combine the two to improve overall segmentation performance. Results: On the LUNA16 dataset and a lung CT dataset provided by Shanghai Chest Hospital, the proposed method achieves Dice coefficients of 80.12% and 92.06%, and positive predictive values of 81.25% and 91.91%, superior to most existing mainstream segmentation methods. Conclusion: The proposed method can improve the segmentation accuracy of lung lesions in CT images, enhance the automation level of existing computer-aided diagnostic systems, and provide more effective assistance to radiologists in clinical practice.
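A rough NumPy sketch of the ECA mechanism named above (kernel weights are fixed here for illustration; the real module learns its 1D convolution kernel and sizes it adaptively from the channel count):

```python
import numpy as np

def eca(features, kernel_size=3):
    """Efficient Channel Attention (ECA) sketch for an (H, W, C) map.

    Squeeze to per-channel means, mix neighboring channel descriptors
    with a 1D convolution (no dimensionality reduction, unlike SE),
    then gate each channel with a sigmoid.
    """
    c = features.shape[-1]
    squeezed = features.reshape(-1, c).mean(axis=0)       # (C,) descriptors
    kernel = np.full(kernel_size, 1.0 / kernel_size)      # fixed weights (sketch)
    mixed = np.convolve(squeezed, kernel, mode="same")    # local channel interaction
    gates = 1.0 / (1.0 + np.exp(-mixed))                  # (C,) gates in (0, 1)
    return features * gates                               # reweight channels

rng = np.random.default_rng(5)
x = rng.random((8, 8, 16))   # hypothetical feature map: 8x8 spatial, 16 channels
out = eca(x)
```

Because the 1D convolution touches only a few channels at a time, the attention adds almost no parameters, which suits insertion at multiple scales of a U-Net.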