
Automated field-in-field planning for tangential breast radiation therapy based on digitally reconstructed radiograph.

Srikornkan P, Khamfongkhruea C, Intanin P, Thongsawad S

PubMed · May 12, 2025
The tangential field-in-field (FIF) technique is a widely used method in breast radiation therapy, known for its efficiency and the reduced number of fields required in treatment planning. However, it is labor-intensive, requiring manual shaping of the multileaf collimator (MLC) to minimize hot spots. This study aims to develop a novel automated FIF planning approach for tangential breast radiation therapy using Digitally Reconstructed Radiograph (DRR) images. A total of 78 patients were selected to train and test a fluence map prediction model based on a U-Net architecture. DRR images were used as input data to predict the fluence maps. The predicted fluence maps for each treatment plan were then converted into MLC positions and exported as Digital Imaging and Communications in Medicine (DICOM) files. These files were used to recalculate the dose distribution and assess dosimetric parameters for both the planning target volume (PTV) and organs at risk (OARs). The mean absolute error (MAE) between the predicted and original fluence maps was 0.007 ± 0.002. Gamma analysis indicated strong agreement between the predicted and original fluence maps, with gamma passing rates of 95.47 ± 4.27% (3%/3 mm), 94.65 ± 4.32% (3%/2 mm), and 83.4 ± 12.14% (2%/2 mm). Plan quality, in terms of tumor coverage and doses to OARs, showed no significant differences between the automated FIF and original plans. The automated plans yielded promising results, with plan quality comparable to the original plans.
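To make the reported agreement metrics concrete, here is a minimal numpy sketch of the fluence-map MAE and a simplified global 2D gamma analysis (the authors' pipeline is not public, so the function names, the brute-force neighborhood search, and the synthetic maps below are illustrative assumptions):

```python
import numpy as np

def mae(pred, ref):
    """Mean absolute error between two fluence maps of equal shape."""
    return np.mean(np.abs(pred - ref))

def gamma_pass_rate(ref, pred, dose_pct=3.0, dta_mm=3.0, pixel_mm=1.0):
    """Simplified global 2D gamma analysis via brute-force local search.

    dose_pct  -- dose-difference criterion, % of the reference maximum
    dta_mm    -- distance-to-agreement criterion in mm
    pixel_mm  -- pixel spacing in mm (assumed isotropic)
    """
    dd = dose_pct / 100.0 * ref.max()          # absolute dose tolerance
    r = int(np.ceil(dta_mm / pixel_mm))        # search radius in pixels
    ny, nx = ref.shape
    gamma = np.full(ref.shape, np.inf)
    for y in range(ny):
        for x in range(nx):
            best = np.inf
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if not (0 <= yy < ny and 0 <= xx < nx):
                        continue
                    dist = pixel_mm * np.hypot(dy, dx)
                    if dist > dta_mm:
                        continue
                    ddiff = pred[yy, xx] - ref[y, x]
                    best = min(best, (ddiff / dd) ** 2 + (dist / dta_mm) ** 2)
            gamma[y, x] = np.sqrt(best)
    return 100.0 * np.mean(gamma <= 1.0)       # pass rate in %

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                     # synthetic "original" fluence map
pred = ref + 0.01 * rng.standard_normal(ref.shape)  # synthetic "predicted" map
print(f"MAE: {mae(pred, ref):.4f}, 3%/3mm pass rate: {gamma_pass_rate(ref, pred):.1f}%")
```

Production tools (e.g., pymedphys) use far more efficient gamma implementations; the brute-force version above is only meant to show what a 3%/3 mm criterion actually computes.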

Accelerating prostate rs-EPI DWI with deep learning: Halving scan time, enhancing image quality, and validating in vivo.

Zhang P, Feng Z, Chen S, Zhu J, Fan C, Xia L, Min X

PubMed · May 12, 2025
This study aims to evaluate the feasibility and effectiveness of deep learning-based super-resolution techniques to reduce scan time while preserving image quality in high-resolution prostate diffusion-weighted imaging (DWI) with readout-segmented echo-planar imaging (rs-EPI). We retrospectively and prospectively analyzed prostate rs-EPI DWI data, employing deep learning super-resolution models, particularly the Multi-Scale Self-Similarity Network (MSSNet), to reconstruct low-resolution images into high-resolution images. Performance metrics such as the structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and normalized root mean squared error (NRMSE) were used to compare reconstructed images against the high-resolution ground truth (HRGT). Additionally, we evaluated the apparent diffusion coefficient (ADC) values and signal-to-noise ratio (SNR) across different models. The MSSNet model demonstrated superior performance in image reconstruction, achieving a maximum SSIM of 0.9798 and significant improvements in PSNR and NRMSE compared to other models. The deep learning approach reduced the rs-EPI DWI scan time by 54.4% while maintaining image quality comparable to the HRGT. Pearson correlation analysis revealed a strong correlation between ADC values from deep learning-reconstructed images and the ground truth, with differences remaining within 5%. Furthermore, all models showed significant SNR enhancement, with MSSNet performing best across most cases. Deep learning-based super-resolution techniques, particularly MSSNet, effectively reduce scan time and enhance image quality in prostate rs-EPI DWI, making them promising tools for clinical applications.
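The SSIM, PSNR, and NRMSE comparisons against the HRGT can be reproduced with scikit-image's standard metric functions; the arrays below are random stand-ins for the reconstructed and ground-truth images:

```python
import numpy as np
from skimage.metrics import (structural_similarity,
                             peak_signal_noise_ratio,
                             normalized_root_mse)

rng = np.random.default_rng(0)
hr_gt = rng.random((128, 128)).astype(np.float32)   # stand-in for the HRGT image
recon = hr_gt + 0.02 * rng.standard_normal(hr_gt.shape).astype(np.float32)

data_range = float(hr_gt.max() - hr_gt.min())       # intensity range for SSIM/PSNR
print("SSIM :", structural_similarity(hr_gt, recon, data_range=data_range))
print("PSNR :", peak_signal_noise_ratio(hr_gt, recon, data_range=data_range))
print("NRMSE:", normalized_root_mse(hr_gt, recon))
```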

Promptable segmentation of CT lung lesions based on improved U-Net and Segment Anything model (SAM).

Yan W, Xu Y, Yan S

PubMed · May 11, 2025
Background: Computed tomography (CT) is widely used in the clinical diagnosis of lung diseases. Automatic segmentation of lesions in CT images aids the development of intelligent lung disease diagnosis. Objective: This study addresses imprecise segmentation in CT images caused by blurred lesion detail that is easily confused with surrounding tissue. Methods: We proposed a promptable segmentation method based on an improved U-Net and the Segment Anything model (SAM) to improve the segmentation accuracy of lung lesions in CT images. The improved U-Net incorporates a multi-scale attention module built on the Efficient Channel Attention (ECA) mechanism to improve recognition of detailed features at lesion edges, and a promptable clipping module that incorporates physicians' prior knowledge into the model to reduce background interference. SAM has a strong ability to recognize lesions, pulmonary atelectasis, and organs; we combine the two to improve overall segmentation performance. Results: On the LUNA16 dataset and a lung CT dataset provided by the Shanghai Chest Hospital, the proposed method achieves Dice coefficients of 80.12% and 92.06%, and positive predictive values of 81.25% and 91.91%, outperforming most existing mainstream segmentation methods. Conclusion: The proposed method can improve the segmentation accuracy of lung lesions in CT images, raise the automation level of existing computer-aided diagnostic systems, and provide more effective assistance to radiologists in clinical practice.
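ECA is a published channel-attention design (global average pooling followed by a small 1D convolution across channels), so a standard PyTorch sketch of the module is shown below; how the authors wire it into their multi-scale U-Net blocks is not specified in the abstract and is not reproduced here:

```python
import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: per-channel gating via a small 1D conv.

    Standard formulation from the ECA-Net paper; the integration into this
    study's improved U-Net is an unknown, so only the module itself is shown.
    """
    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        # Adaptive kernel-size heuristic from the ECA paper (forced odd).
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                      # x: (N, C, H, W)
        y = x.mean(dim=(2, 3))                 # global average pool -> (N, C)
        y = self.conv(y.unsqueeze(1))          # 1D conv across the channel axis
        w = torch.sigmoid(y).squeeze(1)        # channel weights in (0, 1)
        return x * w[:, :, None, None]         # rescale the feature maps

feat = torch.randn(2, 64, 32, 32)
print(ECA(64)(feat).shape)                     # torch.Size([2, 64, 32, 32])
```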

Learning-based multi-material CBCT image reconstruction with ultra-slow kV switching.

Ma C, Zhu J, Zhang X, Cui H, Tan Y, Guo J, Zheng H, Liang D, Su T, Sun Y, Ge Y

PubMed · May 11, 2025
Objective: The purpose of this study is to perform multiple (≥3) material decomposition with a deep learning method for spectral cone-beam CT (CBCT) imaging based on ultra-slow kV switching. Approach: A novel deep neural network called SkV-Net is developed to reconstruct multiple material density images from the ultra-sparse spectral CBCT projections acquired with the ultra-slow kV switching technique. SkV-Net has a U-Net backbone, and a multi-head axial attention module is adopted to enlarge the perceptual field. It takes the CT images reconstructed from each kV as input and outputs the basis material images automatically based on their energy-dependent attenuation characteristics. Numerical simulations and experimental studies were carried out to evaluate the performance of this new approach. Main results: SkV-Net is able to generate four material density images, i.e., fat, muscle, bone, and iodine, from five spans of kV-switched spectral projections. Physical experiments show that the decomposition errors of iodine and CaCl₂ are less than 6%, indicating the high precision of this approach in distinguishing materials. Significance: SkV-Net provides a promising multi-material decomposition approach for spectral CBCT imaging systems implemented with the ultra-slow kV switching scheme.
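For context on what SkV-Net learns to do, the classical per-voxel counterpart is a linear least-squares decomposition of multi-kV attenuation against known basis-material coefficients. The sketch below uses made-up attenuation values purely for illustration:

```python
import numpy as np

# Rows: five kV spectra; columns: attenuation of fat, muscle, bone, iodine.
# These coefficients are illustrative placeholders, not measured values.
A = np.array([
    [0.21, 0.25, 0.48, 4.10],   # ~60 kV
    [0.20, 0.23, 0.40, 3.20],   # ~70 kV
    [0.19, 0.22, 0.34, 2.50],   # ~80 kV
    [0.18, 0.21, 0.30, 2.00],   # ~90 kV
    [0.18, 0.20, 0.27, 1.65],   # ~100 kV
])

true_density = np.array([0.3, 0.6, 0.2, 0.01])   # g/cm^3 per basis material
# Simulated per-voxel attenuation measurements at the five kV settings.
mu = A @ true_density + 1e-4 * np.random.default_rng(0).standard_normal(5)

# Per-voxel decomposition: least squares with a non-negativity clamp.
density, *_ = np.linalg.lstsq(A, mu, rcond=None)
density = np.clip(density, 0.0, None)
print(np.round(density, 4))                       # ~ [0.3, 0.6, 0.2, 0.01]
```

With only five ultra-sparse kV spans this linear inversion is ill-conditioned and noise-sensitive, which is the gap the learned SkV-Net mapping is meant to close.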

Study on predicting breast cancer Ki-67 expression using a combination of radiomics and deep learning based on multiparametric MRI.

Wang W, Wang Z, Wang L, Li J, Pang Z, Qu Y, Cui S

PubMed · May 11, 2025
To develop a multimodal model combining multiparametric breast MRI radiomics and deep learning for predicting preoperative Ki-67 expression status in breast cancer, with the potential to advance individualized treatment and precision medicine for breast cancer patients. We included 176 invasive breast cancer patients who underwent breast MRI and had Ki-67 results. The dataset was randomly split into training (70%) and test (30%) sets. Features from T1-weighted imaging (T1WI), diffusion-weighted imaging (DWI), T2-weighted imaging (T2WI), and dynamic contrast-enhanced MRI (DCE-MRI) were fused. Separate models were created for each sequence (T1, DWI, T2, and DCE), and a multiparametric MRI (mp-MRI) model was then developed by combining features from all sequences. Models were trained using five-fold cross-validation and evaluated on the test set with the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and F1 score. DeLong's test compared the mp-MRI model with the other models, with P < 0.05 indicating statistical significance. All five models demonstrated good performance, with AUCs of 0.83 (T1), 0.85 (DWI), 0.90 (T2), 0.92 (DCE), and 0.96 (mp-MRI). DeLong's test indicated statistically significant differences between the mp-MRI model and the other four models (P < 0.05). The multiparametric breast MRI radiomics and deep learning-based multimodal model performs well in predicting preoperative Ki-67 expression status in breast cancer.
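A minimal scikit-learn sketch of the early-fusion idea: concatenate per-sequence feature blocks and score with five-fold cross-validated AUC. The feature matrices and labels are random placeholders, and the classifier choice is an assumption (the abstract does not name the final model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 176                                    # cohort size reported in the abstract
# Placeholder feature blocks standing in for T1WI, DWI, T2WI, and DCE features.
blocks = {seq: rng.standard_normal((n, 50)) for seq in ["T1", "DWI", "T2", "DCE"]}
y = rng.integers(0, 2, n)                  # placeholder Ki-67 high/low labels

X_mp = np.hstack(list(blocks.values()))    # early fusion: concatenate sequences
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(clf, X_mp, y, cv=cv, scoring="roc_auc")
print(f"mp-MRI 5-fold AUC: {aucs.mean():.2f} +/- {aucs.std():.2f}")
```

The same loop run on each block alone gives the single-sequence baselines that the mp-MRI fusion is compared against.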

The March to Harmonized Imaging Standards for Retinal Imaging.

Gim N, Ferguson AN, Blazes M, Lee CS, Lee AY

PubMed · May 11, 2025
The adoption of standardized imaging protocols in retinal imaging is critical to overcoming challenges posed by fragmented data formats across devices and manufacturers. The lack of standardization hinders clinical interoperability, collaborative research, and the development of artificial intelligence (AI) models that depend on large, high-quality datasets. The Digital Imaging and Communications in Medicine (DICOM) standard offers a robust solution for ensuring interoperability in medical imaging. Although DICOM is widely utilized in radiology and cardiology, its adoption in ophthalmology remains limited. Retinal imaging modalities such as optical coherence tomography (OCT), fundus photography, and OCT angiography (OCTA) have revolutionized retinal disease management but are constrained by proprietary and non-standardized formats. This review underscores the necessity for harmonized imaging standards in ophthalmology, detailing DICOM standards for retinal imaging, including ophthalmic photography (OP), OCT, and OCTA, and their requisite metadata. Additionally, the potential of DICOM standardization for advancing AI applications in ophthalmology is explored. A notable example is the Artificial Intelligence Ready and Equitable Atlas for Diabetes Insights (AI-READI) dataset, the first publicly available standards-compliant DICOM retinal imaging dataset. This dataset encompasses diverse retinal imaging modalities, including color fundus photography, infrared, autofluorescence, OCT, and OCTA. By leveraging multimodal retinal imaging, AI-READI provides a transformative resource for studying diabetes and its complications, setting a blueprint for future datasets aimed at harmonizing imaging formats and enabling AI-driven breakthroughs in ophthalmology. Our manuscript also addresses challenges in retinal imaging for diabetic patients, retinal imaging-based AI applications for studying diabetes, and potential advancements in retinal imaging standardization.
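As an illustration of the interoperability DICOM buys, the standard metadata of a retinal DICOM file can be read with pydicom; the file path below is hypothetical, and the attributes shown are standard DICOM fields:

```python
import pydicom

# Hypothetical path to a standards-compliant retinal DICOM file
# (e.g., one drawn from the AI-READI dataset).
ds = pydicom.dcmread("retina_oct.dcm")

# Core interoperability metadata defined by the DICOM standard.
print("Modality     :", ds.Modality)       # 'OPT' for OCT, 'OP' for fundus photo
print("SOP Class UID:", ds.SOPClassUID)
print("Manufacturer :", ds.get("Manufacturer", "n/a"))
print("Laterality   :", ds.get("ImageLaterality", ds.get("Laterality", "n/a")))
print("Rows x Cols  :", ds.Rows, "x", ds.Columns)
```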

A systematic review and meta-analysis of the utility of quantitative, imaging-based approaches to predict radiation-induced toxicity in lung cancer patients.

Tong D, Midroni J, Avison K, Alnassar S, Chen D, Parsa R, Yariv O, Liu Z, Ye XY, Hope A, Wong P, Raman S

PubMed · May 11, 2025
To conduct a systematic review and meta-analysis of the performance of radiomics, dosiomics, and machine learning in predicting toxicity from thoracic radiotherapy. An electronic database search was conducted and dual-screened by independent authors to identify eligible studies. Data were extracted, and study quality was assessed using TRIPOD for machine learning studies, RQS for radiomics, and RoB for dosiomics. 10,703 studies were identified, and 5,252 entered screening. 106 studies including 23,373 patients were eligible for systematic review. The most commonly predicted toxicity was radiation pneumonitis (81 studies), followed by esophagitis (12) and lymphopenia (4). Forty-two studies on radiation pneumonitis were eligible for meta-analysis, with a pooled area under the curve (AUC) of 0.82 (95% CI 0.79-0.85). Machine learning studies performed best, with classical and deep learning models performing similarly, and model performance trended upward with year of publication. Study quality varied across the three categories, with dosiomic studies scoring highest. Publication bias was not observed. The majority of existing literature using radiomics, dosiomics, and machine learning has focused on radiation pneumonitis prediction. Future research should focus on toxicity prediction for other organs at risk and on the adoption of these models into clinical practice.
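A pooled AUC like the 0.82 (95% CI 0.79-0.85) reported here is typically obtained by inverse-variance pooling, often on the logit scale with a DerSimonian-Laird random-effects adjustment. A sketch with placeholder study-level inputs (not the review's data):

```python
import numpy as np

def pool_auc_random_effects(aucs, ses):
    """DerSimonian-Laird random-effects pooling on logit-transformed AUCs.

    aucs -- per-study AUC estimates
    ses  -- per-study standard errors on the logit scale (assumed available)
    """
    aucs, ses = np.asarray(aucs), np.asarray(ses)
    theta = np.log(aucs / (1 - aucs))            # logit transform
    w = 1.0 / ses**2                             # fixed-effect weights
    theta_fe = np.sum(w * theta) / np.sum(w)
    q = np.sum(w * (theta - theta_fe) ** 2)      # Cochran's Q heterogeneity
    k = len(aucs)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)           # between-study variance
    w_re = 1.0 / (ses**2 + tau2)                 # random-effects weights
    pooled = np.sum(w_re * theta) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
    inv = lambda t: 1 / (1 + np.exp(-t))         # back-transform to AUC scale
    return inv(pooled), (inv(lo), inv(hi))

# Placeholder study-level inputs for illustration only.
pooled, ci = pool_auc_random_effects([0.78, 0.85, 0.81, 0.84],
                                     [0.10, 0.08, 0.12, 0.09])
print(f"pooled AUC {pooled:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```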

Intra- and Peritumoral Radiomics Based on Ultrasound Images for Preoperative Differentiation of Follicular Thyroid Adenoma, Carcinoma, and Follicular Tumor With Uncertain Malignant Potential.

Fu Y, Mei F, Shi L, Ma Y, Liang H, Huang L, Fu R, Cui L

PubMed · May 10, 2025
Differentiating between follicular thyroid adenoma (FTA), carcinoma (FTC), and follicular tumor with uncertain malignant potential (FT-UMP) remains challenging due to their overlapping ultrasound characteristics. This retrospective study aimed to enhance preoperative diagnostic accuracy using intra- and peritumoral radiomics based on ultrasound images. We collected post-thyroidectomy ultrasound images from 774 patients diagnosed with FTA (n = 429), FTC (n = 158), or FT-UMP (n = 187) between January 2018 and December 2023. Six peritumoral regions were generated by expanding the tumor boundary by 5%-30% in 5% increments, with the Segment Anything model (SAM) using prompt learning to detect the field of view and constrain the expanded boundaries. A stepwise classification strategy addressed three tasks: distinguishing FTA from the other types (task 1), differentiating FTC from FT-UMP (task 2), and classifying all three tumors (task 3). Diagnostic models were developed by combining radiomic features from tumor and peritumoral regions with clinical characteristics. Clinical characteristics combined with intratumoral and 5% peritumoral radiomic features performed best across all tasks (test set: AUCs of 0.93 for task 1 and 0.90 for task 2; diagnostic accuracy, 79.9%). The DeLong test indicated that adding peritumoral radiomics significantly improved on intratumoral radiomics and clinical characteristics alone (p < 0.04). The 5% peritumoral regions showed the best performance, though not all comparisons were significant (p = 0.01-0.91). Ultrasound-based intratumoral and peritumoral radiomics can significantly enhance preoperative diagnostic accuracy for FTA, FTC, and FT-UMP, leading to improved treatment strategies and patient outcomes. Furthermore, the 5% peritumoral area may indicate regions of potential tumor invasion requiring further investigation.
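One plausible reading of "expanded by 5%-30%" is a margin proportional to the tumor's equivalent diameter, which can be generated from a binary mask with a distance transform. The SAM-based boundary constraint from the paper is not reproduced, and the margin rule below is an assumption:

```python
import numpy as np
from scipy import ndimage

def peritumoral_ring(mask, expand_frac=0.05, pixel_mm=1.0):
    """Ring around a binary tumor mask, expanded by a fraction of its
    equivalent-circle diameter (one plausible reading of 'expanded by 5%')."""
    area_mm2 = mask.sum() * pixel_mm**2
    eq_diam = 2.0 * np.sqrt(area_mm2 / np.pi)        # equivalent-circle diameter
    margin = expand_frac * eq_diam                   # ring width in mm
    # Distance from each background pixel to the nearest tumor pixel.
    dist = ndimage.distance_transform_edt(~mask) * pixel_mm
    return (dist > 0) & (dist <= margin)

mask = np.zeros((128, 128), dtype=bool)
yy, xx = np.ogrid[:128, :128]
mask[(yy - 64) ** 2 + (xx - 64) ** 2 <= 20**2] = True   # synthetic circular tumor
ring = peritumoral_ring(mask, expand_frac=0.05)
print("tumor px:", mask.sum(), "ring px:", ring.sum())
```

Running it with expand_frac at 0.05 through 0.30 reproduces the six nested peritumoral regions from which the per-region radiomic features would be extracted.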

Radiomics prediction of surgery in ulcerative colitis refractory to medical treatment.

Sakamoto K, Okabayashi K, Seishima R, Shigeta K, Kiyohara H, Mikami Y, Kanai T, Kitagawa Y

PubMed · May 10, 2025
Surgical decisions in drug-resistant ulcerative colitis (UC) are determined by complex factors. This study evaluated the performance of radiomics analysis in predicting whether hospitalized patients with UC would end up in the surgical or the medical treatment group by discharge. This single-center retrospective cohort study used admission CT scans of patients with UC admitted from 2015 to 2022. The prediction target was whether the patient would undergo surgery by the time of discharge. Radiomics features were extracted using the rectal wall at the level of the tailbone tip on CT as the region of interest. CT data were randomly split into a training cohort and a validation cohort, and LASSO regression was performed on the training cohort to derive a formula for calculating the radiomics score. A total of 147 patients were selected, and data from 184 CT scans were collected; 157 CT scans matched the selection criteria and were included. Five features were used for the radiomics score. Univariate logistic regression analysis of clinical information detected a significant influence of severity (p < 0.001), number of drugs used until surgery (p < 0.001), Lichtiger score (p = 0.024), and hemoglobin (p = 0.010). Using a nomogram combining these items, the discriminatory power between the surgery and medical treatment groups was AUC 0.822 (95% confidence interval (CI) 0.841-0.951) for the training cohort and AUC 0.868 (95% CI 0.729-1.000) for the validation cohort, indicating good discrimination. Radiomics analysis of admission CT images of patients with UC, combined with clinical data, showed high predictive ability for a treatment course of surgery versus medical treatment.
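A minimal sketch of deriving a LASSO-based radiomics score on a training cohort, here using L1-penalized logistic regression as the selector; the feature matrix, labels, and penalty strength are placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.standard_normal((157, 100))    # placeholder: 157 scans x 100 radiomic features
y = rng.integers(0, 2, 157)            # placeholder surgery / medical-treatment labels

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)

# L1-penalized logistic regression plays the role of the LASSO selector here;
# C controls sparsity and is a placeholder value.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(scaler.transform(X_tr), y_tr)

coef = lasso.coef_.ravel()
kept = np.flatnonzero(coef)
print(f"{kept.size} features retained")
# Radiomics score = sparse linear combination of the selected features.
rad_score = scaler.transform(X_va) @ coef + lasso.intercept_
```

The retained nonzero coefficients correspond to the "five features" the study reports; the resulting score then enters the nomogram alongside the clinical variables.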

A novel framework for esophageal cancer grading: combining CT imaging, radiomics, reproducibility, and deep learning insights.

Alsallal M, Ahmed HH, Kareem RA, Yadav A, Ganesan S, Shankhyan A, Gupta S, Joshi KK, Sameer HN, Yaseen A, Athab ZH, Adil M, Farhood B

PubMed · May 10, 2025
This study aims to create a reliable framework for grading esophageal cancer. The framework combines feature extraction, deep learning with attention mechanisms, and radiomics to ensure accuracy, interpretability, and practical use in tumor analysis. This retrospective study used data from 2,560 esophageal cancer patients across multiple clinical centers, collected from 2018 to 2023. The dataset included CT scan images and clinical information, representing a variety of cancer grades and types. Standardized CT imaging protocols were followed, and experienced radiologists manually segmented the tumor regions; only high-quality data were used. A total of 215 radiomic features were extracted using the SERA platform. The study used two deep learning models, DenseNet121 and EfficientNet-B0, enhanced with attention mechanisms to improve accuracy. A combined classification approach used both radiomic and deep learning features, and machine learning models such as Random Forest, XGBoost, and CatBoost were applied. These models were validated with strict training and testing procedures to ensure effective cancer grading. Radiomic features were classified into four reliability levels based on their intraclass correlation coefficient (ICC) values; most had excellent (ICC > 0.90) or good (0.75 < ICC ≤ 0.90) reliability. Deep learning features extracted from DenseNet121 and EfficientNet-B0 were also categorized, and some showed poor reliability. XGBoost with Recursive Feature Elimination (RFE) gave the best results for radiomic features, with an area under the curve (AUC) of 91.36%. For deep learning features, XGBoost with Principal Component Analysis (PCA) performed best with DenseNet121, while CatBoost with RFE performed best with EfficientNet-B0, achieving an AUC of 94.20%. Combining radiomic and deep features led to significant improvements, with XGBoost achieving the highest AUC of 96.70%, accuracy of 96.71%, and sensitivity of 95.44%. Ensembling DenseNet121 and EfficientNet-B0 achieved the best overall performance, with an AUC of 95.14% and accuracy of 94.88%. This study improves esophageal cancer grading by combining radiomics and deep learning, enhancing diagnostic accuracy, reproducibility, and interpretability while supporting personalized treatment planning through better tumor characterization.
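Two of the building blocks here are easy to sketch: binning features by ICC reliability bands, and wrapping XGBoost in recursive feature elimination (RFE). The thresholds follow the commonly used Koo & Li bands (which match the cutoffs quoted above), and all data below are placeholders:

```python
import numpy as np
from sklearn.feature_selection import RFE
from xgboost import XGBClassifier

def icc_band(icc):
    """Reliability bands commonly applied to ICC values (Koo & Li style)."""
    if icc > 0.90:
        return "excellent"
    if icc > 0.75:
        return "good"
    if icc > 0.50:
        return "moderate"
    return "poor"

iccs = np.array([0.95, 0.88, 0.62, 0.41])      # placeholder per-feature ICCs
print([icc_band(v) for v in iccs])             # ['excellent', 'good', 'moderate', 'poor']

# RFE around an XGBoost classifier, as in the abstract's best radiomics pipeline;
# the feature counts and elimination step are placeholder choices.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 215))            # placeholder: 215 radiomic features
y = rng.integers(0, 2, 200)
selector = RFE(XGBClassifier(n_estimators=100, eval_metric="logloss"),
               n_features_to_select=30, step=10)
selector.fit(X, y)
print("selected:", selector.support_.sum(), "features")
```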