
Haotian Feng, Ke Sheng

arXiv preprint · Oct 15 2025
We develop and validate a novel spherical radiomics framework for predicting key molecular biomarkers using multiparametric MRI. Conventional Cartesian radiomics extract tumor features on orthogonal grids, which do not fully capture the tumor's radial growth patterns and can be insensitive to evolving molecular signatures. In this study, we analyzed glioblastoma (GBM) radiomic features on concentric 2D shells, which were then mapped onto 2D planes for radiomics analysis. Radiomic features were extracted using PyRadiomics from four different regions in GBM. Feature selection was performed using ANOVA F-statistics. Classification was conducted with multiple machine-learning models. Model interpretability was evaluated through SHAP analysis, clustering analysis, feature significance profiling, and comparison between radiomic patterns and underlying biological processes. Spherical radiomics consistently outperformed conventional 2D and 3D Cartesian radiomics across all prediction tasks. The best framework reached an AUC of 0.85 for MGMT, 0.80 for EGFR, 0.80 for PTEN, and 0.83 for survival prediction. GLCM-derived features were identified as the most informative predictors. Radial transition analysis using the Mann-Whitney U-test demonstrated that transition slopes between the T1-weighted contrast-enhancing and T2/FLAIR hyperintense lesion regions, as well as between the T2-hyperintense lesion and a 2 cm peritumoral expansion region, are significantly associated with biomarker status. Furthermore, the observed radiomic changes along the radial direction closely reflected known biological characteristics. Radiomic features extracted on spherical surfaces at varying radial distances from the GBM tumor centroid correlate better with important tumor molecular markers and patient survival than conventional Cartesian analysis.
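
A minimal sketch of the ANOVA F-statistic feature-selection step named above, using scikit-learn; the array shapes, the choice of k = 20, and the random data are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))   # 100 patients x 500 radiomic features
y = rng.integers(0, 2, size=100)  # binary biomarker status (e.g., MGMT)

# Rank features by the ANOVA F-statistic and keep the top k
selector = SelectKBest(score_func=f_classif, k=20)
X_selected = selector.fit_transform(X, y)

top = np.argsort(selector.scores_)[::-1][:5]
print("Top-5 feature indices by ANOVA F:", top)
print("Selected matrix shape:", X_selected.shape)  # (100, 20)
```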

Chun Wai Chin, Haniza Yazid, Hoi Leong Lee

arXiv preprint · Oct 15 2025
Medical image enhancement is crucial for improving the quality and interpretability of diagnostic images, ultimately supporting early detection, accurate diagnosis, and effective treatment planning. Despite advancements in imaging technologies such as X-ray, CT, MRI, and ultrasound, medical images often suffer from challenges like noise, artifacts, and low contrast, which limit their diagnostic potential. Addressing these challenges requires robust preprocessing, denoising algorithms, and advanced enhancement methods, with deep learning techniques playing an increasingly significant role. This systematic literature review, following the PRISMA approach, investigates the key challenges, recent advancements, and evaluation metrics in medical image enhancement. By analyzing findings from 39 peer-reviewed studies, this review provides insights into the effectiveness of various enhancement methods across different imaging modalities and the importance of evaluation metrics in assessing their impact. Key issues like low contrast and noise are identified as the most frequent, with MRI and multi-modal imaging receiving the most attention, while specialized modalities such as histopathology, endoscopy, and bone scintigraphy remain underexplored. Of the 39 studies, 29 utilize conventional mathematical methods, 9 focus on deep learning techniques, and 1 explores a hybrid approach. In terms of image quality assessment (IQA), 18 studies employ both reference-based and non-reference-based metrics, 9 rely solely on reference-based metrics, and 12 use only non-reference-based metrics, with a total of 65 IQA metrics introduced, predominantly non-reference-based. This review highlights current limitations, research gaps, and potential future directions for advancing medical image enhancement.
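
To make the reference-based vs non-reference-based distinction concrete, here is a small illustrative sketch: PSNR and SSIM (reference-based, via scikit-image) require a ground-truth image, while a histogram-entropy score (a simple no-reference proxy chosen here for illustration, not a metric from the review) is computed from the enhanced image alone. The synthetic images are placeholders.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((128, 128))  # stand-in for a ground-truth image
enhanced = np.clip(reference + 0.05 * rng.normal(size=(128, 128)), 0.0, 1.0)

# Reference-based metrics: need the pristine image for comparison
print("PSNR:", peak_signal_noise_ratio(reference, enhanced, data_range=1.0))
print("SSIM:", structural_similarity(reference, enhanced, data_range=1.0))

# No-reference proxy: Shannon entropy of the intensity histogram
counts, _ = np.histogram(enhanced, bins=256, range=(0.0, 1.0))
p = counts / counts.sum()
p = p[p > 0]
print("Histogram entropy (bits):", -np.sum(p * np.log2(p)))
```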

Lee HS, Song SH, Park C, Seo J, Kim WH, Kim J, Kim S, Han K, Lee YH

PubMed · Oct 15 2025
Large language models (LLMs) such as GPT-4 are increasingly used to simplify radiology reports and improve patient comprehension. However, excessive simplification may undermine informed consent and autonomy by compromising clinical accuracy. This study investigates the ethical implications of readability thresholds in AI-generated radiology reports, identifying the minimum reading level at which clinical accuracy is preserved. We retrospectively analyzed 500 computed tomography and magnetic resonance imaging reports from a tertiary hospital. Each report was transformed into 17 versions (reading grade levels 1-17) using GPT-4 Turbo. Readability metrics and word counts were calculated for each version. Clinical accuracy was evaluated using radiologist assessments and PubMed-BERTScore. We identified the first grade level at which a statistically significant decline in accuracy occurred, determining the lowest level that preserved both accuracy and readability. We further assessed potential clinical consequences in reports simplified to the 7th-grade level. Readability scores showed a strong correlation with the prompted reading levels (r = 0.80-0.84). Accuracy remained stable from grade 13 down to grade 11 but declined significantly below grade 11. At the 7th-grade level, 20% of reports contained inaccuracies with the potential to alter patient management, primarily due to omission, incorrect conversion, or inappropriate generalization. The 11th-grade level emerged as the current lower bound for preserving accuracy in LLM-generated radiology reports. Our findings highlight an ethical tension between improving readability and maintaining clinical accuracy. While 7th-grade readability remains an ethical ideal, current AI tools cannot reliably produce accurate reports below the 11th-grade level. Ethical implementation of AI-generated reporting should include layered communication strategies and model transparency to safeguard patient autonomy and comprehension.
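
The abstract does not name its specific readability formulas, but grade-level scoring of report text typically looks like the following sketch using the textstat package; the sample report sentences are invented for illustration.

```python
import textstat

original = ("MRI demonstrates a T2-hyperintense lesion in the left "
            "periventricular white matter without restricted diffusion.")
simplified = ("The scan shows a bright spot in the brain tissue. "
              "It does not look like a stroke.")

for label, text in [("original", original), ("simplified", simplified)]:
    # Flesch-Kincaid grade approximates the US school grade needed to read it
    grade = textstat.flesch_kincaid_grade(text)
    words = textstat.lexicon_count(text)
    print(f"{label}: grade {grade:.1f}, {words} words")
```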

Nakai H, Froemming AT, Takahashi H, Adamo DA, Kawashima A, LeGout JD, Kurata Y, Gloe JN, Borisch EA, Riederer SJ, Takahashi N

PubMed · Oct 15 2025
To evaluate the impact of gas-induced artifacts in diffusion-weighted imaging (DWI) on prostate MRI cancer detection rate (CDR). This three-center retrospective study included 34,697 MRI examinations between 2017 and 2022. Seven radiologists categorized the degree of gas-induced artifacts of 1595 DWI series into optimal, mild, moderate, and severe. Then, a deep learning model categorizing artifact severity was developed to help identify series with gas-induced artifacts. After excluding series used for training the model, the model was applied to 12,594 DWI series, which were performed for patients without documented prostate cancer. Of these, radiologists reviewed the 300 series with the poorest predicted image quality and recategorized them if necessary. Case-control matching was performed to compare CDR. Examinations categorized by radiologists as mild to severe were used as target groups, while those categorized as optimal by either radiologists or the model were used to construct matched control groups. CDR was defined as the number of examinations assigned PI-RADS ≥ 3 with pathologically proven clinically significant cancer divided by the total number of examinations. The degree of CDR reduction was evaluated using the chi-squared test. The target groups included 632 examinations (mean age, 66.0 ± 9.5 years). The CDR in the target and matched control groups, respectively, for each artifact grade were as follows: severe (n = 141) vs optimal (n = 705), 0.24 vs 0.26, p = 0.58; moderate (n = 161) vs optimal (n = 966), 0.25 vs 0.24, p = 0.84; mild (n = 330) vs optimal (n = 1320), 0.25 vs 0.22, p = 0.17. No evidence was found that gas-induced DWI artifacts reduce the CDR of prostate MRI. The CDR of prostate MRI was not significantly reduced by susceptibility artifacts from rectal gas, which is one consideration for rectal preparation protocols. Gas-induced susceptibility artifacts are a common issue in prostate MRI. Although the CDR decreased as the degree of artifacts increased, there was no significant reduction even in severe artifact cases.
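
A hedged sketch of the chi-squared comparison of CDRs between the severe-artifact group and its matched optimal controls, with positive-case counts back-calculated from the reported rates (0.24 of 141 and 0.26 of 705); the rounded counts are assumptions.

```python
from scipy.stats import chi2_contingency

severe_pos, severe_n = 34, 141     # ~0.24 CDR in the severe-artifact group
optimal_pos, optimal_n = 183, 705  # ~0.26 CDR in matched optimal controls

table = [
    [severe_pos, severe_n - severe_pos],    # detected / not detected
    [optimal_pos, optimal_n - optimal_pos],
]
chi2, p, dof, expected = chi2_contingency(table)
# Non-significant, consistent in direction with the reported p = 0.58
print(f"chi2 = {chi2:.3f}, p = {p:.2f}")
```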

Fan M, Wu X, Pan D, Du J, Gao X, Wang Y, Li L

PubMed · Oct 15 2025
Genomic traits are commonly observed across cancer types, yet current pan-cancer analyses primarily focus on shared molecular features, often overlooking potential imaging characteristics shared across cancers. This retrospective study included 793 patients from the I-SPY1 breast cancer cohort (n = 145), the Duke-UPenn glioblastoma (GBM) cohort (n = 452), and an external validation cohort (n = 196). We developed and validated multiparametric MRI-based radiomic and deep learning models to extract both cancer-type-common (CTC) and cancer-type-specific (CTS) features associated with the prognosis of both cancers. The biological relevance of the identified CTC features was investigated through pathway analysis. Seven CTC radiomic features were identified, demonstrating superior survival prediction compared to CTS features, with AUCs of 0.876 for breast cancer and 0.732 for GBM. The deep feature model stratified patients into distinct survival groups (p = 0.00029 for breast cancer; p = 0.0019 for GBM), with CTC features contributing more than CTS features. Independent validation confirmed their robustness (AUC: 0.784). CTC-associated genes were enriched in key pathways, including focal adhesion, suggesting a role in breast cancer brain metastasis. Our study reveals pan-cancer imaging phenotypes that predict survival and provide biological insights, highlighting their potential in precision oncology.
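
Survival stratification of the kind reported above is typically tested with a log-rank comparison between predicted risk groups; a minimal sketch with the lifelines library follows, using simulated follow-up data rather than the study's cohorts.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Simulated follow-up times (months) and event flags for two risk groups
t_low = rng.exponential(scale=60.0, size=80)   # predicted low-risk
t_high = rng.exponential(scale=30.0, size=80)  # predicted high-risk
e_low = rng.integers(0, 2, size=80)            # 1 = death observed, 0 = censored
e_high = rng.integers(0, 2, size=80)

result = logrank_test(t_low, t_high,
                      event_observed_A=e_low, event_observed_B=e_high)
print(f"log-rank p-value: {result.p_value:.4f}")
```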

Feng L, Hao G, Jian W

PubMed · Oct 15 2025
Detecting and segmenting brain tumors from 3D MRI images is a challenging and time-intensive task for clinicians. This research introduces an innovative hybrid deep learning architecture comprising a 3D fully convolutional neural network (3D-FCNN), an interval type-2 fuzzy weighting system, and a hybrid transit search optimization-particle swarm optimization (Hybrid-TSO-PSO) algorithm. The proposed models, 3D-FCNN-Hybrid-TSO-PSO and 3D-FCNN-SVM, employ type-2 fuzzy weighting to reduce the number of trainable parameters and accelerate training on volumetric MRI data. The Hybrid-TSO-PSO optimization approach integrates the heuristic strengths of TSO with the rapid convergence of PSO, improving learning stability and the precision of segmentation and classification. Assessments were conducted on the BraTS 2019, BraTS 2020, and a portion of the BraTS 2021 datasets, comprising 300 3D MRI images (230 high-grade glioma (HGG) and 70 low-grade glioma (LGG) specimens). During the testing phase, the 3D-FCNN-Hybrid-TSO-PSO model attained an accuracy of 98.1%, sensitivity of 98.9%, specificity of 95.0%, and a Dice score of 0.987, whereas the 3D-FCNN-SVM model achieved an accuracy of 95.2%. This method not only enhances accuracy but also reduces training time by as much as sixfold relative to traditional architectures, serving as an efficient and precise diagnostic aid for the identification and classification of brain tumors.
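
For reference, the Dice score reported above measures volumetric overlap between the predicted and ground-truth masks; a minimal NumPy sketch with random placeholder masks is:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

rng = np.random.default_rng(0)
truth = rng.random((64, 64, 64)) > 0.7   # synthetic 3D tumor mask
pred = truth.copy()
flip = rng.random(truth.shape) < 0.02    # corrupt ~2% of voxels
pred[flip] = ~pred[flip]
print(f"Dice: {dice_score(pred, truth):.3f}")
```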

Felek T, Tercanlı H, Gök RŞ

PubMed · Oct 15 2025
The aim of this systematic review is to compare the efficacy of convolutional neural networks (CNN) and Vision Transformers (ViT) in the field of dental imaging, and to examine in depth the potential, advantages, and limitations of both models in this domain. The search string used in the study was "(("Vision Transformer" OR ViT OR "Transformer architecture") AND ("Convolutional Neural Network" OR CNN OR ConvNet) AND (Dental OR Dentistry OR "Maxillofacial" OR "Oral Radiology") AND (Image OR Imaging OR Radiograph))". The search was conducted in January 2025. Two investigators independently evaluated the full texts of all eligible articles and excluded those that did not meet the inclusion/exclusion criteria. Of 2596 articles, 21 met the inclusion criteria. By task category, 14 of the 21 reviewed studies (66.7%) addressed classification, while 7 (33.3%) addressed segmentation. Panoramic radiography was the most commonly used imaging modality (52.3%), and ViT-based models were observed to have the highest performance (58%). ViT-based deep learning models tend to exhibit higher performance in many dental image analysis scenarios compared to traditional convolutional neural networks. However, in practice, CNN and ViT approaches can be used in a complementary manner.

Wang D, Sun Y, Chen H, Zhao X

PubMed · Oct 15 2025
Transformers face scalability issues, while CNNs are limited by their purely local inductive bias and weak global modeling; these limitations restrict the application of each architecture in a wider range of fields. Hybrid network architectures that combine the advantages of convolution and transformers are therefore becoming an increasingly active research and application direction. This article proposes an enhanced dual encoder network (EDE-Net) that integrates convolution and pyramid transformers for medical image segmentation. Specifically, we apply convolutional kernels and pyramid transformer structures in parallel in the encoder to extract features, ensuring that the network can capture both local details and global semantic information. To efficiently fuse local detail information and global features at each downsampling stage, we introduce the phase-based iterative feature fusion module (PIFF). The PIFF module first combines local details and global features and then assigns distinct weight coefficients to each, distinguishing their importance for foreground pixel classification. By effectively balancing the significance of local details and global features, the PIFF module enhances the network's ability to delineate fine lesion edges. Experimental results on the GlaS and MoNuSeg datasets validate the effectiveness of this approach. On these two publicly available datasets, our EDE-Net significantly outperforms previous CNN-based (such as UNet) and transformer-based (such as Swin-UNet) algorithms.
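
The abstract does not publish PIFF's internals, so the following PyTorch sketch only illustrates the general idea it describes: fuse a local (convolutional) feature map with a global (transformer) feature map via learned per-pixel weights. All layer choices, names, and shapes are assumptions.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Illustrative weighted fusion of local and global feature maps."""
    def __init__(self, channels: int):
        super().__init__()
        # Predict a per-pixel gate from the concatenated branches
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, local_feat: torch.Tensor, global_feat: torch.Tensor):
        w = self.gate(torch.cat([local_feat, global_feat], dim=1))
        # w emphasizes local detail; (1 - w) emphasizes global context
        return w * local_feat + (1 - w) * global_feat

local = torch.randn(2, 64, 56, 56)    # CNN-branch features
global_ = torch.randn(2, 64, 56, 56)  # transformer-branch features
print(GatedFusion(64)(local, global_).shape)  # torch.Size([2, 64, 56, 56])
```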

Zimmerman D, Mandal AS, Jung B, Buczek MJ, Schabdach JM, Karandikar S, Kafadar E, Gardner M, Daniali M, Mercedes L, Kohler S, Abdel-Qader L, Gur RE, Roalf DR, Satterthwaite TD, Williams R, Padmanabhan V, Seidlitz J, White LK, Sotardi S, Schmitt JE, Vossough A, Alexander-Bloch A

PubMed · Oct 15 2025
Progress at the intersection of artificial intelligence and paediatric neuroimaging necessitates large, heterogeneous datasets to generate robust and generalisable models. Retrospective analysis of clinical brain MRI scans offers a promising avenue to augment prospective research datasets, leveraging the extensive repositories of scans routinely acquired by hospital systems in the course of clinical care. Here, we present a systematic protocol for identifying 'scans with limited imaging pathology' through machine-assisted manual review of radiology reports. The protocol employs a standardised grading scheme developed with expert neuroradiologists and implemented by non-clinician graders. Categorising scans based on the presence or absence of significant pathology and image quality concerns facilitates the repurposing of clinical brain MRI data for brain research. Such an approach has the potential to harness vast clinical imaging archives-exemplified by over 250 000 brain MRIs at the Children's Hospital of Philadelphia-to address demographic biases in research participation, to increase sample size and to improve replicability in neurodevelopmental imaging research. Ultimately, this protocol aims to enable scalable, reliable identification of clinical control brain MRIs, supporting large-scale, generalisable neuroimaging studies of typical brain development and neurogenetic conditions. Studies using datasets generated from this protocol will be disseminated in peer-reviewed journals and at academic conferences.
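
As a purely hypothetical illustration of the 'machine-assisted' side of this protocol (the study's actual grading scheme is not reproduced here), a keyword pre-screen over report text might flag likely-pathological reports so that non-clinician graders can prioritize manual review:

```python
import re

# Invented keyword list for illustration; not the study's grading criteria
PATHOLOGY_TERMS = re.compile(
    r"\b(mass|tumor|lesion|hemorrhage|infarct|hydrocephalus|edema)\b",
    re.IGNORECASE,
)

reports = [
    "Normal MRI of the brain. No acute intracranial abnormality.",
    "There is a 2 cm enhancing mass in the right frontal lobe.",
]

for report in reports:
    label = "flag for grading" if PATHOLOGY_TERMS.search(report) else "candidate control"
    print(f"{label}: {report}")
```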

Kloth C, Brendel JM, Kübler J, Winkelmann MT, Brunner H, Sollmann N, Schmidt SA, Beer M, Nikolaou K, Krumm P

PubMed · Oct 15 2025
CT-based fractional flow reserve (CT-FFR) is a promising noninvasive method for the functional assessment of coronary stenosis. It expands the diagnostic capabilities of coronary CT angiography (cCTA) by providing hemodynamic information and potentially reducing unnecessary invasive coronary angiography examinations. This review summarizes current technological developments, study results, and clinical applications of CT-FFR. It also discusses the advantages and disadvantages of various software solutions, including artificial intelligence (AI)-based on-site analyses, and their potential integration into the clinical routine. Studies show that CT-FFR improves diagnostic accuracy compared to cCTA and can optimize patient management. Advances in artificial intelligence and new imaging techniques such as photon-counting CT could further refine CT-FFR and expand its applicability. Despite promising results, further research is needed regarding long-term validation, standardized workflows, and economic feasibility. CT-FFR is a promising complementary tool for assessing the hemodynamic relevance of coronary stenoses. It is particularly helpful in complex, long-segment, or consecutive stenoses, because a purely anatomical visual examination is not always sufficient. The combination of technical innovations and AI-assisted image analysis could have the potential to transform noninvasive coronary diagnostics.
· CT-FFR increases specificity and diagnostic accuracy compared to cCTA alone.
· Technological advances could further refine CT-FFR and expand its applicability.
· The increasing adoption and improved applicability of CT-FFR in routine clinical practice is promising.
· Kloth C, Brendel JM, Kübler J et al. CT-FFR: How a new technology could transform cardiovascular diagnostic imaging. Rofo 2025; DOI 10.1055/a-2697-5413.
