
Improved brain tumor classification through DenseNet121 based transfer learning.

Rasheed M, Jaffar MA, Akram A, Rashid J, Alshalali TAN, Irshad A, Sarwar N

PubMed | Aug 27, 2025
Brain tumors severely affect a person's health by allowing abnormal cells to grow unchecked in the brain, which makes early and accurate diagnosis essential for effective treatment. Many current diagnostic methods are time-consuming, rely heavily on manual interpretation, and frequently yield unsatisfactory results. This work detects brain tumors in MRI data using the DenseNet121 architecture with transfer learning. Model training used a Kaggle dataset. In the preprocessing stage, the MRI images were resized and denoised to help the model perform better. From a single MRI scan, the proposed approach classifies brain tissue into four groups: benign tumors, gliomas, meningiomas, and pituitary gland malignancies. The designed DenseNet121 architecture precisely classifies brain cancers. We assessed the model's performance in terms of accuracy, precision, recall, and F1-score. The suggested approach proved successful in the multi-class categorization of brain tumors, attaining an average accuracy of 96.90%. Compared with previous diagnostic techniques, such as visual inspection and other machine learning models, the proposed DenseNet121-based approach is more accurate, takes less time to analyze, and requires less human input. Whereas the automated method ensures consistent and reproducible results, human error introduces greater variability into conventional methods. Based on MRI-based detection and transfer learning, this paper proposes an automated method for the classification of brain cancers. The method improves the precision and speed of brain tumor diagnosis, benefiting both MRI-based classification research and clinical use. Further development of deep-learning models may improve tumor identification and prognosis prediction even more.
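
The paper's code is not shown here; a minimal PyTorch sketch of the setup the abstract describes (an ImageNet-pretrained DenseNet121 with its classifier head replaced for four classes, plus resize preprocessing) might look like the following. The input size, normalization statistics, and optimizer settings are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch: DenseNet121 transfer learning for 4-class brain-tumor
# MRI classification. Hyperparameters below are assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 4  # benign tumor, glioma, meningioma, pituitary malignancy

# ImageNet-pretrained backbone; replace the 1000-way head with 4 classes.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

# Preprocessing: resizing as mentioned in the abstract (target size and
# normalization values are assumed, not taken from the paper).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```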

Robust Quantification of Affected Brain Volume from Computed Tomography Perfusion: A Hybrid Approach Combining Deep Learning and Singular Value Decomposition.

Kim GY, Yang HS, Hwang J, Lee K, Choi JW, Jung WS, Kim REY, Kim D, Lee M

PubMed | Aug 27, 2025
Volumetric estimation of affected brain volume using computed tomography perfusion (CTP) is crucial in the management of acute ischemic stroke (AIS) and currently relies on commercial software, which has limitations such as result variability caused by image quality. To predict affected brain volume accurately and robustly, we propose a hybrid approach that integrates singular value decomposition (SVD), deep learning (DL), and machine learning (ML) techniques. We included 449 CTP images of patients with AIS, collected between 2021 and 2023, with vessel landmarks manually annotated by expert radiologists. We developed a CNN-based approach, integrating ML components, for predicting eight vascular landmarks from CTP images. We then used SVD-based methods to generate perfusion maps and compared the results with those of the RapidAI software (RapidAI, Menlo Park, California). The proposed CNN model achieved an average Euclidean distance error of 4.63 ± 2.00 mm for vessel localization. Without the ML components, compared to RapidAI, our method yielded concordance correlation coefficient (CCC) scores of 0.898 for estimating volumes with cerebral blood flow (CBF) < 30% and 0.715 for Tmax > 6 s. With the ML components, it achieved CCC scores of 0.905 for CBF < 30% and 0.879 for Tmax > 6 s. For data-quality assessment, it achieved an accuracy of 0.8. We developed a robust hybrid model combining DL and ML techniques for volumetric estimation of affected brain volumes using CTP in patients with AIS, demonstrating improved accuracy and robustness compared with existing commercial solutions.
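
The SVD step follows the standard deconvolution formulation for CTP: the tissue curve is the arterial input function (AIF) convolved with a scaled residue function, which a truncated SVD inverts. Below is a hedged NumPy sketch of that standard step (not the paper's hybrid pipeline); the truncation threshold is a conventional, assumed choice.

```python
# Hedged sketch: standard truncated-SVD deconvolution for CTP. The
# tissue curve c(t) = CBF * (AIF conv R)(t); inverting the AIF
# convolution matrix recovers k(t) = CBF * R(t).
import numpy as np

def svd_deconvolution(aif, tissue, dt, lam=0.2):
    """aif, tissue: sampled curves of length T; dt: sampling step (s);
    lam: truncation threshold as a fraction of the largest singular
    value (0.2 is a conventional, assumed choice)."""
    T = len(aif)
    # Lower-triangular (Toeplitz) convolution matrix built from the AIF.
    A = np.array([[aif[i - j] if i >= j else 0.0 for j in range(T)]
                  for i in range(T)]) * dt
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > lam * s.max(), 1.0 / s, 0.0)  # drop small SVs
    k = Vt.T @ (s_inv * (U.T @ tissue))  # regularized pseudo-inverse
    cbf = k.max()  # CBF is proportional to the residue-function peak
    return k, cbf
```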

Deep Learning-Based 3D and 2D Approaches for Skeletal Muscle Segmentation on Low-Dose CT Images.

Timpano G, Veltri P, Vizza P, Cascini GL, Manti F

PubMed | Aug 27, 2025
Automated segmentation of skeletal muscle from computed tomography (CT) images is essential for large-scale quantitative body composition analysis. However, manual segmentation is time-consuming and impractical for routine or high-throughput use. This study presents a systematic comparison of two-dimensional (2D) and three-dimensional (3D) deep learning architectures for segmenting skeletal muscle at the anatomically standardized level of the third lumbar vertebra (L3) in low-dose computed tomography (LDCT) scans. We implemented and evaluated the DeepLabv3+ (2D) and UNet3+ (3D) architectures on a curated dataset of 537 LDCT scans, applying preprocessing protocols, L3 slice selection, and region-of-interest extraction. Model performance was assessed using a comprehensive set of metrics, including the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95). DeepLabv3+ achieved the highest segmentation accuracy (DSC = 0.982 ± 0.010, HD95 = 1.04 ± 0.46 mm), while UNet3+ showed competitive performance (DSC = 0.967 ± 0.013, HD95 = 1.27 ± 0.58 mm) with 26 times fewer parameters (1.27 million vs. 33.6 million) and lower inference time. Both models matched or exceeded results reported in the recent CT-based muscle segmentation literature. This work offers practical insights into architecture selection for automated LDCT-based muscle segmentation workflows, with a focus on the L3 vertebral level, which remains the gold standard in muscle quantification protocols.
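
For reference, the two reported metrics can be computed for binary masks roughly as follows; this is a common NumPy/SciPy formulation (with HD95 taken over mask voxels rather than extracted surfaces, a simplification), not the authors' implementation.

```python
# Hedged sketch: Dice similarity coefficient and HD95 for binary masks.
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def hd95(pred, gt, spacing=(1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance (in mm via spacing).
    Computed over mask voxels, not extracted surfaces, for brevity."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    dist_to_gt = distance_transform_edt(~gt, sampling=spacing)
    dist_to_pred = distance_transform_edt(~pred, sampling=spacing)
    all_dists = np.hstack([dist_to_gt[pred], dist_to_pred[gt]])
    return np.percentile(all_dists, 95)
```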

Ultra-Low-Dose CTPA Using Sparse Sampling CT Combined with the U-Net for Deep Learning-Based Artifact Reduction: An Exploratory Study.

Sauter AP, Thalhammer J, Meurer F, Dorosti T, Sasse D, Ritter J, Leonhardt Y, Pfeiffer F, Schaff F, Pfeiffer D

PubMed | Aug 27, 2025
This retrospective study evaluates U-Net-based artifact reduction for dose-reduced sparse-sampling CT (SpSCT) in terms of image quality and diagnostic performance, using a reader study and automated detection. CT pulmonary angiograms from 89 patients were used to generate SpSCT data with 16 to 512 views. Twenty patients were reserved for a reader study and test set; the remaining 69 were used to train (53) and validate (16) a dual-frame U-Net for artifact reduction. U-Net post-processed images were assessed for image quality, diagnostic performance, and automated pulmonary embolism (PE) detection using the top-performing network from the 2020 RSNA PE detection challenge. Statistical comparisons were made using two-sided Wilcoxon signed-rank and two-sided DeLong tests. Post-processing with the dual-frame U-Net significantly improved image quality in the internal test set, with structural similarity indices of 0.634/0.378/0.234/0.152 for FBP and 0.894/0.892/0.866/0.778 for U-Net at 128/64/32/16 views, respectively. The reader study showed significantly enhanced image quality (3.15 vs. 3.53 for 256 views, 0.00 vs. 2.52 for 32 views), increased diagnostic confidence (0.00 vs. 2.38 for 32 views), and fewer artifacts across all subsets (P < 0.05). Diagnostic performance, measured by the Sørensen-Dice coefficient, was significantly better for 64- and 32-view images (0.23 vs. 0.44 and 0.00 vs. 0.09, P < 0.05). Automated PE detection was better at fewer views (64 views: 0.77 vs. 0.80; 16 views: 0.59 vs. 0.80), although the differences were not statistically significant. U-Net-based post-processing of SpSCT data significantly enhances image quality and diagnostic performance, supporting substantial dose reduction in CT pulmonary angiography.
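
The structural similarity comparison reported above can be reproduced in spirit with scikit-image; the sketch below assumes paired stacks of 2D reconstructions and full-view references, with invented variable names.

```python
# Hedged sketch: mean SSIM of reconstructions against full-view
# references, via scikit-image.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def mean_ssim(recons, references):
    """Average SSIM over paired stacks of 2D images."""
    scores = [ssim(r, ref, data_range=ref.max() - ref.min())
              for r, ref in zip(recons, references)]
    return float(np.mean(scores))

# e.g., compare FBP and U-Net outputs at a given view count:
# mean_ssim(fbp_images_64views, fullview_references)
# mean_ssim(unet_images_64views, fullview_references)
```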

Two stage large language model approach enhancing entity classification and relationship mapping in radiology reports.

Shin C, Eom D, Lee SM, Park JE, Kim K, Lee KH

PubMed | Aug 27, 2025
Large language models (LLMs) hold transformative potential for medical image labeling in radiology, addressing challenges posed by linguistic variability in reports. We developed a two-stage natural language processing pipeline that combines Bidirectional Encoder Representations from Transformers (BERT) and an LLM to analyze radiology reports. In the first stage (Entity Key Classification), a BERT model identifies and classifies clinically relevant entities mentioned in the text. In the second stage (Relationship Mapping), the extracted entities are passed to the LLM to infer relationships between entity pairs, taking into account whether each entity is actually present. The pipeline targets lesion-location mapping in chest CT and diagnosis-episode mapping in brain MRI, both of which are clinically important for structuring radiologic findings and capturing temporal patterns of disease progression. Using over 400,000 reports from Seoul Asan Medical Center, our pipeline achieved a macro F1-score of 77.39 for chest CT and 70.58 for brain MRI. These results highlight the effectiveness of integrating BERT with an LLM to enhance diagnostic accuracy in radiology report analysis.
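
A hedged sketch of how such a two-stage pipeline can be wired with the Hugging Face transformers library is shown below; the checkpoint name is a stand-in for the authors' fine-tuned entity classifier, and the prompt wording is invented for illustration.

```python
# Hedged sketch: two-stage pipeline with Hugging Face transformers.
# "bert-base-uncased" is a placeholder; the authors' fine-tuned model
# and their LLM prompt are not given in this abstract.
from transformers import pipeline

# Stage 1: Entity Key Classification (token classification with BERT).
ner = pipeline("token-classification",
               model="bert-base-uncased",  # placeholder checkpoint
               aggregation_strategy="simple")

def extract_entities(report_text):
    return [(e["word"], e["entity_group"]) for e in ner(report_text)]

# Stage 2: Relationship Mapping -- build an LLM prompt over entity pairs.
def build_relation_prompt(report_text, entities):
    pairs = [(a, b) for i, (a, _) in enumerate(entities)
             for (b, _) in entities[i + 1:]]
    return ("Given the radiology report and extracted entities below, "
            "map each lesion to its location (chest CT) or each "
            "diagnosis to its episode (brain MRI), considering whether "
            "each entity is actually present.\n"
            f"Report: {report_text}\nEntities: {entities}\n"
            f"Pairs to assess: {pairs}")
```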

PWLS-SOM: alternative PWLS reconstruction for limited-view CT by strategic optimization of a deep learning model.

Chen C, Zhang L, Xing Y, Chen Z

PubMed | Aug 27, 2025
While deep learning (DL) methods have exhibited promising results in mitigating streaking artifacts caused by limited-view computed tomography (CT), their generalization to practical applications remains challenging. To address this challenge, we aim to develop a novel approach that integrates DL priors with targeted-case data consistency for improved artifact suppression and robust reconstruction. Approach: We propose an alternative Penalized Weighted Least Squares reconstruction framework by Strategic Optimization of a DL Model (PWLS-SOM). This framework combines data-driven DL priors with data consistency constraints in a three-stage process: (1) Group-level embedding: DL network parameters are optimized on a large-scale paired dataset to learn general artifact elimination. (2) Significance evaluation: A novel significance score quantifies the contribution of DL model parameters, guiding the subsequent strategic adaptation. (3) Individual-level consistency adaptation: PWLS-driven strategic optimization further adapts DL parameters for target-specific projection data. Main Results: Experiments were conducted on sparse-view (90 views) circular trajectory CT data and a multi-segment linear trajectory CT scan with a mixed data missing problem. PWLS-SOM reconstruction demonstrated superior generalization across variations in patients, anatomical structures, and data distributions. It outperformed supervised DL methods in recovering contextual structures and adapting to practical CT scenarios. The method was validated with real experiments on a dead rat, showcasing its applicability to real-world CT scans. Significance: PWLS-SOM reconstruction advances the field of limited-view CT reconstruction by uniting DL priors with PWLS adaptation. This approach facilitates robust and personalized imaging. The introduction of the significance score provides an efficient metric to evaluate generalization and guide the strategic optimization of DL parameters, enhancing adaptability across diverse data and practical imaging conditions.
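
The PWLS objective that drives stage (3) is, generically, a weighted projection-domain fidelity term plus a prior; below is a schematic Python rendering under generic symbols (A for the forward projector, y for measured projections, w for per-ray statistical weights), not the authors' exact formulation.

```python
# Hedged sketch: a generic PWLS objective of the form
#   (A x - y)^T W (A x - y) + beta * R(x)
# where, in PWLS-SOM, x would come from the DL model being adapted.
import numpy as np

def pwls_objective(x, A, y, w, beta, prior):
    """x: reconstructed image (flattened); A: system matrix;
    y: measured projections; w: per-ray statistical weights (diagonal
    of W); prior: callable regularizer R(x); beta: prior strength."""
    residual = A @ x - y
    data_term = residual @ (w * residual)  # weighted least squares
    return data_term + beta * prior(x)

# In stage (3), the DL model's parameters would be adapted by
# minimizing this objective on the target case's own projection data.
```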

Deep learning-based prediction of axillary pathological complete response in patients with breast cancer using longitudinal multiregional ultrasound.

Liu Y, Wang Y, Huang J, Pei S, Wang Y, Cui Y, Yan L, Yao M, Wang Y, Zhu Z, Huang C, Liu Z, Liang C, Shi J, Li Z, Pei X, Wu L

PubMed | Aug 27, 2025
Noninvasive biomarkers that capture the longitudinal multiregional tumour burden in patients with breast cancer may improve the assessment of residual nodal disease and guide axillary surgery. Additionally, a significant barrier to the clinical translation of the current data-driven deep learning model is the lack of interpretability. This study aims to develop and validate an information shared-private (iShape) model to predict axillary pathological complete response in patients with axillary lymph node (ALN)-positive breast cancer receiving neoadjuvant therapy (NAT) by learning common and specific image representations from longitudinal primary tumour and ALN ultrasound images. A total of 1135 patients with biopsy-proven ALN-positive breast cancer who received NAT were included in this multicentre, retrospective study. The iShape was trained on a dataset of 371 patients and validated on three external validation sets (EVS1-3), with 295, 244, and 225 patients, respectively. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). The false-negative rates (FNRs) of iShape alone and in combination with sentinel lymph node biopsy (SLNB) were also evaluated. Imaging feature visualisation and RNA sequencing analysis were performed to explore the underlying basis of iShape. The iShape achieved AUCs of 0.950-0.971 for EVS 1-3, which were better than those of the clinical model and the image signatures derived from the primary tumour, longitudinal primary tumour, or ALN (P < 0.05, as per the DeLong test). The performance of iShape remained satisfactory in subgroup analyses stratified by age, menstrual status, T stage, molecular subtype, treatment regimens, and machine type (AUCs of 0.812-1.000). More importantly, the FNR of iShape was 7.7%-8.1% in the EVSs, and the FNR of SLNB decreased from 13.4% to 3.6% with the aid of iShape in patients receiving SLNB and ALN dissection. The decision-making process of iShape was explained by feature visualisation. Additionally, RNA sequencing analysis revealed that a lower deep learning score was associated with immune infiltration and tumour proliferation pathways. The iShape model demonstrated good performance for the precise quantification of ALN status in patients with ALN-positive breast cancer receiving NAT, potentially benefiting individualised decision-making, and avoiding unnecessary axillary lymph node dissection. This study was supported by (1) Noncommunicable Chronic Diseases-National Science and Technology Major Project (No. 2024ZD0531100); (2) Key-Area Research and Development Program of Guangdong Province (No. 2021B0101420006); (3) National Natural Science Foundation of China (No. 82472051, 82471947, 82271941, 82272088); (4) National Science Foundation for Young Scientists of China (No. 82402270, 82202095, 82302190); (5) Guangzhou Municipal Science and Technology Planning Project (No. 2025A04J4773, 2025A04J4774); (6) the Natural Science Foundation of Guangdong Province of China (No. 2025A1515011607); (7) Medical Scientific Research Foundation of Guangdong Province of China (No. A2024403); (8) Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application (No. 2022B1212010011); (9) Outstanding Youth Science Foundation of Yunnan Basic Research Project (No. 202401AY070001-316); (10) Innovative Research Team of Yunnan Province (No. 202505AS350013).
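
For orientation, the two headline evaluation quantities (AUC and false-negative rate) for a binary residual-disease label can be computed as in the sketch below; the 0.5 decision threshold and variable names are illustrative assumptions.

```python
# Hedged sketch: AUC and FNR for a binary residual-disease label.
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(y_true, y_score, threshold=0.5):
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    auc = roc_auc_score(y_true, y_score)
    positives = y_true == 1
    # FNR: truly node-positive cases the model calls negative.
    fnr = np.logical_and(positives, y_pred == 0).sum() / positives.sum()
    return auc, fnr
```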

UltraEar: a multicentric, large-scale database combining ultra-high-resolution computed tomography and clinical data for ear diseases

Ruowei Tang, Pengfei Zhao, Xiaoguang Li, Ning Xu, Yue Cheng, Mengshi Zhang, Zhixiang Wang, Zhengyu Zhang, Hongxia Yin, Heyu Ding, Shusheng Gong, Yuhe Liu, Zhenchang Wang

arXiv preprint | Aug 27, 2025
Ear diseases affect billions of people worldwide, leading to substantial health and socioeconomic burdens. Computed tomography (CT) plays a pivotal role in accurate diagnosis, treatment planning, and outcome evaluation. The objective of this study is to present the establishment and design of UltraEar Database, a large-scale, multicentric repository of isotropic 0.1 mm ultra-high-resolution CT (U-HRCT) images and associated clinical data dedicated to ear diseases. UltraEar recruits patients from 11 tertiary hospitals between October 2020 and October 2035, integrating U-HRCT images, structured CT reports, and comprehensive clinical information, including demographics, audiometric profiles, surgical records, and pathological findings. A broad spectrum of otologic disorders is covered, such as otitis media, cholesteatoma, ossicular chain malformation, temporal bone fracture, inner ear malformation, cochlear aperture stenosis, enlarged vestibular aqueduct, and sigmoid sinus bony deficiency. Standardized preprocessing pipelines have been developed for geometric calibration, image annotation, and multi-structure segmentation. All personal identifiers in DICOM headers and metadata are removed or anonymized to ensure compliance with data privacy regulation. Data collection and curation are coordinated through monthly expert panel meetings, with secure storage on an offline cloud system. UltraEar provides an unprecedented ultra-high-resolution reference atlas with both technical fidelity and clinical relevance. This resource has significant potential to advance radiological research, enable development and validation of AI algorithms, serve as an educational tool for training in otologic imaging, and support multi-institutional collaborative studies. UltraEar will be continuously updated and expanded, ensuring long-term accessibility and usability for the global otologic research community.
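
A minimal pydicom sketch of the kind of header de-identification the abstract describes is shown below; the tag list is a small illustrative subset, not UltraEar's actual protocol.

```python
# Hedged sketch: blanking identifying DICOM header fields with pydicom.
# The tag list is a minimal illustrative subset.
import pydicom

IDENTIFYING_TAGS = [
    "PatientName", "PatientID", "PatientBirthDate",
    "InstitutionName", "ReferringPhysicianName",
]

def anonymize(path_in, path_out):
    ds = pydicom.dcmread(path_in)
    for keyword in IDENTIFYING_TAGS:
        if keyword in ds:
            ds.data_element(keyword).value = ""  # blank the identifier
    ds.remove_private_tags()  # drop vendor-specific private tags
    ds.save_as(path_out)
```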

A Systematic Review on the Generative AI Applications in Human Medical Genomics

Anton Changalidis, Yury Barbitoff, Yulia Nasykhova, Andrey Glotov

arXiv preprint | Aug 27, 2025
Although traditional statistical techniques and machine learning methods have contributed significantly to genetics and, in particular, inherited disease diagnosis, they often struggle with complex, high-dimensional data, a challenge now addressed by state-of-the-art deep learning models. Large language models (LLMs), based on transformer architectures, have excelled in tasks requiring contextual comprehension of unstructured medical data. This systematic review examines the role of LLMs in the genetic research and diagnostics of both rare and common diseases. Automated keyword-based search in PubMed, bioRxiv, medRxiv, and arXiv was conducted, targeting studies on LLM applications in diagnostics and education within genetics and removing irrelevant or outdated models. A total of 172 studies were analyzed, highlighting applications in genomic variant identification, annotation, and interpretation, as well as medical imaging advancements through vision transformers. Key findings indicate that while transformer-based models significantly advance disease and risk stratification, variant interpretation, medical imaging analysis, and report generation, major challenges persist in integrating multimodal data (genomic sequences, imaging, and clinical records) into unified and clinically robust pipelines, facing limitations in generalizability and practical implementation in clinical settings. This review provides a comprehensive classification and assessment of the current capabilities and limitations of LLMs in transforming hereditary disease diagnostics and supporting genetic education, serving as a guide to navigate this rapidly evolving field.

MedNet-PVS: A MedNeXt-Based Deep Learning Model for Automated Segmentation of Perivascular Spaces

Zhen Xuen Brandon Low, Rory Zhang, Hang Min, William Pham, Lucy Vivash, Jasmine Moses, Miranda Lynch, Karina Dorfman, Cassandra Marotta, Shaun Koh, Jacob Bunyamin, Ella Rowsthorn, Alex Jarema, Himashi Peiris, Zhaolin Chen, Sandy R. Shultz, David K. Wright, Dexiao Kong, Sharon L. Naismith, Terence J. O'Brien, Ying Xia, Meng Law, Benjamin Sinclair

arXiv preprint | Aug 27, 2025
Enlarged perivascular spaces (PVS) are increasingly recognized as biomarkers of cerebral small vessel disease, Alzheimer's disease, stroke, and aging-related neurodegeneration. However, manual segmentation of PVS is time-consuming and subject to moderate inter-rater reliability, while existing automated deep learning models have moderate performance and typically fail to generalize across diverse clinical and research MRI datasets. We adapted MedNeXt-L-k5, a Transformer-inspired 3D encoder-decoder convolutional network, for automated PVS segmentation. Two models were trained: one using a homogeneous dataset of 200 T2-weighted (T2w) MRI scans from the Human Connectome Project-Aging (HCP-Aging) dataset and another using 40 heterogeneous T1-weighted (T1w) MRI volumes from seven studies across six scanners. Model performance was evaluated using internal 5-fold cross-validation (5FCV) and leave-one-site-out cross-validation (LOSOCV). MedNeXt-L-k5 models trained on the T2w images of the HCP-Aging dataset achieved voxel-level Dice scores of 0.88 ± 0.06 in white matter (WM), comparable to the reported inter-rater reliability of that dataset and the highest yet reported in the literature. The same models trained on the T1w images of the HCP-Aging dataset achieved a substantially lower Dice score of 0.58 ± 0.09 (WM). Under LOSOCV, the model had voxel-level Dice scores of 0.38 ± 0.16 (WM) and 0.35 ± 0.12 in the basal ganglia (BG), and cluster-level Dice scores of 0.61 ± 0.19 (WM) and 0.62 ± 0.21 (BG). MedNeXt-L-k5 provides an efficient solution for automated PVS segmentation across diverse T1w and T2w MRI datasets. MedNeXt-L-k5 did not outperform nnU-Net, indicating that the global-context attention mechanisms of transformer-inspired models are not required for high accuracy in PVS segmentation.
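
The cluster-level Dice mentioned above is commonly defined over connected components, counting a cluster as detected if it overlaps the other mask; below is a sketch under that common definition (the authors' exact matching rule may differ).

```python
# Hedged sketch: cluster-level Dice over connected components. A
# predicted cluster counts as detected if it overlaps any ground-truth
# voxel, and vice versa.
import numpy as np
from scipy.ndimage import label

def cluster_dice(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    pred_lab, n_pred = label(pred)
    gt_lab, n_gt = label(gt)
    # Clusters in each mask that overlap the other mask (label 0 is
    # background and is excluded).
    tp_pred = np.setdiff1d(np.unique(pred_lab[gt]), [0]).size
    tp_gt = np.setdiff1d(np.unique(gt_lab[pred]), [0]).size
    total = n_pred + n_gt
    return (tp_pred + tp_gt) / total if total else 1.0
```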