Wang ZZ, Song SM, Zhang G, Chen RQ, Zhang ZC, Liu R

PubMed · Sep 14 2025
Deep learning-based super-resolution (SR) reconstruction can produce high-quality images with richer detail. We aimed to compare multiparametric normal-resolution (NR) and SR magnetic resonance imaging (MRI) for predicting histopathologic grade in hepatocellular carcinoma (HCC). We retrospectively analyzed 826 patients from two medical centers (training, 459; validation, 196; test, 171). T2-weighted imaging, diffusion-weighted imaging, and portal venous phase images were collected. Tumor segmentation was performed automatically by a 3D U-Net. Using a generative adversarial network, we applied 3D SR reconstruction to produce SR MRI. Radiomics models were developed and validated with XGBoost and CatBoost. Predictive performance was assessed with calibration curves, decision curve analysis, area under the curve (AUC), and net reclassification index (NRI). We extracted 3045 radiomic features from both NR and SR MRI, retaining 29 and 28 features, respectively. For the XGBoost models, SR MRI yielded higher AUC values than NR MRI in the validation and test cohorts (0.83 vs 0.79 and 0.80 vs 0.78, respectively). Consistent trends were seen in the CatBoost models: SR MRI achieved AUCs of 0.89 and 0.80 compared to NR MRI's 0.81 and 0.76. The NRI indicated that the SR MRI models changed prediction accuracy by -1.6% to 20.9% relative to the NR MRI models. Deep learning-based SR MRI can improve prediction of histopathologic grade in HCC and may be a powerful tool for better stratification and management of patients with operable HCC.
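The abstract includes no code, but the gradient-boosting step it describes is easy to illustrate. A minimal sketch of the XGBoost variant, assuming a synthetic feature matrix in place of the study's radiomic features (all names, sizes, and data below are placeholders, not the study's):

```python
# Minimal sketch of the radiomics classification step, assuming radiomic
# features have already been extracted from NR/SR MRI. Feature counts and
# labels here are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(826, 28))          # e.g. 28 retained SR-MRI features
y = rng.integers(0, 2, size=826)        # binary histopathologic grade label

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05,
                      eval_metric="logloss")
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.2f}")
```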

Miida S, Kamimura H, Fujiki S, Kobayashi T, Endo S, Maruyama H, Yoshida T, Watanabe Y, Kimura N, Abe H, Sakamaki A, Yokoo T, Tsukada M, Numano F, Kashimura T, Inomata T, Fuzawa Y, Hirata T, Horii Y, Ishikawa H, Nonaka H, Kamimura K, Terai S

PubMed · Sep 14 2025
Congestive hepatopathy, also known as nutmeg liver, is liver damage secondary to chronic heart failure (HF). Its morphological characteristics on medical imaging remain poorly defined. We aimed to leverage machine learning to capture imaging features of congestive hepatopathy on incidentally acquired computed tomography (CT) scans. We retrospectively analyzed 179 patients with chronic HF who underwent echocardiography and CT within one year. Right HF severity was classified into three grades. Liver CT images at the level of the paraumbilical vein were used to develop a ResNet-based machine learning model to predict tricuspid regurgitation (TR) severity. Model accuracy was compared with that of six gastroenterology and four radiology experts. Of the included patients, 120 were male (mean age: 73.1 ± 14.4 years). In predicting TR severity from a single CT image, the machine learning model was significantly more accurate than the experts on average, and it was particularly reliable for predicting severe TR. Deep learning models, particularly those using ResNet architectures, can help identify morphological changes associated with TR severity, aiding early detection of liver dysfunction in patients with HF and thereby improving outcomes.
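As a rough illustration of the setup described above, here is a minimal PyTorch sketch of a ResNet adapted to single-channel CT slices with three TR-severity classes (the published model's depth, input size, and preprocessing are assumptions here, not reported details):

```python
# A minimal sketch of a ResNet-based TR-severity grader for a single axial
# CT slice. ResNet-18, 224x224 input, and three grades are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None)
# CT slices are single-channel; replace the RGB stem accordingly.
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
# Three output classes: mild / moderate / severe TR.
model.fc = nn.Linear(model.fc.in_features, 3)

slice_batch = torch.randn(4, 1, 224, 224)   # placeholder CT slices
logits = model(slice_batch)                  # shape (4, 3)
pred_grade = logits.argmax(dim=1)
```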

Ayhan Can Erdur, Christian Beischl, Daniel Scholz, Jiazhen Pan, Benedikt Wiestler, Daniel Rueckert, Jan C Peeken

arXiv preprint · Sep 14 2025
Missing input sequences are common in medical imaging data, posing a challenge for deep learning models that rely on complete inputs. In this work, inspired by MultiMAE [2], we develop a masked autoencoder (MAE) paradigm for multi-modal, multi-task learning in 3D medical imaging with brain MRIs. Our method treats each MRI sequence as a separate input modality, leveraging a late-fusion-style transformer encoder to integrate multi-sequence information (multi-modal) and individual decoder streams for each modality for multi-task reconstruction. This pretraining strategy guides the model to learn rich representations per modality while equipping it to handle missing inputs through cross-sequence reasoning. The result is a flexible and generalizable encoder for brain MRIs that infers missing sequences from available inputs and can be adapted to various downstream applications. We demonstrate the performance and robustness of our method against an MAE-ViT baseline on downstream segmentation and classification tasks, showing absolute improvements of 10.1 in overall Dice score and 0.46 in MCC over the baselines with missing input sequences. These experiments demonstrate the strength of the pretraining strategy. The implementation is made available.
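A condensed, illustrative sketch of the pretraining idea, reduced to 2D tokens for brevity (the paper works on 3D volumes; all sizes, layer counts, and the masking scheme below are assumptions, not the published architecture):

```python
# Sketch: per-sequence embedding, random masking, one shared (late-fusion)
# transformer encoder, and per-modality decoders. For simplicity this
# reconstructs only the visible tokens; a full MAE reconstructs masked ones.
import torch
import torch.nn as nn

class MultiModalMAE(nn.Module):
    def __init__(self, n_modalities=4, patch=16, dim=256):
        super().__init__()
        self.embeds = nn.ModuleList(
            [nn.Linear(patch * patch, dim) for _ in range(n_modalities)])
        self.mod_tokens = nn.Parameter(torch.zeros(n_modalities, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.decoders = nn.ModuleList(
            [nn.Linear(dim, patch * patch) for _ in range(n_modalities)])

    def forward(self, patches, keep_ratio=0.25):
        # patches: list of (B, n_tokens, patch*patch), one entry per sequence
        tokens, idx = [], []
        for m, x in enumerate(patches):
            t = self.embeds[m](x) + self.mod_tokens[m]
            k = int(t.shape[1] * keep_ratio)
            keep = torch.randperm(t.shape[1])[:k]       # random masking
            tokens.append(t[:, keep]); idx.append(keep)
        fused = self.encoder(torch.cat(tokens, dim=1))  # late-fusion encoder
        out, start = [], 0
        for m, keep in enumerate(idx):                  # per-modality decode
            part = fused[:, start:start + keep.numel()]; start += keep.numel()
            out.append(self.decoders[m](part))
        return out, idx
```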

Jeanny Pan, Philipp Seeböck, Christoph Fürböck, Svitlana Pochepnia, Jennifer Straub, Lucian Beer, Helmut Prosch, Georg Langs

arXiv preprint · Sep 14 2025
Identifying new disease-related patterns in medical imaging data with the help of machine learning enlarges the vocabulary of recognizable findings, supporting diagnostic and prognostic assessment. However, image appearance varies not only due to biological differences but also due to imaging technology linked to vendors and scanning or reconstruction parameters. The resulting domain shifts impede data representation learning strategies and the discovery of biologically meaningful cluster appearances. To address these challenges, we introduce an approach that actively learns the domain shift via post-hoc rotation of the data latent space, enabling disentanglement of biological and technical factors. Results on real-world heterogeneous clinical data show that the learned disentangled representation leads to stable clusters representing tissue types across different acquisition settings. Cluster consistency improves by +19.01% (ARI), +16.85% (NMI), and +12.39% (Dice) compared to the entangled representation, outperforming four state-of-the-art harmonization methods. When the clusters are used to quantify tissue composition in patients with idiopathic pulmonary fibrosis, the learned profiles enhance Cox survival prediction. This indicates that the proposed label-free framework facilitates biomarker discovery in multi-center routine imaging data. Code is available on GitHub: https://github.com/cirmuw/latent-space-rotation-disentanglement.
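The rotation idea can be illustrated with a toy sketch, which is not the authors' code: learn an orthogonal matrix over frozen latents so that a linear probe recovers the scanner label from the first k rotated coordinates, concentrating technical variation there (all dimensions and the training objective below are assumptions):

```python
# Toy sketch of post-hoc latent rotation. Orthogonality is enforced by
# taking the matrix exponential of a skew-symmetric parameter.
import torch
import torch.nn as nn

d, k = 64, 8                                  # latent dim, "technical" dims
A = nn.Parameter(torch.zeros(d, d))           # unconstrained parameter
probe = nn.Linear(k, 3)                       # predicts scanner (3 domains)
opt = torch.optim.Adam([A, *probe.parameters()], lr=1e-3)

z = torch.randn(512, d)                       # frozen latents (placeholder)
domain = torch.randint(0, 3, (512,))          # scanner labels (placeholder)

for step in range(200):
    R = torch.matrix_exp(A - A.T)             # orthogonal rotation matrix
    rotated = z @ R.T
    loss = nn.functional.cross_entropy(probe(rotated[:, :k]), domain)
    opt.zero_grad(); loss.backward(); opt.step()

# After training, dropping the first k rotated dims yields a representation
# with reduced scanner effects for downstream clustering.
```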

Christoph Fürböck, Paul Weiser, Branko Mitic, Philipp Seeböck, Thomas Helbich, Georg Langs

arXiv preprint · Sep 14 2025
In real-world clinical environments, training and applying deep learning models on multi-modal medical imaging data often struggle with partially incomplete data. Standard approaches either discard missing samples, require imputation, or repurpose dropout learning schemes, limiting robustness and generalizability. To address this, we propose a hypernetwork-based method that dynamically generates task-specific classification models conditioned on the set of available modalities. Instead of training a fixed model, a hypernetwork learns to predict the parameters of a task model adapted to the available modalities, enabling training and inference on all samples regardless of completeness. We compare this approach with (1) models trained only on complete data, (2) state-of-the-art channel dropout methods, and (3) an imputation-based method, using artificially incomplete datasets to systematically analyze robustness to missing modalities. Results demonstrate the superior adaptability of our method, which outperforms state-of-the-art approaches with an absolute increase in accuracy of up to 8% when trained on a dataset with 25% completeness (75% of training data with missing modalities). By enabling a single model to generalize across all modality configurations, our approach provides an efficient solution for real-world multi-modal medical data analysis.
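A minimal sketch of the hypernetwork pattern described above, in a toy setting: an MLP maps the binary modality-availability mask to the weights of a linear task classifier (the shapes and the task head are illustrative assumptions, not the paper's architecture):

```python
# Sketch: a hypernetwork emits per-sample classifier weights conditioned on
# which modalities are present, so one model serves every configuration.
import torch
import torch.nn as nn

n_mod, feat_dim, n_cls = 4, 32, 2

class HyperClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        out = n_cls * (n_mod * feat_dim) + n_cls   # weights + bias
        self.hyper = nn.Sequential(nn.Linear(n_mod, 64), nn.ReLU(),
                                   nn.Linear(64, out))

    def forward(self, x, mask):
        # x: (B, n_mod, feat_dim); mask: (B, n_mod) with 1 = available
        params = self.hyper(mask.float())
        W = params[:, :-n_cls].view(-1, n_cls, n_mod * feat_dim)
        b = params[:, -n_cls:]
        x = (x * mask[..., None]).flatten(1)       # zero out missing inputs
        return torch.einsum("bi,boi->bo", x, W) + b

model = HyperClassifier()
x = torch.randn(8, n_mod, feat_dim)
mask = torch.randint(0, 2, (8, n_mod))             # random availability
logits = model(x, mask)                            # (8, n_cls)
```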

Mingyuan Meng

arXiv preprint · Sep 14 2025
Medical Image Computing (MIC) is a broad research topic covering both pixel-wise (e.g., segmentation, registration) and image-wise (e.g., classification, regression) vision tasks. Effective analysis demands models that capture both global long-range context and local subtle visual characteristics, necessitating fine-grained long-range visual dependency modeling. Compared to Convolutional Neural Networks (CNNs), which are limited by intrinsic locality, transformers excel at long-range modeling; however, due to the high computational load of self-attention, transformers typically cannot process high-resolution features (e.g., full-scale image features before downsampling or patch embedding) and thus struggle to model fine-grained dependency among subtle medical image details. Concurrently, Multi-layer Perceptron (MLP)-based visual models are recognized as computation- and memory-efficient alternatives for modeling long-range visual dependency but have yet to be widely investigated in the MIC community. This doctoral research advances deep learning-based MIC by investigating effective long-range visual dependency modeling. It first presents innovative uses of transformers for both pixel- and image-wise medical vision tasks. The focus then shifts to MLPs, pioneering MLP-based visual models that capture fine-grained long-range visual dependency in medical images. Extensive experiments confirm the critical role of long-range dependency modeling in MIC and reveal a key finding: MLPs can feasibly model finer-grained long-range dependency among higher-resolution medical features containing enriched anatomical/pathological details. This finding establishes MLPs as a superior paradigm over transformers/CNNs, consistently enhancing performance across various medical vision tasks and paving the way for next-generation medical vision backbones.
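For readers unfamiliar with MLP-based visual models, a compact MLP-Mixer-style block illustrates how a pure-MLP design captures long-range dependency without self-attention: the token-mixing MLP spans all spatial positions at once (dimensions are illustrative; this is a generic block, not the thesis's architecture):

```python
# A standard Mixer-style block: token mixing (across positions) followed by
# channel mixing (per position), each with a residual connection.
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    def __init__(self, n_tokens=196, dim=256, hidden=512):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(                 # mixes across tokens
            nn.Linear(n_tokens, hidden), nn.GELU(), nn.Linear(hidden, n_tokens))
        self.channel_mlp = nn.Sequential(               # mixes across channels
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x):                               # x: (B, n_tokens, dim)
        x = x + self.token_mlp(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))
        return x

x = torch.randn(2, 196, 256)
print(MixerBlock()(x).shape)                            # torch.Size([2, 196, 256])
```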

Usman M, Rehman A, Ur Rehman A, Shahid A, Khan TM, Razzak I, Chung M, Shin YG

PubMed · Sep 14 2025
Accurate lung nodule segmentation is crucial for early-stage lung cancer diagnosis, as it can substantially enhance patient survival rates. Computed tomography (CT) images are widely employed for early diagnosis in lung nodule analysis. However, the heterogeneity of lung nodules, their size diversity, and the complexity of the surrounding environment pose challenges for developing robust nodule segmentation methods. In this study, we propose an efficient end-to-end framework, the Multi-Encoder Self-Adaptive Hard Attention Network (MESAHA-Net), which consists of three encoding paths, an attention block, and a decoder block that assimilates CT slice patches with both forward and backward maximum intensity projection (MIP) images. This synergy affords rich contextual understanding of lung nodules but also produces a large volume of features. To manage this profusion of features, we incorporate a self-adaptive hard attention mechanism guided by region-of-interest (ROI) masks centered on nodular regions, which MESAHA-Net produces autonomously. The network performs slice-by-slice segmentation, emphasizing nodule regions, to produce precise three-dimensional (3D) segmentations. The proposed framework was comprehensively evaluated on the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset, the largest publicly available dataset for lung nodule segmentation. The results demonstrate that our approach is highly robust across various lung nodule types, outperforming previous state-of-the-art techniques in segmentation performance and computational complexity, making it suitable for real-time clinical deployment of artificial intelligence (AI)-driven diagnostic tools.
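Two of the ingredients named above, forward/backward MIP inputs and ROI-guided hard attention, can be sketched in a few lines (window size, shapes, and the gating form are simplifying assumptions, not the published design):

```python
# Sketch: forward/backward maximum intensity projections around a slice,
# and hard attention that gates feature maps with a binary ROI mask.
import torch

def forward_backward_mip(volume, z, window=5):
    """volume: (D, H, W) CT volume; MIPs over slices after/before slice z."""
    fwd = volume[z + 1 : z + 1 + window].max(dim=0).values
    bwd = volume[max(z - window, 0) : z].max(dim=0).values
    return fwd, bwd

def hard_attention(features, roi_mask):
    """Zero out feature responses outside the nodule-centred ROI."""
    return features * roi_mask.unsqueeze(0)    # (C, H, W) * (1, H, W)

vol = torch.rand(64, 128, 128)                 # placeholder CT volume
fwd, bwd = forward_backward_mip(vol, z=32)
feats = torch.randn(16, 128, 128)              # placeholder feature maps
roi = (torch.rand(128, 128) > 0.9).float()     # placeholder ROI mask
gated = hard_attention(feats, roi)
```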

Muhammad F, Weber Ii KA, Haynes G, Villeneuve L, Smith L, Baha A, Hameed S, Khan AF, Dhaher Y, Parrish T, Rohan M, Smith ZA

PubMed · Sep 14 2025
Degenerative cervical myelopathy (DCM) is the leading cause of spinal cord disorder in adults, yet conventional MRI cannot detect microstructural damage beyond the compression site. Current applications of the magnetization transfer ratio (MTR), while promising, suffer from limited standardization, operator dependence, and unclear added value over traditional metrics such as cross-sectional area (CSA). To address these limitations, we used our semi-automated analysis pipeline built on the Spinal Cord Toolbox (SCT) platform to automate MTR extraction. Our method integrates deep learning-based convolutional neural networks (CNNs) for spinal cord segmentation, vertebral labeling via a global curve optimization algorithm, and PAM50 template registration. Using the Generic Spine Protocol, we acquired 3T T2-weighted and magnetization transfer MRI from 30 patients with DCM and 15 age-matched healthy controls (HC). We computed MTR and CSA at the level of maximal compression (C5-C6) and at a distant, uncompressed region (C2-C3), and extracted regional and tract-specific MTR using probabilistic maps in template space. Diagnostic accuracy was assessed with ROC analysis, and k-means clustering revealed patient subgroups based on neurological impairment. Correlation analysis assessed associations between MTR measures and DCM deficits. Patients with DCM showed significant MTR reductions in both compressed and uncompressed regions (p < 0.05). At C2-C3, MTR outperformed CSA (AUC 0.74 vs 0.69) in detecting spinal cord pathology. Tract-specific MTR values correlated with deficits in dexterity, grip strength, and balance. Our reproducible, computationally robust pipeline links microstructural injury to clinical outcomes in DCM and provides a scalable framework for multi-site quantitative MRI analysis of the spinal cord.
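The MTR computation such a pipeline automates follows the standard definition MTR = 100 × (MT_off − MT_on) / MT_off within a cord mask. A minimal sketch with nibabel, assuming segmentation and registration have already been run (file names are placeholders):

```python
# Sketch: compute an MTR map from co-registered MT-on/MT-off volumes and
# average it inside a spinal cord mask.
import nibabel as nib
import numpy as np

mt_on = nib.load("mt1.nii.gz").get_fdata()       # MT pulse applied
mt_off = nib.load("mt0.nii.gz").get_fdata()      # no MT pulse
cord = nib.load("cord_seg.nii.gz").get_fdata() > 0

with np.errstate(divide="ignore", invalid="ignore"):
    mtr = 100.0 * (mt_off - mt_on) / mt_off
mtr[~np.isfinite(mtr)] = 0.0                     # guard divisions by zero

mean_mtr = mtr[cord].mean()                      # e.g. whole-cord MTR
print(f"mean cord MTR: {mean_mtr:.1f} %")
```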

Hull G

PubMed · Sep 14 2025
Enthusiasm about the use of artificial intelligence (AI) in medicine has been tempered by concern that algorithmic systems can be unfairly biased against racially minoritized populations. This article uses work on racial disparities in knee osteoarthritis diagnosis to underline that achieving justice in the use of AI in medical imaging requires attention to the entire sociotechnical system within which it operates, rather than to isolated properties of algorithms. Using AI to make current diagnostic procedures more efficient risks entrenching existing disparities; a recent algorithm points to some of the problems in current procedures while highlighting systemic normative issues that need to be addressed in designing further AI systems. The article thus contributes to a literature arguing that bias and fairness issues in AI should be treated as aspects of structural inequality and injustice, and it highlights ways that AI can help make progress on these problems.

Zhou Z, Inoue A, Cox CW, McCollough CH, Yu L

PubMed · Sep 13 2025
To develop a volume-of-interest (VOI) imaging technique for multi-detector-row helical CT that reduces radiation dose or improves image quality within the VOI, a deep learning method based on a residual U-Net architecture, named VOI-Net, was developed to correct truncation artifacts in VOI helical CT. Three patient cases, a chest CT of interstitial lung disease and two abdominopelvic CTs of liver tumours, were used for evaluation through simulation. VOI-Net effectively corrected truncation artifacts (root mean square error [RMSE] of 5.97 ± 2.98 Hounsfield units [HU] for the chest case and 3.12 ± 1.93 HU and 3.71 ± 1.87 HU for the two liver cases). Radiation dose was reduced by 71% without sacrificing image quality within a 10-cm-diameter VOI, compared with a full scan field of view (FOV) of 50 cm. With the same total energy deposited as in a full-FOV scan, image quality within the VOI matched that of a scan at 350% higher radiation dose. A radiologist confirmed improved lesion conspicuity and visibility of small linear reticulations associated with ground-glass opacity and liver tumour. By focusing radiation on the VOI and applying VOI-Net in a helical scan, total radiation can be reduced, or image quality within the VOI can be raised to match that of a higher-dose standard full-FOV scan. This targeted helical VOI imaging technique, enabled by deep learning-based artifact correction, improves image quality within the VOI without increasing radiation dose.
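A toy sketch of the residual-correction idea behind a network like VOI-Net: a small U-Net-style model predicts the truncation artifact and subtracts it from the input (depth, channel counts, and input size are illustrative assumptions, not the published architecture):

```python
# Sketch: one-level residual U-Net for artifact correction on 2D slices.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class ResidualUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = block(64, 32)
        self.out = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                         # full-resolution features
        e2 = self.enc2(self.pool(e1))             # half-resolution features
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return x - self.out(d)                    # subtract predicted artifact

voi_slice = torch.randn(1, 1, 256, 256)           # truncated VOI reconstruction
corrected = ResidualUNet()(voi_slice)
```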