Page 448 of 449 · 4481 results

Ground-truth-free deep learning approach for accelerated quantitative parameter mapping with memory efficient learning.

Fujita N, Yokosawa S, Shirai T, Terada Y

PubMed · Jan 1, 2025
Quantitative MRI (qMRI) requires the acquisition of multiple images with parameter changes, resulting in longer measurement times than conventional imaging. Deep learning (DL) for image reconstruction has shown a significant reduction in acquisition time and improved image quality. In qMRI, where the image contrast varies between sequences, preparing large, fully-sampled (FS) datasets is challenging. Recently, methods that do not require FS data, such as self-supervised learning (SSL) and zero-shot self-supervised learning (ZSSSL), have been proposed. Another challenge is the large GPU memory requirement of DL-based qMRI image reconstruction, owing to the simultaneous processing of multiple contrast images. In this context, Kellman et al. proposed memory-efficient learning (MEL) to save GPU memory. This study evaluated SSL and ZSSSL frameworks with MEL to accelerate qMRI. Three experiments were conducted using the following sequences: 2D T2 mapping/MSME (Experiment 1), 3D T1 mapping/VFA-SPGR (Experiment 2), and 3D T2 mapping/DESS (Experiment 3). Each experiment used undersampled k-space data at acceleration factors (AFs) of 4, 8, and 12. The reconstructed maps were evaluated using quantitative metrics. In this study, we performed three qMRI reconstruction measurements and compared the ground-truth-free learning methods, SSL and ZSSSL, against supervised learning (SL). Overall, the performance of SSL and ZSSSL was only slightly inferior to that of SL, even under high-AF conditions. The quantitative errors in diagnostically important tissues (white matter, grey matter, and meniscus) were small, demonstrating that SSL and ZSSSL performed comparably to SL. Additionally, by incorporating a GPU memory-saving implementation, we demonstrated that the network can operate on a GPU with small memory (<8 GB) with minimal speed reduction. This study demonstrates the effectiveness of memory-efficient, ground-truth-free learning methods using MEL to accelerate qMRI.
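The GPU-memory saving in MEL rests on the same idea as activation checkpointing: store only a subset of intermediate activations and recompute the rest on demand, trading compute for memory. A minimal, framework-free sketch of that trade-off (function names and the toy layers are illustrative, not from the paper):

```python
def forward_all(x, layers):
    """Standard forward pass: cache every activation (O(n) memory)."""
    acts = [x]
    for f in layers:
        acts.append(f(acts[-1]))
    return acts

def forward_checkpointed(x, layers, k):
    """Cache only every k-th activation (roughly O(n/k) memory)."""
    ckpts = {0: x}
    h = x
    for i, f in enumerate(layers, start=1):
        h = f(h)
        if i % k == 0:
            ckpts[i] = h
    return h, ckpts

def activation_at(i, ckpts, layers, k):
    """Recompute activation i on demand from the nearest stored checkpoint."""
    base = (i // k) * k
    h = ckpts[base]
    for j in range(base, i):
        h = layers[j](h)
    return h
```

Frameworks such as PyTorch expose this mechanism directly (e.g. `torch.utils.checkpoint`); the sketch only shows why the recomputation gives back the exact same activations.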

Radiomics machine learning based on asymmetrically prominent cortical and deep medullary veins combined with clinical features to predict prognosis in acute ischemic stroke: a retrospective study.

Li H, Chang C, Zhou B, Lan Y, Zang P, Chen S, Qi S, Ju R, Duan Y

PubMed · Jan 1, 2025
Acute ischemic stroke (AIS) has a poor prognosis and a high recurrence rate. Predicting the outcomes of AIS patients in the early stages of the disease is therefore important. The establishment of intracerebral collateral circulation significantly improves the survival of brain cells and the outcomes of AIS patients. However, no machine learning method has been applied to investigate the correlation between the dynamic evolution of intracerebral venous collateral circulation and AIS prognosis. Therefore, we employed a support vector machine (SVM) algorithm to analyze asymmetrically prominent cortical veins (APCVs) and deep medullary veins (DMVs), establishing a radiomics model that, combined with clinical indicators, predicts the prognosis of AIS. The magnetic resonance imaging (MRI) data and clinical indicators of 150 AIS patients were retrospectively analyzed. Regions of interest corresponding to the DMVs and APCVs were delineated, and least absolute shrinkage and selection operator (LASSO) regression was used to select features extracted from these regions. An APCV-DMV radiomic model was created via the SVM algorithm, and independent clinical risk factors associated with AIS were combined with the radiomic model to generate a joint model. The SVM algorithm was selected because of its proven efficacy in handling high-dimensional radiomic data compared with alternative classifiers (e.g., random forest) in pilot experiments. Nine radiomic features associated with AIS patient outcomes were ultimately selected. In the internal training test set, the AUCs of the clinical, DMV-APCV radiomic and joint models were 0.816, 0.976 and 0.996, respectively. The DeLong test revealed that the predictive performance of the joint model was better than that of the individual models, with a test set AUC of 0.996, sensitivity of 0.905, and specificity of 1.000 (P < 0.05).
Using radiomic methods, we propose a novel joint predictive model that combines the radiomic features of the APCVs and DMVs with clinical indicators. This model quantitatively characterizes the morphological and functional attributes of venous collateral circulation, elucidating its important role in accurately evaluating the prognosis of patients with AIS and providing a noninvasive and highly accurate imaging tool for early prognostic prediction.
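The LASSO feature-selection step described above can be sketched as plain cyclic coordinate descent with soft-thresholding, which drives irrelevant coefficients to exactly zero. This is a generic illustration on synthetic data, not the authors' pipeline or their radiomic features:

```python
import numpy as np

def lasso_coordinate_descent(X, y, lam, n_iter=200):
    """Minimise (1/2n)||y - Xw||^2 + lam * ||w||_1 by cyclic coordinate descent."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with feature j's current contribution removed.
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r / n
            z = (X[:, j] ** 2).sum() / n
            # Soft-thresholding shrinks small coefficients to exactly zero.
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return w
```

In practice a library solver (e.g. scikit-learn's `Lasso`) would be used, with the penalty chosen by cross-validation; the sketch only shows the selection mechanism.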

Radiomics and Deep Learning as Important Techniques of Artificial Intelligence - Diagnosing Perspectives in Cytokeratin 19 Positive Hepatocellular Carcinoma.

Wang F, Yan C, Huang X, He J, Yang M, Xian D

PubMed · Jan 1, 2025
Currently, there are inconsistencies among studies on preoperative prediction of Cytokeratin 19 (CK19) expression in hepatocellular carcinoma (HCC) using traditional imaging, radiomics, and deep learning. We aimed to systematically analyze and compare the performance of non-invasive methods for predicting CK19-positive HCC, thereby providing insights for the stratified management of HCC patients. A comprehensive literature search was conducted in PubMed, EMBASE, Web of Science, and the Cochrane Library from inception to February 2025. Two investigators independently screened and extracted data based on inclusion and exclusion criteria. Eligible studies were included, and key findings were summarized in tables to provide a clear overview. Ultimately, 22 studies involving 3395 HCC patients were included. Of these, 72.7% (16/22) focused on traditional imaging, 36.4% (8/22) on radiomics, 9.1% (2/22) on deep learning, and 54.5% (12/22) on combined models. Magnetic resonance imaging was the most commonly used imaging modality (19/22), and over half of the studies (12/22) were published between 2022 and 2025. Moreover, 27.3% (6/22) were multicenter studies, 36.4% (8/22) included a validation set, and only 13.6% (3/22) were prospective. The area under the curve (AUC) of models using clinical and traditional imaging features ranged from 0.560 to 0.917, of radiomics models from 0.648 to 0.951, and of deep learning models from 0.718 to 0.820. Notably, the AUCs of combined models of clinical, imaging, radiomics and deep learning features ranged from 0.614 to 0.995. Nevertheless, multicenter external data were limited, with only 13.6% (3/22) of studies incorporating external validation. Combined models integrating traditional imaging, radiomics and deep learning show strong potential and performance for predicting CK19 expression in HCC.
Based on current limitations, future research should focus on building an easy-to-use dynamic online tool and on combining multicenter, multimodal imaging with advanced deep learning approaches to enhance the accuracy and robustness of model predictions.

Comparative analysis of diagnostic performance in mammography: A reader study on the impact of AI assistance.

Ramli Hamid MT, Ab Mumin N, Abdul Hamid S, Mohd Ariffin N, Mat Nor K, Saib E, Mohamed NA

PubMed · Jan 1, 2025
This study evaluates the impact of artificial intelligence (AI) assistance on the diagnostic performance of radiologists with varying levels of experience in interpreting mammograms in a Malaysian tertiary referral center, particularly in women with dense breasts. This retrospective study included 434 digital mammograms interpreted by two general radiologists (12 and 6 years of experience) and two trainees (2 years of experience). Diagnostic performance was assessed with and without AI assistance (Lunit INSIGHT MMG), using sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic curve (AUC). Inter-reader agreement was measured using kappa statistics. AI assistance significantly improved the diagnostic performance of all reader groups across all metrics (p < 0.05). The senior radiologist consistently achieved the highest sensitivity (86.5% without AI, 88.0% with AI) and specificity (60.5% without AI, 59.2% with AI). The junior radiologist demonstrated the highest PPV (56.9% without AI, 74.6% with AI) and NPV (90.3% without AI, 92.2% with AI). The trainees showed the lowest performance, but AI significantly enhanced their accuracy. AI assistance was particularly beneficial in interpreting mammograms of women with dense breasts. AI assistance significantly enhances the diagnostic accuracy and consistency of radiologists in mammogram interpretation, with notable benefits for less experienced readers. These findings support the integration of AI into clinical practice, particularly in resource-limited settings where access to specialized breast radiologists is constrained.
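The reader-performance metrics quoted above all follow from a 2×2 confusion table; a minimal helper showing the definitions (illustrative only, not the study's analysis code):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from confusion-table counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # precision among positive calls
        "npv": tn / (tn + fn),          # reliability of negative calls
    }
```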

Improving lung cancer diagnosis and survival prediction with deep learning and CT imaging.

Wang X, Sharpnack J, Lee TCM

PubMed · Jan 1, 2025
Lung cancer is a major cause of cancer-related deaths, and early diagnosis and treatment are crucial for improving patients' survival outcomes. In this paper, we propose to employ convolutional neural networks to model the non-linear relationship between the risk of lung cancer and the lungs' morphology revealed in CT images. We apply a mini-batched loss that extends the Cox proportional hazards model to handle the non-convexity induced by neural networks, which also enables training on large datasets. Additionally, we propose to combine the mini-batched loss with binary cross-entropy to predict both lung cancer occurrence and the risk of mortality. Simulation results demonstrate the effectiveness of the mini-batched loss both with and without the censoring mechanism, as well as its combination with binary cross-entropy. We evaluate our approach on the National Lung Screening Trial dataset with several 3D convolutional neural network architectures, achieving high AUC and C-index scores for lung cancer classification and survival prediction. These results, obtained from simulations and real data experiments, highlight the potential of our approach to improve the diagnosis and treatment of lung cancer.
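The Cox-based loss the authors extend can be written, per batch, as a negative log partial likelihood over the at-risk sets. A numpy sketch under the Breslow tie convention (names and details are illustrative, not the paper's implementation):

```python
import numpy as np

def cox_partial_nll(risk, time, event):
    """Negative log partial likelihood for predicted log-risk scores.

    risk:  (n,) predicted log hazard ratios
    time:  (n,) observed follow-up times
    event: (n,) 1 if the event occurred, 0 if censored
    """
    order = np.argsort(-time)          # sort subjects by descending time
    risk, event = risk[order], event[order]
    # Running log-sum-exp over the risk set: everyone still at risk at each time.
    log_risk_set = np.logaddexp.accumulate(risk)
    # Only subjects with an observed event contribute a term.
    return -np.sum((risk - log_risk_set)[event == 1])
```

A model that ranks earlier events as higher risk (concordant) should incur a lower loss than one that ranks them the other way around.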

Convolutional neural network using magnetic resonance brain imaging to predict outcome from tuberculosis meningitis.

Dong THK, Canas LS, Donovan J, Beasley D, Thuong-Thuong NT, Phu NH, Ha NT, Ourselin S, Razavi R, Thwaites GE, Modat M

PubMed · Jan 1, 2025
Tuberculous meningitis (TBM) leads to high mortality, especially amongst individuals with HIV. Predicting the incidence of disease-related complications is challenging, and the value of brain magnetic resonance imaging (MRI) for this purpose has not been well investigated. We used a convolutional neural network (CNN) to explore the complementary contribution of brain MRI to conventional prognostic determinants. We pooled data from two randomised controlled trials of HIV-positive and HIV-negative adults with clinical TBM in Vietnam to predict the occurrence of death or new neurological complications in the first two months after the subject's first MRI session. We developed and compared three models: a logistic regression with clinical, demographic and laboratory data as reference; a CNN that utilised only T1-weighted MRI volumes; and a model that fused all available information. All models were fine-tuned using two repetitions of 5-fold cross-validation. The final evaluation was based on a random 70/30 training/test split, stratified by the outcome and HIV status. 215 patients were included, with an event prevalence of 22.3%. On the test set, our non-imaging model had a higher AUC (71.2% ± 1.1%) than the imaging-only model (67.3% ± 2.6%). The fused model was superior to both, with an average AUC of 77.3% ± 4.0% on the test set. The non-imaging variables were more informative in the HIV-positive group, while the imaging features were more predictive in the HIV-negative group. All three models performed better in the HIV-negative cohort. Interpretability maps derived from the selected model show its focus on the lateral fissures, the corpus callosum, the midbrain, and peri-ventricular tissues. Imaging information can provide added value in predicting unwanted outcomes of TBM. However, to confirm this finding, a larger dataset is needed.
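The AUCs compared above can be computed non-parametrically as the probability that a randomly chosen event case is scored above a non-event case (the Mann-Whitney formulation). A small illustrative helper:

```python
def auc_mann_whitney(scores, labels):
    """AUC = P(score of a positive > score of a negative); ties count 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```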

Volumetric atlas of the rat inner ear from microCT and iDISCO+ cleared temporal bones.

Cossellu D, Vivado E, Batti L, Gantar I, Pizzala R, Perin P

PubMed · Jan 1, 2025
Volumetric atlases are an invaluable tool in neuroscience and otolaryngology, greatly aiding experiment planning and surgical interventions, as well as the interpretation of experimental and clinical data. The rat is a major animal model for hearing and balance studies, and a detailed volumetric atlas for the rat central auditory system (Waxholm) is available. However, the Waxholm rat atlas only contains a low-resolution inner ear featuring five structures. In the present work, we segmented and annotated 34 structures in the rat inner ear, yielding a detailed volumetric inner ear atlas that can be integrated with the Waxholm rat brain atlas. We performed iodine-enhanced microCT and iDISCO+-based clearing with fluorescence lightsheet microscopy imaging on a sample of rat temporal bones. Image stacks were segmented semiautomatically, and 34 inner ear volumes were reconstructed from five samples. Using geometrical morphometry, high-resolution segmentations obtained from lightsheet and microCT stacks were registered into the coordinate system of the Waxholm rat atlas. Cleared-sample autofluorescence was used for the reconstruction of most inner ear structures, including fluid-filled compartments, nerves and sensory epithelia, blood vessels, and connective tissue structures. Image resolution allowed reconstruction of thin ducts (reuniting, saccular and endolymphatic) and of the utriculoendolymphatic valve. The vestibulocochlear artery coursing through bone was found to be associated with the reuniting duct and was visible in both cleared and microCT samples, allowing duct location to be inferred from microCT scans. Cleared labyrinths showed minimal shape distortions, as shown by alignment with microCT and Waxholm labyrinths.
However, membranous labyrinths could display variable collapse of the superior division, especially the roof of the canal ampullae, whereas the inferior division (saccule and cochlea) was well preserved, with the exception of Reissner's membrane, which could display ruptures in the second cochlear turn. As an example of atlas use, the volumes reconstructed from segmentations were used to separate macrophage populations from the spiral ganglion, auditory neuron dendrites, and organ of Corti. We have reconstructed 34 structures from the rat temporal bone, which are available as both image stacks and printable 3D objects in a shared repository for download. These can be used for teaching, localizing cells or other features within the ear, modeling auditory and vestibular sensory physiology, and training automated machine-learning segmentation tools.

Refining CT image analysis: Exploring adaptive fusion in U-nets for enhanced brain tissue segmentation.

Chen BC, Shen CY, Chai JW, Hwang RH, Chiang WC, Chou CH, Liu WM

PubMed · Jan 1, 2025
Non-contrast computed tomography (NCCT) quickly diagnoses acute cerebral hemorrhage or infarction. However, deep-learning (DL) algorithms often generate false alarms (FAs) beyond the cerebral region. We introduce an enhanced brain tissue segmentation method for infarction lesion segmentation (ILS). This method integrates an adaptive result-fusion strategy to confine the search operation within cerebral tissue, effectively reducing FAs. By leveraging fused brain masks, DL-based ILS algorithms focus on pertinent radiomic correlations. Various U-Net models underwent rigorous training, with exploration of diverse fusion strategies. Further refinement entailed applying a 9×9 Gaussian filter with unit standard deviation followed by binarization to mitigate false positives. Performance evaluation utilized Intersection over Union (IoU) and Hausdorff Distance (HD) metrics, complemented by external validation on a subset of the COCO dataset. Our study comprised 20 ischemic stroke patients (14 males, 4 females) with an average age of 68.9 ± 11.7 years. Fusion with UNet2+ and UNet3+ yielded an IoU of 0.955 and an HD of 1.33, while fusion with U-Net, UNet2+, and UNet3+ resulted in an IoU of 0.952 and an HD of 1.61. Evaluation on the COCO dataset demonstrated an IoU of 0.463 and an HD of 584.1 for fusion with UNet2+ and UNet3+, and an IoU of 0.453 and an HD of 728.0 for fusion with U-Net, UNet2+, and UNet3+. Our adaptive fusion strategy significantly diminishes FAs and enhances the training efficacy of DL-based ILS algorithms, surpassing individual U-Net models. This methodology holds promise as a versatile, data-independent approach for cerebral lesion segmentation.
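The two evaluation metrics used above, IoU and Hausdorff distance, have short closed-form definitions on binary masks and point sets; a minimal numpy sketch (illustrative, not the study's implementation):

```python
import numpy as np

def iou(a, b):
    """Intersection over Union of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def hausdorff(P, Q):
    """Symmetric Hausdorff distance between two (n, d) point sets."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Production pipelines typically use optimized versions (e.g. `scipy.spatial.distance.directed_hausdorff`); the pairwise-distance form above is the plain definition.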

Enhancement of Fairness in AI for Chest X-ray Classification.

Jackson NJ, Yan C, Malin BA

PubMed · Jan 1, 2024
The use of artificial intelligence (AI) in medicine has shown promise to improve the quality of healthcare decisions. However, AI can be biased in a manner that produces unfair predictions for certain demographic subgroups. In MIMIC-CXR, a publicly available dataset of over 300,000 chest X-ray images, diagnostic AI has been shown to have a higher false negative rate for racial minorities. We evaluated the capacity of synthetic data augmentation, oversampling, and demographic-based corrections to enhance the fairness of AI predictions. We show that adjusting unfair predictions for demographic attributes, such as race, is ineffective at improving fairness or predictive performance. However, using oversampling and synthetic data augmentation to modify disease prevalence reduced such disparities by 74.7% and 10.6%, respectively. Moreover, such fairness gains were accomplished without reduction in performance (95% CI AUC: [0.816, 0.820] versus [0.810, 0.819] versus [0.817, 0.821] for baseline, oversampling, and augmentation, respectively).
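The oversampling correction described above amounts to resampling under-represented groups until group sizes match the largest group. A toy standard-library sketch (the grouping key and records are invented for illustration):

```python
import random

def oversample_balanced(records, key, seed=0):
    """Duplicate-sample each group (defined by `key`) up to the largest group's size."""
    rng = random.Random(seed)
    groups = {}
    for r in records:
        groups.setdefault(key(r), []).append(r)
    target = max(len(g) for g in groups.values())
    out = []
    for g in groups.values():
        out.extend(g)                            # keep all originals
        out.extend(rng.choices(g, k=target - len(g)))  # resample the shortfall
    return out
```

In the fairness setting, the key would combine the demographic subgroup with the disease label so that prevalence, not just group size, is equalized.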

Integrating AI into Clinical Workflows: A Simulation Study on Implementing AI-aided Same-day Diagnostic Testing Following an Abnormal Screening Mammogram.

Lin Y, Hoyt AC, Manuel VG, Inkelas M, Maehara CK, Ayvaci MUS, Ahsen ME, Hsu W

PubMed · Jan 1, 2024
Artificial intelligence (AI) shows promise in clinical tasks, yet its integration into workflows remains underexplored. This study proposes an AI-aided same-day diagnostic imaging workup to reduce recall rates following abnormal screening mammograms and alleviate patient anxiety while waiting for the diagnostic examinations. Using discrete simulation, we found minimal disruption to the workflow (a 4% reduction in daily patient volume or a 2% increase in operating time) under specific conditions: operation from 9 am to 12 pm with all radiologists managing all patient types (screenings, diagnostics, and biopsies). Costs specific to the AI-aided same-day diagnostic workup include AI software expenses and potential losses from unused pre-reserved slots for same-day diagnostic workups. These simulation findings can inform the implementation of an AI-aided same-day diagnostic workup, with future research focusing on its potential benefits, including improved patient satisfaction, reduced anxiety, lower recall rates, and shorter time to cancer diagnoses and treatment.
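One cost component described above, losses from unused pre-reserved same-day slots, can be illustrated with a deterministic toy calculation (the numbers and function name are invented, not the study's simulation model):

```python
def same_day_slot_outcomes(abnormal_screens, reserved_slots):
    """Toy accounting: each abnormal screen wants one same-day diagnostic slot.

    Returns (same_day_workups, recalls_still_needed, unused_reserved_slots).
    Unused slots represent the reservation cost; recalls represent the
    patients who still wait for a separate diagnostic visit.
    """
    done = min(abnormal_screens, reserved_slots)
    return done, abnormal_screens - done, reserved_slots - done
```

A full discrete-event model would add stochastic arrivals, radiologist schedules, and operating-hour constraints on top of this accounting identity.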