
XLLC-Net: A lightweight and explainable CNN for accurate lung cancer classification using histopathological images.

Jim JR, Rayed ME, Mridha MF, Nur K

PubMed · Jan 1 2025
Lung cancer imaging plays a crucial role in early diagnosis and treatment, where machine learning and deep learning have significantly advanced the accuracy and efficiency of disease classification. This study introduces the Explainable and Lightweight Lung Cancer Net (XLLC-Net), a streamlined convolutional neural network designed for classifying lung cancer from histopathological images. Using the LC25000 dataset, which includes three lung cancer classes and two colon cancer classes, we focused solely on the three lung cancer classes for this study. XLLC-Net effectively discerns complex disease patterns within these classes. The model consists of four convolutional layers and contains merely 3 million parameters, considerably reducing its computational footprint compared to existing deep learning models. This compact architecture facilitates efficient training, completing each epoch in just 60 seconds. Remarkably, XLLC-Net achieves a classification accuracy of 99.62% ± 0.16%, with precision, recall, and F1 score of 99.33% ± 0.30%, 99.67% ± 0.30%, and 99.70% ± 0.30%, respectively. Furthermore, the integration of explainable AI techniques, such as saliency maps and Grad-CAM, enhances the interpretability of the model, offering clear visual insights into its decision-making process. Our results underscore the potential of lightweight DL models in medical imaging, providing high accuracy and rapid training while ensuring model transparency and reliability.
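The abstract's headline claim is architectural economy: four convolutional layers and roughly 3 million parameters. As a point of reference, below is a minimal sketch of a four-convolutional-layer classifier of that general shape; the channel widths, kernel sizes, pooling scheme, and input resolution are assumptions for illustration, not the published XLLC-Net configuration.

```python
import torch
import torch.nn as nn

class LightweightLungCNN(nn.Module):
    """Illustrative 4-conv-layer classifier in the spirit of XLLC-Net.
    Channel widths and 224x224 RGB input are assumptions."""
    def __init__(self, num_classes: int = 3):  # three lung cancer classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling keeps the head small
        )
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = LightweightLungCNN()
# parameter count stays well under the 3M budget the abstract cites
print(sum(p.numel() for p in model.parameters()))
```

Global average pooling instead of a large fully connected head is the usual way such models stay under a small parameter budget.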

Volumetric atlas of the rat inner ear from microCT and iDISCO+ cleared temporal bones.

Cossellu D, Vivado E, Batti L, Gantar I, Pizzala R, Perin P

PubMed · Jan 1 2025
Volumetric atlases are an invaluable tool in neuroscience and otolaryngology, greatly aiding experiment planning and surgical interventions, as well as the interpretation of experimental and clinical data. The rat is a major animal model for hearing and balance studies, and a detailed volumetric atlas for the rat central auditory system (Waxholm) is available. However, the Waxholm rat atlas only contains a low-resolution inner ear featuring five structures. In the present work, we segmented and annotated 34 structures in the rat inner ear, yielding a detailed volumetric inner ear atlas that can be integrated with the Waxholm rat brain atlas. We performed iodine-enhanced microCT and iDISCO+-based clearing with fluorescence lightsheet microscopy imaging on a sample of rat temporal bones. Image stacks were segmented semiautomatically, and 34 inner ear volumes were reconstructed from five samples. Using geometrical morphometry, high-resolution segmentations obtained from lightsheet and microCT stacks were registered into the coordinate system of the Waxholm rat atlas. Cleared-sample autofluorescence was used for the reconstruction of most inner ear structures, including fluid-filled compartments, nerves and sensory epithelia, blood vessels, and connective tissue structures. Image resolution allowed reconstruction of thin ducts (reuniting, saccular, and endolymphatic) and of the utriculoendolymphatic valve. The vestibulocochlear artery coursing through bone was found to be associated with the reuniting duct and was visible in both cleared and microCT samples, allowing duct location to be inferred from microCT scans. Cleared labyrinths showed minimal shape distortion, as shown by alignment with microCT and Waxholm labyrinths. However, membranous labyrinths could display variable collapse of the superior division, especially the roof of the canal ampullae, whereas the inferior division (saccule and cochlea) was well preserved, except for Reissner's membrane, which could display ruptures in the second cochlear turn. As an example of atlas use, the volumes reconstructed from segmentations were used to separate macrophage populations from the spiral ganglion, auditory neuron dendrites, and organ of Corti. We have reconstructed 34 structures from the rat temporal bone, which are available as both image stacks and printable 3D objects in a shared repository for download. These can be used for teaching, localizing cells or other features within the ear, modeling auditory and vestibular sensory physiology, and training automated machine learning segmentation tools.
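The registration step described here, based on geometrical morphometry of corresponding landmarks, typically reduces to a least-squares similarity fit (Procrustes/Umeyama). The sketch below shows that generic computation under the assumption of paired 3D landmarks; it is not the authors' pipeline.

```python
import numpy as np

def similarity_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares similarity transform (rotation R, scale, translation t)
    mapping src landmarks onto dst landmarks; both are (N, 3) arrays.
    Generic Umeyama/Procrustes solution, shown only as an illustration of
    landmark-based atlas registration."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    s, d = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(d.T @ s)      # cross-covariance of centered sets
    sign = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, sign])          # guard against reflections
    R = U @ D @ Vt
    scale = (S * np.diag(D)).sum() / (s ** 2).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# apply to new points: registered = scale * (R @ pts.T).T + t
```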

Application research of artificial intelligence software in the analysis of thyroid nodule ultrasound image characteristics.

Xu C, Wang Z, Zhou J, Hu F, Wang Y, Xu Z, Cai Y

PubMed · Jan 1 2025
Thyroid nodule, as a common clinical endocrine disease, has become increasingly prevalent worldwide. Ultrasound, as the premier method of thyroid imaging, plays an important role in accurately diagnosing and managing thyroid nodules. However, there is a high degree of inter- and intra-observer variability in image interpretation, owing to differences in the knowledge and experience of sonographers, who face heavy ultrasound examination workloads every day. Artificial intelligence based on computer-aided diagnosis technology may improve the accuracy and time efficiency of thyroid nodule diagnosis. This study introduced an artificial intelligence software called SW-TH01/II to evaluate ultrasound image characteristics of thyroid nodules, including echogenicity, shape, border, margin, and calcification. We included 225 ultrasound images from each of two hospitals in Shanghai. The sonographers and the software performed characteristics analysis on the same group of images. We analyzed the consistency of the two sets of results and used the sonographers' results as the gold standard to evaluate the accuracy of SW-TH01/II. A total of 449 images were included in the statistical analysis. For the seven indicators, the proportions of agreement between SW-TH01/II and the sonographers' analysis results were all greater than 0.8. For echogenicity (with very hypoechoic), aspect ratio, and margin, the kappa coefficients between the two methods were above 0.75 (P < 0.001). The kappa coefficients for echogenicity (echotexture and echogenicity level), border, and calcification between the two methods were above 0.6 (P < 0.001). The median times for the software and the sonographers to interpret an image were 3 (2, 3) seconds and 26.5 (21.17, 34.33) seconds, respectively, and the difference was statistically significant (z = -18.36, P < 0.001). SW-TH01/II showed a high degree of accuracy and a substantial time-efficiency benefit in judging the characteristics of thyroid nodules. It can provide more objective results, improve the efficiency of ultrasound examination, and assist sonographers in characterizing thyroid nodule ultrasound images.
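The statistics reported here are standard agreement measures: raw proportion of agreement plus Cohen's kappa per characteristic, and a rank-based comparison of reading times (the z-statistic is consistent with a Mann-Whitney/Wilcoxon rank-sum test, though the abstract does not name the test). A sketch with made-up labels and times:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from scipy.stats import mannwhitneyu

# Hypothetical per-image labels for one characteristic (e.g. margin);
# the data below are illustrative, not from the study.
software    = np.array(["smooth", "irregular", "smooth", "irregular", "smooth"])
sonographer = np.array(["smooth", "irregular", "smooth", "smooth",    "smooth"])

agreement = (software == sonographer).mean()      # raw proportion of agreement
kappa = cohen_kappa_score(software, sonographer)  # chance-corrected agreement

# Reading-time comparison in seconds (rank-sum test is an assumption).
sw_times = np.array([3, 2, 3, 3, 2])
so_times = np.array([26, 22, 31, 34, 25])
stat, p = mannwhitneyu(sw_times, so_times)
print(agreement, kappa, stat, p)
```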

Intelligent and precise auxiliary diagnosis of breast tumors using deep learning and radiomics.

Wang T, Zang B, Kong C, Li Y, Yang X, Yu Y

PubMed · Jan 1 2025
Breast cancer is the most common malignant tumor among women worldwide, and early diagnosis is crucial for reducing mortality rates. Traditional diagnostic methods have significant limitations in terms of accuracy and consistency. Imaging is a common technique for diagnosing and predicting breast cancer, but human error remains a concern. Increasingly, artificial intelligence (AI) is being employed to assist physicians in reducing diagnostic errors. We developed an intelligent diagnostic model combining deep learning and radiomics to enhance breast tumor diagnosis. The model integrates MobileNet with ResNeXt-inspired depthwise separable and grouped convolutions, improving feature processing and efficiency while reducing parameters. Using the AI-Dhabyani and TCIA breast ultrasound datasets, we validated the model internally and externally and compared it with VGG16, ResNet, AlexNet, and MobileNet. Results: The internal validation set achieved an accuracy of 83.84% with an AUC of 0.92, outperforming the other models. The external validation set showed an accuracy of 69.44% with an AUC of 0.75, demonstrating robustness and generalizability. Conclusions: The proposed deep learning and radiomics model improved breast tumor diagnosis over the compared architectures in both internal and external validation.
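For readers unfamiliar with the two convolution styles the abstract combines, here is a minimal PyTorch sketch of a block pairing a MobileNet-style depthwise separable convolution with a ResNeXt-style grouped convolution; channel counts, group cardinality, and the residual wiring are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableGroupedBlock(nn.Module):
    """Depthwise separable conv (MobileNet idea) followed by a grouped conv
    (ResNeXt idea); both cut parameters relative to dense convolutions."""
    def __init__(self, channels: int = 64, groups: int = 8):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, 3, padding=1,
                                   groups=channels, bias=False)  # one filter per channel
        self.pointwise = nn.Conv2d(channels, channels, 1, bias=False)
        self.grouped = nn.Conv2d(channels, channels, 3, padding=1,
                                 groups=groups, bias=False)      # ResNeXt-style cardinality
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.act(self.pointwise(self.depthwise(x)))
        return self.act(self.bn(self.grouped(out)) + x)          # residual connection

x = torch.randn(1, 64, 56, 56)
print(DepthwiseSeparableGroupedBlock()(x).shape)
```

A dense 3x3 conv at 64 channels costs 64·64·9 weights; the depthwise+pointwise pair costs 64·9 + 64·64, which is where the parameter savings the abstract cites come from.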

Ground-truth-free deep learning approach for accelerated quantitative parameter mapping with memory efficient learning.

Fujita N, Yokosawa S, Shirai T, Terada Y

PubMed · Jan 1 2025
Quantitative MRI (qMRI) requires the acquisition of multiple images with parameter changes, resulting in longer measurement times than conventional imaging. Deep learning (DL) for image reconstruction has shown a significant reduction in acquisition time and improved image quality. In qMRI, where the image contrast varies between sequences, preparing large, fully sampled (FS) datasets is challenging. Recently, methods that do not require FS data, such as self-supervised learning (SSL) and zero-shot self-supervised learning (ZSSSL), have been proposed. Another challenge is the large GPU memory requirement of DL-based qMRI image reconstruction, owing to the simultaneous processing of multiple contrast images. In this context, Kellman et al. proposed memory-efficient learning (MEL) to save GPU memory. This study evaluated SSL and ZSSSL frameworks with MEL to accelerate qMRI. Three experiments were conducted using the following sequences: 2D T2 mapping/MSME (Experiment 1), 3D T1 mapping/VFA-SPGR (Experiment 2), and 3D T2 mapping/DESS (Experiment 3). Each experiment used undersampled k-space data at acceleration factors (AF) of 4, 8, and 12. The reconstructed maps were evaluated using quantitative metrics. In this study, we performed three qMRI reconstruction experiments and compared the ground-truth (GT)-free learning methods, SSL and ZSSSL, against supervised learning (SL). Overall, the performances of SSL and ZSSSL were only slightly inferior to those of SL, even under high-AF conditions. The quantitative errors in diagnostically important tissues (white matter, gray matter, and meniscus) were small, demonstrating that SSL and ZSSSL performed comparably to SL. Additionally, by incorporating a GPU memory-saving implementation, we demonstrated that the network can operate on a GPU with small memory (<8 GB) with minimal speed reduction. This study demonstrates the effectiveness of memory-efficient GT-free learning methods using MEL to accelerate qMRI.
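The memory saving MEL provides comes from recomputing each unrolled reconstruction step's activations during the backward pass instead of storing them. The sketch below illustrates that compute-for-memory trade-off with PyTorch's generic gradient checkpointing applied per unrolled iteration; the toy two-channel input, regularizer network, and step size are placeholders, and this is an analogy to MEL, not the authors' implementation.

```python
import torch
from torch.utils.checkpoint import checkpoint

class UnrolledRecon(torch.nn.Module):
    """Toy unrolled reconstruction with per-iteration gradient checkpointing."""
    def __init__(self, n_iters: int = 8):
        super().__init__()
        self.n_iters = n_iters
        # stand-in regularizer; 2 channels mimic real/imaginary parts
        self.reg = torch.nn.Sequential(
            torch.nn.Conv2d(2, 32, 3, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(32, 2, 3, padding=1),
        )

    def step(self, x):
        return x - 0.1 * self.reg(x)  # one regularized update step

    def forward(self, x):
        for _ in range(self.n_iters):
            # recompute this step on backward instead of caching activations
            x = checkpoint(self.step, x, use_reentrant=False)
        return x

x = torch.randn(1, 2, 64, 64, requires_grad=True)
UnrolledRecon()(x).sum().backward()  # peak memory ~ one step, not n_iters steps
```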

Clinical-radiomics models with machine-learning algorithms to distinguish uncomplicated from complicated acute appendicitis in adults: a multiphase multicenter cohort study.

Li L, Sun Y, Sun Y, Gao Y, Zhang B, Qi R, Sheng F, Yang X, Liu X, Liu L, Lu C, Chen L, Zhang K

PubMed · Jan 1 2025
Increasing evidence suggests that non-operative management (NOM) with antibiotics could serve as a safe alternative to surgery for the treatment of uncomplicated acute appendicitis (AA). However, accurately differentiating between uncomplicated and complicated AA remains challenging. Our aim was to develop and validate machine-learning-based diagnostic models to differentiate uncomplicated from complicated AA. This was a multicenter cohort study conducted between January 2021 and December 2022 across five tertiary hospitals. Three distinct diagnostic models were created: the clinical-parameter-based model, the CT-radiomics-based model, and the clinical-radiomics-fused model. These models were developed using a comprehensive set of eight machine-learning algorithms: logistic regression (LR), support vector machine (SVM), random forest (RF), decision tree (DT), gradient boosting (GB), K-nearest neighbors (KNN), Gaussian Naïve Bayes (GNB), and multi-layer perceptron (MLP). The performance and accuracy of these models were compared. All models exhibited excellent diagnostic performance in the training cohort, achieving a maximal AUC of 1.00. For the clinical-parameter model, the GB classifier yielded the optimal AUC of 0.77 (95% confidence interval [CI]: 0.64-0.90) in the testing cohort, while the LR classifier yielded the optimal AUC of 0.76 (95% CI: 0.66-0.86) in the validation cohort. For the CT-radiomics-based model, the GB classifier achieved the best AUC of 0.74 (95% CI: 0.60-0.88) in the testing cohort, and SVM yielded an optimal AUC of 0.63 (95% CI: 0.51-0.75) in the validation cohort. For the clinical-radiomics-fused model, the RF classifier yielded an optimal AUC of 0.84 (95% CI: 0.74-0.95) in the testing cohort and 0.76 (95% CI: 0.67-0.86) in the validation cohort. An open-access, user-friendly online tool was developed for clinical application. This multicenter study suggests that the clinical-radiomics-fused model, constructed using the RF algorithm, effectively differentiated between complicated and uncomplicated AA.
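All eight algorithms named here have standard scikit-learn implementations, so the comparison loop is straightforward to reproduce in outline. The sketch below uses a synthetic feature matrix as a stand-in for the clinical-radiomics features (an assumption; the real features and tuning are not given in the abstract).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the clinical-radiomics feature matrix.
X, y = make_classification(n_samples=400, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

classifiers = {
    "LR": LogisticRegression(max_iter=1000), "SVM": SVC(probability=True),
    "RF": RandomForestClassifier(), "DT": DecisionTreeClassifier(),
    "GB": GradientBoostingClassifier(), "KNN": KNeighborsClassifier(),
    "GNB": GaussianNB(), "MLP": MLPClassifier(max_iter=1000),
}
for name, clf in classifiers.items():
    auc = roc_auc_score(y_te, clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.2f}")
```

The training-cohort AUC of 1.00 reported above is a reminder that held-out testing and external validation cohorts, as used in this study, are what make the comparison meaningful.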

Recognition of flight cadets brain functional magnetic resonance imaging data based on machine learning analysis.

Ye L, Weng S, Yan D, Ma S, Chen X

PubMed · Jan 1 2025
The rapid advancement of the civil aviation industry has attracted significant attention to research on pilots. However, the brain changes experienced by flight cadets following their training remain, to some extent, unexplored compared with those of the general population. The aim of this study was to examine the impact of flight training on brain function by employing machine learning (ML) techniques. We collected resting-state functional magnetic resonance imaging (resting-state fMRI) data from 79 flight cadets and ground program cadets, extracting blood oxygenation level-dependent (BOLD) signals, amplitude of low-frequency fluctuation (ALFF), regional homogeneity (ReHo), and functional connectivity (FC) metrics as feature inputs for ML models. After conducting feature selection using a two-sample t-test, we established various ML classification models, including Extreme Gradient Boosting (XGBoost), Logistic Regression (LR), Random Forest (RF), Support Vector Machine (SVM), and Gaussian Naive Bayes (GNB). Comparative analysis of the model results revealed that the LR classifier based on BOLD signals could accurately distinguish flight cadets from the general population, achieving an AUC of 83.75% and an accuracy of 0.93. Furthermore, an analysis of the features contributing most to the ML classification models indicated that these features were predominantly located in brain regions associated with auditory-visual processing, motor function, emotional regulation, and cognition, primarily within the Default Mode Network (DMN), Visual Network (VN), and SomatoMotor Network (SMN). These findings suggest that flight-trained cadets may exhibit enhanced functional dynamics and cognitive flexibility.
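The pipeline described, per-feature two-sample t-test screening followed by classification, is easy to sketch. The synthetic data below are placeholders for the subject-level fMRI features; note that in practice the t-test selection should be nested inside cross-validation to avoid leakage (the abstract does not say how the authors handled this).

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: rows = 79 subjects, columns = ALFF/ReHo/FC features.
rng = np.random.default_rng(0)
X = rng.normal(size=(79, 500))
y = rng.integers(0, 2, size=79)  # 1 = flight cadet, 0 = ground program cadet

# Two-sample t-test per feature between groups; keep nominally significant ones.
_, p = ttest_ind(X[y == 1], X[y == 0], axis=0)
X_sel = X[:, p < 0.05]

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X_sel, y, scoring="roc_auc", cv=5).mean())
```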

Cervical vertebral body segmentation in X-ray and magnetic resonance imaging based on YOLO-UNet: Automatic segmentation approach and available tool.

Wang H, Lu J, Yang S, Xiao Y, He L, Dou Z, Zhao W, Yang L

PubMed · Jan 1 2025
Cervical spine disorders are becoming increasingly common, particularly among sedentary populations. Accurate segmentation of the cervical vertebrae is critical for diagnostic and research applications. Traditional segmentation methods are limited in terms of precision and applicability across imaging modalities. The aim of this study was to develop and evaluate a fully automatic segmentation method, and a user-friendly tool, for detecting cervical vertebral bodies using a combined neural network model based on the YOLOv11 and U-Net3+ models. A dataset of X-ray and magnetic resonance imaging (MRI) images was collected, enhanced, and annotated to include 2136 X-ray images and 2184 MRI images. The proposed YOLO-UNet ensemble model was trained and compared with four other image extraction models: YOLOv11, DeepLabV3+, U-Net3+ for direct image segmentation, and the YOLO-DeepLab network. The evaluation metrics included the Dice coefficient, Hausdorff distance, intersection over union, positive predictive value, and sensitivity. The YOLO-UNet model combined the advantages of the YOLO and U-Net models and demonstrated excellent vertebral body segmentation on both the X-ray and MRI datasets, producing results closer to the ground truth images. Compared with the other models, it achieved greater accuracy, a more faithful depiction of vertebral body shape, better versatility, and superior performance across all evaluation indicators. The YOLO-UNet network model provided a robust and versatile solution for cervical vertebral body segmentation, demonstrating excellent accuracy and adaptability across imaging modalities on both the X-ray and MRI datasets. The accompanying user-friendly tool enhanced usability, making it accessible to both clinical and research users. This study addressed the challenge of large-scale medical annotation tasks, thereby reducing project costs and supporting advancements in medical information technology and clinical research.
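Of the evaluation metrics listed, the Dice coefficient and intersection over union are the most commonly reimplemented; the generic definitions below are standard, not the authors' code, and the commented two-stage pipeline uses hypothetical function names to convey the detect-then-segment idea.

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray):
    """Dice coefficient and IoU for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum())
    iou = inter / np.logical_or(pred, truth).sum()
    return dice, iou

# Two-stage idea behind YOLO-UNet (names are hypothetical placeholders):
# box = yolo_detect(image)              # YOLO proposes a vertebral-body box
# mask = unet_segment(crop(image, box)) # U-Net segments within the crop
```

Cropping to a detected box before segmentation lets the segmenter work at higher effective resolution on a small, consistent field of view, which is the usual rationale for such ensembles.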

Investigating methods to enhance interpretability and performance in cardiac MRI for myocardial scarring diagnosis using convolutional neural network classification and One Match.

Udin MH, Armstrong S, Kai A, Doyle ST, Pokharel S, Ionita CN, Sharma UC

PubMed · Jan 1 2025
Machine learning (ML) classification of myocardial scarring in cardiac MRI is often hindered by limited explainability, particularly with convolutional neural networks (CNNs). To address this, we developed One Match (OM), an algorithm that builds on template matching to improve both the explainability and performance of ML myocardial scarring classification. By incorporating OM, we aim to foster trust in AI models for medical diagnostics and demonstrate that improved interpretability does not have to compromise classification accuracy. Using a cardiac MRI dataset from 279 patients, this study evaluates One Match, which classifies myocardial scarring in images by matching each image to a set of labeled template images. It uses the highest correlation score from these matches for classification and is compared to a traditional sequential CNN. Enhancements such as autodidactic enhancement (AE) and patient-level classifications (PLCs) were applied to improve the predictive accuracy of both methods. Performance is reported as accuracy, sensitivity, specificity, precision, and F1-score. The highest classification performance was observed with the OM algorithm when enhanced by both AE and PLCs: 95.3% accuracy, 92.3% sensitivity, 96.7% specificity, 92.3% precision, and a 92.3% F1-score, marking a significant improvement over the base configurations. AE alone had a positive impact on OM, increasing accuracy from 89.0% to 93.2%, but decreased the accuracy of the CNN from 85.3% to 82.9%. In contrast, PLCs improved accuracy for both the CNN and OM, raising the CNN's accuracy by 4.2% and OM's by 7.4%. This study demonstrates the effectiveness of OM in classifying myocardial scars, particularly when enhanced with AE and PLCs. The interpretability of OM also enabled the examination of misclassifications, providing insights that could accelerate development and foster greater trust among clinical stakeholders.
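The core mechanism, classify by the highest correlation score against labeled templates, is compact enough to sketch directly. The version below uses plain normalized cross-correlation and assumes pre-registered, same-size images; it illustrates the idea, not the published algorithm, and omits the AE and PLC enhancements.

```python
import numpy as np

def one_match_classify(image: np.ndarray, templates: list[np.ndarray],
                       labels: list[str]) -> str:
    """Template-matching classifier in the spirit of One Match: score the
    query against every labeled template and return the best match's label.
    Explainability comes for free: the winning template *is* the evidence."""
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        return float((a * b).mean())  # Pearson correlation of pixel values
    scores = [ncc(image, t) for t in templates]
    return labels[int(np.argmax(scores))]
```

Because the decision is a single argmax over template scores, a misclassification can be audited by inspecting which template won and why, which matches the interpretability benefit the abstract emphasizes.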

Metal artifact reduction combined with deep learning image reconstruction algorithm for CT image quality optimization: a phantom study.

Zou H, Wang Z, Guo M, Peng K, Zhou J, Zhou L, Fan B

PubMed · Jan 1 2025
This study aimed to evaluate the effects of the smart metal artifact reduction (MAR) algorithm and combinations of various scanning parameters, including radiation dose level, tube voltage, and reconstruction algorithm, on metal artifact reduction and overall image quality, in order to identify the optimal protocol for clinical application. A phantom with a pacemaker was examined at a standard dose (effective dose (ED): 3 mSv) and a low dose (ED: 0.5 mSv), with three scan voltages (70, 100, and 120 kVp) selected for each dose. Raw data were reconstructed using 50% adaptive statistical iterative reconstruction-V (ASIR-V), ASIR-V with MAR, high-strength deep learning image reconstruction (DLIR-H), and DLIR-H with MAR. Quantitative analyses (artifact index (AI), noise, signal-to-noise ratio (SNR) of artifact-impaired pulmonary nodules (PNs), and noise power spectrum (NPS) of artifact-free regions) and qualitative evaluation were performed. Quantitatively, the DLIR algorithm and high tube voltages exhibited lower noise than ASIR-V and low tube voltages (p < 0.001). The AI of images with MAR or high tube voltages was significantly lower than that of images without MAR or with low tube voltages (p < 0.001). No significant difference was observed in AI between low-dose images with 120 kVp DLIR-H MAR and standard-dose images with 70 kVp ASIR-V MAR (p = 0.143). Only the 70 kVp 3 mSv protocol demonstrated statistically significant differences in SNR for artifact-impaired PNs (p = 0.041). The f_peak and f_avg values were similar across scenarios, indicating that the MAR algorithm did not alter the image texture in artifact-free regions. The qualitative results for the extent of metal artifacts, the confidence in diagnosing artifact-impaired PNs, and the overall image quality were generally consistent with the quantitative results. The MAR algorithm combined with DLIR-H can reduce metal artifacts and enhance overall image quality, particularly at high tube voltages.
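The artifact index in CT phantom studies is commonly computed from ROI standard deviations as AI = sqrt(SD_artifact² − SD_reference²); assuming that convention applies here (the abstract does not spell it out), a minimal sketch:

```python
import numpy as np

def artifact_index(artifact_roi: np.ndarray, reference_roi: np.ndarray) -> float:
    """Common phantom-study definition of the artifact index:
    sqrt(SD_a^2 - SD_r^2), where SD_a is the standard deviation of HU values
    in an artifact-impaired ROI and SD_r that of an artifact-free reference
    ROI. Whether this matches the study's exact definition is an assumption."""
    sd_a, sd_r = artifact_roi.std(), reference_roi.std()
    return float(np.sqrt(max(sd_a ** 2 - sd_r ** 2, 0.0)))
```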