Page 167 of 169 (1682 results)

Designing a computer-assisted diagnosis system for cardiomegaly detection and radiology report generation.

Zhu T, Xu K, Son W, Linton-Reid K, Boubnovski-Martell M, Grech-Sollars M, Lain AD, Posma JM

PubMed · May 1 2025
Chest X-rays (CXRs) are a diagnostic tool for cardiothoracic assessment, making up 50% of all diagnostic imaging tests. With hundreds of images examined every day, radiologists can suffer from fatigue, which may reduce diagnostic accuracy and slow report generation. We describe a prototype computer-assisted diagnosis (CAD) pipeline employing computer vision (CV) and natural language processing (NLP), trained and evaluated on the publicly available MIMIC-CXR dataset. We perform image quality assessment, view labelling, and segmentation-based cardiomegaly severity classification, and use the output of the severity classification for large language model-based report generation. Four board-certified radiologists assessed the output accuracy of our CAD pipeline. Across the dataset of 377,100 CXR images and 227,827 free-text radiology reports, our system identified 0.18% of cases with mixed-sex mentions, 0.02% of poor-quality images (F1 = 0.81), and 0.28% of wrongly labelled views (accuracy 99.4%). We assigned views for the 4.18% of images with unlabelled views. Our binary cardiomegaly classification model has 95.2% accuracy. Inter-radiologist agreement on the semantics and correctness of the generated reports was 0.62 (strict) and 0.85 (relaxed) for radiologist-MIMIC, similar to the radiologist-CAD agreement of 0.55 (strict) and 0.93 (relaxed). Our work found and corrected several incorrect or missing metadata annotations in the MIMIC-CXR dataset. The performance of our CAD system suggests it is on par with human radiologists. Future improvements revolve around better text generation and the development of CV tools for other diseases.
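The abstract reports both "strict" and "relaxed" agreement between raters. The paper's exact definitions are not given here; one common convention, assumed in this minimal sketch, is strict = identical ordinal scores and relaxed = scores within one category:

```python
# Hypothetical sketch of strict vs relaxed agreement between two raters
# scoring reports on an ordinal scale (e.g. 1-5). Definitions are an
# assumption, not taken from the paper.

def agreement(rater_a, rater_b, tolerance=0):
    """Fraction of items where the two raters' scores differ by <= tolerance."""
    assert len(rater_a) == len(rater_b)
    hits = sum(abs(a - b) <= tolerance for a, b in zip(rater_a, rater_b))
    return hits / len(rater_a)

# made-up ratings for eight generated reports
a = [5, 4, 3, 5, 2, 4, 4, 5]
b = [5, 3, 3, 4, 2, 4, 5, 5]

strict = agreement(a, b, tolerance=0)   # exact matches only
relaxed = agreement(a, b, tolerance=1)  # within one category
```

As in the reported numbers, relaxed agreement is necessarily at least as high as strict agreement, since every exact match also counts as a within-one match.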

Auxiliary Diagnosis of Pulmonary Nodules' Benignancy and Malignancy Based on Machine Learning: A Retrospective Study.

Wang W, Yang B, Wu H, Che H, Tong Y, Zhang B, Liu H, Chen Y

PubMed · Jan 1 2025
Lung cancer, one of the most lethal malignancies globally, often presents insidiously as pulmonary nodules. Its nonspecific clinical presentation and heterogeneous imaging characteristics hinder accurate differentiation between benign and malignant lesions, while the invasiveness and procedural constraints of biopsy underscore the critical need for non-invasive early diagnostic approaches. In this retrospective study, we analyzed outpatient and inpatient records from the First Medical Center of Chinese PLA General Hospital between 2011 and 2021, focusing on pulmonary nodules measuring 5-30 mm on CT scans without overt signs of malignancy. Pathological examination served as the reference standard. Comparative experiments evaluated SVM, RF, XGBoost, FNN, and Atten_FNN models, assessing AUC, sensitivity, and specificity. The dataset was split 70%/30%, and stratified five-fold cross-validation was applied to the training set. The optimal model was interpreted with SHAP to identify the most influential predictive features. This study enrolled 3355 patients, including 1156 with benign and 2199 with malignant pulmonary nodules. The Atten_FNN model demonstrated superior performance in five-fold cross-validation, achieving an AUC of 0.82, accuracy of 0.75, sensitivity of 0.77, and F1 score of 0.80. SHAP analysis revealed key predictive factors: demographic variables (age, sex, BMI), CT-derived features (maximum nodule diameter, morphology, density, calcification, ground-glass opacity), and laboratory biomarkers (neuroendocrine markers, carcinoembryonic antigen). This study integrates electronic medical records and pathology data to predict pulmonary nodule malignancy using machine/deep learning models, with SHAP-based interpretability analysis uncovering key clinical determinants. Acknowledging limitations in cross-center generalizability, we propose the development of a multimodal diagnostic system that combines CT imaging and radiomics, to be validated in multi-center prospective cohorts to facilitate clinical translation. This framework establishes a novel paradigm for early precision diagnosis of lung cancer.
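The stratified five-fold protocol described above assigns samples to folds so that each fold preserves the overall class balance. A minimal pure-Python sketch of that fold assignment (real pipelines would typically use scikit-learn's `StratifiedKFold`; the toy labels below are illustrative, not the study's data):

```python
import random
from collections import defaultdict

def stratified_kfold_indices(labels, k=5, seed=0):
    """Assign each sample index to one of k folds, preserving class ratios."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for j, i in enumerate(idxs):
            folds[j % k].append(i)  # round-robin keeps per-fold class balance
    return folds

# toy imbalanced labels echoing the benign (0) vs malignant (1) split
labels = [0] * 20 + [1] * 40
folds = stratified_kfold_indices(labels, k=5)
# each of the 5 folds holds 12 samples: 4 benign and 8 malignant
```

Each cross-validation round then trains on four folds and evaluates on the held-out fifth, so every sample is used for validation exactly once.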

Principles for Developing a Large-Scale Point-of-Care Ultrasound Education Program: Insights from a Tertiary University Medical Center in Israel.

Dayan RR, Karni O, Shitrit IB, Gaufberg R, Ilan K, Fuchs L

PubMed · Jan 1 2025
Point-of-care ultrasound (POCUS) has transformed bedside diagnostics, yet its operator-dependent nature and lack of structured training remain significant barriers. To address these challenges, Ben Gurion University (BGU) developed a longitudinal six-year POCUS curriculum, emphasizing early integration, competency-based training, and scalable educational strategies to enhance medical education and patient care. The aim was to implement a structured and scalable POCUS curriculum that progressively builds technical proficiency, clinical judgment, and diagnostic accuracy, ensuring medical students effectively integrate POCUS into clinical practice. The curriculum incorporates hands-on training, self-directed learning, a structured spiral approach, and peer-led instruction. Early exposure in physics and anatomy courses establishes a foundation, progressing to bedside applications in the clinical years. Advanced technologies, including AI-driven feedback and telemedicine, enhance skill retention and address faculty shortages by providing scalable solutions for ongoing assessment and feedback. Since its implementation in 2014, the program has trained hundreds of students, with longitudinal proficiency data from over 700 students. Internal studies have demonstrated that self-directed learning modules match or exceed in-person instruction for ultrasound skill acquisition, that AI-driven feedback enhances image acquisition, and that early clinical integration of POCUS positively influences patient care. Preliminary findings suggest that telemedicine-based instructor feedback improves cardiac ultrasound proficiency over time, and that AI-assisted probe manipulation and self-learning with ultrasound simulators may further optimize training without requiring in-person instruction. A structured longitudinal approach ensures progressive skill acquisition while addressing faculty shortages and training limitations. Cost-effective strategies, such as peer-led instruction, AI feedback, and telemedicine, support skill development and sustainability. Emphasizing clinical integration ensures students learn to use POCUS as a targeted diagnostic adjunct rather than a broad screening tool, reinforcing its role as an essential skill in modern medical education.

MRI based early Temporal Lobe Epilepsy detection using DGWO based optimized HAETN and Fuzzy-AAL Segmentation Framework (FASF).

Khan H, Alutaibi AI, Tejani GG, Sharma SK, Khan AR, Ahmad F, Mousavirad SJ

PubMed · Jan 1 2025
This work aims to promote early and accurate diagnosis of Temporal Lobe Epilepsy (TLE) by developing state-of-the-art deep learning techniques, with the goal of minimizing the consequences of epilepsy for individuals and society. Current approaches to TLE detection have drawbacks, including applicability only to particular MRI sequences, moderate ability to lateralize the seizure onset zone, and weak cross-validation across different patient groups, which hampers their practical use. To overcome these difficulties, a new Hybrid Attention-Enhanced Transformer Network (HAETN) is introduced for early TLE diagnosis. This approach uses the newly developed Fuzzy-AAL Segmentation Framework (FASF), which combines the Fuzzy Possibilistic C-Means (FPCM) algorithm for tissue segmentation with AAL-based anatomical labelling. Furthermore, an effective feature selection method is proposed using the Dipper-grey wolf optimization (DGWO) algorithm to improve the performance of the proposed model. Performance is thoroughly assessed by accuracy, sensitivity, and F1-score. On the Temporal Lobe Epilepsy-UNAM MRI Dataset, the proposed method attains an accuracy of 98.61%, a sensitivity of 99.83%, and an F1-score of 99.82%, indicating its efficiency and applicability in clinical practice.

3D-MRI brain glioma intelligent segmentation based on improved 3D U-net network.

Wang T, Wu T, Yang D, Xu Y, Lv D, Jiang T, Wang H, Chen Q, Xu S, Yan Y, Lin B

PubMed · Jan 1 2025
To enhance glioma segmentation, a 3D-MRI intelligent glioma segmentation method based on deep learning is introduced. This method offers significant guidance for medical diagnosis, grading, and treatment strategy selection. Glioma case data were sourced from the BraTS2023 public dataset. First, we preprocess the dataset, including 3D clipping, resampling, artifact elimination, and normalization. Second, to enhance the network's perception of features at different scales, we introduce a spatial pyramid pooling module. Third, we propose a multi-scale fusion attention mechanism that makes the model focus on glioma details while suppressing irrelevant background information. Finally, to address class imbalance and enhance learning of misclassified voxels, a combination of Dice and Focal loss functions was employed; this composite loss not only maintains segmentation accuracy but also improves recognition of challenging samples, thereby improving the accuracy and generalization of the model in glioma segmentation. Experimental findings reveal that the enhanced 3D U-Net network model stabilizes training loss at 0.1 after 150 training iterations. The refined model demonstrates superior performance with the highest DSC, Recall, and Precision values of 0.7512, 0.7064, and 0.77451, respectively. In Whole Tumor (WT) segmentation, the Dice Similarity Coefficient (DSC), Recall, and Precision scores are 0.9168, 0.9426, and 0.9375, respectively. For Tumor Core (TC) segmentation, these scores are 0.8954, 0.9014, and 0.9369, respectively. In Enhancing Tumor (ET) segmentation, the method achieves DSC, Recall, and Precision values of 0.8674, 0.9045, and 0.9011, respectively. The DSC, Recall, and Precision indices in the WT, TC, and ET segments using this method are the highest recorded, significantly enhancing glioma segmentation. This improvement bolsters the accuracy and reliability of diagnoses, ultimately providing a scientific foundation for clinical diagnosis and treatment.
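The combined Dice and Focal loss described above can be sketched per-voxel as follows. The weighting `alpha` and the focusing parameter `gamma` are illustrative assumptions, not the paper's values:

```python
import math

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over flattened voxel probabilities and binary labels."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-7):
    """Focal loss: down-weights easy voxels so hard ones dominate the gradient."""
    total = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, eps), 1.0 - eps)       # clip for numerical stability
        pt = p if t == 1 else 1.0 - p         # probability of the true class
        total += -((1.0 - pt) ** gamma) * math.log(pt)
    return total / len(pred)

def combined_loss(pred, target, alpha=0.5):
    """Weighted sum of Dice and Focal terms (alpha is an illustrative choice)."""
    return alpha * dice_loss(pred, target) + (1 - alpha) * focal_loss(pred, target)

target  = [1, 1, 0, 0]              # toy ground-truth voxel labels
perfect = [1.0, 1.0, 0.0, 0.0]      # ideal predictions
poor    = [0.6, 0.4, 0.5, 0.3]      # uncertain predictions
```

The Dice term keeps the region overlap high, while the focal term keeps gradient signal flowing from rare, hard-to-classify voxels, which is why the combination is attractive for imbalanced tumor subregions.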

Same-model and cross-model variability in knee cartilage thickness measurements using 3D MRI systems.

Katano H, Kaneko H, Sasaki E, Hashiguchi N, Nagai K, Ishijima M, Ishibashi Y, Adachi N, Kuroda R, Tomita M, Masumoto J, Sekiya I

PubMed · Jan 1 2025
Magnetic resonance imaging (MRI)-based three-dimensional analysis of knee cartilage has evolved to become fully automatic. However, when implementing these measurements across multiple clinical centers, scanner variability becomes a critical consideration. Our purpose was to quantify and compare same-model variability (between repeated scans on the same MRI system) and cross-model variability (across different MRI systems) in knee cartilage thickness measurements using MRI scanners from five manufacturers, as analyzed with a specific 3D volume analysis software. Ten healthy volunteers (eight males and two females, aged 22-60 years) underwent two scans of their right knee on 3T MRI systems from five manufacturers (Canon, Fujifilm, GE, Philips, and Siemens). The imaging protocol included fat-suppressed spoiled gradient echo and proton density weighted sequences. Cartilage regions were automatically segmented into seven subregions using a specific deep learning-based 3D volume analysis software, yielding 350 measurements for same-model variability and 2,800 measurements for cross-model variability. For same-model variability, 82% of measurements showed variability ≤0.10 mm, and 98% showed variability ≤0.20 mm. For cross-model variability, 51% showed variability ≤0.10 mm, and 84% showed variability ≤0.20 mm. The mean same-model variability (0.06 ± 0.05 mm) was significantly lower than the cross-model variability (0.11 ± 0.09 mm) (p < 0.001). This study demonstrates that knee cartilage thickness measurements exhibit significantly higher variability across different MRI systems than between repeated measurements on the same system, when analyzed using this specific software. This finding has important implications for multi-center studies and longitudinal assessments using different MRI systems, and highlights the software-dependent nature of such variability assessments.
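The two variability measures can be illustrated as mean absolute differences: between the two repeat scans on each scanner (same-model) and between scans across scanner pairs (cross-model). The thickness values below are made up for illustration and are not the study's data:

```python
from itertools import combinations

def mean_abs_diff(pairs):
    """Mean absolute difference over a list of (a, b) measurement pairs."""
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

# Hypothetical thickness readings (mm) for one cartilage subregion in one
# knee: two repeat scans on each of five scanner models A-E.
scans = {
    "A": (2.21, 2.25),
    "B": (2.31, 2.28),
    "C": (2.10, 2.15),
    "D": (2.40, 2.36),
    "E": (2.22, 2.19),
}

# same-model variability: difference between the two repeats on each scanner
same_model = mean_abs_diff(list(scans.values()))

# cross-model variability: first-scan differences across every scanner pair
firsts = [v[0] for v in scans.values()]
cross_model = mean_abs_diff(list(combinations(firsts, 2)))
```

In this toy example, as in the study, the cross-model figure exceeds the same-model figure, because scanner-to-scanner differences add a systematic component on top of repeat-scan noise.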

RRFNet: A free-anchor brain tumor detection and classification network based on reparameterization technology.

Liu W, Guo X

PubMed · Jan 1 2025
Advancements in medical imaging technology have facilitated the acquisition of high-quality brain images through computed tomography (CT) or magnetic resonance imaging (MRI), enabling specialists to diagnose brain tumors more effectively. However, manual diagnosis is time-consuming, which has led to the growing importance of automatic detection and classification from brain imaging. Conventional object detection models face limitations in brain tumor detection owing to the significant differences between medical images and natural scene images, as well as challenges such as complex backgrounds, noise interference, and blurred boundaries between cancerous and normal tissues. This study investigates the application of deep learning to brain tumor detection, analyzing the effect of three factors on detection performance: the number of model parameters, the input data batch size, and the use of anchor boxes. Experimental results reveal that an excessive number of model parameters or the use of anchor boxes may reduce detection accuracy, whereas increasing the number of brain tumor samples improves detection performance. This study introduces a backbone network built from RepConv and RepC3, along with an FGConcat feature-map splicing module, to optimize the brain tumor detection model. The experimental results show that the proposed RepConv-RepC3-FGConcat Network (RRFNet) can learn underlying semantic information about brain tumors during training while maintaining a low number of parameters during inference, which improves the speed of brain tumor detection. Compared with YOLOv8, RRFNet achieved higher accuracy in brain tumor detection, with a mAP value of 79.2%. This optimized approach enhances both accuracy and efficiency, which is essential in clinical settings where time and precision are critical.
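RepConv-style modules rely on structural reparameterization: because convolution is linear in its kernel, parallel branches used during training can be folded into a single convolution at inference, cutting parameters and latency without changing the output. A minimal 1-D sketch of that identity (illustrative only; RRFNet's actual RepConv operates on 2-D feature maps with batch normalization folded in as well):

```python
def conv1d(x, k):
    """'Valid' 1-D convolution (cross-correlation, as in deep learning)."""
    n = len(k)
    return [sum(x[i + j] * k[j] for j in range(n)) for i in range(len(x) - n + 1)]

def add_kernels(k1, k2):
    """Fuse two same-width kernels into one by elementwise summation."""
    return [a + b for a, b in zip(k1, k2)]

x  = [1.0, 2.0, -1.0, 3.0, 0.5]  # toy input signal
k3 = [0.2, -0.5, 0.3]            # "3x3-like" training branch
k1 = [0.0, 1.0, 0.0]            # "1x1-like" branch, zero-padded to width 3

# training-time multi-branch output: run both branches, then sum
multi_branch = [a + b for a, b in zip(conv1d(x, k3), conv1d(x, k1))]

# inference-time reparameterized output: fold branches into ONE kernel first
fused = conv1d(x, add_kernels(k3, k1))
# the two outputs are identical, by linearity of convolution in the kernel
```

This is why a reparameterized network can train with a rich multi-branch topology yet deploy as a plain single-branch stack.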

Enhancing Disease Detection in Radiology Reports Through Fine-tuning Lightweight LLM on Weak Labels.

Wei Y, Wang X, Ong H, Zhou Y, Flanders A, Shih G, Peng Y

PubMed · Jan 1 2025
Despite significant progress in applying large language models (LLMs) to the medical domain, several limitations still prevent their practical application. Among these are constraints on model size and the lack of cohort-specific labeled datasets. In this work, we investigated the potential of improving a lightweight LLM, Llama 3.1-8B, through fine-tuning on datasets with synthetic labels. Two tasks were jointly trained by combining their respective instruction datasets. When the quality of the task-specific synthetic labels is relatively high (e.g., generated by GPT-4o), Llama 3.1-8B achieves satisfactory performance on the open-ended disease detection task, with a micro F1 score of 0.91. Conversely, when the quality of the task-relevant synthetic labels is relatively low (e.g., from the MIMIC-CXR dataset), fine-tuned Llama 3.1-8B is able to surpass its noisy teacher labels (micro F1 score of 0.67 vs. 0.63) when calibrated against curated labels, indicating the model's strong inherent capability. These findings demonstrate the potential of fine-tuning LLMs with synthetic labels, offering a promising direction for future research on LLM specialization in the medical domain.
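Micro F1, the metric reported above, pools true positives, false positives, and false negatives across all labels before computing precision and recall, so frequent diseases weigh more than rare ones. A minimal sketch over hypothetical per-report disease label sets (illustrative, not MIMIC-CXR data):

```python
def micro_f1(gold, pred):
    """Micro-averaged F1 over multi-label annotations given as sets per item."""
    tp = sum(len(g & p) for g, p in zip(gold, pred))  # labels found correctly
    fp = sum(len(p - g) for g, p in zip(gold, pred))  # spurious labels
    fn = sum(len(g - p) for g, p in zip(gold, pred))  # missed labels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# toy per-report disease label sets
gold = [{"edema", "effusion"}, {"pneumonia"}, set(), {"effusion"}]
pred = [{"edema"}, {"pneumonia"}, {"atelectasis"}, {"effusion"}]
```

Here 3 labels are recovered, with 1 spurious and 1 missed, giving precision = recall = 0.75 and micro F1 = 0.75.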

Volumetric atlas of the rat inner ear from microCT and iDISCO+ cleared temporal bones.

Cossellu D, Vivado E, Batti L, Gantar I, Pizzala R, Perin P

PubMed · Jan 1 2025
Volumetric atlases are an invaluable tool in neuroscience and otolaryngology, greatly aiding experiment planning and surgical interventions, as well as the interpretation of experimental and clinical data. The rat is a major animal model for hearing and balance studies, and a detailed volumetric atlas for the rat central auditory system (Waxholm) is available. However, the Waxholm rat atlas contains only a low-resolution inner ear featuring five structures. In the present work, we segmented and annotated 34 structures in the rat inner ear, yielding a detailed volumetric inner ear atlas that can be integrated with the Waxholm rat brain atlas. We performed iodine-enhanced microCT and iDISCO+-based clearing with fluorescence lightsheet microscopy imaging on a sample of rat temporal bones. Image stacks were segmented in a semiautomated way, and 34 inner ear volumes were reconstructed from five samples. Using geometric morphometry, high-resolution segmentations obtained from lightsheet and microCT stacks were registered into the coordinate system of the Waxholm rat atlas. Cleared-sample autofluorescence was used for the reconstruction of most inner ear structures, including fluid-filled compartments, nerves and sensory epithelia, blood vessels, and connective tissue structures. Image resolution allowed reconstruction of thin ducts (reuniting, saccular, and endolymphatic) and of the utriculoendolymphatic valve. The vestibulocochlear artery coursing through bone was found to be associated with the reuniting duct and was visible in both cleared and microCT samples, allowing duct location to be inferred from microCT scans. Cleared labyrinths showed minimal shape distortions, as shown by alignment with microCT and Waxholm labyrinths. However, membranous labyrinths could display variable collapse of the superior division, especially the roof of the canal ampullae, whereas the inferior division (saccule and cochlea) was well preserved, with the exception of Reissner's membrane, which could display ruptures in the second cochlear turn. As an example of atlas use, the reconstructed volumes were used to separate macrophage populations of the spiral ganglion, auditory neuron dendrites, and organ of Corti. We have reconstructed 34 structures from the rat temporal bone, which are available as both image stacks and printable 3D objects in a shared repository for download. These can be used for teaching, localizing cells or other features within the ear, modeling auditory and vestibular sensory physiology, and training automated segmentation machine learning tools.

AI-Assisted 3D Planning of CT Parameters for Personalized Femoral Prosthesis Selection in Total Hip Arthroplasty.

Yang TJ, Qian W

PubMed · Jan 1 2025
To investigate the efficacy of CT measurement parameters combined with AI-assisted 3D planning for personalized femoral prosthesis selection in total hip arthroplasty (THA), a retrospective analysis was conducted on clinical data from 247 patients with unilateral hip or knee joint disorders treated at Renmin Hospital of Hubei University of Medicine between April 2021 and February 2024. All patients underwent preoperative full-pelvis and bilateral full-length femoral CT scans. The raw CT data were imported into Mimics 19.0 software to reconstruct a three-dimensional (3D) model of the healthy femur. Using 3-matic Research 11.0 software, the femoral head rotation center was located, and parameters including femoral head diameter (FHD), femoral neck length (FNL), femoral neck-shaft angle (FNSA), femoral offset (FO), femoral neck anteversion angle (FNAA), tip-apex distance (TAD), and tip-apex angle (TAA) were measured. The AI-assisted THA 3D planning system AIJOINT V1.0.0.0 was used for preoperative planning and design, enabling personalized selection of femoral prostheses with varying neck-shaft angles and surgical simulation. Parameters were compared across gender and age groups, and ROC curves evaluated their predictive efficacy. Females exhibited smaller FHD, FNL, FO, TAD, and TAA but larger FNSA and FNAA than males (P<0.05). Patients over 65 years had higher FO, TAD, and TAA (P<0.05). TAD and TAA were strongly correlated (r=0.954), while FNSA correlated negatively with both TAD and TAA (r=-0.773 and r=-0.701). ROC analysis demonstrated high predictive accuracy: TAD (AUC=0.891, sensitivity=91.7%, specificity=87.6%) and TAA (AUC=0.882, sensitivity=100%, specificity=88.8%). CT parameters (TAA, TAD, FNSA, FO) are interrelated and effective predictors for femoral prosthesis selection. Integration with AI-assisted planning optimizes personalized THA, reducing biomechanical mismatch risks.
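The reported AUCs can be read as rank statistics: the probability that a randomly chosen positive case scores above a randomly chosen negative one, with sensitivity and specificity fixed by a threshold choice. A minimal sketch with made-up TAD-like values (not the study's measurements):

```python
def auc(pos_scores, neg_scores):
    """Empirical ROC AUC via the Mann-Whitney formulation; ties count half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

def sens_spec(pos_scores, neg_scores, threshold):
    """Sensitivity/specificity when scores >= threshold are called positive."""
    sens = sum(s >= threshold for s in pos_scores) / len(pos_scores)
    spec = sum(s < threshold for s in neg_scores) / len(neg_scores)
    return sens, spec

# hypothetical measurements (mm) for positive and negative groups
pos = [25.0, 27.5, 24.0, 30.0, 26.0]
neg = [20.0, 22.0, 23.5, 21.0, 24.5]

area = auc(pos, neg)
sensitivity, specificity = sens_spec(pos, neg, threshold=24.0)
```

Sweeping the threshold traces the full ROC curve; the reported operating points (e.g. sensitivity 91.7%, specificity 87.6% for TAD) correspond to one such threshold chosen on that curve.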
