Page 4 of 3793788 results

Secure and fault tolerant cloud based framework for medical image storage and retrieval in a distributed environment.

Amaithi Rajan A, V V, M A, R PK

PubMed · Sep 26 2025
In the evolving field of healthcare, centralized cloud-based medical image retrieval faces challenges related to security, availability, and adversarial threats. Existing deep learning-based solutions improve retrieval but remain vulnerable to adversarial attacks and quantum threats, necessitating a shift to more secure distributed cloud solutions. This article proposes SFMedIR, a secure and fault-tolerant medical image retrieval framework that combines adversarial-attack-resistant federated learning for hash code generation with a ConvNeXt-based model to improve accuracy and generalizability. The framework integrates quantum-chaos-based encryption for security, dynamic threshold-based shadow storage for fault tolerance, and a distributed cloud architecture to mitigate single points of failure. Unlike conventional methods, this approach significantly improves security and availability in cloud-based medical image retrieval systems, providing a resilient and efficient solution for healthcare applications. The framework is validated on Brain MRI and Kidney CT datasets, achieving a 60-70% improvement in retrieval accuracy for adversarial queries and an overall 90% retrieval accuracy, outperforming existing models by 5-10%. The results demonstrate superior performance in terms of both security and retrieval efficiency, making this framework a valuable contribution to the future of secure medical image management.
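
The abstract does not detail the quantum-chaos encryption scheme, but chaos-based image ciphers generally XOR pixel bytes with a keystream generated by a chaotic map seeded from the secret key. A minimal sketch using the classic logistic map (an illustration only, not the authors' method):

```python
# Illustrative chaos-based stream cipher: NOT the paper's quantum-chaos
# scheme, just the common logistic-map construction it builds on.

def logistic_keystream(x0: float, r: float, n: int) -> list[int]:
    """Generate n keystream bytes from the logistic map x_{k+1} = r*x*(1-x)."""
    stream, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) % 256)  # quantize chaotic state to a byte
    return stream

def xor_cipher(data: bytes, x0: float = 0.7, r: float = 3.99) -> bytes:
    """XOR data with the keystream; applying it twice decrypts."""
    ks = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

pixels = bytes([12, 200, 45, 0, 255])   # hypothetical image bytes
cipher = xor_cipher(pixels)
assert xor_cipher(cipher) == pixels     # round trip recovers the image
```

The key (x0, r) must stay in the chaotic regime (r close to 4) for the keystream to look random; real schemes add diffusion stages on top of this confusion step.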

A novel deep neural architecture for efficient and scalable multidomain image classification.

Nobel SMN, Tasir MAM, Noor H, Monowar MM, Hamid MA, Sayeed MS, Islam MR, Mridha MF, Dey N

PubMed · Sep 26 2025
Deep learning has significantly advanced the field of computer vision; however, developing models that generalize effectively across diverse image domains remains a major research challenge. In this study, we introduce DeepFreqNet, a novel deep neural architecture specifically designed for high-performance multi-domain image classification. The innovative aspect of DeepFreqNet lies in its combination of three powerful components: multi-scale feature extraction for capturing patterns at different resolutions, depthwise separable convolutions for enhanced computational efficiency, and residual connections to maintain gradient flow and accelerate convergence. This hybrid design improves the architecture's ability to learn discriminative features and ensures scalability across domains with varying data complexities. Unlike traditional transfer learning models, DeepFreqNet adapts seamlessly to diverse datasets without requiring extensive reconfiguration. Experimental results from nine benchmark datasets, including MRI tumor classification, blood cell classification, and sign language recognition, demonstrate superior performance, achieving classification accuracies between 98.96% and 99.97%. These results highlight the effectiveness and versatility of DeepFreqNet, showcasing a significant improvement over existing state-of-the-art methods and establishing it as a robust solution for real-world image classification challenges.
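
The efficiency gain from depthwise separable convolutions, one of the components the abstract names, is easy to quantify with a parameter count (back-of-the-envelope arithmetic, not figures from the paper):

```python
# Why depthwise separable convolutions are cheaper: parameter counts for a
# single conv layer, standard vs. depthwise + pointwise factorization.

def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    return k * k * c_in * c_out          # one k*k*c_in kernel per output channel

def depthwise_separable_params(k: int, c_in: int, c_out: int) -> int:
    depthwise = k * k * c_in             # one k*k filter per input channel
    pointwise = c_in * c_out             # 1x1 conv mixes channels
    return depthwise + pointwise

std = standard_conv_params(3, 64, 128)        # 73,728 parameters
sep = depthwise_separable_params(3, 64, 128)  # 8,768 parameters
assert std == 73728 and sep == 8768
print(f"reduction: {std / sep:.1f}x")         # roughly 8x fewer parameters
```

For 3x3 kernels the saving approaches a factor of 9 as channel counts grow, which is why such layers dominate efficiency-oriented backbones.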

MultiD4CAD: Multimodal Dataset composed of CT and Clinical Features for Coronary Artery Disease Analysis.

Prinzi F, Militello C, Sollami G, Toia P, La Grutta L, Vitabile S

PubMed · Sep 26 2025
Multimodal datasets offer valuable support for developing Clinical Decision Support Systems (CDSS), which leverage predictive models to enhance clinicians' decision-making. In this observational study, we present a dataset of suspected Coronary Artery Disease (CAD) patients, called MultiD4CAD, comprising imaging and clinical data. The imaging data, obtained from Coronary Computed Tomography Angiography (CCTA), includes epicardial (EAT) and pericoronary (PAT) adipose tissue segmentations. These metabolically active fat tissues play a key role in cardiovascular diseases. In addition, the clinical data include a set of biomarkers recognized as CAD risk factors. The validated EAT and PAT segmentations make the dataset suitable for training predictive models based on radiomics and deep learning architectures. The inclusion of CAD disease labels allows for its application in supervised learning algorithms to predict CAD outcomes. MultiD4CAD has revealed important correlations between imaging features, clinical biomarkers, and CAD status. The article concludes by discussing challenges, such as classification, segmentation, radiomics, and deep learning tasks, that can be investigated and validated using the proposed dataset.

Deep learning-driven contactless ECG in MRI via beat pilot tone for motion-resolved image reconstruction and heart rate monitoring.

Sun H, Ding Q, Zhong S, Zhang Z

PubMed · Sep 26 2025
Electrocardiogram (ECG) is crucial for synchronizing cardiovascular magnetic resonance imaging (CMRI) acquisition with the cardiac cycle and for continuous heart rate monitoring during prolonged scans. However, conventional electrode-based ECG systems in clinical MRI environments suffer from tedious setup, magnetohydrodynamic (MHD) waveform distortion, skin burn risks, and patient discomfort. This study proposes a contactless ECG measurement method in MRI to address these challenges. We integrated Beat Pilot Tone (BPT), a contactless, highly motion-sensitive, and easily integrable RF motion-sensing modality, into CMRI to capture cardiac motion without direct patient contact. A deep neural network was trained to map the BPT-derived cardiac mechanical motion signals to corresponding ECG waveforms. The reconstructed ECG was evaluated against simultaneously acquired ground-truth ECG using multiple metrics: Pearson correlation coefficient, relative root mean square error (RRMSE), cardiac trigger timing accuracy, and heart rate estimation error. Additionally, we performed retrospective binning MRI reconstruction using the reconstructed ECG as reference and evaluated image quality under both standard clinical conditions and challenging scenarios involving arrhythmias and subject motion. To examine the scalability of our approach across field strengths, the model pretrained on 1.5T data was applied to 3T BPT cardiac acquisitions. In optimal acquisition scenarios, the reconstructed ECG achieved a median Pearson correlation of 89% relative to the ground truth, cardiac triggering accuracy reached 94%, and heart rate estimation error remained below 1 bpm. The quality of the reconstructed images was comparable to that of ground-truth synchronization. The method exhibited a degree of adaptability to irregular heart rate patterns and subject motion, and scaled effectively across MRI systems operating at different field strengths. The proposed contactless ECG measurement method has the potential to streamline CMRI workflows, improve patient safety and comfort, mitigate MHD distortion challenges, and support robust clinical application.
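
The two headline waveform metrics can be sketched in a few lines. The abstract does not state the exact RRMSE normalization, so RMSE divided by the RMS of the reference signal is assumed here:

```python
# Hedged sketch of Pearson correlation and RRMSE as used to compare a
# reconstructed ECG against ground truth; normalization is an assumption.
import math

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rrmse(pred, ref):
    rmse = math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref))
    rms_ref = math.sqrt(sum(r ** 2 for r in ref) / len(ref))
    return rmse / rms_ref

# hypothetical short ECG segments (arbitrary units)
ecg_true = [0.0, 0.5, 1.0, 0.5, 0.0, -0.2]
ecg_pred = [0.1, 0.5, 0.9, 0.5, 0.0, -0.1]
assert rrmse(ecg_true, ecg_true) == 0.0
assert 0.9 < pearson(ecg_true, ecg_pred) <= 1.0
```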

Evaluating the Accuracy and Efficiency of AI-Generated Radiology Reports Based on Positive Findings-A Qualitative Assessment of AI in Radiology.

Rajmohamed RF, Chapala S, Shazahan MA, Wali P, Botchu R

PubMed · Sep 26 2025
With increasing imaging demands, radiologists face growing workload pressures, often resulting in delays and reduced diagnostic efficiency. Recent advances in artificial intelligence (AI) have introduced tools for automated report generation, particularly in simpler imaging modalities such as X-rays. However, limited research has assessed AI performance in complex studies such as MRI and CT scans, where report accuracy and clinical interpretation are critical. This study evaluated the performance of a semi-automated AI-based reporting platform in generating radiology reports for complex imaging studies, comparing its accuracy, efficiency, and user confidence with the traditional dictation method. The study involved 100 imaging cases, including MRI knee (n=21), MRI lumbar spine (n=30), CT head (n=23), and CT abdomen and pelvis (n=26). Consultant musculoskeletal radiologists reported each case using both traditional dictation and the AI platform. The radiologist first identified and entered the key positive findings, based on which the AI system generated a full draft report. Reporting time was recorded, and both methods were evaluated on accuracy, user confidence, and overall reporting experience (rated on a scale of 1-5). Statistical analysis was conducted using two-tailed t-tests and 95% confidence intervals. AI-generated reports demonstrated significantly improved performance across all parameters. The mean reporting time decreased from 6.1 to 3.43 min (p<0.0001) with AI-assisted report generation. Accuracy ratings improved from 3.81 to 4.65 (p<0.0001), confidence ratings increased from 3.91 to 4.67 (p<0.0001), and the overall reporting experience favored the AI platform (mean 4.7 vs. 3.69, p<0.0001). Minor formatting errors and occasional anatomical misinterpretations were observed in AI-generated reports, but these could be easily corrected by the radiologist during review.
The AI-assisted reporting platform significantly improved efficiency and radiologist confidence without compromising accuracy. Although the tool performs well when provided with key clinical findings, it still requires expert oversight, especially in anatomically complex reporting. These findings support the use of AI as a supportive tool in radiology practice, with a focus on data integrity, consistency, and human validation.
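
Since the same radiologists reported each case with both methods, the comparison is naturally paired. A minimal sketch of the paired t statistic (the study's exact test variant and data are not reproduced here; the numbers below are hypothetical reporting times):

```python
# Paired two-tailed t statistic on per-case differences; p-values would come
# from the t distribution with n-1 degrees of freedom (e.g. scipy.stats).
import math
import statistics

def paired_t_statistic(before, after):
    diffs = [b - a for b, a in zip(before, after)]
    mean_d = statistics.mean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(len(diffs))  # sample SD / sqrt(n)
    return mean_d / se

dictation   = [6.0, 7.0, 6.0, 7.0, 6.0, 7.0]  # minutes, hypothetical
ai_assisted = [4.0, 4.0, 4.0, 4.0, 4.0, 4.0]
t = paired_t_statistic(dictation, ai_assisted)
assert round(t, 2) == 11.18
```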

Hybrid Fusion Model for Effectively Distinguishing Benign and Malignant Parotid Gland Tumors in Gray-Scale Ultrasonography.

Mao Y, Jiang LP, Wang JL, Chen FQ, Zhang WP, Peng XQ, Chen L, Liu ZX

PubMed · Sep 26 2025
To develop a hybrid fusion model, the deep learning radiomics nomogram (DLRN), integrating radiomics and transfer learning to assist sonographers in differentiating benign and malignant parotid gland tumors. This study retrospectively analyzed a total of 328 patients with pathologically confirmed parotid gland tumors from two centers. Radiomics features extracted from ultrasound images were input into eight machine learning classifiers to construct a radiomics (Rad) model. Additionally, the images were input into seven transfer learning networks to construct a deep transfer learning (DTL) model. The prediction probabilities from these two models were combined through decision fusion to construct a DLR model. Clinical features were further integrated with the prediction probabilities of the DLR model to develop the DLRN model. The performance of these models was evaluated using receiver operating characteristic curve analysis, calibration curves, decision curve analysis, and the Hosmer-Lemeshow test. In the internal and external validation cohorts, compared with the Clinic (AUC = 0.891 and 0.734), Rad (AUC = 0.809 and 0.860), DTL (AUC = 0.905 and 0.782), and DLR (AUC = 0.932 and 0.828) models, the DLRN model demonstrated the greatest discriminative ability (AUC = 0.931 and 0.934). With the assistance of DLR, the diagnostic accuracy of resident, attending, and chief physicians increased by 6.6%, 6.5%, and 1.2%, respectively. The hybrid fusion model DLRN significantly enhances diagnostic performance for benign and malignant parotid gland tumors and can effectively assist sonographers in making more accurate diagnoses.
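
The decision-fusion step combines the Rad and DTL prediction probabilities per case. The abstract does not specify the fusion rule, so a simple (optionally weighted) probability average is assumed in this sketch:

```python
# Hedged sketch of decision fusion: averaging two models' per-case
# malignancy probabilities. The actual fusion rule is an assumption.

def decision_fusion(p_rad, p_dtl, w_rad=0.5):
    """Fuse radiomics and deep-transfer-learning probabilities per case."""
    return [w_rad * a + (1 - w_rad) * b for a, b in zip(p_rad, p_dtl)]

p_rad = [0.80, 0.30, 0.55]   # hypothetical per-patient probabilities
p_dtl = [0.90, 0.20, 0.65]
fused = decision_fusion(p_rad, p_dtl)
labels = [1 if p >= 0.5 else 0 for p in fused]   # threshold at 0.5
assert [round(p, 2) for p in fused] == [0.85, 0.25, 0.60]
assert labels == [1, 0, 1]
```

In the DLRN, these fused probabilities are then combined with clinical features in a nomogram rather than thresholded directly.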

[Advances in the application of multimodal image fusion technique in stomatology].

Ma TY, Zhu N, Zhang Y

PubMed · Sep 26 2025
In modern stomatology, obtaining precise preoperative information is key to accurate intraoperative planning and execution, as well as to prognostic judgment. Traditional single-modality images, however, have obvious shortcomings, such as limited content and unstable measurement accuracy, and can hardly meet the diversified needs of oral patients. Multimodal medical image fusion (MMIF) was introduced into stomatology research in the 1990s with the aim of enabling personalized analysis of patient data through various fusion algorithms; it combines the advantages of multimodal medical images and lays a stable foundation for new treatment technologies. Recently, artificial intelligence (AI) has significantly increased the precision and efficiency of MMIF registration: advanced algorithms and networks have confirmed the strong compatibility between AI and MMIF. This article systematically reviews the development history of multimodal image fusion and its current applications in stomatology, and analyzes technological progress in the field against the background of AI's rapid development, in order to provide new ideas for further advances in stomatology.

Deep learning reconstruction for temporomandibular joint MRI: diagnostic interchangeability, image quality, and scan time reduction.

Jo GD, Jeon KJ, Choi YJ, Lee C, Han SS

PubMed · Sep 25 2025
To evaluate the diagnostic interchangeability, image quality, and scan time of deep learning (DL)-reconstructed magnetic resonance imaging (MRI) compared with conventional MRI for the temporomandibular joint (TMJ). Patients with suspected TMJ disorder underwent sagittal proton density-weighted (PDW) and T2-weighted fat-suppressed (T2W FS) MRI using both conventional and DL reconstruction protocols in a single session. Three oral radiologists independently assessed disc shape, disc position, and joint effusion. Diagnostic interchangeability for these findings was evaluated by comparing interobserver agreement, with equivalence defined as a 95% confidence interval (CI) within ±5%. Qualitative image quality (sharpness, noise, artifacts, overall) was rated on a 5-point scale. Quantitative image quality was assessed by measuring the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) in the condyle, disc, and background air. Image quality scores were compared using the Wilcoxon signed-rank test, and SNR/CNR using paired t-tests. Scan times were directly compared. A total of 176 TMJs from 88 patients (mean age, 37 ± 16 years; 43 men) were analyzed. DL-reconstructed MRI demonstrated diagnostic equivalence to conventional MRI for disc shape, position, and effusion (equivalence indices < 3%; 95% CIs within ±5%). DL reconstruction significantly reduced noise in PDW and T2W FS sequences (p < 0.05) while maintaining sharpness and artifact levels. SNR and CNR were significantly improved (p < 0.05), except for disc SNR in PDW (p = 0.189). Scan time was reduced by 49.2%. DL-reconstructed TMJ MRI is diagnostically interchangeable with conventional MRI, offering improved image quality with a shorter scan time.
Question: Long MRI scan times in patients with temporomandibular disorders can increase pain and motion-related artifacts, often compromising image quality in diagnostic settings.
Findings: DL reconstruction is diagnostically interchangeable with conventional MRI for assessing disc shape, disc position, and effusion, while improving image quality and reducing scan time.
Clinical relevance: DL reconstruction enables faster and more tolerable TMJ MRI workflows without compromising diagnostic accuracy, facilitating broader adoption in clinical settings where long scan times and motion artifacts often limit diagnostic efficiency.
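
SNR and CNR measurements of the kind described (condyle, disc, background air) are commonly defined as mean ROI intensity over the noise estimated from background air; the paper's exact formulas are not given in the abstract, so this sketch assumes those common definitions with hypothetical pixel values:

```python
# Hedged sketch of common SNR/CNR definitions for MRI quality assessment.
import statistics

def snr(roi, background):
    """Mean ROI signal over background-air noise (population SD assumed)."""
    return statistics.mean(roi) / statistics.pstdev(background)

def cnr(roi_a, roi_b, background):
    """ROI mean difference over the same background noise estimate."""
    noise = statistics.pstdev(background)
    return (statistics.mean(roi_a) - statistics.mean(roi_b)) / noise

condyle = [10, 12, 11, 11]   # hypothetical pixel intensities
disc    = [5, 5, 7, 7]
air     = [2, 4, 2, 4]       # background air, pstdev = 1.0
assert snr(condyle, air) == 11.0
assert cnr(condyle, disc, air) == 5.0
```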

A Deep Learning-Based EffConvNeXt Model for Automatic Classification of Cystic Bronchiectasis: An Explainable AI Approach.

Tekin V, Tekinhatun M, Özçelik STA, Fırat H, Üzen H

PubMed · Sep 25 2025
Cystic bronchiectasis and pneumonia are respiratory conditions that significantly impact morbidity and mortality worldwide. Diagnosing these diseases accurately is crucial, as early detection can greatly improve patient outcomes. Both conditions present with overlapping features on chest X-rays (CXR), making accurate diagnosis challenging. Recent advancements in deep learning (DL) have improved diagnostic accuracy in medical imaging. This study proposes the EffConvNeXt model, a hybrid approach combining EfficientNetB1 and ConvNeXtTiny, designed to enhance classification accuracy for cystic bronchiectasis, pneumonia, and normal cases in CXRs. The model balances EfficientNetB1's efficiency with ConvNeXtTiny's advanced feature extraction, allowing better identification of complex patterns in CXR images. The combination also addresses the limitations of each model individually: EfficientNetB1's SE blocks improve focus on critical image areas while keeping the model lightweight and fast, and ConvNeXtTiny enhances the detection of subtle abnormalities, making the combined model highly effective for rapid and accurate CXR analysis in clinical settings. For performance analysis of the EffConvNeXt model, experimental studies were conducted using 5899 CXR images collected from Dicle University Medical Faculty. Used individually, ConvNeXtTiny achieved an accuracy of 97.12% and EfficientNetB1 reached 97.79%. Combining both models, EffConvNeXt raised the accuracy to 98.25%, a 0.46% improvement that outperformed all other DL models tested. These findings indicate that EffConvNeXt provides a reliable, automated solution for distinguishing cystic bronchiectasis and pneumonia, supporting clinical decision-making with enhanced diagnostic accuracy.
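
The squeeze-and-excitation (SE) idea credited to EfficientNetB1 can be illustrated in miniature: global-average-pool each channel ("squeeze"), pass the result through a gate, and rescale the channel. This toy replaces the learned two-layer excitation MLP with a bare sigmoid, so it is a shape-level sketch, not the paper's implementation:

```python
# Toy squeeze-and-excitation channel reweighting; the excitation MLP with
# learned weights is replaced by a plain sigmoid for illustration.
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def se_reweight(channels: list[list[float]]) -> list[list[float]]:
    out = []
    for ch in channels:
        squeeze = sum(ch) / len(ch)   # global average pool over the channel
        gate = sigmoid(squeeze)       # stand-in for the excitation network
        out.append([v * gate for v in ch])
    return out

feats = [[2.0, 2.0, 2.0], [-2.0, -2.0, -2.0]]
scaled = se_reweight(feats)
# strongly activated channel is mostly kept (~0.88x), weak one suppressed (~0.12x)
assert round(scaled[0][0], 3) == 1.762
assert round(scaled[1][0], 3) == -0.238
```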

Deep learning-based segmentation of acute pulmonary embolism in cardiac CT images.

Amini E, Hille G, Hürtgen J, Surov A, Saalfeld S

PubMed · Sep 25 2025
Acute pulmonary embolism (APE) is a common pulmonary condition that, in severe cases, can progress to right ventricular hypertrophy and failure, making it a critical health concern surpassed in severity only by myocardial infarction and sudden death. CT pulmonary angiography (CTPA) is a standard diagnostic tool for detecting APE. However, for treatment planning and prognosis of patient outcome, an accurate assessment of individual APEs is required. In this study, we compiled and prepared a dataset of 200 CTPA image volumes of patients with APE. We then adapted two state-of-the-art neural networks, the nnU-Net and the transformer-based VT-UNet, to provide fully automatic APE segmentations. The nnU-Net demonstrated robust performance, achieving an average Dice similarity coefficient (DSC) of 88.25 ± 10.19% and an average 95th percentile Hausdorff distance (HD95) of 10.57 ± 34.56 mm across the validation sets in a five-fold cross-validation framework. In comparison, the VT-UNet achieved on-par accuracy, with an average DSC of 87.90 ± 10.94% and a mean HD95 of 10.77 ± 34.19 mm. We applied two state-of-the-art networks for automatic APE segmentation to our compiled CTPA dataset and achieved superior results compared to the current state of the art. In clinical routine, accurate APE segmentations can support patient prognosis and treatment planning.
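
The Dice similarity coefficient reported above has a compact definition on binary masks; a minimal sketch (HD95 needs a distance transform and is omitted here):

```python
# Dice similarity coefficient on flattened binary segmentation masks:
# DSC = 2|A ∩ B| / (|A| + |B|).

def dice(pred, truth):
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0  # empty masks: perfect match

pred  = [1, 1, 1, 0, 0, 0]   # hypothetical flattened voxel masks
truth = [0, 1, 1, 1, 0, 0]
assert abs(dice(pred, truth) - 2 / 3) < 1e-12  # 2 overlapping of 3+3 voxels
assert dice(truth, truth) == 1.0
```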
