Elsevier, Expert Systems with Applications journal
The global rise in out-of-hospital cardiac arrests underscores the importance of cardiopulmonary resuscitation (CPR) training. However, the high cost of feedback-equipped manikins limits widespread CPR education. To address this, we propose a deep learning solution that transforms smartphone-captured chest compression videos into images for feedback. This model assesses four key CPR quality indicators: compression count, depth, complete release, and hand positioning. By using composite-image evaluation, we simplify video processing and achieve effective performance. This cost-effective approach not only broadens CPR educational opportunities but also aims to enhance survival rates for cardiac arrest patients.
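The exact composite-image construction used by the published model is not reproduced here. As a rough illustration of the idea of collapsing a compression video into a single image that a 2D model can score, the sketch below accumulates grayscale frame differences with OpenCV; the function name, resolution, and accumulation scheme are illustrative assumptions, not the published method.

```python
# Illustrative sketch only: turn a chest-compression video into one composite
# image by accumulating frame-to-frame motion, so an image model can assess
# the clip without processing raw video. Not the authors' exact pipeline.
import cv2
import numpy as np

def video_to_composite(video_path: str, resize=(224, 224)) -> np.ndarray:
    """Accumulate grayscale frame differences into a single composite image."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise ValueError(f"Could not read video: {video_path}")
    prev = cv2.cvtColor(cv2.resize(prev, resize), cv2.COLOR_BGR2GRAY).astype(np.float32)
    composite = np.zeros_like(prev)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(cv2.resize(frame, resize), cv2.COLOR_BGR2GRAY).astype(np.float32)
        composite += cv2.absdiff(gray, prev)  # motion energy of the compressions
        prev = gray
    cap.release()
    # Rescale to 0-255 so the result can be saved or fed to an image model.
    composite = cv2.normalize(composite, None, 0, 255, cv2.NORM_MINMAX)
    return composite.astype(np.uint8)
```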
HQC Components Estimator is a software tool that estimates High-Quality CPR (HQC) components from a CPR video longer than 30 seconds.
Here's how to get this software up and running on your local machine.
The software runs on Python 3.8.
CUDA must be installed for GPU environment support. You can confirm this by running the following Python code:
import torch
print(torch.cuda.is_available())
This should return True.
- Clone the repository to your local machine.
git clone https://github.com/seongjiko/CPR-estimator.git
- Navigate into the project directory and create the conda environment from the provided requirements.yaml file.
conda env create -f requirements.yaml
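Once the environment is created, activate it before launching the software. The environment name is defined inside requirements.yaml; `cpr-estimator` below is only a placeholder and may differ in your setup.

```bash
conda activate cpr-estimator
```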
After successful installation of all dependencies, run the software by navigating to the project directory and executing the main Python script.
python main.py
Upload a CPR video (longer than 30 seconds) by either dragging and dropping the file onto the interface or clicking the 'Select File' button. Videos can be recorded with any camera, including a smartphone, but please ensure the camera remains stationary during recording. After uploading the video, click the 'Start Analysis' button to initiate the estimation process. The results will be displayed upon completion.
To analyze a live camera feed instead of a pre-recorded video, run the real-time script:
python main_real_time.py
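As a rough sketch of what a real-time pipeline consumes (a rolling window of frames from a stationary camera), the snippet below captures roughly 30 seconds of frames with OpenCV; main_real_time.py itself may be implemented differently, and the function and parameter names here are assumptions.

```python
# Illustrative sketch only: grab frames from the default camera for a fixed
# duration, forming the input window a real-time estimator would analyze.
import time
import cv2

def capture_window(seconds: int = 30, camera_index: int = 0) -> list:
    """Capture frames from a stationary camera for the given duration."""
    cap = cv2.VideoCapture(camera_index)
    if not cap.isOpened():
        raise RuntimeError("No camera found; check the camera_index.")
    frames, start = [], time.time()
    while time.time() - start < seconds:
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames
```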
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1F1A1060211) and in part by the National Research Foundation of Korea (NRF) funded by the Korean Government (MSIT) under Grant NRF-2022R1A4A1033600.