bittnerma/Direct3DKinematicEstimation

Towards single camera human 3D-kinematics

This repository is currently under construction

Installation

  1. Requirements

     Python 3.8.0
     PyTorch 1.11.0
     OpenSim 4.3+
    
  2. Python package

    Clone this repo and run the following:

     conda env create -f environment_setup.yml
    

    Activate the environment using

     conda activate d3ke
    
  3. Activate GPU support

     If you have an Nvidia GPU, you can enable GPU acceleration to process the data faster. Enable it by running these commands:

         pip uninstall torch
         conda install cuda -c nvidia
         pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117

    You can check whether GPU support is available using

         import torch
         COMP_DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
         print("compute device: ", COMP_DEVICE)
  4. OpenSim 4.3

    1. Download and Install OpenSim

    2. (On Windows) Install the Python API

      • In installation_folder/OpenSim 4.x/sdk/Python, run

          python setup_win_python38.py
        
          python -m pip install .
        

      You might experience issues when importing the OpenSim library, namely the error

              ImportError: DLL load failed while importing _simbody: The specified module could not be found.
      

      A quick fix is to add the path /path/to/OpenSim 4.x/bin to your Windows environment variables (either user or system scope works). If this doesn't work, add these lines to the top of the scripts you will run:

              import os
              os.add_dll_directory("/path/to/OpenSim 4.x/bin")
    3. (On other operating systems) Follow the official OpenSim instructions to set up the scripting environment

    4. Copy all *.obj files from resources/opensim/geometry to <installation_folder>/OpenSim 4.x/Geometry

    Note: Scripts that import OpenSim have only been verified on Windows.

Dataset and SMPL+H models

  1. BMLmovi
    • Register to get access to the downloads section.
    • Download .avi videos of PG1 and PG2 cameras from the F round (F_PGX_Subject_X_L.avi).
    • Download Camera Parameters.tar.
    • Download v3d files (F_Subjects_1_45.tar, F_Subjects_46_90.tar).
  2. AMASS
    • Download SMPL+H body data of BMLmovi.
  3. SMPL+H Models
    • Register to get access to the downloads section.
    • Download the extended SMPL+H model (used in AMASS project).
  4. DMPLs
    • Register to get access to the downloads section.
    • Download DMPLs for AMASS.
  5. PASCAL Visual Object Classes (ONLY NECESSARY FOR TRAINING)
    • Download the training/validation data

NOTES:

  • Some videos are missing from the database, namely subject indices 7, 10, and 26.
  • The raw data will take up about 100 GB of disk space, plus an additional 700–800 GB after processing, so make sure you have enough free storage space!
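The missing indices matter if you write your own loop over BMLmovi subjects. A minimal sketch (not part of the repository; the helper name is made up) that skips them:

```python
# Illustrative helper, not part of this repository: skip the BMLmovi
# subject indices whose videos are absent from the database.
MISSING_SUBJECTS = {7, 10, 26}

def available_subjects(total=90):
    """Yield the 1-based subject indices that have videos."""
    for idx in range(1, total + 1):
        if idx not in MISSING_SUBJECTS:
            yield idx

print(len(list(available_subjects())))  # 87 subjects remain out of 90
```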

Unpacking resources

  1. Unpack the downloaded SMPL and DMPL archives into ms_model_estimation/resources

  2. Unpack the downloaded AMASS data into the top-level folder resources/amass

  3. Unpack the F_Subjects_1_45 archive and move the contents of all its subfolders into resources/V3D/F
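The unpacking steps above can be sanity-checked with a short script. This is a hypothetical helper (the folder names come from the list above; the function itself is not part of the repo):

```python
# Hypothetical sanity check, not part of the repository: verify that
# the unpacked resources sit where the later scripts expect them.
from pathlib import Path

# Folder names taken from the unpacking steps above.
EXPECTED_DIRS = [
    "ms_model_estimation/resources",  # SMPL and DMPL archives
    "resources/amass",                # AMASS body data
    "resources/V3D/F",                # contents of the v3d subfolders
]

def missing_resource_dirs(repo_root="."):
    """Return the expected resource directories that are absent."""
    root = Path(repo_root)
    return [d for d in EXPECTED_DIRS if not (root / d).is_dir()]
```

Run it from the repository root; an empty result means everything is unpacked in place.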

OpenSim GT Generation

Run the generate_opensim_gt script:

python generate_opensim_gt.py

This process might take several hours!

Once the dataset is generated, the scaled OpenSim models and motion files can be found in resources/opensim/BMLmovi/BMLmovi.
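To inspect the generated ground truth, a small sketch could look like the following (assuming only that the output folder above contains .osim model files and .mot motion files; the helper is illustrative, not part of the repo):

```python
# Illustrative only: collect the scaled OpenSim models (.osim) and
# motion files (.mot) written by the ground-truth generation step.
from pathlib import Path

def list_gt_outputs(gt_dir="resources/opensim/BMLmovi/BMLmovi"):
    """Return (model_files, motion_files) found under gt_dir."""
    root = Path(gt_dir)
    if not root.is_dir():  # ground truth not generated yet
        return [], []
    models = sorted(root.rglob("*.osim"))
    motions = sorted(root.rglob("*.mot"))
    return models, motions
```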

Dataset Preparation

After the ground truth has been generated, the dataset needs to be prepared. Before you start, make sure you have created the folders _dataset and _dataset_full in the root folder of the cloned repository.

Run the prepare_dataset script and provide the location where the BMLMovi videos are stored:

python prepare_dataset.py --BMLMoviDir path/to/bmlmovi/videos

NOTE: to generate data for training, you should also provide the path to the Pascal VOC dataset:

python prepare_dataset.py --BMLMoviDir path/to/bmlmovi/videos --PascalDir path/to/pascal_voc/data

This process might again take several hours!

Evaluation

Download models

Run inference

Run the run_inference script:

python run_inference.py

This will use D3KE to run predictions on the subset of BMLMovi used for testing.

Training
