
# NCF FP32 inference

## Description

This document has instructions for running Neural Collaborative Filtering (NCF) FP32 inference using Intel-optimized TensorFlow.

## Datasets

Download and extract the MovieLens 1M dataset:

```
wget http://files.grouplens.org/datasets/movielens/ml-1m.zip
unzip ml-1m.zip
```

Set the `DATASET_DIR` environment variable to point to this directory when running NCF.
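Before launching a run, it can help to confirm that the extracted directory contains the files NCF reads; a minimal sketch (the helper name and the check itself are assumptions, not part of the official instructions):

```shell
# Check that a directory looks like an extracted MovieLens 1M dataset.
check_ml1m() {
  dir="$1"
  for f in ratings.dat users.dat movies.dat; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing $dir/$f" >&2
      return 1
    fi
  done
  echo "ok: $dir looks like MovieLens 1M"
}

# Run the check only if the extracted directory is present.
if [ -d ./ml-1m ]; then
  check_ml1m ./ml-1m
fi
```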

## Quick Start Scripts

| Script name | Description |
|-------------|-------------|
| `fp32_online_inference.sh` | Runs online inference (batch_size=1). |
| `fp32_batch_inference.sh` | Runs batch inference (batch_size=256). |
| `fp32_accuracy.sh` | Measures the model accuracy (batch_size=256). |

## Run the model

Set up your environment using the instructions below, depending on whether you are using AI Kit:

**Setup using AI Kit:** AI Kit does not currently support TF 1.15.2 models.

**Setup without AI Kit:** To run without AI Kit you will need:

* Python 3
* intel-tensorflow==1.15.2
* numactl
* git
* google-api-python-client==1.6.7
* google-cloud-bigquery==0.31.0
* kaggle==1.3.9
* numpy==1.16.3
* oauth2client==4.1.2
* pandas
* psutil>=5.6.7
* py-cpuinfo==3.3.0
* tar
* typing
* wget
* A clone of the Model Zoo repo:
  ```
  git clone https://github.com/IntelAI/models.git
  ```
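The pinned Python packages above can be collected into a requirements file so they install in one step; a sketch, assuming the file is named `requirements.txt` (`numactl`, `git`, `tar`, and `wget` are system tools and must be installed separately):

```
intel-tensorflow==1.15.2
google-api-python-client==1.6.7
google-cloud-bigquery==0.31.0
kaggle==1.3.9
numpy==1.16.3
oauth2client==4.1.2
pandas
psutil>=5.6.7
py-cpuinfo==3.3.0
typing
```

Install with `python -m pip install -r requirements.txt`; since these pins are old, a correspondingly old Python 3 interpreter may be required.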

Running NCF also requires a clone of the TensorFlow models repository at the v1.11 tag. Set the `TF_MODELS_DIR` env var to the path of your clone.

```
git clone https://github.com/tensorflow/models.git tf_models
cd tf_models
git checkout v1.11
export TF_MODELS_DIR=$(pwd)
cd ..
```
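Because NCF needs the repository at the v1.11 tag specifically, a quick check of the checkout can catch a wrong branch or tag; a sketch (the helper name and the check are assumptions, not part of the official instructions):

```shell
# Report whether a TensorFlow models clone is checked out at the v1.11 tag.
verify_tf_models_tag() {
  tag=$(git -C "$1" describe --tags 2>/dev/null)
  if [ "$tag" = "v1.11" ]; then
    echo "ok: $1 is at v1.11"
  else
    echo "expected tag v1.11, found ${tag:-none} in $1" >&2
    return 1
  fi
}

# Run the check only when TF_MODELS_DIR is already set.
if [ -n "${TF_MODELS_DIR:-}" ]; then
  verify_tf_models_tag "$TF_MODELS_DIR"
fi
```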

Download and extract the pretrained model, and set the `PRETRAINED_MODEL` env var to its path.

```
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_5/ncf_fp32_pretrained_model.tar.gz
tar -xzvf ncf_fp32_pretrained_model.tar.gz
export PRETRAINED_MODEL=$(pwd)/ncf_trained_movielens_1m
```

After your environment is set up, set the `DATASET_DIR` environment variable and an `OUTPUT_DIR` where log files will be written. Ensure that the `TF_MODELS_DIR` and `PRETRAINED_MODEL` paths from the previous commands are still set. Once all the environment variables are set, you can run a quickstart script.

```
# cd to your model zoo directory
cd models

export DATASET_DIR=<path to the dataset>
export OUTPUT_DIR=<path to the directory where log files will be written>
export TF_MODELS_DIR=<path to the TensorFlow models directory tf_models>
export PRETRAINED_MODEL=<path to the pretrained model>

# For a custom batch size, set the BATCH_SIZE env var; otherwise the script runs with a default value.
export BATCH_SIZE=<customized batch size value>

./quickstart/recommendation/tensorflow/ncf/inference/cpu/fp32/<script name>.sh
```
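Because the quickstart scripts read all of these paths from the environment, a preflight check can catch an unset variable or a typo before a run; a sketch (the helper name is an assumption, and it assumes `OUTPUT_DIR` has already been created):

```shell
# Fail fast if any required path variable is unset or does not exist on disk.
preflight() {
  status=0
  for var in DATASET_DIR OUTPUT_DIR TF_MODELS_DIR PRETRAINED_MODEL; do
    eval "val=\${$var:-}"
    if [ -z "$val" ]; then
      echo "$var is not set" >&2
      status=1
    elif [ ! -e "$val" ]; then
      echo "$var=$val does not exist" >&2
      status=1
    fi
  done
  return $status
}
```

For example, `preflight && ./quickstart/recommendation/tensorflow/ncf/inference/cpu/fp32/<script name>.sh` only launches the script when every path resolves.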

## Additional Resources