This document has instructions for running Neural Collaborative Filtering (NCF) FP32 inference using Intel-optimized TensorFlow.
Download and extract the MovieLens 1M dataset:

```
wget http://files.grouplens.org/datasets/movielens/ml-1m.zip
unzip ml-1m.zip
```

Set the `DATASET_DIR` env var to point to this directory when running NCF.
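For example, if you ran the commands above in your current directory, the extracted `ml-1m` folder is the dataset directory:

```
# The zip extracts to an ml-1m directory containing the .dat files
export DATASET_DIR=$(pwd)/ml-1m
ls $DATASET_DIR   # should include movies.dat, ratings.dat, users.dat
```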
| Script name | Description |
|-------------|-------------|
| `fp32_online_inference.sh` | Runs online inference (batch_size=1). |
| `fp32_batch_inference.sh` | Runs batch inference (batch_size=256). |
| `fp32_accuracy.sh` | Measures the model accuracy (batch_size=256). |
Setup your environment using the instructions below, depending on whether you are using AI Kit:

| Setup using AI Kit | Setup without AI Kit |
|--------------------|----------------------|
| AI Kit does not currently support TF 1.15.2 models | To run without AI Kit you will need Python 3 and intel-tensorflow==1.15.2 installed. |
Running NCF also requires a clone of the TensorFlow models repository checked out at the v1.11 tag. Set the `TF_MODELS_DIR` env var to the path of your clone.
```
git clone https://github.com/tensorflow/models.git tf_models
cd tf_models
git checkout v1.11
export TF_MODELS_DIR=$(pwd)
cd ..
```
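As a quick sanity check, you can confirm that the clone is at the expected tag:

```
# Prints "v1.11" if the checkout above succeeded
git -C "$TF_MODELS_DIR" describe --tags
```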
Download and extract the pretrained model, and set the `PRETRAINED_MODEL` env var to its path:

```
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_5/ncf_fp32_pretrained_model.tar.gz
tar -xzvf ncf_fp32_pretrained_model.tar.gz
export PRETRAINED_MODEL=$(pwd)/ncf_trained_movielens_1m
```
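To verify the extraction, list the directory; it should contain TensorFlow checkpoint files (exact file names depend on the archive contents):

```
# Expect checkpoint artifacts such as a `checkpoint` file plus
# `.data-*`, `.index`, and `.meta` files (names may vary by archive)
ls $PRETRAINED_MODEL
```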
After your environment is set up, set environment variables for the `DATASET_DIR` and an `OUTPUT_DIR` where log files will be written. Ensure that you already have the `TF_MODELS_DIR` and `PRETRAINED_MODEL` paths set from the previous commands. Once the environment variables are all set, you can run a quickstart script.
```
# cd to your model zoo directory
cd models

export DATASET_DIR=<path to the dataset>
export OUTPUT_DIR=<path to the directory where log files will be written>
export TF_MODELS_DIR=<path to the TensorFlow models directory tf_models>
export PRETRAINED_MODEL=<path to the pretrained model>

# For a custom batch size, set env var `BATCH_SIZE` or it will run with a default value.
export BATCH_SIZE=<customized batch size value>

./quickstart/recommendation/tensorflow/ncf/inference/cpu/fp32/<script name>.sh
```
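For example, to run batch inference with the default batch size and then review the results (log file names in `OUTPUT_DIR` may vary):

```
# Run the batch inference quickstart script (batch_size=256 by default)
./quickstart/recommendation/tensorflow/ncf/inference/cpu/fp32/fp32_batch_inference.sh

# Logs are written to OUTPUT_DIR
ls $OUTPUT_DIR
```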
- To run more advanced use cases, see the instructions here for calling the `launch_benchmark.py` script directly (a sketch follows below).
- To run the model using docker, please see the oneContainer workload container: https://software.intel.com/content/www/us/en/develop/articles/containers/ncf-fp32-inference-tensorflow-container.html.
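A direct `launch_benchmark.py` invocation might look like the sketch below; the flag names are assumptions based on the Model Zoo launcher and may differ between releases:

```
# Hedged sketch: NCF FP32 inference via the launcher (flags assumed, verify
# against your Model Zoo version with `python launch_benchmark.py --help`)
cd benchmarks
python launch_benchmark.py \
  --model-name ncf \
  --precision fp32 \
  --mode inference \
  --framework tensorflow \
  --batch-size 256 \
  --data-location $DATASET_DIR \
  --checkpoint $PRETRAINED_MODEL \
  --model-source-dir $TF_MODELS_DIR
```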