
THINK-2019_Scavenger_hunt

Sample for THINK 2019 user experience session

 

Demo videos

  • Indoor scavenger hunt BINGO
  • San Francisco scavenger hunt BINGO

 

Instructions

These instructions describe how to build a scavenger hunt BINGO sample web app using a provided set of indoor objects.

After you get the sample working, collect images of your favorite objects or tourist destinations and make your own scavenger hunt BINGO app!

 

Prerequisites

  1. Sign up for IBM Cloud: IBM Cloud sign up

  2. Create an instance of the IBM Watson Studio service on IBM Cloud: IBM Watson Studio

  3. Create a project in Watson Studio:

    1. Go to https://dataplatform.cloud.ibm.com and log in (if you are not already logged in)
    2. Click New project, select Visual Recognition, and then follow the prompts to associate needed services with the project: IBM Cloud Object Storage and IBM Watson Visual Recognition.

    See also: Creating projects

  4. To be able to run the sample web app on your local computer, install Python

    • Make sure to have the installer add Python to your environment variables
    • Mac users, also install pip by issuing this command:
      sudo easy_install pip
    • Mac users, also add your user base binary directory to your path:
      1. Find the user base binary directory by running this command:
        python -m site --user-base
      2. Add your user base binary directory, with /bin appended, to the file /etc/paths

      See: Complete instructions

  5. To be able to push the sample web app to IBM Cloud, install the IBM Cloud CLI

 

Step 1: Collect training and test data

  1. Download these 12 .zip files to your local computer: Training data

  2. Download these 11 images to your local computer: Test images

About the sample training data

The sample training data includes 25 images of each of 11 indoor objects:

bowl
brush
bucket
cup
glove
hockey tape
measuring tape
pig
puzzle
shoe
stapler

The images include 8 different backgrounds:

white
yellow
beige
blue
green
black
dark wood
light wood

The file _negative.zip contains images of only backgrounds, to be used as a negative class in training the model.

About the sample test images

The sample test images are 11 images that were not part of the training data.

Tips and comments

  • With the IBM Watson Visual Recognition service, you can use images as small as 224 x 224 pixels with no loss of performance. So, preprocessing training images to be 224 x 224 can make life easier (faster upload times than with larger images, for example). A resizing sketch appears after these tips.

  • The guidelines recommend that you "make sure that the backgrounds in your training images are comparable to what you expect to classify." In our scavenger hunt scenario, the run-time background might vary. So, the sample training images include a variety of possible backgrounds.

  • The guidelines recommend including at least 50 training images in each class. However, if you don't have 50 images for one or more classes, try to train the model with what you have, because it might work well enough for you. (The sample training data here has 25 images for each class.)

  • Because objects might be in any orientation in a scavenger hunt scenario, the training data includes images of the objects positioned every which way. For use cases where you know the run-time orientation of objects being classified, this might not be what you want to do.

  • Including a negative class in training isn't always needed. Experiment to determine what works best for your case. (This sample includes a negative class.)

  • The guidelines recommend that the subject in the images take up at least 1/3 of the image. In our case, we made a guess about where people playing a scavenger hunt would position their camera. This meant that the measuring tape and hockey tape would be smaller in the training images than the other objects.

See: IBM Watson Visual Recognition guidelines for good training
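
If you want to preprocess your own images down to 224 x 224 before zipping them, here is a minimal sketch. It assumes the Pillow package is installed (pip install Pillow); the function name and directory names are placeholders, not part of the sample.

    # Hypothetical preprocessing sketch: resize every image in a directory
    # to 224 x 224 before adding it to a training .zip
    import os
    from PIL import Image

    def resize_to_224(src_dir, dst_dir):
        """Resize each image in src_dir to 224 x 224 and save it in dst_dir."""
        os.makedirs(dst_dir, exist_ok=True)
        for name in os.listdir(src_dir):
            if name.lower().endswith((".jpg", ".jpeg", ".png")):
                img = Image.open(os.path.join(src_dir, name))
                img.resize((224, 224)).save(os.path.join(dst_dir, name))

    resize_to_224("raw_images/bowl", "training_images/bowl")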

 

Step 2: Create a visual recognition model in your Watson Studio project

  1. Click Add to project and then click VISUAL RECOGNITION MODEL. Follow the prompts to associate an instance of the IBM Watson Visual Recognition service with your project. This opens the visual recognition model builder.

  2. Replace the name "Default Custom Model" with a name you choose.

  3. In the data panel, drag and drop (or browse for) the 12 .zip files you downloaded in Step 1.

  4. In the data panel, select all of the .zip files except _negative.zip, and then click Add to model.

  5. Rename each of the classes to remove .zip from the end of the name.

  6. From the data panel, drag the file _negative.zip onto the Negative class card.

  7. Click Train model.

See also: Training a visual recognition model

Demo video

 

Step 3: Test the model in Watson Studio

  1. When training is complete, a link to the model details page is given in a message. Click the link to go to the model details page. (Alternatively, click on the model name in the Assets page of your project to get to the model details page.)

  2. Click the Test tab.

  3. Download these test images to your local computer: Test images

  4. Drag test images onto the test area for classification.

Demo video

 

Step 4: Prototype app code in a notebook in Watson Studio

  1. Add a sample notebook to your project:

    1. Click Add to project and then click NOTEBOOK
    2. Click the tab labeled From URL
    3. In the box labeled Notebook URL, paste in this URL: Sample notebook
    4. Give the notebook a name
    5. Click Create Notebook
  2. Paste your model ID and credentials into the notebook:

    1. From the Services sub-menu of the main navigation menu, open Watson Services in a new browser tab
    2. Beside your instance of the IBM Watson Visual Recognition service, click Launch tool
    3. In the Overview tab, scroll down to find the model you created in Step 2, and then copy the model ID
    4. Paste the model ID in the notebook where needed
    5. Back in the Credentials tab of the Visual Recognition tool, create some test credentials, and then copy the apikey value
    6. Paste the apikey in the notebook where needed
  3. Read, explore, and run the cells of the sample notebook. Learn how to use the Watson Visual Recognition Python client to classify test images, and begin to define functions that will be needed in a demo Python web app. (A minimal classification sketch follows.)
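
The following is a minimal sketch of classifying one test image against your custom model. It assumes the current ibm-watson Python package (pip install ibm-watson); a notebook from 2019 might use the older watson_developer_cloud package instead. The apikey, model ID, and image file name are placeholders for the values you copied above.

    # Minimal classification sketch, assuming the ibm-watson package
    import json
    from ibm_watson import VisualRecognitionV3
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    APIKEY = "paste-your-apikey-here"
    MODEL_ID = "paste-your-model-id-here"

    visual_recognition = VisualRecognitionV3(
        version="2018-03-19",
        authenticator=IAMAuthenticator(APIKEY),
    )

    # Send one local test image to the custom model and print the result
    with open("test-image.jpg", "rb") as images_file:
        result = visual_recognition.classify(
            images_file=images_file,
            classifier_ids=[MODEL_ID],
            threshold=0.6,
        ).get_result()

    print(json.dumps(result, indent=2))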


Demo video

 

Step 5: Copy prototype code into a web app

  1. Download and unzip the sample app from here: Sample Python Flask scavenger hunt app

  2. In the file server.py, paste your model ID and credentials (just like in the sample notebook)

  3. Notice that the functions getKey, getTopClass, classifyImage, and resizeImage that were prototyped in the notebook are used in the file server.py (a sketch of one of them follows below)
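
As an illustration of those helpers, here is a hedged sketch of what a getTopClass-style function might do: pull the highest-scoring class out of a classify() result. The real implementation is the one in server.py; this sketch only assumes the standard Watson Visual Recognition response structure (images, classifiers, classes).

    # Sketch of a getTopClass-style helper: find the highest-scoring
    # class in a Watson Visual Recognition classify() result
    def getTopClass(classify_result):
        """Return (class_name, score) for the top class, or (None, 0.0)."""
        top_name, top_score = None, 0.0
        for image in classify_result.get("images", []):
            for classifier in image.get("classifiers", []):
                for cls in classifier.get("classes", []):
                    if cls["score"] > top_score:
                        top_name, top_score = cls["class"], cls["score"]
        return top_name, top_score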

Sample file highlights

File                              Description
README.md                         Instructions for running the app locally and pushing the app to IBM Cloud
server.py                         Python Flask server code for the app
static/index.html                 HTML for the web page interface of the app
static/javascript/javascript.js   JavaScript functions implementing callbacks for the web page
static/css/styles.css             Controls the appearance of the web page
static/images/exemplars/*.png     Images for the BINGO card
static/audio/*.wav                Audio for indicating classification results

Demo video

 

Step 6: Run the app on your local computer

  1. Open a command prompt and then navigate to the directory containing the file server.py
  2. From the command line, start the Python Flask server by issuing the following command:
    python server.py
  3. Open a web browser to: http://localhost:8000/
  4. Classify one of the test images
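
For reference, here is a minimal sketch of the overall shape of a Flask server like server.py, assuming Flask is installed (pip install flask). The real sample app adds the routes that call the classification helpers; only the port number (8000) comes from the instructions above.

    # Minimal sketch of a Flask server in the shape of the sample's server.py
    from flask import Flask, send_from_directory

    app = Flask(__name__)

    @app.route("/")
    def index():
        # Serve the BINGO card web page from the static directory
        return send_from_directory("static", "index.html")

    if __name__ == "__main__":
        # The instructions above open http://localhost:8000/
        app.run(host="0.0.0.0", port=8000)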

Demo video

 

Step 7: Push the app to the public cloud

  1. In IBM Cloud, create a Python Flask Cloud Foundry app, size 128 MB: Python Flask starter app

  2. In the local file named manifest.yml, replace app-name with the name you chose for your Python Flask starter app:

    applications:
    - name: app-name
      memory: 128M
    

  3. In the local file named setup.py, replace app-name with the name you chose for your Python Flask starter app:

    setup(
        name='app-name',
        version='1.0.0',
    ...
    

  4. On the command line, log in to your IBM Cloud account by issuing the following command:

    ibmcloud login
    

  5. On the command line, target the Cloud Foundry API endpoint by issuing the following command:

    ibmcloud target --cf
    

  6. On the command line, from the app working directory (where the file server.py is located) push your app to IBM Cloud by issuing the following command:

    ibmcloud app push
    


Demo video

 
