Sample for THINK 2019 user experience session
Indoor scavenger hunt BINGO | San Francisco scavenger hunt BINGO |
---|---|
These instructions describe how to build a scavenger hunt BINGO sample web app using a given set of indoor objects.
After you get the sample working, collect images of your favorite objects or tourist destinations and make your own scavenger hunt BINGO app!
Sign up for IBM Cloud: IBM Cloud sign up
Create an instance of the IBM Watson Studio service on IBM Cloud: IBM Watson Studio
Create a project in Watson Studio:
- Go to https://dataplatform.cloud.ibm.com and log in (if you are not already logged in)
- Click New project, select Visual Recognition, and then follow the prompts to associate needed services with the project: IBM Cloud Object Storage and IBM Watson Visual Recognition.
See also: Creating projects
To be able to run the sample web app on your local computer, install Python
- Make sure to have the installer add Python to your environment variables
- Mac users, also install `pip` by issuing this command: `sudo easy_install pip`
- Mac users, also add your user base binary directory to your path:
  - Find the user base binary directory by running this command: `python -m site --user-base`
  - Add your user base binary directory, with `/bin` appended, to the file `/etc/paths`
- To be able to push the sample web app to IBM Cloud, install the IBM Cloud CLI
- Download these 12 .zip files to your local computer: Training data
- Download these 11 images to your local computer: Test images
The sample training data includes 25 images of each of 11 indoor objects:
bowl, brush, bucket, cup, glove, hockey tape, measuring tape, pig, puzzle, shoe, stapler
The images include 8 different backgrounds:
white, yellow, beige, blue, green, black, dark wood, light wood
The file `_negative.zip` contains images of only backgrounds, to be used as a negative class in training the model.
The sample test images are 11 images that were not part of the training data.
- With the IBM Watson Visual Recognition service, you can use images as small as 224 x 224 pixels with no loss of performance. So, preprocessing training images to be 224 x 224 can make life easier (faster upload times, for example, than with larger images). A resize sketch follows the guidelines link below.
- The guidelines recommend that you "make sure that the backgrounds in your training images are comparable to what you expect to classify." In our scavenger hunt scenario, the run-time background might vary, so the sample training images include a variety of possible backgrounds.
- The guidelines recommend including at least 50 training images in each class. However, if you don't have 50 images for one or more classes, try training the model with what you have, because it might work well enough for you. (The sample training data here has 25 images for each class.)
- Because objects might be in any orientation in a scavenger hunt scenario, the training data includes images of the objects positioned every which way. For use cases where you know the run-time orientation of the objects being classified, this might not be what you want to do.
- Including a negative class in training isn't always needed. Experiment to determine what works best for your case. (This sample includes a negative class.)
- The guidelines recommend that the subject of an image take up at least 1/3 of the image. In our case, we made a guess about where people playing a scavenger hunt would position their camera, which meant that the measuring tape and hockey tape would appear smaller in the training images than the other objects.
See: IBM Watson Visual Recognition guidelines for good training
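If you want to preprocess your own training images down to 224 x 224 before zipping them, a script along these lines can help. This is a minimal sketch, assuming the Pillow imaging library is installed; the `raw_images` and `training_images` directory names are placeholders, and the simple `resize` call ignores aspect ratio (fine for roughly square photos).

```python
import os

from PIL import Image  # pip install Pillow


def resize_to_224(src_path, dst_path):
    """Resize one training image to 224 x 224 pixels before zipping it."""
    with Image.open(src_path) as img:
        img.resize((224, 224)).save(dst_path)


# Placeholder directories: resize everything in raw_images/ into training_images/
os.makedirs("training_images", exist_ok=True)
for name in os.listdir("raw_images"):
    if name.lower().endswith((".jpg", ".jpeg", ".png")):
        resize_to_224(os.path.join("raw_images", name),
                      os.path.join("training_images", name))
```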
- Click Add to project and then click VISUAL RECOGNITION MODEL. Follow the prompts to associate an instance of the IBM Watson Visual Recognition service with your project. This opens the visual recognition model builder.
- Replace the name "Default Custom Model" with a name you choose.
- In the data panel, drag and drop (or browse for) the 12 .zip files you downloaded in Step 1.
- In the data panel, select all of the .zip files except `_negative.zip`, and then click Add to model.
- Rename each of the classes to remove `.zip` from the end of the name.
- From the data panel, drag the file `_negative.zip` onto the Negative class card.
- Click Train model.
See also: Training a visual recognition model
Demo video
- When training is complete, a link to the model details page is given in a message. Click the link to go to the model details page. (Alternatively, click the model name in the Assets page of your project to get to the model details page.)
- Click the Test tab.
- Download these test images to your local computer: Test images
- Drag test images onto the test area for classification.
Demo video
Add a sample notebook to your project:
- Click Add to project and then click NOTEBOOK
- Click the tab labeled From URL
- In the box labeled Notebook URL, paste in this URL: Sample notebook
- Give the notebook a name
- Click Create Notebook
Paste your model ID and credentials into the notebook:
- From the Services sub-menu of the main navigation menu, open Watson Services in a new browser tab
- Beside your instance of the IBM Watson Visual Recognition service, click Launch tool
- In the Overview tab, scroll down to find the model you created in Step 2, and then copy the model ID
- Paste the model ID in the notebook where needed
- In the Credentials tab of the Visual Recognition tool, create some test credentials, and then copy the `apikey` value
- Paste the `apikey` in the notebook where needed
Read, explore, and run the cells of the sample notebook. Learn how to use the Watson Visual Recognition Python client to classify test images, and begin to define functions that will be needed in a demo Python web app.
See also: Demo video
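For orientation, the core classification call that the notebook walks you through looks something like the following. This is a minimal sketch, assuming the `watson_developer_cloud` Python SDK that was current for this session; `YOUR_APIKEY`, `YOUR_MODEL_ID`, and the image file name are placeholders, and the notebook's actual cells may differ.

```python
import json

from watson_developer_cloud import VisualRecognitionV3  # pip install watson-developer-cloud

# Placeholders: paste the apikey and model ID you copied above
visual_recognition = VisualRecognitionV3("2018-03-19", iam_apikey="YOUR_APIKEY")

# Classify one local test image with your custom model
with open("bowl_test.jpg", "rb") as images_file:
    result = visual_recognition.classify(
        images_file,
        threshold="0.0",                    # report every class, even low scores
        classifier_ids=["YOUR_MODEL_ID"]).get_result()

print(json.dumps(result, indent=2))
```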
- Download and unzip the sample app from here: Sample Python Flask scavenger hunt app
- In the file `server.py`, paste your model ID and credentials (just like in the sample notebook)
- Notice that the functions `getKey`, `getTopClass`, `classifyImage`, and `resizeImage` that were prototyped in the notebook are used in the file `server.py` (see the sketch after this list)
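As one illustration of what a helper like `getTopClass` has to do, the sketch below picks the highest-scoring class out of a Visual Recognition `classify` response. It is a hypothetical reimplementation based on the response shape documented for the v3 API, not the code actually shipped in `server.py`.

```python
def get_top_class(result, threshold=0.5):
    """Return the (class, score) pair with the highest confidence,
    or None if no class clears the threshold.

    Assumes the Visual Recognition v3 response shape:
    images -> classifiers -> classes, where each entry in classes
    is a dict with "class" and "score" keys.
    """
    classes = result["images"][0]["classifiers"][0]["classes"]
    top = max(classes, key=lambda c: c["score"])
    return (top["class"], top["score"]) if top["score"] >= threshold else None
```

Called on the response printed in the earlier sketch, `get_top_class(result)` would return the best-matching object name and its score.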
File | Description |
---|---|
`README.md` | Instructions for running the app locally and pushing the app to IBM Cloud |
`server.py` | Python Flask server code for the app |
`static/index.html` | HTML for the web page interface of the app |
`static/javascript/javascript.js` | JavaScript functions implementing callbacks for the web page |
`static/css/styles.css` | Controls the appearance of the web page |
`static/images/exemplars/*.png` | Images for the BINGO card |
`static/audio/*.wav` | Audio for indicating classification results |
Demo video
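Before running the app, it can help to see the overall shape of a server like `server.py`. The following is a minimal, hypothetical sketch, not the sample's actual code: the `/classify` route, the `image` form field, and the stubbed `classify_image` helper are illustrative assumptions about how the web page, the server, and the Watson calls fit together.

```python
from flask import Flask, jsonify, request

app = Flask(__name__, static_folder="static")


def classify_image(image_file):
    # Stub: in the real app, this would send the uploaded image to your
    # custom Watson Visual Recognition model and return the top class.
    return {"class": "bowl", "score": 0.9}


@app.route("/")
def index():
    # Serve the BINGO card web page
    return app.send_static_file("index.html")


@app.route("/classify", methods=["POST"])
def classify():
    # The web page posts a photo; reply with the classification result
    result = classify_image(request.files["image"])
    return jsonify(result)


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)  # matches http://localhost:8000/
```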
- Open a command prompt and then navigate to the directory containing the file `server.py`
- From the command line, start the Python Flask server by issuing the following command: `python server.py`
- Open a web browser to: http://localhost:8000/
- Classify one of the test images
Demo video
In IBM Cloud, create a Python Flask Cloud Foundry app, size 128 MB: Python Flask starter app
In the local file named `manifest.yml`, replace `app-name` with the name you chose for your Python Flask starter app:

    applications:
    - name: app-name
      memory: 128M
In the local file named `setup.py`, replace `app-name` with the name you chose for your Python Flask starter app:

    setup(
        name='app-name',
        version='1.0.0',
        ...
On the command line, log in to your IBM Cloud account by issuing the following command: `ibmcloud login`
On the command line, target the Cloud Foundry API endpoint by issuing the following command: `ibmcloud target --cf`
On the command line, from the app working directory (where the file `server.py` is located), push your app to IBM Cloud by issuing the following command: `ibmcloud app push`
See also: Demo video