Watson Hands On Labs - Visual Recognition

During this lab, you will use the Visual Recognition service to train a classifier and recognize images.

You can see a version of this app that is already running here.

So let’s get started. The first thing to do is to build out the shell of our application in the IBM Cloud.

Prerequisites

  1. Sign up for an IBM Cloud account.
  2. Download the IBM Cloud CLI.
  3. Create an instance of the Visual Recognition service and get your credentials:
    • Go to the Visual Recognition page in the IBM Cloud Catalog.
    • Log in to your IBM Cloud account.
    • Click Create.
    • Click Show to view the service credentials.
    • Copy the apikey value.
    • Copy the url value.

Note: The confirmation email from IBM Cloud may take up to 1 hour to arrive.
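
Optionally, you can confirm that the apikey and url you copied work before building the app by calling the service directly. The sketch below is illustrative only: it assumes the standard Visual Recognition v3 REST endpoint and the commonly used version date 2018-03-19. A brand-new instance should simply return an empty list of custom classifiers.

    curl -u "apikey:<your-api-key>" \
      "<your-url>/v3/classifiers?version=2018-03-19"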

Deploy this sample application in the IBM Cloud

  1. Clone the repository onto your computer and navigate to the new directory.

    git clone https://github.com/watson-developer-cloud/visual-recognition-nodejs.git
    cd visual-recognition-nodejs
    
  2. Sign up for an IBM Cloud account or use an existing one.

  3. If it is not already installed on your system, download and install the Cloud Foundry CLI tool.

  4. Edit the manifest.yml file in the folder that contains your code and replace visual-recognition with a unique name for your application. The name that you specify determines the application's URL, such as your-application-name.mybluemix.net. The relevant portion of the manifest.yml file looks like the following:

    applications:
    - name: visual-recognition-demo
      command: npm start
      path: .
      memory: 512M
      env:
        NODE_ENV: production

  5. Copy the credentials from the prerequisites into the application by creating a .env file with this format:

     VISUAL_RECOGNITION_IAM_API_KEY=<your-api-key>
     VISUAL_RECOGNITION_URL=<your-url>

  6. Install the dependencies your application needs:

     npm install

  7. Start the application:

     npm start

  8. Test your application locally by going to http://localhost:3000/ (a quick command-line check is sketched just after this list).
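
If you prefer to verify from the terminal that the local server is responding, a request like the one below should return an HTTP 200 status. This is just an optional sanity check against the same URL used in the last step.

    curl -I http://localhost:3000/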

Deploying to IBM Cloud as a Cloud Foundry Application

  1. Log in to IBM Cloud with the IBM Cloud CLI.

    ibmcloud login
    
  2. Target a Cloud Foundry organization and space.

    ibmcloud target --cf
    
  3. Edit the manifest.yml file. Change the name field to something unique.
    For example, - name: my-app-name.

  4. Deploy the application.

    ibmcloud app push
    
  5. View the application online at the app URL.
    For example: https://my-app-name.mybluemix.net

Classifying Images in the Starter Application

The application is composed of two sections, a "Try" section and a "Train" section. The Try section will allow you to send an individual image to the Visual Recognition service to be classified.

Test out the existing service by selecting one of the provided images or pasting a URL for an image of your choice. You will see the service respond with a collection of recognized attributes about the image.
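
Behind the scenes, the Try panel sends the image to the service's classify operation. If you want to inspect the raw JSON response yourself, a roughly equivalent call from the command line looks like the sketch below, assuming the credentials from the prerequisites and an illustrative image URL.

    curl -u "apikey:<your-api-key>" \
      "<your-url>/v3/classify?url=https://example.com/some-image.jpg&version=2018-03-19"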

Next, try running the following image through the classifier by pasting the URL into the "Try" panel.

You'll see that the service recognizes some general attributes of the image, but we want it to specifically recognize the image as a fruitbowl. To do that, we will need to train a custom classifier.

Training a Custom Classifier in the Starter App

Navigate over to the "Train" window in the application.

Here, you will see a collection of training sets that have been provided for you. If you select any one of these, you will see that set expand to show a series of classes that will be trained, as well as negative examples of that group. For example, the Dog Breeds classifier contains 4 classes of dogs to be identified, as well as a negative example data set of Non-dogs.

To train the service to recognize a fruitbowl specifically, we are going to use two collections of images to teach Watson what to look for. Click the "Use your Own" box; a series of boxes will then appear that let you upload .zip files for the classes.

Download and select the following .zip files for the classifier:

Once the two .zip files are uploaded, name the classifier "fruitbowl" and select the "Train your classifier" button.

The classifier may take a couple of minutes to train. Once it is complete, the application will update to allow you to submit new images against that classifier. If you submit the original image in the new prompt on the "Train" window, you will see that it is now specifically classified based on our new training!
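
The Train window uses the service's custom-classifier capability, which you can also drive over the REST API. The sketch below is illustrative only: the .zip file names, image URL, and classifier ID are placeholders, and it assumes the same v3 endpoint and version date as the earlier sketches.

    # Create a custom classifier from positive and negative example .zip files
    curl -X POST -u "apikey:<your-api-key>" \
      -F "fruitbowl_positive_examples=@fruitbowl.zip" \
      -F "negative_examples=@non-fruit.zip" \
      -F "name=fruitbowl" \
      "<your-url>/v3/classifiers?version=2018-03-19"

    # After training completes, classify an image against the new classifier
    curl -u "apikey:<your-api-key>" \
      "<your-url>/v3/classify?url=https://example.com/some-image.jpg&classifier_ids=<classifier-id>&version=2018-03-19"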

Congratulations

You have completed the Visual Recognition Lab! :bowtie:
