Ayushman Dash edited this page Sep 12, 2017 · 27 revisions

Introduction

At OSA-alpha, we are aiming to solve four important tasks in Conversational Intelligence. These tasks have been developed with both their business utility and their scientific contribution in mind. The following are the tasks:

1. A Deep Conversational Framework (Ovation-CI): Build a deep learning based Conversational Intelligence Framework that is actively trainable, stand-alone deployable, RESTful and easily extensible.

2. Core Components of Ovation-CI: Develop the building blocks of a Conversational Intelligence framework, which are:

        a. Intent Classification

        b. Entity Recognition

        c. Sentiment Analysis

3. Business Use-cases: Develop demos for two business use-cases that will be presented at OSA-alpha, using existing open-source frameworks like Rasa.

4. Ovation Voice: Develop demos for the same use-cases (as mentioned above) using the Ovation Voice Interface.

The following sections will describe each of them in detail.

1. A Deep Learning based Conversational Framework (Ovation-CI)

Every chatbot is built upon a Conversational Intelligence framework that is composed of many components. These components enable the chatbot with various functionalities like (1) classifying the intent of what the user said, (2) extracting the entities in the user's statement, (3) extracting the sentiment of the user's statement, and (4) generating a response, or sampling one from a predefined set of responses, as a reply to the user's statement (query). A Conversational Intelligence framework is not limited to just these components. It can have any other component that helps make the conversation with the user more natural or extract information from what the user said.

This framework needs to have four important features:

1. Active Learnability: A chatbot developer should be able to develop new chat scenarios and train a new model with Ovation-CI whenever they want.

2. Stand-Alone Deployability: Ovation-CI should be deployable as a server and used stand-alone. Ideally, its components should be modular enough to give scope for heterogeneity.

3. RESTful: Ovation-CI should be accessible via RESTful APIs. E.g., you should be able to make an API call like http://<your-domain>/ovation?q="hello Ovation" and get a response like

{
  "query": "hello Ovation",
  "intents": [
    {
      "intent": "greetings",
      "intent_id": 1,
      "confidence": 0.85
    },
    {
      "intent": "welcome",
      "intent_id": 2,
      "confidence": 0.15
    }
    ],
    "entities": [
        {
          "start": 6,
          "end": 12,
          "value": "Ovation",
          "entity": "organisation",
          "confidence": 0.76
        }
    ]
}

4. Easily Extensible: Ovation-CI should be easily extensible by having scope for adding new components to its processing pipeline (described below).
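To make the RESTful feature concrete, here is a minimal sketch of what the /ovation handler could compute for a query. The functions classify_intents, extract_entities, and handle_ovation are hypothetical stand-ins (the real components would be trained models), and any web framework could expose handle_ovation() behind the GET /ovation?q=... endpoint:

```python
def classify_intents(query):
    # A trained intent classifier would go here; this toy version only
    # recognises greetings.
    if "hello" in query.lower():
        return [{"intent": "greetings", "intent_id": 1, "confidence": 0.85},
                {"intent": "welcome", "intent_id": 2, "confidence": 0.15}]
    return []

def extract_entities(query):
    # A trained entity recogniser would go here; this toy version only
    # spots the literal token "Ovation".
    start = query.find("Ovation")
    if start == -1:
        return []
    return [{"start": start, "end": start + len("Ovation"),
             "value": "Ovation", "entity": "organisation",
             "confidence": 0.76}]

def handle_ovation(query):
    """Build the JSON-serialisable response for GET /ovation?q=<query>."""
    return {"query": query,
            "intents": classify_intents(query),
            "entities": extract_entities(query)}
```

Serialising the returned dict with json.dumps() yields a response in the shape shown above; entity offsets follow the end-exclusive convention used in the examples.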

Ovation-CI Architecture

The Ovation-CI architecture that we have in mind is shown in the image below. It is meant as a reference, not as a specification to be implemented exactly. We want our participants to come up with innovative ideas, extend this architecture, and develop it.

Ovation-CI Architecture

The following are the components of the above architecture,

Endpoints

An endpoint is a REST API available in the Ovation-CI server. All communication with the server goes through these endpoints. The following are the endpoints shown in the above architecture.

1. /train can be called when a chatbot developer wants to develop his/her own model (e.g., an insurance enquiry bot). This endpoint receives data in the format

{
  "data": [
    {
      "text": "yes",
      "intent": "affirm",
      "entities": []
    },
    {
      "text": "yep",
      "intent": "affirm",
      "entities": []
    },
    {
      "text": "yeah",
      "intent": "affirm",
      "entities": []
    },
    {
        "text": "Techniker Krankenkasse offices in Kaiserslautern",
        "intent": "inquiry",
        "entities": [
          {
            "start": 34,
            "end": 48,
            "value": "Kaiserslautern",
            "entity": "location"
          },
          {
            "start": 0,
            "end": 22,
            "value": "Techniker Krankenkasse",
            "entity": "organisation"
          }
        ]
      }
    ]
}

We took the above example from rasa-nlu-trainer, which can be used to generate data in this format. This data format is also just an example; we expect our participants to come up with their own innovative ideas for structuring the data too.

/train invokes the pipeline, which processes the data that /train receives and trains the individual Blocks of the pipeline.
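As a sketch of that first step, the payload above can be unpacked into per-Block training sets before being handed to the pipeline. The function split_training_data is hypothetical; it simply separates intent labels from entity annotations in the rasa-nlu-trainer-style format shown above:

```python
def split_training_data(payload):
    """Separate intent labels and entity annotations from a /train payload."""
    intent_examples, entity_examples = [], []
    for example in payload["data"]:
        # Every example carries an intent label for the intent classifier.
        intent_examples.append((example["text"], example["intent"]))
        # Entity annotations (if any) feed the entity recogniser.
        for entity in example["entities"]:
            entity_examples.append((example["text"],
                                    entity["start"], entity["end"],
                                    entity["entity"]))
    return intent_examples, entity_examples

payload = {"data": [
    {"text": "yes", "intent": "affirm", "entities": []},
    {"text": "Techniker Krankenkasse offices in Kaiserslautern",
     "intent": "inquiry",
     "entities": [{"start": 34, "end": 48,
                   "value": "Kaiserslautern", "entity": "location"}]},
]}
intents, entities = split_training_data(payload)
```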

2. /ovation can be called when a user's query needs to be processed. Given a query like http://<your-domain>:<port>/ovation?q="hello ovation", this endpoint should return a JSON like

{
  "query": "hello Ovation",
  "intents": [
    {
      "intent": "greetings",
      "intent_id": 1,
      "confidence": 0.85
    },
    {
      "intent": "welcome",
      "intent_id": 2,
      "confidence": 0.15
    }
    ],
    "entities": [
        {
          "start": 6,
          "end": 12,
          "value": "Ovation",
          "entity": "organisation",
          "confidence": 0.76
        }
    ]
}

This endpoint invokes the infer() method of all the Blocks in the Pipeline.

More details on Blocks and Pipelines follow in the sections below.

Block

A Pipeline is built up of Blocks. Blocks can be run in two modes: (1) Train and (2) Infer. In the Train mode, a Block is trained on some input data; in the Infer mode, an inference is made on a query, or some information is extracted from the user's query. Blocks are ideally classes which implement the following methods:

  1. preinit(): Initialize or load all the files and data structures that will be required throughout the Block.

  2. train(): When called, should train a model (if required) which will later be used to make some inferences on the users' query or extract some information out of it.

  3. save(): Save the trained model (if any) to disk.

  4. cleanup(): release resources (if any).

  5. infer(): When called, should make an inference on the user's query and return the inference.
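The five methods above can be sketched as a Python base class. This is only one possible shape (the method names follow the text; everything else is an assumption), with a trivial subclass to show the interface in action:

```python
class Block:
    """Base class for a pipeline Block (a sketch, not a prescribed API)."""

    def preinit(self):
        """Initialize or load files and data structures the Block needs."""

    def train(self, data):
        """Train a model (if required) for later use by infer()."""

    def save(self):
        """Persist the trained model (if any) to disk."""

    def cleanup(self):
        """Release resources (if any)."""

    def infer(self, query):
        """Make an inference on the user's query and return it."""
        raise NotImplementedError

class EchoBlock(Block):
    # A trivial Block used only to illustrate the interface.
    def infer(self, query):
        return {"echo": query}
```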

The Train Mode

In the Train mode, the Block receives some data and calls the train(), save(), and cleanup() methods one after the other. This makes sure that a new model has been trained and persisted for future use.

The Infer Mode

In the Infer mode, the infer() method of the Block is called and the output is collected before moving on to the next Block.

Life Cycle of a Block

Every Block has three phases in its life cycle.

  1. initializing: the Block is being set up, i.e., preinit() loads the files and data structures it needs.
  2. is_training: the Block is training a model via train().
  3. is_inferring: the Block is making inferences via infer().

Pipeline

A Pipeline is a sequence of text-processing Blocks: it receives text as input and outputs the information extracted from that text. E.g., if the input is /ovation?q="Techniker Krankenkasse offices in Kaiserslautern", the response from the pipeline should be

{
  "query": "Techniker Krankenkasse offices in Kaiserslautern",
  "intents": [
    {
      "intent": "inquiry",
      "intent_id": 1,
      "confidence": 0.85
    },
    {
      "intent": "affirm",
      "intent_id": 2,
      "confidence": 0.15
    }
    ],
    "entities": [
        {
          "start": 34,
          "end": 48,
          "value": "Kaiserslautern",
          "entity": "location",
          "confidence": 0.76
        },
        {
          "start": 0,
          "end": 22,
          "value": "Techniker Krankenkasse",
          "entity": "organisation",
          "confidence": 0.76
        }
    ]
}
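The Train and Infer modes described above can be sketched as a Pipeline that drives its Blocks. This is a minimal, hypothetical implementation; Pipeline and IntentBlock are illustrative names, and IntentBlock is a toy stand-in for a trained intent classifier:

```python
class Pipeline:
    """Runs a sequence of Blocks in Train or Infer mode (a sketch)."""

    def __init__(self, blocks):
        self.blocks = blocks
        for block in self.blocks:
            block.preinit()  # the initializing phase of each Block

    def train(self, data):
        # Train mode: train(), save(), cleanup() are called in order,
        # so a new model is trained and persisted for future use.
        for block in self.blocks:
            block.train(data)
            block.save()
            block.cleanup()

    def infer(self, query):
        # Infer mode: collect each Block's output before moving on
        # to the next Block, merging everything into one response.
        response = {"query": query}
        for block in self.blocks:
            response.update(block.infer(query))
        return response

class IntentBlock:
    # Toy intent classifier; a real Block would load a trained model.
    def preinit(self): pass
    def train(self, data): pass
    def save(self): pass
    def cleanup(self): pass
    def infer(self, query):
        intent = "inquiry" if "offices" in query else "greetings"
        return {"intents": [{"intent": intent, "intent_id": 1,
                             "confidence": 0.85}]}
```

Serialising the dict returned by Pipeline.infer() with json.dumps() produces a response in the shape shown above; adding entity or sentiment Blocks to the list extends the response without changing the Pipeline itself, which is the easy extensibility the framework asks for.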