
A Deep Conversational Framework (Ovation CI)

John Gamboa edited this page Sep 18, 2017 · 2 revisions

We want to build a deep-learning-based Conversational Intelligence framework that is actively trainable, stand-alone deployable, RESTful, and easily extensible.

Every chatbot is built upon a Conversational Intelligence framework that is composed of many components. These components provide the chatbot with functionality such as (1) classifying the intent of what the user said, (2) extracting the entities in the user's statement, (3) extracting the sentiment of the user's statement, and (4) generating a response, or sampling one from a predefined set of responses, as a reply to the user's statement (query). A Conversational Intelligence framework is not limited to these components: it can include any other component that helps make the conversation with the user more natural or extracts information from what the user said.

This framework needs to have four important features:

1. Active Learnability: Chatbot developers should be able to develop new chat scenarios and train a new model using Ovation-CI whenever they want.

2. Stand-Alone Deployability: Ovation-CI should be deployable as a server and usable stand-alone. Ideally, its components should be modular enough to leave scope for heterogeneity.

3. RESTful: Ovation-CI should be accessible through RESTful APIs. For example, you should be able to make an API call like http://<your-domain>/ovation?q="hello Ovation" and get a response like

{
  "query": "hello Ovation",
  "sentiment": {"score": 0.5, "coarse": "neutral", "cause": []},
  "intents": [
    {
      "intent": "greetings",
      "intent_id": 1,
      "confidence": 0.85
    },
    {
      "intent": "welcome",
      "intent_id": 2,
      "confidence": 0.15
    }
  ],
  "entities": [
    {
      "start": 6,
      "end": 13,
      "value": "Ovation",
      "entity": "organisation",
      "confidence": 0.76
    }
  ]
}

4. Easily Extensible: Ovation-CI should be easily extensible, leaving scope for adding new components to its processing pipeline (described below).
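As a rough illustration of the RESTful feature, a client could build the query URL and pick the highest-confidence intent out of a response shaped like the example above. The helper name and the host/port are placeholders, not part of any real Ovation-CI API:

```python
import json
from urllib.parse import urlencode

# Hypothetical client-side helper: build the /ovation query URL for a
# user utterance ("localhost:8080" is a placeholder, not a real server).
def build_ovation_url(domain, query):
    return "http://{}/ovation?{}".format(domain, urlencode({"q": query}))

url = build_ovation_url("localhost:8080", "hello Ovation")

# Parse a response shaped like the example above and pick the top intent.
response = json.loads("""
{
  "query": "hello Ovation",
  "intents": [
    {"intent": "greetings", "intent_id": 1, "confidence": 0.85},
    {"intent": "welcome", "intent_id": 2, "confidence": 0.15}
  ]
}
""")
top_intent = max(response["intents"], key=lambda i: i["confidence"])
```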

Ovation-CI Architecture

The Ovation-CI architecture that we have in mind is shown in the image below. It is meant as a reference, not as a blueprint to be reproduced exactly. We want our participants to come up with innovative ideas, extend the architecture shown below, and develop it.

[Ovation-CI Architecture diagram]

The following are the components of the above architecture:

Endpoints

An endpoint is a REST API exposed by the Ovation-CI server; all communication with the server goes through these endpoints. The above architecture includes the following endpoints.

1. /train can be called when chatbot developers want to develop their own model (e.g., an insurance enquiry bot). This endpoint receives data in the following format:

{
  "data": [
    {
      "text": "yes",
      "intent": "affirm",
      "entities": []
    },
    {
      "text": "yep",
      "intent": "affirm",
      "entities": []
    },
    {
      "text": "yeah",
      "intent": "affirm",
      "entities": []
    },
    {
      "text": "Techniker Krankenkasse offices in Kaiserslautern",
      "intent": "inquiry",
      "entities": [
        {
          "start": 34,
          "end": 48,
          "value": "Kaiserslautern",
          "entity": "location"
        },
        {
          "start": 0,
          "end": 22,
          "value": "Techniker Krankenkasse",
          "entity": "organisation"
        }
      ]
    }
  ]
}

The example above is taken from rasa-nlu-trainer, which can be used to generate data in this format. This data format is itself only an example; we expect our participants to come up with their own innovative ways of structuring the data too.

/train invokes the Pipeline, which processes the data that /train receives and trains the individual Blocks of the Pipeline.
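Note that the entity offsets in the payload above are end-exclusive (Python slice convention). Before training, a server might want to sanity-check incoming payloads; the following is a minimal sketch of such a validator (the function name and error format are our own, not part of any spec):

```python
# Minimal validator for the /train payload format shown above. It only
# checks required fields and that each entity's offsets actually point
# at the quoted value (offsets are end-exclusive, as in the example).
def validate_training_data(payload):
    problems = []
    for i, example in enumerate(payload.get("data", [])):
        text = example.get("text")
        if text is None or "intent" not in example:
            problems.append("example %d: 'text' and 'intent' are required" % i)
            continue
        for ent in example.get("entities", []):
            span = text[ent["start"]:ent["end"]]
            if span != ent["value"]:
                problems.append(
                    "example %d: span %r != value %r" % (i, span, ent["value"]))
    return problems

payload = {
    "data": [
        {"text": "yes", "intent": "affirm", "entities": []},
        {"text": "Techniker Krankenkasse offices in Kaiserslautern",
         "intent": "inquiry",
         "entities": [
             {"start": 34, "end": 48, "value": "Kaiserslautern",
              "entity": "location"},
             {"start": 0, "end": 22, "value": "Techniker Krankenkasse",
              "entity": "organisation"},
         ]},
    ]
}
errors = validate_training_data(payload)
```

A clean payload yields an empty error list; a bad offset or a missing field produces a human-readable message per example.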

2. /ovation can be called when a user's query needs to be processed. Given a query like http://<your-domain>:<port>/ovation?q="hello ovation", this endpoint should return a JSON response like

{
  "query": "hello Ovation",
  "intents": [
    {
      "intent": "greetings",
      "intent_id": 1,
      "confidence": 0.85
    },
    {
      "intent": "welcome",
      "intent_id": 2,
      "confidence": 0.15
    }
  ],
  "entities": [
    {
      "start": 6,
      "end": 13,
      "value": "Ovation",
      "entity": "organisation",
      "confidence": 0.76
    }
  ]
}

This endpoint invokes the infer() method of all the Blocks in the Pipeline.
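One simple way the endpoint could assemble its JSON response is to let each Block contribute one top-level key. This is a sketch under our own assumptions (the `name` attribute and the toy Block are illustrative, not part of the framework):

```python
# Toy Block: always reports the 'greetings' intent. A real Block would
# run a trained model here.
class EchoIntentBlock:
    name = "intents"

    def infer(self, query):
        return [{"intent": "greetings", "intent_id": 1, "confidence": 1.0}]

# Sketch of the /ovation handler: run every Block's infer() on the query
# and merge the outputs into one response dict, keyed by Block name.
def handle_ovation(query, blocks):
    response = {"query": query}
    for block in blocks:
        response[block.name] = block.infer(query)
    return response

result = handle_ovation("hello Ovation", [EchoIntentBlock()])
```

Adding a new top-level field to the response (e.g. "sentiment") then only requires adding a Block, not changing the handler.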

More details on Blocks and the Pipeline are given in the following sections.

Block

A Pipeline is built up of Blocks. Blocks can be run in two modes: (1) Train and (2) Infer. In Train mode a Block is trained on some input data; in Infer mode an inference is made on a query, or some information is extracted from the user's query. Blocks are ideally classes that implement the following methods:

  1. preinit(): Initialize or load all the files and data structures that will be required throughout the Block.

  2. train(): When called, should train a model (if required) which will later be used to make inferences on the user's query or extract information out of it.

  3. save(): Save the trained model (if any) to disk.

  4. cleanup(): release resources (if any).

  5. infer(): When called, should make an inference on the user's query and return the inference.
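One way to express this contract is an abstract base class. The method names come from the list above; the signatures, and the keyword-lookup Block used to demonstrate them, are our own assumptions:

```python
from abc import ABC, abstractmethod

# Abstract Block contract: the five methods listed above.
class Block(ABC):
    @abstractmethod
    def preinit(self):
        """Load all files and data structures the Block will need."""

    @abstractmethod
    def train(self, data):
        """Train a model (if the Block needs one)."""

    @abstractmethod
    def save(self):
        """Persist the trained model to disk."""

    @abstractmethod
    def cleanup(self):
        """Release any held resources."""

    @abstractmethod
    def infer(self, query):
        """Run inference on the user's query and return the result."""

# Minimal concrete Block: keyword lookup instead of a real model.
class KeywordIntentBlock(Block):
    def preinit(self):
        self.keywords = {}

    def train(self, data):
        for example in data:
            for word in example["text"].lower().split():
                self.keywords[word] = example["intent"]

    def save(self):
        pass  # a real Block would write the model to disk here

    def cleanup(self):
        pass  # nothing to release in this toy example

    def infer(self, query):
        for word in query.lower().split():
            if word in self.keywords:
                return self.keywords[word]
        return None

block = KeywordIntentBlock()
block.preinit()
block.train([{"text": "yes", "intent": "affirm"}])
prediction = block.infer("yes please")
```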

The Train Mode

In Train mode the Block receives some data and calls its train(), save(), and cleanup() methods one after the other. This ensures that a new model is trained and persisted for future use.

The Infer Mode

In Infer mode, the Block's infer() method is called and its output is collected before moving on to the next Block.

Life Cycle of a Block

Every Block has three phases in its life cycle.

  1. initializing: preinit()
  2. is_training: train(), save(), cleanup()
  3. is_inferring: infer()

The methods called during each phase are listed next to it. These phases are triggered at the corresponding phases of a Pipeline's life cycle (explained in the section below).

Pipeline

A Pipeline is a sequence of Blocks. Like a Block, a Pipeline has a life cycle with three phases:

  1. initializing: The preinit() of every Block is called.
  2. is_training: All the Blocks are run in their Train mode.
  3. is_inferring: All the Blocks are run in their Infer mode.
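The three phases above can be sketched as a small driver class. The method names on the Pipeline and the toy Block are our own choices; only the Block methods (preinit/train/save/cleanup/infer) come from the sections above:

```python
# Sketch of a Pipeline driving its Blocks through the three phases.
class Pipeline:
    def __init__(self, blocks):
        self.blocks = blocks

    def initialize(self):
        # Phase 1 (initializing): call preinit() on every Block.
        for block in self.blocks:
            block.preinit()

    def train(self, data):
        # Phase 2 (is_training): train, persist, and release each Block.
        for block in self.blocks:
            block.train(data)
            block.save()
            block.cleanup()

    def infer(self, query):
        # Phase 3 (is_inferring): collect each Block's output in order.
        return [block.infer(query) for block in self.blocks]

# Toy Block so the sketch is runnable end to end.
class UpperBlock:
    def preinit(self):
        self.trained = False

    def train(self, data):
        self.trained = True

    def save(self):
        pass

    def cleanup(self):
        pass

    def infer(self, query):
        return query.upper()

pipeline = Pipeline([UpperBlock()])
pipeline.initialize()
pipeline.train([])
outputs = pipeline.infer("hello")
```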

In the architecture diagram, the numbers in yellow circles show the sequence in which the methods will be called. This should help you connect the dots and understand the idea of Pipelines and Blocks better.

Dynamic Loading of Blocks and the Pipeline

Keep in mind that the Ovation-CI framework needs to be easily extensible. So, Pipelines should ideally be defined in a config file, and all the Blocks should be loaded at runtime. For example, if you decide to create a config file called config.json, you can use a list to keep the class names of all the Blocks. This list can then be used in your code to load the individual Blocks.

{
  "pipeline": ["IntentClassifier", "EntityRecognizer", "SentimentAnalyzer", "ResponseBuilder"]
}
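One way to implement this in Python is importlib: resolve each class name from a module at runtime, so new Blocks can be added by editing the config alone. The module layout ("blocks") and the class names are illustrative; here we fabricate the module in memory just so the sketch runs:

```python
import importlib
import json
import sys
import types

# Stand-in 'blocks' module so this example is self-contained; in a real
# project this would be a blocks.py file containing your Block classes.
blocks_module = types.ModuleType("blocks")

class IntentClassifier:
    pass

class EntityRecognizer:
    pass

blocks_module.IntentClassifier = IntentClassifier
blocks_module.EntityRecognizer = EntityRecognizer
sys.modules["blocks"] = blocks_module

# The config.json content, as loaded at server start-up.
config = json.loads('{"pipeline": ["IntentClassifier", "EntityRecognizer"]}')

def load_blocks(class_names, module_name="blocks"):
    # getattr() resolves each class name from the module at runtime, so
    # adding a Block means editing config.json, not the framework code.
    module = importlib.import_module(module_name)
    return [getattr(module, name)() for name in class_names]

pipeline_blocks = load_blocks(config["pipeline"])
```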

Additional Information

Note that we have not defined the method signatures, data structures, or data flow. These are subject to change, as they will depend a lot on how you wish to implement Ovation-CI. The other modules we have defined can also be changed if you have an innovative new idea for implementing them. Keep in mind that this page is only meant to convey the idea, not to force you to implement something exactly the same.