OpenChat: An easy-to-use open-source chatting framework based on neural networks

Run on Ainize

   ____   ____   ______ _   __   ______ __  __ ___   ______
  / __ \ / __ \ / ____// | / /  / ____// / / //   | /_  __/
 / / / // /_/ // __/  /  |/ /  / /    / /_/ // /| |  / /   
/ /_/ // ____// /___ / /|  /  / /___ / __  // ___ | / /    
\____//_/    /_____//_/ |_/   \____//_/ /_//_/  |_|/_/     
  • OpenChat is an easy-to-use open-source chatting framework.
  • OpenChat supports 40+ dialogue models based on neural networks.
  • You can talk with an AI with only one line of code.



Input & Output Information

Post parameter

bot_id: The name of the bot.

text: The message to send to the bot.

topic: The topic of the conversation.

agent: The model type of the bot you want to chat with.
      ["blender.small", "blender.medium", "dialogpt.small",
       "dialogpt.medium", "gptneo.small", "gptneo.large"]

Output format

{"output": [AI's reply - string]}

Try it out

With CLI

  • Input

※ "S2lt" is the user ID used when chatting.

※ You can change "S2lt" in the URL to any name you want.

bot_id: Mr.Bot

text: Hey, What are you going to do?

topic: weekend

agent: DIALOGPT.MEDIUM

curl -X POST "https://main-openchat-fpem123.endpoint.ainize.ai/send/S2lt" -H "accept: application/json" -H "Content-Type: multipart/form-data" -F "bot_id=Mr.Bot" -F "text=Hey, What are you going to do?" -F "topic=weekend" -F "agent=DIALOGPT.MEDIUM"
  • Output

    { "output": "I don't know." }
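The same request can be built from Python with the requests library. This is a sketch of the curl example above; the URL and field values are copied from it, and "S2lt" remains an arbitrary user ID you can rename:

```python
import requests

# Same endpoint as the curl example; "S2lt" is an arbitrary user ID.
url = "https://main-openchat-fpem123.endpoint.ainize.ai/send/S2lt"

fields = {
    "bot_id": "Mr.Bot",
    "text": "Hey, What are you going to do?",
    "topic": "weekend",
    "agent": "DIALOGPT.MEDIUM",
}

# Encode each field as a multipart/form-data part, matching the curl -F flags.
# A (None, value) tuple sends a plain form field rather than a file upload.
request = requests.Request(
    "POST", url, files={key: (None, value) for key, value in fields.items()}
).prepare()

print(request.method, request.headers["Content-Type"])
# To actually send it: requests.Session().send(request).json()
```

The request is only prepared here, not sent, so you can inspect the multipart body before hitting the endpoint.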

With Demo

Demo page: End-point

With Swagger

API page: In Ainize



Installation

pip install openchat



Supported Models

  • OpenChat supports 40+ dialogue models based on neural networks.
  • Use these names as the parameter model='name' when you create OpenChat.
  • Click here if you want to check supported models.
    • gptneo.small
    • gptneo.medium
    • gptneo.large
    • gptneo.xlarge
    • blender.small
    • blender.medium
    • blender.large
    • blender.xlarge
    • blender.xxlarge
    • dialogpt.small
    • dialogpt.medium
    • dialogpt.large
    • dodecathlon.all_tasks_mt
    • dodecathlon.convai2
    • dodecathlon.wizard_of_wikipedia
    • dodecathlon.empathetic_dialogues
    • dodecathlon.eli5
    • dodecathlon.reddit
    • dodecathlon.twitter
    • dodecathlon.ubuntu
    • dodecathlon.image_chat
    • dodecathlon.cornell_movie
    • dodecathlon.light_dialog
    • dodecathlon.daily_dialog
    • reddit.xlarge
    • reddit.xxlarge
    • safety.offensive
    • safety.sensitive
    • unlikelihood.wizard_of_wikipedia.context_and_label
    • unlikelihood.wizard_of_wikipedia.context
    • unlikelihood.wizard_of_wikipedia.label
    • unlikelihood.convai2.context_and_label
    • unlikelihood.convai2.context
    • unlikelihood.convai2.label
    • unlikelihood.convai2.vocab.alpha.1e-0
    • unlikelihood.convai2.vocab.alpha.1e-1
    • unlikelihood.convai2.vocab.alpha.1e-2
    • unlikelihood.convai2.vocab.alpha.1e-3
    • unlikelihood.eli5.context_and_label
    • unlikelihood.eli5.context
    • unlikelihood.eli5.label
    • wizard_of_wikipedia.end2end_generator


Usage

  • Just import and create an object. That's all.
>>> from openchat import OpenChat
>>> OpenChat(model="blender.medium", device="cpu")



  • Set the parameter device='cuda' if you want to use GPU acceleration.
>>> from openchat import OpenChat
>>> OpenChat(model="blender.medium", device="cuda")



  • Set **kwargs if you want to change decoding options.
    • method (str): one of ["greedy", "beam", "top_k", "nucleus"]
    • num_beams (int): beam size for beam search
    • top_k (int): K value for top-k sampling
    • top_p (float): P value for nucleus sampling
    • no_repeat_ngram_size (int): n-gram blocking size for removing repetition during beam search
    • length_penalty (float): length penalty (1.0 = none, higher = longer, lower = shorter)
  • Decoding options must be passed as keyword arguments, not positional arguments.
>>> from openchat import OpenChat
>>> OpenChat(
...    model="blender.medium", 
...    device="cpu", 
...    method="top_k",
...    top_k=20,
...    no_repeat_ngram_size=3,
...    length_penalty=0.6,                            
... )
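For intuition about the top_k and top_p options, here is a toy sketch of how top-k and nucleus (top-p) filtering restrict a next-token distribution before sampling. This is an illustration only, not OpenChat's actual implementation (decoding is handled by the underlying model libraries):

```python
import numpy as np

def top_k_filter(probs, k):
    # Keep only the k highest-probability tokens, then renormalize.
    keep = np.argsort(probs)[::-1][:k]
    out = np.zeros_like(probs)
    out[keep] = probs[keep]
    return out / out.sum()

def nucleus_filter(probs, p):
    # Keep the smallest set of top tokens whose cumulative probability >= p,
    # then renormalize (nucleus / top-p sampling).
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1
    out = np.zeros_like(probs)
    keep = order[:cutoff]
    out[keep] = probs[keep]
    return out / out.sum()

probs = np.array([0.5, 0.3, 0.1, 0.1])
print(top_k_filter(probs, 2))      # [0.625 0.375 0.    0.   ]
print(nucleus_filter(probs, 0.8))  # [0.625 0.375 0.    0.   ]
```

With top_k the number of candidate tokens is fixed, while with top_p it adapts to how peaked the distribution is; that is why the two options suit different models and prompts.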
  • For the safety.offensive model, the parameter method must be one of ["both", "string-match", "bert"].
>>> from openchat import OpenChat
>>> OpenChat(
...     model="safety.offensive",
...     device="cpu",
...     method="both"  # one of: both, string-match, bert
... )



Special Tasks

1. GPT-Neo

  • The GPT-Neo model was released in the EleutherAI/gpt-neo repository.
  • It is a GPT-2-like causal language model trained on the Pile dataset.
  • OpenChat supports prompt-based dialogues via GPT-Neo.
  • The models below provide custom prompt settings. (* means all variants)
    • gptneo.*

2. ConvAI2

  • ConvAI2 is one of the most famous conversational AI challenges, centered on personas.
  • The models below provide custom persona settings. (* means all variants)
    • blender.*
    • dodecathlon.convai2
    • unlikelihood.convai2.*


3. Wizard of Wikipedia

  • Wizard of Wikipedia is one of the most famous knowledge-grounded dialogue datasets.
  • The models below provide custom topic settings. (* means all variants)
    • wizard_of_wikipedia.end2end_generator
    • dodecathlon.wizard_of_wikipedia
    • unlikelihood.wizard_of_wikipedia.*

4. Safety Agents

  • OpenChat provides dialogue safety models to help you design conversation models.
  • The models below provide dialogue safety features.
    • safety.offensive: offensive language classification
    • safety.sensitive: sensitive topic classification

Update plan

  • OpenChat is not a finished library, but a growing one.
  • I plan to add the following features in the near future.

Plan by version:

  • v1.0: Support huggingface transformers for DialoGPT and Blender.
  • v1.1: Support parlai for various dialogue generation tasks.
  • v1.2: Support pytorch-lightning for fine-tuning using GPU & TPU.
  • v1.3: Support deepspeed for huge model inference like Reddit 9.4B.
  • v1.4: Add Retrieval-based dialogue models.
  • v1.5: Add non-parlai models (e.g. Baidu PLATO-2, ...)
  • v1.6: Easy deployment to messengers (e.g. Facebook, Whatsapp, ...)
  • v1.7: Support database (e.g. PostgreSQL, MySQL, ...)

License

Copyright 2021 Hyunwoong Ko.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.