From 180fbb84473bd17d5ba4d8d112f7d8d2cba13631 Mon Sep 17 00:00:00 2001 From: Mukulika <60316606+Mukulikaa@users.noreply.github.com> Date: Thu, 10 Feb 2022 20:13:06 +0530 Subject: [PATCH] Sync the develop branch with master branch (#373) * DC-252 * Update async-audio.md * fix: remove extra ")" from the code (#328) * fix: removed extra ")" from the file (#329) * DC-262, DC-281, DC-280 * Summary Labs Tag Fix * Web-sdk-docs * CustomVocabulary update * Update master.yaml harshad-symbl-circle-master-patch * Update master.yaml * DC-268 * Update messages.md * Update merge. * Move Contributing file. * Removed ci image build files * Latest changes to master * Update getting-started.md * Update getting-started.md * Update master.yaml * Revert "Merge master" * For docs public repo - restore docs folder (#344) Co-authored-by: harshad-symbl * Revert "Merge pull request #341 from symblai/merge-master" This reverts commit d34e4917e12bb6ab5d18d7421b6f08543545b756, reversing changes made to ca9983f88355b9cb52f71e1ad266e7559a1d1369. * Createing master from fix/restore branch * Removes web sdk folder (#349) * Fixes links (#350) * Update docusaurus.config.js * Update docusaurus.config.js * Update docusaurus.config.js * Delete hotjar.js * Delete moesif.js * Delete munchkin.js * Add packages. * Test config. * Trigger build. * Trigger build. * DC-292 + DC-291 + DC-186 + DC-197 + DC-177 (#351) * Adds trackers UI changes (#354) * Tracker UI (#355) * Adds trackers UI changes * Adds image for trackers ui * DC-198, DC-293 (#356) * 02 02 22 (#357) * DC-198, DC-293 * DC-293 * DC-287 + DC-290 (#358) * Adds more changes to Trackers UI (#359) * DC-287 + DC-290 * More changes to Trackers UI * DC-297, DC-59 (#362) * Sample Project Update (#363) * change for exp branch * merge-docs-v1 added for build to workflow * changes for dev and prod dispatch events * Testing changes * Sample Project Update Co-authored-by: harshad-symbl Co-authored-by: Adam Voliva Co-authored-by: amritesh-singh <88492460+amritesh-singh@users.noreply.github.com> * DC-296 (#368) * DC-294 (#361) * DC-294 Adds Offset timestamp in Messages API * Updates sample response * Adds feedback + minor changes * Updates description for variables Co-authored-by: amritesh-singh <88492460+amritesh-singh@users.noreply.github.com> Co-authored-by: Pema <81958801+pema-s@users.noreply.github.com> Co-authored-by: Pankaj Singh <64253632+PankajSingh1010@users.noreply.github.com> Co-authored-by: harshad-symbl <86946393+harshad-symbl@users.noreply.github.com> Co-authored-by: avoliva Co-authored-by: harshad-symbl Co-authored-by: Marcelo Jabali --- .../api-reference/messages.md | 114 +++++++++++------- docs/integrations/agora-sdk-plugin.md | 10 +- docs/javascript-sdk/reference/reference.md | 16 +-- .../get-realtime-transcription-js-sdk.md | 2 +- .../tutorials/push-audio-get-realtime-data.md | 2 +- docs/python-sdk/python-sdk-reference.md | 2 +- .../connect-to-zoom-with-telephony-api.md | 2 +- .../get-live-transcription-telephony-api.md | 4 +- 8 files changed, 93 insertions(+), 59 deletions(-) diff --git a/docs/conversation-api/api-reference/messages.md b/docs/conversation-api/api-reference/messages.md index 104b4e98..46fbc10b 100644 --- a/docs/conversation-api/api-reference/messages.md +++ b/docs/conversation-api/api-reference/messages.md @@ -10,22 +10,15 @@ import TabItem from '@theme/TabItem'; --- -The Messages API returns a list of all the messages in a conversation. 
You can use this for providing **Speech to Text data (also known as transcription sometimes)** for video conference, meeting or telephone call. +The Messages API returns a list of all the messages in a conversation. You can use this for getting **Speech to Text** data (also known as transcription) for video conference, meeting or a telephone call. -Here message refer to a continuous sentence spoken by a speaker. +Here, the message refers to a continuous sentence by a speaker. -### Word-level Confidence Score LABS +#### Sentiment Analysis in messages BETA -This API provides word-level confidence score that represents the confidence level of individual words within the message or transcript. The confidence score shows the relevancy of the word in the transcript which means higher the word-level confidence score, the more relevant it is to the message. +You can enable sentiment analysis over each message being spoken in the conversation. -When you pass `verbose=true`, the word-level confidence score is by default returned in the response body. - - -### Sentiment Analysis in messages BETA - -Here you can enable sentiment analysis over each message which is being spoken in the conversation. - -All you need to do is pass `sentiment=true` as a query parameter. [Read more about it](/docs/concepts/sentiment-analysis). +To do this, pass the query parameter `sentiment=true`. Read more about Sentiment Analysis [here](/docs/concepts/sentiment-analysis). ### HTTP Request @@ -33,8 +26,6 @@ All you need to do is pass `sentiment=true` as a query parameter. [Read more abo ### Example API Call - - :::info Before using the Conversation API you must get the authentication token (`AUTH_TOKEN`) from [our authentication process](/docs/developer-tools/authentication). ::: @@ -146,11 +137,13 @@ Parameter | Required | Value |Description | }, "startTime": "2020-07-10T11:16:21.024Z", "endTime": "2020-07-10T11:16:26.724Z", + "timeOffset": 5.9, + "duration": 1, "conversationId": "6749556955938816", "phrases": [ { "type": "action_phrase", - "text": "$69.99 per month" + "text": "$69.99 per month", } ], "sentiment": { @@ -164,49 +157,69 @@ Parameter | Required | Value |Description | "word": "Best", "startTime": "2020-08-18T11:10:14.536Z", "endTime": "2020-08-18T11:10:15.536Z", - "score": 0.91 + "score": 0.91, + "timeOffset": 5.9, + "duration": 0.2 + }, { "word": "package", "startTime": "2020-08-18T11:10:16.536Z", "endTime": "2020-08-18T11:10:17.536Z", - "score": 0.80 + "score": 0.80, + "timeOffset": 6.1, + "duration": 0.1 + }, { "word": "for", "startTime": "2020-08-18T11:10:18.536Z", "endTime": "2020-08-18T11:10:19.536Z", - "score": 0.79 + "score": 0.68, + "timeOffset": 6.2, + "duration": 0.1 + }, { "word": "you", "startTime": "2020-08-18T11:10:20.536Z", "endTime": "2020-08-18T11:10:22.536Z", - "score": 0.85 + "score": 0.68, + "timeOffset": 6.3, + "duration": 0.3 + }, { "word": "is", "startTime": "2020-08-18T11:10:22.536Z", "endTime": "2020-08-18T11:10:25.536Z", - "score": 0.89 + "score": 0.68, + "timeOffset": 6.6, + "duration": 0.3 }, { "word": "$69.99", "startTime": "2020-08-18T11:10:25.536Z", "endTime": "2020-08-18T11:10:27.536Z", - "score": 0.86 + "score": 0.68, + "timeOffset": 6.67, + "duration": 0.3 }, { "word": "per", "startTime": "2020-08-18T11:10:27.536Z", "endTime": "2020-08-18T11:10:29.536Z", - "score": 0.82 + "score": 0.67, + "timeOffset": 6.6, + "duration": 0.4 }, { "word": "month.", "startTime": "2020-08-18T11:10:30.536Z", "endTime": "2020-08-18T11:10:32.536Z", - "score": 0.90 + "score": 0.67, + "timeOffset": 
6.8, + "duration": 0.5 }] }, { @@ -218,11 +231,13 @@ Parameter | Required | Value |Description | } "startTime": "2020-08-18T11:11:14.536Z", "endTime": "2020-08-18T11:11:18.536Z", + "timeOffset": 15.27, + "duration": 1.23, "conversationId": "5139780136337408", "phrases": [], "sentiment": { "polarity": { - "score": 0.2 + "score": 0.2, }, "suggested": "neutral" }, @@ -230,32 +245,43 @@ Parameter | Required | Value |Description | { "word": "Okay,", "startTime": "2020-08-18T11:11:14.536Z", - "endTime": "2020-08-18T11:11:14.936Z" - "score": 0.91 + "endTime": "2020-08-18T11:11:14.936Z", + "score": 0.91, + "timeOffset": 15.25, + "duration": 0.59 + }, { "word": "Where", "startTime": "2020-08-18T11:11:14.936Z", - "endTime": "2020-08-18T11:11:15.436Z" - "score": 0.91 + "endTime": "2020-08-18T11:11:15.436Z", + "score": 0.91, + "timeOffset": 15.25, + "duration": 0.59 }, { "word": "is", "startTime": "2020-08-18T11:11:16.236Z", - "endTime": "2020-08-18T11:11:16.536Z" - "score": 0.88 + "endTime": "2020-08-18T11:11:16.536Z", + "score": 0.88, + "timeOffset": 15.25, + "duration": 0.58 }, { "word": "the", "startTime": "2020-08-18T11:11:16.536Z", - "endTime": "2020-08-18T11:11:16.936Z" - "score": 0.85 + "endTime": "2020-08-18T11:11:16.936Z", + "score": 0.85, + "timeOffset": 15.25, + "duration": 0.58 }, { "word": "file?", "startTime": "2020-08-18T11:11:16.936Z", - "endTime": "2020-08-18T11:11:17.236Z" - "score": 0.89 + "endTime": "2020-08-18T11:11:17.236Z", + "score": 0.89, + "timeOffset": 15.25, + "duration": 0.59 } ] } @@ -265,12 +291,14 @@ Parameter | Required | Value |Description | Field | Description ---------- | ------- | -```id``` | Unique message identifier. -```text``` | Message text. -```from``` | User object with name and email. -```startTime``` | DateTime value. -```endTime``` | DateTime value. -```conversationId``` | Unique conversation identifier. -```words``` | Words object with properties `word`, `startTime`, `endTime` and `score`. The `score` represents the word level confidence score. The value that is accepted for the data type is float. -```phrases``` | It shows the most important action phrases in each sentence. It's enabled when you pass `detectPhrases=true` during submiting the request in Async and Websocket API. -```sentiment```| Shows the sentiment polarity(intensity of negativity or positivity of a sentence) and suggested sentiment type (positive, negative and neutral). \ No newline at end of file +```id``` | Unique message identifier.| +```text``` | Message text.| +```from``` | User object with name and email.| +```startTime``` | DateTime value.| +```endTime``` | DateTime value.| +```timeOffset``` | Returned as a float value measuring in seconds, up to 2 decimal points. It indicates the seconds elapsed since the start of the conversation. It is returned at the sentence level as well as the word level.
`timeOffset = startTime (of current sentence/word) - startTime (of the very first sentence/word in the conversation)`.
This variable is currently in Labs.| +```duration``` | Returned as a float value measuring in seconds, up to 2 decimal points. It indicates how long the sentence or word was spoken. It is returned at the sentence level as well as the word level.
`duration = endTime (of current sentence/word) - startTime (of current sentence/word)`.
This variable is currently in Labs.| +```conversationId``` | Unique conversation identifier. Read more about the Conversation ID [here](/docs/api-reference/getting-conversation-intelligence#what-is-a-conversation-id). | +```words``` | Words object with properties `word`, `startTime`, `endTime` and `score`. The `score` is the word-level confidence score that represents the confidence level of individual words within the transcript; the higher the score, the more relevant the word is to the transcript message. When you pass `verbose=true`, the word-level confidence score is returned by default.
Note that a processed `text` conversation will not return any confidence score since it is already in the transcript form. The `words` object also returns the `timeOffset` and `duration` variables. The word-level confidence score is currently in Labs. | +```phrases``` | It shows the most important action phrases in each sentence. It's enabled when you pass `detectPhrases=true` while submitting the request in the Async and WebSocket APIs.| +```sentiment```| Shows the sentiment polarity (intensity of negativity or positivity of a sentence) and suggested sentiment type (positive, negative and neutral). | diff --git a/docs/integrations/agora-sdk-plugin.md b/docs/integrations/agora-sdk-plugin.md index 6bbb2806..ffa65ca2 100644 --- a/docs/integrations/agora-sdk-plugin.md +++ b/docs/integrations/agora-sdk-plugin.md @@ -782,7 +782,13 @@ public class MainActivity extends AppCompatActivity implements io.agora.rtc2.IMe } } ``` -### API Reference + +## Sample Project +--- +The following sample project provides an Android mobile app that uses the Agora Video SDK and the Symbl.ai Extension, and can be used as a reference. Follow the instructions in the README file for setting up, configuring and running the sample mobile app on your own device. +[Sample Android App Project](https://github.com/symblai/symbl-agora-Android-app). + +## API Reference --- -Find comprehensive information about our REST APIs in the [API Reference](https://docs.symbl.ai/docs/api-reference/getting-started) section. +Find comprehensive information about our REST APIs in the [API Reference](/docs/api-reference/getting-started) section. diff --git a/docs/javascript-sdk/reference/reference.md b/docs/javascript-sdk/reference/reference.md index 6b4a1016..4dd53ba2 100644 --- a/docs/javascript-sdk/reference/reference.md +++ b/docs/javascript-sdk/reference/reference.md @@ -54,7 +54,7 @@ Connects to the [Telephony API](/docs/telephony/introduction) endpoint using the Name | Description -----|------------ -`config` | Options specified for the [Telephony API Configuration Object](http://docs.symbl.ai/docs/telephony-api/api-reference#request-parameters). +`config` | Options specified for the [Telephony API Configuration Object](/docs/telephony-api/api-reference#request-parameters). #### Returns @@ -124,13 +124,13 @@ sdk.stopEndpoint({ ```startRealtimeRequest ( options)``` -Connects to a [Streaming API](/docs/streamingapi/overview/introduction) Web Socket endpoint using the provided configuration options. +Connects to a [Streaming API](/docs/streamingapi/introduction) Web Socket endpoint using the provided configuration options. #### Parameters Name | Description -----|------------ -`options` | Options specified for the [Streaming API Configuration Object](https://docs.symbl.ai/docs/streaming-api/api-reference#request-parameters). +`options` | Options specified for the [Streaming API Configuration Object](/docs/streaming-api/api-reference#request-parameters). #### Returns A Promise which is resolved once real-time request has been established. #### Event Handlers -View the [Event Handlers](##event-handlers-1) section below to view which event handlers can be passed to the real-time connection. +View the [Event Handlers](#event-handlers-1) section below to view which event handlers can be passed to the real-time connection.
#### Code Example @@ -174,7 +174,7 @@ Subscribes to an existing connection which will fire a callback for every event Name | Description -----|------------ -`connectionId` | You receive the connection ID after connecting with [startRealtimeRequest](#startRealtimeRequest) or [startEndpoint](#startendpoint). +`connectionId` | You receive the connection ID after connecting with [startRealtimeRequest](#startrealtimerequest) or [startEndpoint](#startendpoint). `callback` | A callback method which will be called on for every new event. #### Code Example @@ -232,7 +232,7 @@ SpeakerEvent is a type of event Symbl can accept that provides information about Name | Description -----|------------ -`connectionId` | You receive the connection ID after connecting with [startRealtimeRequest](#startRealtimeRequest) or [startEndpoint](#startendpoint). +`connectionId` | You receive the connection ID after connecting with [startRealtimeRequest](#startrealtimerequest) or [startEndpoint](#startendpoint). `event` | An event (such as a [SpeakerEvent](/docs/javascript-sdk/code-snippets/active-speaker-events/#speaker-event)) which is the event to be pushed onto the connection. `callback` | A callback method which will be called on for every new event. @@ -262,7 +262,7 @@ sdk.pushEventOnConnection( ## Event Handlers -When connecting using [`startRealtimeRequest`](#startRealtimeRequest), you can pass various handlers in the configuration options which be called if the specific event attached to the handler is fired. +When connecting using [`startRealtimeRequest`](#startrealtimerequest), you can pass various handlers in the configuration options which be called if the specific event attached to the handler is fired. #### Code Example @@ -484,7 +484,7 @@ This callback provides you with any of the detected topics in real-time as they ### onTrackerResponse -This callback provides you with any of the detected trackers in real-time as they are detected. As with the [`onMessageCallback`](#onmessagecallback) this would also return every tracker in case of multiple streams. +This callback provides you with any of the detected trackers in real-time as they are detected. As with the [`onMessageCallback`](#onMessageCallback) this would also return every tracker in case of multiple streams. #### onTrackerResponse JSON Response Example diff --git a/docs/javascript-sdk/tutorials/get-realtime-transcription-js-sdk.md b/docs/javascript-sdk/tutorials/get-realtime-transcription-js-sdk.md index e5b5b8a3..f51aeaa4 100644 --- a/docs/javascript-sdk/tutorials/get-realtime-transcription-js-sdk.md +++ b/docs/javascript-sdk/tutorials/get-realtime-transcription-js-sdk.md @@ -246,7 +246,7 @@ sdk.pushEventOnConnection(connectionId, speakerEvent.toJSON(), (err) => { }); ``` -This example just touches the surface of what you can do with our Streaming API. If you would like to learn more about it you can visit the [Streaming API documentation](/docs/streamingapi/overview/introduction). +This example just touches the surface of what you can do with our Streaming API. If you would like to learn more about it you can visit the [Streaming API documentation](/docs/streamingapi/introduction). 
## Full Code Example diff --git a/docs/javascript-sdk/tutorials/push-audio-get-realtime-data.md b/docs/javascript-sdk/tutorials/push-audio-get-realtime-data.md index 5934842f..34ab878e 100644 --- a/docs/javascript-sdk/tutorials/push-audio-get-realtime-data.md +++ b/docs/javascript-sdk/tutorials/push-audio-get-realtime-data.md @@ -36,7 +36,7 @@ In this guide you will learn the following: * [Handlers (handlers)](#handlers-handlers) * [Full Configuration Object](#full-configuration-object) * [Handle the audio stream](#handle-the-audio-stream) -* [Process speech using device's microphone](#process-speech-using-devices-microphone) +* [Process speech using device's microphone](#process-speech-using-the-devices-microphone) * [Test](#test) * [Grabbing the Conversation ID](#grabbing-the-conversation-id) * [Full Code Sample](#full-code-sample) diff --git a/docs/python-sdk/python-sdk-reference.md b/docs/python-sdk/python-sdk-reference.md index 989f49b4..2b5232e2 100644 --- a/docs/python-sdk/python-sdk-reference.md +++ b/docs/python-sdk/python-sdk-reference.md @@ -217,7 +217,7 @@ To see an example of the usage of `put_members` functionality, go to out [GitHub ### conversation_object.put_speakers_events(parameters={}) -`parameters`:- (mandatory) takes a dictionary which contains `speakerEvents`. For list of parameters accepted, see [Speaker Events Object](https://docs.symbl.ai/docs/conversation-api/speaker-events/#speaker-event-object) page. +`parameters`:- (mandatory) takes a dictionary which contains `speakerEvents`. For list of parameters accepted, see [Speaker Events Object](/docs/conversation-api/speaker-events/#speaker-event-object) page. This API provides the functionality to update Speakers in a conversation after it has been processed. diff --git a/docs/telephony/tutorials/connect-to-zoom-with-telephony-api.md b/docs/telephony/tutorials/connect-to-zoom-with-telephony-api.md index 926a2e23..d3a50639 100644 --- a/docs/telephony/tutorials/connect-to-zoom-with-telephony-api.md +++ b/docs/telephony/tutorials/connect-to-zoom-with-telephony-api.md @@ -13,7 +13,7 @@ import TabItem from '@theme/TabItem'; This guide uses a **PSTN** connection to connect to Zoom. **PSTN** audio quality maxes out to 8KHz. You can also use a **[SIP-based connection](/docs/concepts/pstn-and-sip#sip-session-initiation-protocol)**, which captures audio at 16KHz and above. ::: -[Symbl’s Telephony API](https://docs.symbl.ai/?shell#telephony-api) allows you to connect to any conference call system using PSTN or SIP networks. In this guide, we will walk you through how to get a live transcription and real-time AI insights, such as [follow-ups](/docs/concepts/follow-ups), [action items](/docs/concepts/action-items), [topics](/docs/concepts/topics) and [questions](/docs/conversation-api/questions), of a Zoom call using a PSTN connection. This application uses the Symbl Javascript SDK which requires the `@symblai/symbl-js` node package. You must have an active Zoom call (no one has to be in it but yourself) and whatever you speak in the Zoom call will be taken by our API and processed for conversational insights. +[Symbl’s Telephony API](/docs/telephony/introduction) allows you to connect to any conference call system using PSTN or SIP networks. 
In this guide, we will walk you through how to get a live transcription and real-time AI insights, such as [follow-ups](/docs/concepts/follow-ups), [action items](/docs/concepts/action-items), [topics](/docs/concepts/topics) and [questions](/docs/conversation-api/questions), of a Zoom call using a PSTN connection. This application uses the Symbl Javascript SDK which requires the `@symblai/symbl-js` node package. You must have an active Zoom call (no one has to be in it but yourself) and whatever you speak in the Zoom call will be taken by our API and processed for conversational insights. :::info You must make sure your Zoom call allows phone dial-in for this example to work correctly. diff --git a/docs/telephony/tutorials/get-live-transcription-telephony-api.md b/docs/telephony/tutorials/get-live-transcription-telephony-api.md index 5f9e42d2..3f23a631 100644 --- a/docs/telephony/tutorials/get-live-transcription-telephony-api.md +++ b/docs/telephony/tutorials/get-live-transcription-telephony-api.md @@ -13,7 +13,7 @@ Get a live transcription in your Node.js application by making a call to a valid This application uses the Symbl Javascript SDK which requires the `symbl-node` node package. -Making a phone call is also the quickest way to test [Symbl’s Telephony API](https://docs.symbl.ai/?shell#telephony-api). It can make an outbound call to a phone number using a traditional public switched telephony network [(PSTN)](https://en.wikipedia.org/wiki/Public_switched_telephone_network), any [SIP trunks](https://en.wikipedia.org/wiki/SIP_trunking), or SIP endpoints that can be accessed over the internet using a SIP URI. +Making a phone call is also the quickest way to test [Symbl’s Telephony API](/docs/telephony/introduction). It can make an outbound call to a phone number using a traditional public switched telephony network [(PSTN)](https://en.wikipedia.org/wiki/Public_switched_telephone_network), any [SIP trunks](https://en.wikipedia.org/wiki/SIP_trunking), or SIP endpoints that can be accessed over the internet using a SIP URI. ### Contents @@ -182,7 +182,7 @@ setTimeout(async () => { }, 60000); // Change the 60000 with higher value if you want this to continue for more time. ``` -The `stopEndpoint` will return an updated `connection` object which will have the `conversationId` in the response. You can use `conversationId` to fetch the results even after the call using the [Conversation API](https://docs.symbl.ai/#conversation-api). +The `stopEndpoint` will return an updated `connection` object which will have the `conversationId` in the response. You can use `conversationId` to fetch the results even after the call using the [Conversation API](/docs/conversation-api/introduction). ## Code Example
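Before the full example, here is a minimal sketch of how the `conversationId` returned by `stopEndpoint` could be used to pull the transcript through the Conversation API Messages endpoint documented above. It assumes Node.js 18+ (for the global `fetch`), and the `authToken` and `conversationId` values are placeholders you would replace with your own.

```js
// Minimal sketch: fetch the messages (transcript) for a finished call.
// Assumes a valid auth token and the conversationId returned by stopEndpoint.
const fetchMessages = async (conversationId, authToken) => {
  const url = `https://api.symbl.ai/v1/conversations/${conversationId}/messages?sentiment=true&verbose=true`;
  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${authToken}` }
  });
  if (!response.ok) {
    throw new Error(`Messages request failed with status ${response.status}`);
  }
  const { messages } = await response.json();
  // Each message includes the spoken text, speaker and timestamps and,
  // with verbose=true, word-level scores (plus timeOffset/duration in Labs).
  messages.forEach((m) => console.log(`${m.from?.name ?? 'Unknown'}: ${m.text}`));
  return messages;
};
```

Passing `sentiment=true` and `verbose=true` mirrors the query parameters described in the Messages API section above; drop them if you only need the plain transcript.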