Sync the develop branch with master branch (#373)
* DC-252

* Update async-audio.md

* fix: remove extra ")" from the code (#328)

* fix: removed extra ")" from the file (#329)

* DC-262, DC-281, DC-280

* Summary Labs Tag Fix

* Web-sdk-docs

* CustomVocabulary update

* Update master.yaml

harshad-symbl-circle-master-patch

* Update master.yaml

* DC-268

* Update messages.md

* Update merge.

* Move Contributing file.

* Removed ci image build files

* Latest changes to master

* Update getting-started.md

* Update getting-started.md

* Update master.yaml

* Revert "Merge master"

* For docs public repo - restore docs folder (#344)

Co-authored-by: harshad-symbl <[email protected]>

* Revert "Merge pull request #341 from symblai/merge-master"

This reverts commit d34e491, reversing
changes made to ca9983f.

* Creating master from fix/restore branch

* Removes web sdk folder (#349)

* Fixes links (#350)

* Update docusaurus.config.js

* Update docusaurus.config.js

* Update docusaurus.config.js

* Delete hotjar.js

* Delete moesif.js

* Delete munchkin.js

* Add packages.

* Test config.

* Trigger build.

* Trigger build.

* DC-292 + DC-291 + DC-186 + DC-197 + DC-177 (#351)

* Adds trackers UI changes (#354)

* Tracker UI (#355)

* Adds trackers UI changes

* Adds image for trackers ui

* DC-198, DC-293 (#356)

* 02 02 22 (#357)

* DC-198, DC-293

* DC-293

* DC-287 + DC-290 (#358)

* Adds more changes to Trackers UI (#359)

* DC-287 + DC-290

* More changes to Trackers UI

* DC-297, DC-59 (#362)

* Sample Project Update (#363)

* change for exp branch

* merge-docs-v1 added for build to workflow

* changes for dev and prod dispatch events

* Testing changes

* Sample Project Update

Co-authored-by: harshad-symbl <[email protected]>
Co-authored-by: Adam Voliva <[email protected]>
Co-authored-by: amritesh-singh <[email protected]>

* DC-296 (#368)

* DC-294 (#361)

* DC-294

Adds Offset timestamp in Messages API

* Updates sample response

* Adds feedback + minor changes

* Updates description for variables

Co-authored-by: amritesh-singh <[email protected]>
Co-authored-by: Pema <[email protected]>
Co-authored-by: Pankaj Singh <[email protected]>
Co-authored-by: harshad-symbl <[email protected]>
Co-authored-by: avoliva <[email protected]>
Co-authored-by: harshad-symbl <[email protected]>
Co-authored-by: Marcelo Jabali <[email protected]>
8 people authored Feb 10, 2022
1 parent 2a17550 commit 180fbb8
Showing 8 changed files with 93 additions and 59 deletions.
114 changes: 71 additions & 43 deletions docs/conversation-api/api-reference/messages.md
@@ -10,31 +10,22 @@ import TabItem from '@theme/TabItem';

---

The Messages API returns a list of all the messages in a conversation. You can use this for providing **Speech to Text data (also known as transcription sometimes)** for video conference, meeting or telephone call.
The Messages API returns a list of all the messages in a conversation. You can use this to get **Speech to Text** data (also known as transcription) for a video conference, meeting, or telephone call.

Here message refer to a continuous sentence spoken by a speaker.
Here, a message refers to a continuous sentence spoken by a speaker.

### Word-level Confidence Score <font color="orange"> LABS</font>
#### Sentiment Analysis in messages <font color="orange"> BETA</font>

This API provides word-level confidence score that represents the confidence level of individual words within the message or transcript. The confidence score shows the relevancy of the word in the transcript which means higher the word-level confidence score, the more relevant it is to the message.
You can enable sentiment analysis over each message being spoken in the conversation.

When you pass `verbose=true`, the word-level confidence score is by default returned in the response body.


### Sentiment Analysis in messages <font color="orange"> BETA</font>

Here you can enable sentiment analysis over each message which is being spoken in the conversation.

All you need to do is pass `sentiment=true` as a query parameter. [Read more about it](/docs/concepts/sentiment-analysis).
To do this, pass the query parameter `sentiment=true`. Read more about Sentiment Analysis [here](/docs/concepts/sentiment-analysis).

### HTTP Request

`GET https://api.symbl.ai/v1/conversations/{conversationId}/messages`

### Example API Call



:::info
Before using the Conversation API you must get the authentication token (`AUTH_TOKEN`) from [our authentication process](/docs/developer-tools/authentication).
:::
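As an illustrative sketch (not part of the original page), the request above can be issued from Node.js. The conversation ID and token below are placeholders you must replace with your own values:

```javascript
// Sketch: build the Messages API URL with the optional sentiment and
// verbose query parameters described above.
// AUTH_TOKEN and CONVERSATION_ID are placeholders -- substitute your own values.
const CONVERSATION_ID = '6749556955938816';
const AUTH_TOKEN = process.env.AUTH_TOKEN || '<your-auth-token>';

function buildMessagesUrl(conversationId, { sentiment = false, verbose = false } = {}) {
  const url = new URL(`https://api.symbl.ai/v1/conversations/${conversationId}/messages`);
  if (sentiment) url.searchParams.set('sentiment', 'true');
  if (verbose) url.searchParams.set('verbose', 'true');
  return url.toString();
}

// Usage (requires Node 18+ for the built-in fetch):
// const res = await fetch(buildMessagesUrl(CONVERSATION_ID, { sentiment: true, verbose: true }), {
//   headers: { Authorization: `Bearer ${AUTH_TOKEN}` },
// });
// const { messages } = await res.json();
console.log(buildMessagesUrl(CONVERSATION_ID, { sentiment: true, verbose: true }));
```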
@@ -146,11 +137,13 @@ Parameter | Required | Value |Description |
},
"startTime": "2020-07-10T11:16:21.024Z",
"endTime": "2020-07-10T11:16:26.724Z",
"timeOffset": 5.9,
"duration": 1,
"conversationId": "6749556955938816",
"phrases": [
{
"type": "action_phrase",
"text": "$69.99 per month"
"text": "$69.99 per month"
}
],
"sentiment": {
@@ -164,49 +157,69 @@ Parameter | Required | Value |Description |
"word": "Best",
"startTime": "2020-08-18T11:10:14.536Z",
"endTime": "2020-08-18T11:10:15.536Z",
"score": 0.91
"score": 0.91,
"timeOffset": 5.9,
"duration": 0.2

},
{
"word": "package",
"startTime": "2020-08-18T11:10:16.536Z",
"endTime": "2020-08-18T11:10:17.536Z",
"score": 0.80
"score": 0.80,
"timeOffset": 6.1,
"duration": 0.1

},
{
"word": "for",
"startTime": "2020-08-18T11:10:18.536Z",
"endTime": "2020-08-18T11:10:19.536Z",
"score": 0.79
"score": 0.68,
"timeOffset": 6.2,
"duration": 0.1

},
{
"word": "you",
"startTime": "2020-08-18T11:10:20.536Z",
"endTime": "2020-08-18T11:10:22.536Z",
"score": 0.85
"score": 0.68,
"timeOffset": 6.3,
"duration": 0.3

},
{
"word": "is",
"startTime": "2020-08-18T11:10:22.536Z",
"endTime": "2020-08-18T11:10:25.536Z",
"score": 0.89
"score": 0.68,
"timeOffset": 6.6,
"duration": 0.3
},
{
"word": "$69.99",
"startTime": "2020-08-18T11:10:25.536Z",
"endTime": "2020-08-18T11:10:27.536Z",
"score": 0.86
"score": 0.68,
"timeOffset": 6.67,
"duration": 0.3
},
{
"word": "per",
"startTime": "2020-08-18T11:10:27.536Z",
"endTime": "2020-08-18T11:10:29.536Z",
"score": 0.82
"score": 0.67,
"timeOffset": 6.6,
"duration": 0.4
},
{
"word": "month.",
"startTime": "2020-08-18T11:10:30.536Z",
"endTime": "2020-08-18T11:10:32.536Z",
"score": 0.90
"score": 0.67,
"timeOffset": 6.8,
"duration": 0.5
}]
},
{
@@ -218,44 +231,57 @@ Parameter | Required | Value |Description |
}
"startTime": "2020-08-18T11:11:14.536Z",
"endTime": "2020-08-18T11:11:18.536Z",
"timeOffset": 15.27,
"duration": 1.23,
"conversationId": "5139780136337408",
"phrases": [],
"sentiment": {
"polarity": {
"score": 0.2
"score": 0.2
},
"suggested": "neutral"
},
"words": [
{
"word": "Okay,",
"startTime": "2020-08-18T11:11:14.536Z",
"endTime": "2020-08-18T11:11:14.936Z"
"score": 0.91
"endTime": "2020-08-18T11:11:14.936Z",
"score": 0.91,
"timeOffset": 15.25,
"duration": 0.59

},
{
"word": "Where",
"startTime": "2020-08-18T11:11:14.936Z",
"endTime": "2020-08-18T11:11:15.436Z"
"score": 0.91
"endTime": "2020-08-18T11:11:15.436Z",
"score": 0.91,
"timeOffset": 15.25,
"duration": 0.59
},
{
"word": "is",
"startTime": "2020-08-18T11:11:16.236Z",
"endTime": "2020-08-18T11:11:16.536Z"
"score": 0.88
"endTime": "2020-08-18T11:11:16.536Z",
"score": 0.88,
"timeOffset": 15.25,
"duration": 0.58
},
{
"word": "the",
"startTime": "2020-08-18T11:11:16.536Z",
"endTime": "2020-08-18T11:11:16.936Z"
"score": 0.85
"endTime": "2020-08-18T11:11:16.936Z",
"score": 0.85,
"timeOffset": 15.25,
"duration": 0.58
},
{
"word": "file?",
"startTime": "2020-08-18T11:11:16.936Z",
"endTime": "2020-08-18T11:11:17.236Z"
"score": 0.89
"endTime": "2020-08-18T11:11:17.236Z",
"score": 0.89,
"timeOffset": 15.25,
"duration": 0.59
}
]
}
@@ -265,12 +291,14 @@ Parameter | Required | Value |Description |
Field | Description
---------- | ------- |
```id``` | Unique message identifier.
```text``` | Message text.
```from``` | User object with name and email.
```startTime``` | DateTime value.
```endTime``` | DateTime value.
```conversationId``` | Unique conversation identifier.
```words``` | Words object with properties `word`, `startTime`, `endTime` and `score`. The `score` represents the word level confidence score. The value that is accepted for the data type is float.
```phrases``` | It shows the most important action phrases in each sentence. It's enabled when you pass `detectPhrases=true` during submiting the request in Async and Websocket API.
```sentiment```| Shows the sentiment polarity(intensity of negativity or positivity of a sentence) and suggested sentiment type (positive, negative and neutral).
```id``` | Unique message identifier.|
```text``` | Message text.|
```from``` | User object with name and email.|
```startTime``` | DateTime value.|
```endTime``` | DateTime value.|
```timeOffset``` | Returned as a float value measured in seconds, up to 2 decimal points. It indicates the seconds elapsed since the start of the conversation. It is returned at the sentence level as well as the word level.<br/> `timeOffset = startTime (of current sentence/word) - startTime (of the very first sentence/word in the conversation)`.<br/> This variable is currently in <font color="orange"> Labs</font>.|
```duration``` | Returned as a float value measured in seconds, up to 2 decimal points. It indicates for how long the sentence or word was spoken. It is returned at the sentence level as well as the word level.<br/> `duration = endTime (of current sentence/word) - startTime (of current sentence/word)`.<br/> This variable is currently in <font color="orange"> Labs</font>.|
```conversationId``` | Unique conversation identifier. Read more about the Conversation ID [here](/docs/api-reference/getting-conversation-intelligence#what-is-a-conversation-id). |
```words``` | Words object with properties `word`, `startTime`, `endTime` and `score`. The `score` is the word-level confidence score, which represents the confidence level of an individual word within the transcript and shows its relevancy to the transcript. The higher the word-level confidence score, the more relevant the word is to the transcript message. When you pass `verbose=true`, the word-level confidence score is returned by default. <br/> Note that a processed `text` conversation will not return any confidence score since it is already in transcript form. `words` also returns the `timeOffset` and `duration` variables. The word-level confidence score is currently in <font color="orange"> Labs</font>. |
```phrases``` | Shows the most important action phrases in each sentence. It is enabled when you pass `detectPhrases=true` while submitting the request in the Async and WebSocket APIs.|
```sentiment```| Shows the sentiment polarity (the intensity of negativity or positivity of a sentence) and the suggested sentiment type (positive, negative, or neutral). |
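To make the `timeOffset` and `duration` formulas above concrete, here is a small sketch. The conversation start time is an assumed value for illustration; it is not a field of this response:

```javascript
// Sketch: derive timeOffset and duration (in seconds) from ISO timestamps,
// following the formulas in the table above. round2 keeps two decimal places.
const round2 = (x) => Math.round(x * 100) / 100;

function timeOffsetSeconds(conversationStart, itemStart) {
  // seconds elapsed since the start of the conversation
  return round2((new Date(itemStart) - new Date(conversationStart)) / 1000);
}

function durationSeconds(itemStart, itemEnd) {
  // how long the sentence or word was spoken
  return round2((new Date(itemEnd) - new Date(itemStart)) / 1000);
}

// Example word from the sample response above:
const conversationStart = '2020-08-18T11:10:08.636Z'; // assumed conversation start
const word = {
  word: 'Best',
  startTime: '2020-08-18T11:10:14.536Z',
  endTime: '2020-08-18T11:10:15.536Z',
  score: 0.91,
};

console.log(timeOffsetSeconds(conversationStart, word.startTime)); // 5.9
console.log(durationSeconds(word.startTime, word.endTime)); // 1
```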
10 changes: 8 additions & 2 deletions docs/integrations/agora-sdk-plugin.md
@@ -782,7 +782,13 @@ public class MainActivity extends AppCompatActivity implements io.agora.rtc2.IMe
}
}
```
### API Reference
## Sample Project
---
The following sample project provides you with an Android mobile app that uses the Agora Video SDK and the Symbl.ai extension, and it can be used as a reference. Follow the instructions in the README file to set up, configure, and run the sample mobile app on your own device.
[Sample Android App Project](https://github.com/symblai/symbl-agora-Android-app).
## API Reference
---
Find comprehensive information about our REST APIs in the [API Reference](https://docs.symbl.ai/docs/api-reference/getting-started) section.
Find comprehensive information about our REST APIs in the [API Reference](/docs/api-reference/getting-started) section.
16 changes: 8 additions & 8 deletions docs/javascript-sdk/reference/reference.md
@@ -54,7 +54,7 @@ Connects to the [Telephony API](/docs/telephony/introduction) endpoint using the

Name | Description
-----|------------
`config` | Options specified for the [Telephony API Configuration Object](http://docs.symbl.ai/docs/telephony-api/api-reference#request-parameters).
`config` | Options specified for the [Telephony API Configuration Object](/docs/telephony-api/api-reference#request-parameters).

#### Returns

@@ -124,21 +124,21 @@ sdk.stopEndpoint({

```startRealtimeRequest (<Streaming API Configuration Object> options)```

Connects to a [Streaming API](/docs/streamingapi/overview/introduction) Web Socket endpoint using the provided configuration options.
Connects to a [Streaming API](/docs/streamingapi/introduction) Web Socket endpoint using the provided configuration options.

#### Parameters

Name | Description
-----|------------
`options` | Options specified for the [Streaming API Configuration Object](https://docs.symbl.ai/docs/streaming-api/api-reference#request-parameters).
`options` | Options specified for the [Streaming API Configuration Object](/docs/streaming-api/api-reference#request-parameters).

#### Returns

A Promise which is resolved once the real-time request has been established.

#### Event Handlers

View the [Event Handlers](##event-handlers-1) section below to view which event handlers can be passed to the real-time connection.
View the [Event Handlers](#event-handlers-1) section below to view which event handlers can be passed to the real-time connection.

#### Code Example

@@ -174,7 +174,7 @@ Subscribes to an existing connection which will fire a callback for every event

Name | Description
-----|------------
`connectionId` | You receive the connection ID after connecting with [startRealtimeRequest](#startRealtimeRequest) or [startEndpoint](#startendpoint).
`connectionId` | You receive the connection ID after connecting with [startRealtimeRequest](#startrealtimerequest) or [startEndpoint](#startendpoint).
`callback` | A callback method which will be called for every new event.

#### Code Example
@@ -232,7 +232,7 @@ SpeakerEvent is a type of event Symbl can accept that provides information about

Name | Description
-----|------------
`connectionId` | You receive the connection ID after connecting with [startRealtimeRequest](#startRealtimeRequest) or [startEndpoint](#startendpoint).
`connectionId` | You receive the connection ID after connecting with [startRealtimeRequest](#startrealtimerequest) or [startEndpoint](#startendpoint).
`event` | An event (such as a [SpeakerEvent](/docs/javascript-sdk/code-snippets/active-speaker-events/#speaker-event)) which is the event to be pushed onto the connection.
`callback` | A callback method which will be called for every new event.

@@ -262,7 +262,7 @@ sdk.pushEventOnConnection(

## Event Handlers

When connecting using [`startRealtimeRequest`](#startRealtimeRequest), you can pass various handlers in the configuration options which be called if the specific event attached to the handler is fired.
When connecting using [`startRealtimeRequest`](#startrealtimerequest), you can pass various handlers in the configuration options which will be called if the specific event attached to the handler is fired.

#### Code Example

@@ -484,7 +484,7 @@ This callback provides you with any of the detected topics in real-time as they

### onTrackerResponse

This callback provides you with any of the detected trackers in real-time as they are detected. As with the [`onMessageCallback`](#onmessagecallback) this would also return every tracker in case of multiple streams.
This callback provides you with any of the detected trackers in real-time as they are detected. As with the [`onMessageCallback`](#onmessagecallback), this would also return every tracker in case of multiple streams.
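As a minimal sketch of consuming this callback, the snippet below collects tracker names from simulated handler invocations. The payload shape used here is assumed for illustration only:

```javascript
// Sketch: collecting tracker matches from an onTrackerResponse handler.
// The payload shape is illustrative, not an exact API contract.
const detectedTrackers = [];

const handlers = {
  onTrackerResponse: (trackers) => {
    // Each element is assumed to carry a tracker name.
    for (const tracker of trackers) {
      detectedTrackers.push(tracker.name);
    }
  },
};

// Simulate the SDK firing the handler with two detected trackers:
handlers.onTrackerResponse([{ name: 'Pricing' }, { name: 'Follow-up' }]);
console.log(detectedTrackers); // [ 'Pricing', 'Follow-up' ]
```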

#### onTrackerResponse JSON Response Example

@@ -246,7 +246,7 @@ sdk.pushEventOnConnection(connectionId, speakerEvent.toJSON(), (err) => {
});
```

This example just touches the surface of what you can do with our Streaming API. If you would like to learn more about it you can visit the [Streaming API documentation](/docs/streamingapi/overview/introduction).
This example just touches the surface of what you can do with our Streaming API. If you would like to learn more about it you can visit the [Streaming API documentation](/docs/streamingapi/introduction).

## Full Code Example

@@ -36,7 +36,7 @@ In this guide you will learn the following:
* [Handlers (handlers)](#handlers-handlers)
* [Full Configuration Object](#full-configuration-object)
* [Handle the audio stream](#handle-the-audio-stream)
* [Process speech using device's microphone](#process-speech-using-devices-microphone)
* [Process speech using device's microphone](#process-speech-using-the-devices-microphone)
* [Test](#test)
* [Grabbing the Conversation ID](#grabbing-the-conversation-id)
* [Full Code Sample](#full-code-sample)
2 changes: 1 addition & 1 deletion docs/python-sdk/python-sdk-reference.md
@@ -217,7 +217,7 @@ To see an example of the usage of `put_members` functionality, go to our [GitHub

### conversation_object.put_speakers_events(parameters={})

`parameters`:- (mandatory) takes a dictionary which contains `speakerEvents`. For list of parameters accepted, see [Speaker Events Object](https://docs.symbl.ai/docs/conversation-api/speaker-events/#speaker-event-object) page.
`parameters`:- (mandatory) takes a dictionary which contains `speakerEvents`. For the list of parameters accepted, see the [Speaker Events Object](/docs/conversation-api/speaker-events/#speaker-event-object) page.

This API provides the functionality to update Speakers in a conversation after it has been processed.

@@ -13,7 +13,7 @@ import TabItem from '@theme/TabItem';
This guide uses a **PSTN** connection to connect to Zoom. **PSTN** audio quality maxes out at 8KHz. You can also use a **[SIP-based connection](/docs/concepts/pstn-and-sip#sip-session-initiation-protocol)**, which captures audio at 16KHz and above.
:::

[Symbl’s Telephony API](https://docs.symbl.ai/?shell#telephony-api) allows you to connect to any conference call system using PSTN or SIP networks. In this guide, we will walk you through how to get a live transcription and real-time AI insights, such as [follow-ups](/docs/concepts/follow-ups), [action items](/docs/concepts/action-items), [topics](/docs/concepts/topics) and [questions](/docs/conversation-api/questions), of a Zoom call using a PSTN connection. This application uses the Symbl Javascript SDK which requires the `@symblai/symbl-js` node package. You must have an active Zoom call (no one has to be in it but yourself) and whatever you speak in the Zoom call will be taken by our API and processed for conversational insights.
[Symbl’s Telephony API](/docs/telephony/introduction) allows you to connect to any conference call system using PSTN or SIP networks. In this guide, we will walk you through how to get a live transcription and real-time AI insights, such as [follow-ups](/docs/concepts/follow-ups), [action items](/docs/concepts/action-items), [topics](/docs/concepts/topics) and [questions](/docs/conversation-api/questions), of a Zoom call using a PSTN connection. This application uses the Symbl Javascript SDK which requires the `@symblai/symbl-js` node package. You must have an active Zoom call (no one has to be in it but yourself) and whatever you speak in the Zoom call will be taken by our API and processed for conversational insights.
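As a rough sketch of what the PSTN configuration for such a call can look like (the dial-in number, meeting ID, and email below are placeholders, and the exact field set should be checked against the Telephony API reference):

```javascript
// Sketch: building a PSTN endpoint configuration for dialing into a Zoom call.
// The phone number, meeting ID, and email are placeholders -- use your own details.
function buildZoomPstnConfig(phoneNumber, meetingId) {
  return {
    endpoint: {
      type: 'pstn',
      phoneNumber,            // Zoom dial-in number
      dtmf: `${meetingId}#`,  // meeting ID entered as DTMF tones after dialing
    },
    actions: [
      {
        invokeOn: 'stop',
        name: 'sendSummaryEmail',
        parameters: { emails: ['user@example.com'] },
      },
    ],
  };
}

const config = buildZoomPstnConfig('+1-408-000-0000', '123456789');
// Usage with the Javascript SDK (placeholder credentials, not runnable as-is):
// await sdk.init({ appId: APP_ID, appSecret: APP_SECRET, basePath: 'https://api.symbl.ai' });
// const connection = await sdk.startEndpoint(config);
console.log(config.endpoint.type); // pstn
```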

:::info
You must make sure your Zoom call allows phone dial-in for this example to work correctly.
