StewartMeusch (Mar 7, 2024, 9:05 AM): Did this ever get resolved? I'm having a similar issue.

Author's reply: Unfortunately not.
Describe the bug
A clear and concise description of what the bug is.
I'm trying to transcribe an audio file that is 1.5 hours long and 88 MB in size; several shorter files transcribed successfully. Initially I got an "out of memory" error. Increasing the memory limit to 1280 MB stopped that error, but a new one appeared:
"An error occurred while performing a moderation check on the transcript: An error occurred while performing a moderation check on chunk 101. The content of this chunk is as follows: They couldn't take the stairs. So they had to jump 23 feet. And this is what they had to jump into. These rocks, a unlevel surface, but it was either this or burning alive. Error message: Detected inappropriate content in the transcript chunk. Summarization on this file cannot be completed. Note that you can set Enable Advanced Settings to True, and then set Disable Moderation Check to True, to skip the moderation check. This will speed up the workflow run, but it will also increase the risk of inappropriate content being sent to ChatGPT."
I have not changed the default settings, and when I enable Advanced Settings, the moderation check shows as disabled.
Which cloud storage app are you using? (Google Drive, Dropbox, or OneDrive)
OneDrive
Have you tried updating your workflow?
Please follow the steps here, and ensure you've tested the latest version of the workflow: https://thomasjfrank.com/how-to-transcribe-audio-to-text-with-chatgpt-and-notion/#update
Yes
Does the issue only happen while testing the workflow, or does it happen during normal, automated runs?
Normal, automated run
Please paste the contents of your Logs tab from the notion_voice_notes action step.
2/20/2024, 12:56:08 PM
Checking that file is under 300mb...
2/20/2024, 12:56:08 PM
File size is approximately 90.2mb.
2/20/2024, 12:56:08 PM
File is under the size limit. Continuing...
2/20/2024, 12:56:08 PM
Checking if the user set languages...
2/20/2024, 12:56:08 PM
User set transcript language to en.
2/20/2024, 12:56:08 PM
Successfully got duration: 5665 seconds
2/20/2024, 12:56:08 PM
Chunking file: /tmp/GMT20240215-175410_Recording.m4a
2/20/2024, 12:56:08 PM
Splitting file into chunks with ffmpeg command: /pipedream/dist/code/4cf355a52ab0f9c275ba953eea42492276c4b796f961fdffefa87942b1ced4df/node_modules/.pnpm/@[email protected]/node_modules/@ffmpeg-installer/linux-x64/ffmpeg -i "/tmp/GMT20240215-175410_Recording.m4a" -f segment -segment_time 1417 -c copy -loglevel verbose "/tmp/chunks-2cdxfYafJKvIpVES0o0WmzfYd0P/chunk-%03d.m4a"
2/20/2024, 12:56:10 PM
stderr: ffmpeg version N-47683-g0e8eb07980-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 6.3.0 (Debian 6.3.0-18+deb9u1) 20170516
configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-6 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg
libavutil 56. 24.101 / 56. 24.101
libavcodec 58. 42.100 / 58. 42.100
libavformat 58. 24.100 / 58. 24.100
libavdevice 58. 6.101 / 58. 6.101
libavfilter 7. 46.101 / 7. 46.101
libswscale 5. 4.100 / 5. 4.100
libswresample 3. 4.100 / 3. 4.100
libpostproc 55. 4.100 / 55. 4.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/tmp/GMT20240215-175410_Recording.m4a':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: isommp42
creation_time : 2024-02-15T17:54:10.000000Z
Duration: 01:34:24.86, start: 0.000000, bitrate: 127 kb/s
Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 32000 Hz, mono, fltp, 126 kb/s (default)
Metadata:
creation_time : 2024-02-15T17:54:10.000000Z
handler_name : AAC audio
[segment @ 0x5a79d80] Selected stream id:0 type:audio
[segment @ 0x5a79d80] Opening '/tmp/chunks-2cdxfYafJKvIpVES0o0WmzfYd0P/chunk-000.m4a' for writing
Output #0, segment, to '/tmp/chunks-2cdxfYafJKvIpVES0o0WmzfYd0P/chunk-%03d.m4a':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: isommp42
encoder : Lavf58.24.100
Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 32000 Hz, mono, fltp, 126 kb/s (default)
Metadata:
creation_time : 2024-02-15T17:54:10.000000Z
handler_name : AAC audio
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[segment @ 0x5a79d80] segment:'/tmp/chunks-2cdxfYafJKvIpVES0o0WmzfYd0P/chunk-000.m4a' starts with packet stream:0 pts:0 pts_time:0 frame:0
[segment @ 0x5a79d80] segment:'/tmp/chunks-2cdxfYafJKvIpVES0o0WmzfYd0P/chunk-000.m4a' count:0 ended
[AVIOContext @ 0x5ac6300] Statistics: 2 seeks, 89 writeouts
[segment @ 0x5a79d80] Opening '/tmp/chunks-2cdxfYafJKvIpVES0o0WmzfYd0P/chunk-001.m4a' for writing
[segment @ 0x5a79d80] segment:'/tmp/chunks-2cdxfYafJKvIpVES0o0WmzfYd0P/chunk-001.m4a' starts with packet stream:0 pts:45344768 pts_time:1417.02 frame:44282
size=N/A time=00:30:45.79 bitrate=N/A speed=3.69e+03x
[segment @ 0x5a79d80] segment:'/tmp/chunks-2cdxfYafJKvIpVES0o0WmzfYd0P/chunk-001.m4a' count:1 ended
[AVIOContext @ 0x5ac5b80] Statistics: 2 seeks, 89 writeouts
[segment @ 0x5a79d80] Opening '/tmp/chunks-2cdxfYafJKvIpVES0o0WmzfYd0P/chunk-002.m4a' for writing
[segment @ 0x5a79d80] segment:'/tmp/chunks-2cdxfYafJKvIpVES0o0WmzfYd0P/chunk-002.m4a' starts with packet stream:0 pts:90688512 pts_time:2834.02 frame:88563
size=N/A time=00:59:53.31 bitrate=N/A speed=3.59e+03x
[segment @ 0x5a79d80] segment:'/tmp/chunks-2cdxfYafJKvIpVES0o0WmzfYd0P/chunk-002.m4a' count:2 ended
[AVIOContext @ 0x5ac5b80] Statistics: 2 seeks, 89 writeouts
[segment @ 0x5a79d80] Opening '/tmp/chunks-2cdxfYafJKvIpVES0o0WmzfYd0P/chunk-003.m4a' for writing
[segment @ 0x5a79d80] segment:'/tmp/chunks-2cdxfYafJKvIpVES0o0WmzfYd0P/chunk-003.m4a' starts with packet stream:0 pts:136032256 pts_time:4251.01 frame:132844
size=N/A time=01:27:23.16 bitrate=N/A speed=3.5e+03x
No more output streams to write to, finishing.
[segment @ 0x5a79d80] segment:'/tmp/chunks-2cdxfYafJKvIpVES0o0WmzfYd0P/chunk-003.m4a' count:3 ended
[AVIOContext @ 0x5ac5b80] Statistics: 2 seeks, 89 writeouts
size=N/A time=01:34:24.83 bitrate=N/A speed=3.47e+03x
video:0kB audio:87303kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Input file #0 (/tmp/GMT20240215-175410_Recording.m4a):
Input stream #0:0 (audio): 177027 packets read (89398636 bytes);
Total: 177027 packets (89398636 bytes) demuxed
Output file #0 (/tmp/chunks-2cdxfYafJKvIpVES0o0WmzfYd0P/chunk-%03d.m4a):
Output stream #0:0 (audio): 177027 packets muxed (89398636 bytes);
Total: 177027 packets (89398636 bytes) muxed
[AVIOContext @ 0x5a7d9c0] Statistics: 90151747 bytes read, 0 seeks
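As a side note, the 1417-second segment time in the ffmpeg command above appears consistent with dividing the file's duration by the number of chunks needed to stay under Whisper's upload limit. A rough sketch of that arithmetic (the 24 MB target is an assumption on my part, not taken from the workflow's code):

```javascript
// Illustrative arithmetic only; fileSizeMB and durationSec come from the
// log above, while MAX_CHUNK_MB is an assumed safety margin under the 25 MB cap.
const MAX_CHUNK_MB = 24;
const fileSizeMB = 90.2;   // "File size is approximately 90.2mb."
const durationSec = 5665;  // "Successfully got duration: 5665 seconds"

const numChunks = Math.ceil(fileSizeMB / MAX_CHUNK_MB);  // 4 chunks
const segmentTime = Math.ceil(durationSec / numChunks);  // 1417, matching the log
console.log(numChunks, segmentTime);
```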
2/20/2024, 12:56:10 PM
Chunks created successfully. Transcribing chunks: chunk-000.m4a,chunk-001.m4a,chunk-002.m4a,chunk-003.m4a
2/20/2024, 12:56:10 PM
Transcribing file: chunk-000.m4a
2/20/2024, 12:56:10 PM
Transcribing file: chunk-001.m4a
2/20/2024, 12:56:10 PM
Transcribing file: chunk-002.m4a
2/20/2024, 12:56:10 PM
Transcribing file: chunk-003.m4a
2/20/2024, 12:56:58 PM
Received response from OpenAI Whisper endpoint for chunk-000.m4a. Your API key's current Audio endpoint limits (learn more at https://platform.openai.com/docs/guides/rate-limits/overview):
2/20/2024, 12:56:58 PM
┌────────────────────────┬────────┐
│ (index) │ Values │
├────────────────────────┼────────┤
│ requestRate │ '50' │
│ tokenRate │ null │
│ remainingRequests │ '49' │
│ remainingTokens │ null │
│ rateResetTimeRemaining │ '1.2s' │
│ tokenRestTimeRemaining │ null │
└────────────────────────┴────────┘
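The table above is built from OpenAI's `x-ratelimit-*` response headers (visible verbatim in the response dumps later in this log). A minimal sketch of pulling those values out of a plain headers object — the helper name is hypothetical, not the workflow's actual code:

```javascript
// Hypothetical helper: extract the rate-limit fields shown in the table above
// from a plain object of response headers. Missing headers map to null, which
// matches the null tokenRate/remainingTokens rows in the log.
function extractRateLimits(headers) {
  return {
    requestRate: headers["x-ratelimit-limit-requests"] ?? null,
    tokenRate: headers["x-ratelimit-limit-tokens"] ?? null,
    remainingRequests: headers["x-ratelimit-remaining-requests"] ?? null,
    remainingTokens: headers["x-ratelimit-remaining-tokens"] ?? null,
    rateResetTimeRemaining: headers["x-ratelimit-reset-requests"] ?? null,
  };
}

// Example using values from the first response in this log:
const info = extractRateLimits({
  "x-ratelimit-limit-requests": "50",
  "x-ratelimit-remaining-requests": "49",
  "x-ratelimit-reset-requests": "1.2s",
});
```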
2/20/2024, 12:57:16 PM
Received response from OpenAI Whisper endpoint for chunk-001.m4a. Your API key's current Audio endpoint limits (learn more at https://platform.openai.com/docs/guides/rate-limits/overview):
2/20/2024, 12:57:16 PM
┌────────────────────────┬──────────┐
│ (index) │ Values │
├────────────────────────┼──────────┤
│ requestRate │ '50' │
│ tokenRate │ null │
│ remainingRequests │ '47' │
│ remainingTokens │ null │
│ rateResetTimeRemaining │ '3.362s' │
│ tokenRestTimeRemaining │ null │
└────────────────────────┴──────────┘
2/20/2024, 12:57:18 PM
Received response from OpenAI Whisper endpoint for chunk-003.m4a. Your API key's current Audio endpoint limits (learn more at https://platform.openai.com/docs/guides/rate-limits/overview):
2/20/2024, 12:57:18 PM
┌────────────────────────┬──────────┐
│ (index) │ Values │
├────────────────────────┼──────────┤
│ requestRate │ '50' │
│ tokenRate │ null │
│ remainingRequests │ '46' │
│ remainingTokens │ null │
│ rateResetTimeRemaining │ '4.529s' │
│ tokenRestTimeRemaining │ null │
└────────────────────────┴──────────┘
2/20/2024, 12:57:23 PM
Received response from OpenAI Whisper endpoint for chunk-002.m4a. Your API key's current Audio endpoint limits (learn more at https://platform.openai.com/docs/guides/rate-limits/overview):
2/20/2024, 12:57:23 PM
┌────────────────────────┬──────────┐
│ (index) │ Values │
├────────────────────────┼──────────┤
│ requestRate │ '50' │
│ tokenRate │ null │
│ remainingRequests │ '48' │
│ remainingTokens │ null │
│ rateResetTimeRemaining │ '2.193s' │
│ tokenRestTimeRemaining │ null │
└────────────────────────┴──────────┘
2/20/2024, 12:57:23 PM
[
{
data: {
[deleted for Brevity to stay under max characters] },
response: Response {
size: 0,
timeout: 0,
[Symbol(Body internals)]: {
body: Gunzip {
_writeState: Uint32Array(2) [ 16068, 0 ],
_events: {
close: undefined,
error: [ [Function (anonymous)], [Function (anonymous)] ],
prefinish: [Function: prefinish],
finish: undefined,
drain: undefined,
data: [Function (anonymous)],
end: [Function (anonymous)],
readable: undefined,
unpipe: undefined
},
_readableState: ReadableState {
highWaterMark: 16384,
buffer: [],
bufferIndex: 0,
length: 0,
pipes: [],
awaitDrainWriters: null,
[Symbol(kState)]: 194512764
},
_writableState: WritableState {
highWaterMark: 16384,
length: 0,
corked: 0,
onwrite: [Function: bound onwrite],
writelen: 0,
bufferedIndex: 0,
pendingcb: 0,
[Symbol(kState)]: 1091466620,
[Symbol(kBufferedValue)]: null
},
allowHalfOpen: true,
_maxListeners: undefined,
_eventsCount: 4,
bytesWritten: 6446,
_handle: null,
_outBuffer: Buffer(16384) [Uint8Array] [
110, 103, 46, 32, 84, 104, 101, 114, 101, 32, 119, 101,
32, 103, 111, 46, 32, 89, 111, 117, 32, 99, 97, 110,
32, 115, 101, 101, 32, 116, 104, 101, 32, 114, 105, 103,
104, 116, 32, 115, 99, 114, 101, 101, 110, 32, 110, 111,
119, 46, 32, 73, 32, 115, 101, 101, 32, 121, 111, 117,
114, 32, 99, 97, 109, 101, 114, 97, 32, 114, 105, 103,
104, 116, 32, 110, 111, 119, 46, 32, 76, 101, 116, 32,
109, 101, 32, 115, 101, 101, 46, 32, 83, 99, 114, 101,
101, 110, 32, 115,
... 16284 more items
],
_outOffset: 316,
_chunkSize: 16384,
_defaultFlushFlag: 2,
_finishFlushFlag: 2,
_defaultFullFlushFlag: 3,
_info: undefined,
_maxOutputLength: 4294967296,
_level: -1,
_strategy: 0,
[Symbol(shapeMode)]: true,
[Symbol(kCapture)]: false,
[Symbol(kCallback)]: null,
[Symbol(kError)]: null
},
disturbed: true,
error: null
},
[Symbol(Response internals)]: {
url: 'https://api.openai.com/v1/audio/transcriptions',
status: 200,
statusText: 'OK',
headers: Headers {
[Symbol(map)]: [Object: null prototype] {
date: [ 'Tue, 20 Feb 2024 18:56:58 GMT' ],
'content-type': [ 'application/json' ],
'transfer-encoding': [ 'chunked' ],
connection: [ 'keep-alive' ],
'openai-organization': [ 'user-merv4yvk2srtv1segmjwawvm' ],
'openai-processing-ms': [ '45305' ],
'openai-version': [ '2020-10-01' ],
'strict-transport-security': [ 'max-age=15724800; includeSubDomains' ],
'x-ratelimit-limit-requests': [ '50' ],
'x-ratelimit-remaining-requests': [ '49' ],
'x-ratelimit-reset-requests': [ '1.2s' ],
'x-request-id': [ 'req_9a82c35d85bd05b2228b17b04a1cbf72' ],
'cf-cache-status': [ 'DYNAMIC' ],
'set-cookie': [
'__cf_bm=2O2uomVwurHZLJHkiWdR3GaJY3CcXSohvfBGmKcAAlQ-1708455418-1.0-AYYXVWaHWUcvbZP2SSLBepFys6u1gglZsSH11tQ70QtmYpByyhoxOLCJuMkYdrpwFYTFf9ybxSdQZocOjGJseIo=; path=/; expires=Tue, 20-Feb-24 19:26:58 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None',
'_cfuvid=T4tEnxsHhupLyHotEEa52WlDgHGmZ1bE9pdj_b_CuN8-1708455418238-0.0-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None'
],
server: [ 'cloudflare' ],
'cf-ray': [ '8588f7d19fb139a4-IAD' ],
'content-encoding': [ 'gzip' ],
'alt-svc': [ 'h3=":443"; ma=86400' ]
}
},
counter: 0
}
}
},
{
data: {
[deleted for Brevity to stay under max characters] },
response: Response {
size: 0,
timeout: 0,
[Symbol(Body internals)]: {
body: Gunzip {
_writeState: Uint32Array(2) [ 10731, 0 ],
_events: {
close: undefined,
error: [ [Function (anonymous)], [Function (anonymous)] ],
prefinish: [Function: prefinish],
finish: undefined,
drain: undefined,
data: [Function (anonymous)],
end: [Function (anonymous)],
readable: undefined,
unpipe: undefined
},
_readableState: ReadableState {
highWaterMark: 16384,
buffer: [],
bufferIndex: 0,
length: 0,
pipes: [],
awaitDrainWriters: null,
[Symbol(kState)]: 194512764
},
_writableState: WritableState {
highWaterMark: 16384,
length: 0,
corked: 0,
onwrite: [Function: bound onwrite],
writelen: 0,
bufferedIndex: 0,
pendingcb: 0,
[Symbol(kState)]: 1091466620,
[Symbol(kBufferedValue)]: null
},
allowHalfOpen: true,
_maxListeners: undefined,
_eventsCount: 4,
bytesWritten: 8634,
_handle: null,
_outBuffer: Buffer(16384) [Uint8Array] [
114, 107, 44, 32, 106, 117, 115, 116, 32, 108, 105, 107,
101, 32, 105, 110, 32, 84, 101, 120, 97, 115, 44, 32,
116, 104, 101, 114, 101, 32, 97, 114, 101, 32, 110, 111,
32, 112, 101, 110, 97, 108, 116, 105, 101, 115, 32, 105,
102, 32, 121, 111, 117, 32, 100, 111, 110, 39, 116, 32,
97, 100, 104, 101, 114, 101, 32, 116, 111, 32, 98, 117,
105, 108, 100, 105, 110, 103, 32, 99, 111, 100, 101, 115,
46, 32, 84, 104, 101, 32, 112, 114, 111, 115, 101, 99,
117, 116, 111, 114,
... 16284 more items
],
_outOffset: 5653,
_chunkSize: 16384,
_defaultFlushFlag: 2,
_finishFlushFlag: 2,
_defaultFullFlushFlag: 3,
_info: undefined,
_maxOutputLength: 4294967296,
_level: -1,
_strategy: 0,
[Symbol(shapeMode)]: true,
[Symbol(kCapture)]: false,
[Symbol(kCallback)]: null,
[Symbol(kError)]: null
},
disturbed: true,
error: null
},
[Symbol(Response internals)]: {
url: 'https://api.openai.com/v1/audio/transcriptions',
status: 200,
statusText: 'OK',
headers: Headers {
[Symbol(map)]: [Object: null prototype] {
date: [ 'Tue, 20 Feb 2024 18:57:15 GMT' ],
'content-type': [ 'application/json' ],
'transfer-encoding': [ 'chunked' ],
connection: [ 'keep-alive' ],
'openai-organization': [ 'user-merv4yvk2srtv1segmjwawvm' ],
'openai-processing-ms': [ '62485' ],
'openai-version': [ '2020-10-01' ],
'strict-transport-security': [ 'max-age=15724800; includeSubDomains' ],
'x-ratelimit-limit-requests': [ '50' ],
'x-ratelimit-remaining-requests': [ '47' ],
'x-ratelimit-reset-requests': [ '3.362s' ],
'x-request-id': [ 'req_c26139a913e0d37757b83af4774ecdb0' ],
'cf-cache-status': [ 'DYNAMIC' ],
'set-cookie': [
'__cf_bm=OMZ5oKsVtjl1unAxhSMD6PAui_D15Mg0SC6zNAWz884-1708455435-1.0-AdIfyWv1olqRXR6idmVQysyzM0vk9zUphv0PEpqmTMP6xIgUVFPrQQjmJfBMziCbOcq2qLLQSAenrzwD49oRgDk=; path=/; expires=Tue, 20-Feb-24 19:27:15 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None',
'_cfuvid=k1EUEaqeIQO1hAVm207Q3pjITf8QyW1r9n_Vm3wDFiE-1708455435873-0.0-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None'
],
server: [ 'cloudflare' ],
'cf-ray': [ '8588f7d2fee05b3a-IAD' ],
'content-encoding': [ 'gzip' ],
'alt-svc': [ 'h3=":443"; ma=86400' ]
}
},
counter: 0
}
}
},
{
data: {
[deleted for Brevity to stay under max characters] },
response: Response {
size: 0,
timeout: 0,
[Symbol(Body internals)]: {
body: Gunzip {
_writeState: Uint32Array(2) [ 7550, 0 ],
_events: {
close: undefined,
error: [ [Function (anonymous)], [Function (anonymous)] ],
prefinish: [Function: prefinish],
finish: undefined,
drain: undefined,
data: [Function (anonymous)],
end: [Function (anonymous)],
readable: undefined,
unpipe: undefined
},
_readableState: ReadableState {
highWaterMark: 16384,
buffer: [],
bufferIndex: 0,
length: 0,
pipes: [],
awaitDrainWriters: null,
[Symbol(kState)]: 194512764
},
_writableState: WritableState {
highWaterMark: 16384,
length: 0,
corked: 0,
onwrite: [Function: bound onwrite],
writelen: 0,
bufferedIndex: 0,
pendingcb: 0,
[Symbol(kState)]: 1091466620,
[Symbol(kBufferedValue)]: null
},
allowHalfOpen: true,
_maxListeners: undefined,
_eventsCount: 4,
bytesWritten: 9499,
_handle: null,
_outBuffer: Buffer(16384) [Uint8Array] [
111, 32, 109, 97, 107, 101, 32, 115, 117, 114, 101, 32,
116, 104, 97, 116, 32, 121, 111, 117, 39, 114, 101, 32,
97, 100, 100, 114, 101, 115, 115, 105, 110, 103, 32, 116,
104, 97, 116, 32, 98, 97, 115, 101, 46, 32, 84, 104,
111, 115, 101, 32, 115, 116, 97, 105, 114, 115, 32, 97,
114, 101, 32, 97, 108, 115, 111, 32, 112, 108, 97, 99,
101, 115, 32, 119, 104, 101, 114, 101, 32, 101, 118, 101,
114, 121, 111, 110, 101, 32, 108, 111, 118, 101, 115, 32,
116, 111, 32, 104,
... 16284 more items
],
_outOffset: 8834,
_chunkSize: 16384,
_defaultFlushFlag: 2,
_finishFlushFlag: 2,
_defaultFullFlushFlag: 3,
_info: undefined,
_maxOutputLength: 4294967296,
_level: -1,
_strategy: 0,
[Symbol(shapeMode)]: true,
[Symbol(kCapture)]: false,
[Symbol(kCallback)]: null,
[Symbol(kError)]: null
},
disturbed: true,
error: null
},
[Symbol(Response internals)]: {
url: 'https://api.openai.com/v1/audio/transcriptions',
status: 200,
statusText: 'OK',
headers: Headers {
[Symbol(map)]: [Object: null prototype] {
date: [ 'Tue, 20 Feb 2024 18:57:23 GMT' ],
'content-type': [ 'application/json' ],
'transfer-encoding': [ 'chunked' ],
connection: [ 'keep-alive' ],
'openai-organization': [ 'user-merv4yvk2srtv1segmjwawvm' ],
'openai-processing-ms': [ '70315' ],
'openai-version': [ '2020-10-01' ],
'strict-transport-security': [ 'max-age=15724800; includeSubDomains' ],
'x-ratelimit-limit-requests': [ '50' ],
'x-ratelimit-remaining-requests': [ '48' ],
'x-ratelimit-reset-requests': [ '2.193s' ],
'x-request-id': [ 'req_f39ca80131d59baff8f885ea4308b349' ],
'cf-cache-status': [ 'DYNAMIC' ],
'set-cookie': [
'__cf_bm=It7HiTq1npk0_z7PYPBE9faoXuVxNMu9RBeT05xMlGc-1708455443-1.0-AQFjXaG89p72MFrHTkotXyX0aLsTPeoDoCa7puFoFnGJDAbkSMbw9mnDzSDdo6JdNZKehHgNf6w11ZTWzEnxnvE=; path=/; expires=Tue, 20-Feb-24 19:27:23 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None',
'_cfuvid=07PIHzFMBaICFSfHp25UlTDAqMmjnMckYOc3TXPG95w-1708455443381-0.0-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None'
],
server: [ 'cloudflare' ],
'cf-ray': [ '8588f7d53ced0604-IAD' ],
'content-encoding': [ 'gzip' ],
'alt-svc': [ 'h3=":443"; ma=86400' ]
}
},
counter: 0
}
}
},
{
data: {
[deleted for Brevity to stay under max characters] },
response: Response {
size: 0,
timeout: 0,
[Symbol(Body internals)]: {
body: Gunzip {
_writeState: Uint32Array(2) [ 9655, 0 ],
_events: {
close: undefined,
error: [ [Function (anonymous)], [Function (anonymous)] ],
prefinish: [Function: prefinish],
finish: undefined,
drain: undefined,
data: [Function (anonymous)],
end: [Function (anonymous)],
readable: undefined,
unpipe: undefined
},
_readableState: ReadableState {
highWaterMark: 16384,
buffer: [],
bufferIndex: 0,
length: 0,
pipes: [],
awaitDrainWriters: null,
[Symbol(kState)]: 194512764
},
_writableState: WritableState {
highWaterMark: 16384,
length: 0,
corked: 0,
onwrite: [Function: bound onwrite],
writelen: 0,
bufferedIndex: 0,
pendingcb: 0,
[Symbol(kState)]: 1091466620,
[Symbol(kBufferedValue)]: null
},
allowHalfOpen: true,
_maxListeners: undefined,
_eventsCount: 4,
bytesWritten: 8902,
_handle: null,
_outBuffer: Buffer(16384) [Uint8Array] [
116, 105, 111, 110, 32, 114, 101, 110, 116, 97, 108, 115,
32, 97, 110, 121, 119, 104, 101, 114, 101, 32, 105, 110,
32, 116, 104, 101, 32, 119, 111, 114, 108, 100, 46, 32,
65, 110, 100, 32, 105, 116, 39, 115, 32, 98, 101, 99,
97, 117, 115, 101, 32, 116, 104, 101, 121, 32, 104, 97,
118, 101, 32, 115, 111, 32, 109, 97, 110, 121, 32, 112,
111, 111, 108, 115, 32, 97, 110, 100, 32, 119, 101, 39,
114, 101, 32, 104, 97, 118, 105, 110, 103, 32, 115, 111,
32, 109, 97, 110,
... 16284 more items
],
_outOffset: 6729,
_chunkSize: 16384,
_defaultFlushFlag: 2,
_finishFlushFlag: 2,
_defaultFullFlushFlag: 3,
_info: undefined,
_maxOutputLength: 4294967296,
_level: -1,
_strategy: 0,
[Symbol(shapeMode)]: true,
[Symbol(kCapture)]: false,
[Symbol(kCallback)]: null,
[Symbol(kError)]: null
},
disturbed: true,
error: null
},
[Symbol(Response internals)]: {
url: 'https://api.openai.com/v1/audio/transcriptions',
status: 200,
statusText: 'OK',
headers: Headers {
[Symbol(map)]: [Object: null prototype] {
date: [ 'Tue, 20 Feb 2024 18:57:18 GMT' ],
'content-type': [ 'application/json' ],
'transfer-encoding': [ 'chunked' ],
connection: [ 'keep-alive' ],
'openai-organization': [ 'user-merv4yvk2srtv1segmjwawvm' ],
'openai-processing-ms': [ '65122' ],
'openai-version': [ '2020-10-01' ],
'strict-transport-security': [ 'max-age=15724800; includeSubDomains' ],
'x-ratelimit-limit-requests': [ '50' ],
'x-ratelimit-remaining-requests': [ '46' ],
'x-ratelimit-reset-requests': [ '4.529s' ],
'x-request-id': [ 'req_5524c60a49e8dda8b6c2e770bb7cabd6' ],
'cf-cache-status': [ 'DYNAMIC' ],
'set-cookie': [
'__cf_bm=xx.m8mmzxACvZqVvMgvc.vd9I4LN9HSyrnwWePEs.Kg-1708455438-1.0-AaYgCBMM8MsEe2Ul8zfbatbaLj6PyK8PlKv83Whe9HTz0CKR66FynVS75Ng+vKGQ/1JYkgAaVsobfFnFUAqn+4c=; path=/; expires=Tue, 20-Feb-24 19:27:18 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None',
'_cfuvid=JffnMRMurpoXdv6veJwwUokDxFI7e23g2B8xn9KDGDs-1708455438253-0.0-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None'
],
server: [ 'cloudflare' ],
'cf-ray': [ '8588f7d62e9313b1-IAD' ],
'content-encoding': [ 'gzip' ],
'alt-svc': [ 'h3=":443"; ma=86400' ]
}
},
counter: 0
}
}
}
]
2/20/2024, 12:57:23 PM
Attempting to clean up the /tmp/ directory...
2/20/2024, 12:57:23 PM
Cleaning up /tmp/chunks-2cdxfYafJKvIpVES0o0WmzfYd0P...
2/20/2024, 12:57:24 PM
Using the gpt-3.5-turbo model.
2/20/2024, 12:57:24 PM
Max tokens per summary chunk: 2750
2/20/2024, 12:57:24 PM
Combining 4 transcript chunks into a single transcript...
2/20/2024, 12:57:24 PM
Transcript combined successfully.
2/20/2024, 12:57:24 PM
Longest period gap info: {
"longestGap": 394,
"longestGapText": " And if you guys have any questions regarding safety and kind of a lot of the ifs, ands, or buts of like what could happen in the worst situation with a guest with any kind of safety issue, Justin is your man to kind of go to for just the breadth of knowledge that he has with helping short-term rental owners as well as vacation businesses, you know, small vacation businesses or even big ones",
"maxTokens": 2750,
"encodedGapLength": 85
}
2/20/2024, 12:57:24 PM
Initiating moderation check on the transcript.
2/20/2024, 12:57:24 PM
Converting the transcript to paragraphs...
2/20/2024, 12:57:24 PM
Limiting paragraphs to 1800 characters...
2/20/2024, 12:57:24 PM
Transcript split into 332 chunks. Moderation check is most accurate on chunks of 2,000 characters or less. Moderation check will be performed on each chunk.
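The 1,800-character chunking described above can be sketched roughly like this — a simplified stand-in that splits on sentence boundaries, not the workflow's actual splitter:

```javascript
// Simplified sketch: split a transcript into chunks of at most maxLen
// characters, breaking between sentences so no sentence is cut in half.
function splitIntoChunks(text, maxLen = 1800) {
  const sentences = text.match(/[^.!?]+[.!?]*\s*/g) || [text];
  const chunks = [];
  let current = "";
  for (const sentence of sentences) {
    // Start a new chunk when adding this sentence would exceed the limit.
    if ((current + sentence).length > maxLen && current.length > 0) {
      chunks.push(current.trim());
      current = "";
    }
    current += sentence;
  }
  if (current.trim().length > 0) chunks.push(current.trim());
  return chunks;
}
```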
2/20/2024, 12:57:25 PM
Moderation check flagged inappropriate content in chunk 101.
2/20/2024, 12:57:25 PM
{
id: 'modr-8uPauAa79Kf80qJrCtRdcxbHf24X0',
model: 'text-moderation-007',
results: [
{
flagged: true,
categories: {
sexual: false,
hate: false,
harassment: false,
'self-harm': false,
'sexual/minors': false,
'hate/threatening': false,
'violence/graphic': false,
'self-harm/intent': false,
'self-harm/instructions': false,
'harassment/threatening': false,
violence: true
},
category_scores: {
sexual: 0.0011156804393976927,
hate: 0.0047517032362520695,
harassment: 0.10160402208566666,
'self-harm': 0.02487598918378353,
'sexual/minors': 4.141284364322928e-7,
'hate/threatening': 0.0004421440535224974,
'violence/graphic': 0.27663928270339966,
'self-harm/intent': 0.004668899346143007,
'self-harm/instructions': 0.0030220970511436462,
'harassment/threatening': 0.00631684809923172,
violence: 0.5989159941673279
}
}
]
}
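For reference, the flag above comes from the `results[0].flagged` boolean and the per-category booleans in the OpenAI moderation response (here, only `violence` is true, at a score of about 0.6). A small illustrative helper for reading that response shape — not the workflow's own code:

```javascript
// Illustrative: summarize an OpenAI moderation response of the shape logged
// above, returning the overall flag and which categories tripped it.
function summarizeModeration(result) {
  const r = result.results[0];
  const flaggedCategories = Object.entries(r.categories)
    .filter(([, flagged]) => flagged)
    .map(([name]) => name);
  return { flagged: r.flagged, flaggedCategories };
}

// Minimal mock mirroring the logged response:
const summary = summarizeModeration({
  results: [
    { flagged: true, categories: { violence: true, hate: false, sexual: false } },
  ],
});
```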
2/20/2024, 12:57:25 PM
ACTIVE_HANDLE
This step was still trying to run code when the step ended. Make sure you promisify callback functions and await all Promises. (Reason: TLSSocket, Learn more: https://pipedream.com/docs/code/nodejs/async/)