client.chat_stream(...)
-
-
-
Generates a text response to a user message. To learn how to use the Chat API and RAG follow our Text Generation guides.
-
-
-
from cohere import (
    ChatbotMessage,
    ChatConnector,
    ChatStreamRequestConnectorsSearchOptions,
    Client,
    TextResponseFormat,
    Tool,
    ToolCall,
    ToolParameterDefinitionsValue,
    ToolResult,
)

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
response = client.chat_stream(
    message="string",
    model="string",
    preamble="string",
    chat_history=[
        ChatbotMessage(
            message="string",
            tool_calls=[ToolCall(name="string", parameters={"string": {"key": "value"}})],
        )
    ],
    conversation_id="string",
    prompt_truncation="OFF",
    connectors=[
        ChatConnector(
            id="string",
            user_access_token="string",
            continue_on_failure=True,
            options={"string": {"key": "value"}},
        )
    ],
    search_queries_only=True,
    documents=[{"string": {"key": "value"}}],
    citation_quality="fast",
    temperature=1.1,
    max_tokens=1,
    max_input_tokens=1,
    k=1,
    p=1.1,
    seed=1,
    stop_sequences=["string"],
    connectors_search_options=ChatStreamRequestConnectorsSearchOptions(seed=1),
    frequency_penalty=1.1,
    presence_penalty=1.1,
    raw_prompting=True,
    return_prompt=True,
    tools=[
        Tool(
            name="string",
            description="string",
            parameter_definitions={
                "string": ToolParameterDefinitionsValue(
                    description="string", type="string", required=True
                )
            },
        )
    ],
    tool_results=[
        ToolResult(
            call=ToolCall(name="string", parameters={"string": {"key": "value"}}),
            outputs=[{"string": {"key": "value"}}],
        )
    ],
    force_single_step=True,
    response_format=TextResponseFormat(),
    safety_mode="CONTEXTUAL",
)
# Iterate over the streamed events as they arrive.
for chunk in response:
    print(chunk)
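As a usage sketch (not part of the generated reference), the stream can be consumed event by event; the event_type values below follow Cohere's streaming guide and should be treated as assumptions:

from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

stream = client.chat_stream(message="Tell me a short joke.")
for event in stream:
    # "text-generation" events carry incremental text; "stream-end" closes the stream.
    if event.event_type == "text-generation":
        print(event.text, end="")
    elif event.event_type == "stream-end":
        print()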
-
-
-
message:
str
Text input for the model to respond to.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
accepts:
typing.Optional[typing.Literal["text/event-stream"]]
— Pass text/event-stream to receive the streamed response as server-sent events. The default is\n
delimited events.
-
model:
typing.Optional[str]
Defaults to command-r-plus-08-2024. The name of a compatible Cohere model or the ID of a fine-tuned model.
Compatible Deployments: Cohere Platform, Private Deployments
-
preamble:
typing.Optional[str]
When specified, the default Cohere preamble will be replaced with the provided one. Preambles are a part of the prompt used to adjust the model's overall behavior and conversation style, and use the SYSTEM role.
The SYSTEM role is also used for the contents of the optional chat_history= parameter. When used with the chat_history= parameter it adds content throughout a conversation. Conversely, when used with the preamble= parameter it adds content at the start of the conversation only.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
chat_history:
typing.Optional[typing.Sequence[Message]]
A list of previous messages between the user and the model, giving the model conversational context for responding to the user's message.
Each item represents a single message in the chat history, excluding the current user turn. It has two properties: role and message. The role identifies the sender (CHATBOT, SYSTEM, or USER), while the message contains the text content.
The chat_history parameter should not be used for SYSTEM messages in most cases. Instead, to add a SYSTEM role message at the beginning of a conversation, the preamble parameter should be used.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
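For illustration, a minimal sketch of a multi-turn request (the SDK also accepts plain role/message dictionaries for chat history items, as in Cohere's guides; the conversation content here is made up):

from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

response = client.chat(
    chat_history=[
        {"role": "USER", "message": "Who discovered gravity?"},
        {"role": "CHATBOT", "message": "Isaac Newton is usually credited with formulating the law of gravity."},
    ],
    # The current user turn goes in `message`, not in chat_history.
    message="And when was he born?",
)
print(response.text)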
-
conversation_id:
typing.Optional[str]
An alternative to chat_history.
Providing a conversation_id creates or resumes a persisted conversation with the specified ID. The ID can be any non-empty string.
Compatible Deployments: Cohere Platform
-
prompt_truncation:
typing.Optional[ChatStreamRequestPromptTruncation]
Defaults to AUTO when connectors are specified and OFF in all other cases.
Dictates how the prompt will be constructed.
With prompt_truncation set to "AUTO", some elements from chat_history and documents will be dropped in an attempt to construct a prompt that fits within the model's context length limit. During this process the order of the documents and chat history will be changed and ranked by relevance.
With prompt_truncation set to "AUTO_PRESERVE_ORDER", some elements from chat_history and documents will be dropped in an attempt to construct a prompt that fits within the model's context length limit. During this process the order of the documents and chat history will be preserved as they are inputted into the API.
With prompt_truncation set to "OFF", no elements will be dropped. If the sum of the inputs exceeds the model's context length limit, a TooManyTokens error will be returned.
Compatible Deployments:
- AUTO: Cohere Platform Only
- AUTO_PRESERVE_ORDER: Azure, AWS Sagemaker/Bedrock, Private Deployments
-
connectors:
typing.Optional[typing.Sequence[ChatConnector]]
Accepts {"id": "web-search"}, and/or the "id" for a custom connector, if you've created one.
When specified, the model's reply will be enriched with information found by querying each of the connectors (RAG).
Compatible Deployments: Cohere Platform
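A minimal sketch of enabling the managed web-search connector (the query text is illustrative):

from cohere import ChatConnector, Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

# The model's reply is grounded in documents retrieved by the connector (RAG).
response = client.chat(
    message="Who is the current CEO of Cohere?",
    connectors=[ChatConnector(id="web-search")],
)
print(response.text)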
-
search_queries_only:
typing.Optional[bool]
Defaults to false.
When true, the response will only contain a list of generated search queries, but no search will take place, and no reply from the model to the user's message will be generated.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
documents:
typing.Optional[typing.Sequence[ChatDocument]]
A list of relevant documents that the model can cite to generate a more accurate reply. Each document is a string-string dictionary.
Example:
[ { "title": "Tall penguins", "text": "Emperor penguins are the tallest." }, { "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica." }, ]
Keys and values from each document will be serialized to a string and passed to the model. The resulting generation will include citations that reference some of these documents.
Some suggested keys are "text", "author", and "date". For better generation quality, it is recommended to keep the total word count of the strings in the dictionary to under 300 words.
An id field (string) can be optionally supplied to identify the document in the citations. This field will not be passed to the model.
An _excludes field (array of strings) can be optionally supplied to omit some key-value pairs from being shown to the model. The omitted fields will still show up in the citation object. The "_excludes" field will not be passed to the model.
See 'Document Mode' in the guide for more information.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
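A minimal sketch using the penguin documents from the example above (the citations attribute on the response is assumed from the RAG response shape):

from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

response = client.chat(
    message="How tall are emperor penguins, and where do they live?",
    documents=[
        {"title": "Tall penguins", "text": "Emperor penguins are the tallest."},
        {"title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."},
    ],
)
print(response.text)
print(response.citations)  # citations reference the documents above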
-
citation_quality:
typing.Optional[ChatStreamRequestCitationQuality]
Defaults to "accurate".
Dictates the approach taken to generating citations as part of the RAG flow by allowing the user to specify whether they want "accurate" results, "fast" results or no results.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
temperature:
typing.Optional[float]
Defaults to 0.3.
A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations, and higher temperatures mean more random generations.
Randomness can be further maximized by increasing the value of the p parameter.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
max_tokens:
typing.Optional[int]
The maximum number of tokens the model will generate as part of the response. Note: Setting a low value may result in incomplete generations.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
max_input_tokens:
typing.Optional[int]
The maximum number of input tokens to send to the model. If not specified, max_input_tokens is the model's context length limit minus a small buffer.
Input will be truncated according to the prompt_truncation parameter.
Compatible Deployments: Cohere Platform
-
k:
typing.Optional[int]
Ensures only the top k most likely tokens are considered for generation at each step. Defaults to 0, min value of 0, max value of 500.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
p:
typing.Optional[float]
Ensures that only the most likely tokens, with total probability mass of p, are considered for generation at each step. If both k and p are enabled, p acts after k. Defaults to 0.75, min value of 0.01, max value of 0.99.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
seed:
typing.Optional[int]
If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
stop_sequences:
typing.Optional[typing.Sequence[str]]
A list of up to 5 strings that the model will use to stop generation. If the model generates a string that matches any of the strings in the list, it will stop generating tokens and return the generated text up to that point not including the stop sequence.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
frequency_penalty:
typing.Optional[float]
Defaults to 0.0, min value of 0.0, max value of 1.0.
Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
presence_penalty:
typing.Optional[float]
Defaults to 0.0, min value of 0.0, max value of 1.0.
Used to reduce repetitiveness of generated tokens. Similar to frequency_penalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
raw_prompting:
typing.Optional[bool]
When enabled, the user's prompt will be sent to the model without any pre-processing.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
return_prompt:
typing.Optional[bool]
— The prompt is returned in the prompt response field when this is enabled.
-
tools:
typing.Optional[typing.Sequence[Tool]]
A list of available tools (functions) that the model may suggest invoking before producing a text response.
When tools is passed (without tool_results), the text field in the response will be "" and the tool_calls field in the response will be populated with a list of tool calls that need to be made. If no calls need to be made, the tool_calls array will be empty.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
tool_results:
typing.Optional[typing.Sequence[ToolResult]]
A list of results from invoking tools recommended by the model in the previous chat turn. Results are used to produce a text response and will be referenced in citations. When using tool_results, tools must be passed as well. Each tool_result contains information about how it was invoked, as well as a list of outputs in the form of dictionaries.
Note: outputs must be a list of objects. If your tool returns a single object (eg {"status": 200}), make sure to wrap it in a list.
tool_results = [
  {
    "call": {
      "name": <tool name>,
      "parameters": {
        <param name>: <param value>
      }
    },
    "outputs": [{
      <key>: <value>
    }]
  },
  ...
]
Note: Chat calls with tool_results should not be included in the Chat history to avoid duplication of the message text.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
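A two-step sketch of the tools/tool_results flow described above; the tool name, parameters, and the fake sales data are placeholders, and the tool_calls/ToolResult plumbing is an assumption based on the types shown in this reference:

from cohere import Client, Tool, ToolParameterDefinitionsValue, ToolResult

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

sales_tool = Tool(
    name="query_daily_sales_report",  # hypothetical tool
    description="Retrieves the sales report for a given day.",
    parameter_definitions={
        "day": ToolParameterDefinitionsValue(
            description="The day to query, in YYYY-MM-DD format.", type="str", required=True
        )
    },
)

# Step 1: the model proposes tool calls; `text` is empty at this point.
first = client.chat(message="What were total sales on 2023-09-29?", tools=[sales_tool])

# Step 2: run the tools yourself, wrap each output in a list, and send them back.
results = [
    ToolResult(call=call, outputs=[{"date": "2023-09-29", "revenue": 10000}])
    for call in (first.tool_calls or [])
]
second = client.chat(
    message="What were total sales on 2023-09-29?",
    tools=[sales_tool],
    tool_results=results,
)
print(second.text)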
-
force_single_step:
typing.Optional[bool]
— Forces the chat to be single step. Defaults to false.
-
response_format:
typing.Optional[ResponseFormat]
-
safety_mode:
typing.Optional[ChatStreamRequestSafetyMode]
Used to select the safety instruction inserted into the prompt. Defaults to CONTEXTUAL. When NONE is specified, the safety instruction will be omitted.
Safety modes are not yet configurable in combination with the tools, tool_results and documents parameters.
Note: This parameter is only compatible with models Command R 08-2024, Command R+ 08-2024 and newer.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.chat(...)
-
-
-
Generates a text response to a user message. To learn how to use the Chat API and RAG follow our Text Generation guides.
-
-
-
from cohere import Client, ToolMessage

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.chat(
    message="Can you give me a global market overview of solar panels?",
    chat_history=[ToolMessage(), ToolMessage()],
    prompt_truncation="OFF",
    temperature=0.3,
)
-
-
-
message:
str
Text input for the model to respond to.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
accepts:
typing.Optional[typing.Literal["text/event-stream"]]
— Pass text/event-stream to receive the streamed response as server-sent events. The default is\n
delimited events.
-
model:
typing.Optional[str]
Defaults to
command-r-plus-08-2024
.The name of a compatible Cohere model or the ID of a fine-tuned model.
Compatible Deployments: Cohere Platform, Private Deployments
-
preamble:
typing.Optional[str]
When specified, the default Cohere preamble will be replaced with the provided one. Preambles are a part of the prompt used to adjust the model's overall behavior and conversation style, and use the SYSTEM role.
The SYSTEM role is also used for the contents of the optional chat_history= parameter. When used with the chat_history= parameter it adds content throughout a conversation. Conversely, when used with the preamble= parameter it adds content at the start of the conversation only.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
chat_history:
typing.Optional[typing.Sequence[Message]]
A list of previous messages between the user and the model, giving the model conversational context for responding to the user's message.
Each item represents a single message in the chat history, excluding the current user turn. It has two properties: role and message. The role identifies the sender (CHATBOT, SYSTEM, or USER), while the message contains the text content.
The chat_history parameter should not be used for SYSTEM messages in most cases. Instead, to add a SYSTEM role message at the beginning of a conversation, the preamble parameter should be used.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
conversation_id:
typing.Optional[str]
An alternative to chat_history.
Providing a conversation_id creates or resumes a persisted conversation with the specified ID. The ID can be any non-empty string.
Compatible Deployments: Cohere Platform
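A minimal sketch of letting the Cohere Platform persist state via conversation_id instead of passing chat_history (the ID string is arbitrary):

from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

session_id = "user-1234-support-session"  # any non-empty string
client.chat(message="Hi, my name is Ada.", conversation_id=session_id)
reply = client.chat(message="What is my name?", conversation_id=session_id)
print(reply.text)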
-
prompt_truncation:
typing.Optional[ChatRequestPromptTruncation]
Defaults to AUTO when connectors are specified and OFF in all other cases.
Dictates how the prompt will be constructed.
With prompt_truncation set to "AUTO", some elements from chat_history and documents will be dropped in an attempt to construct a prompt that fits within the model's context length limit. During this process the order of the documents and chat history will be changed and ranked by relevance.
With prompt_truncation set to "AUTO_PRESERVE_ORDER", some elements from chat_history and documents will be dropped in an attempt to construct a prompt that fits within the model's context length limit. During this process the order of the documents and chat history will be preserved as they are inputted into the API.
With prompt_truncation set to "OFF", no elements will be dropped. If the sum of the inputs exceeds the model's context length limit, a TooManyTokens error will be returned.
Compatible Deployments:
- AUTO: Cohere Platform Only
- AUTO_PRESERVE_ORDER: Azure, AWS Sagemaker/Bedrock, Private Deployments
-
connectors:
typing.Optional[typing.Sequence[ChatConnector]]
Accepts
{"id": "web-search"}
, and/or the"id"
for a custom connector, if you've created one.When specified, the model's reply will be enriched with information found by querying each of the connectors (RAG).
Compatible Deployments: Cohere Platform
-
search_queries_only:
typing.Optional[bool]
Defaults to
false
.When
true
, the response will only contain a list of generated search queries, but no search will take place, and no reply from the model to the user'smessage
will be generated.Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
documents:
typing.Optional[typing.Sequence[ChatDocument]]
A list of relevant documents that the model can cite to generate a more accurate reply. Each document is a string-string dictionary.
Example:
[ { "title": "Tall penguins", "text": "Emperor penguins are the tallest." }, { "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica." }, ]
Keys and values from each document will be serialized to a string and passed to the model. The resulting generation will include citations that reference some of these documents.
Some suggested keys are "text", "author", and "date". For better generation quality, it is recommended to keep the total word count of the strings in the dictionary to under 300 words.
An
id
field (string) can be optionally supplied to identify the document in the citations. This field will not be passed to the model.An
_excludes
field (array of strings) can be optionally supplied to omit some key-value pairs from being shown to the model. The omitted fields will still show up in the citation object. The "_excludes" field will not be passed to the model.See 'Document Mode' in the guide for more information.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
citation_quality:
typing.Optional[ChatRequestCitationQuality]
Defaults to
"accurate"
.Dictates the approach taken to generating citations as part of the RAG flow by allowing the user to specify whether they want
"accurate"
results,"fast"
results or no results.Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
temperature:
typing.Optional[float]
Defaults to
0.3
.A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations, and higher temperatures mean more random generations.
Randomness can be further maximized by increasing the value of the
p
parameter.Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
max_tokens:
typing.Optional[int]
The maximum number of tokens the model will generate as part of the response. Note: Setting a low value may result in incomplete generations.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
max_input_tokens:
typing.Optional[int]
The maximum number of input tokens to send to the model. If not specified,
max_input_tokens
is the model's context length limit minus a small buffer.Input will be truncated according to the
prompt_truncation
parameter.Compatible Deployments: Cohere Platform
-
k:
typing.Optional[int]
Ensures only the top
k
most likely tokens are considered for generation at each step. Defaults to0
, min value of0
, max value of500
.Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
p:
typing.Optional[float]
Ensures that only the most likely tokens, with total probability mass of
p
, are considered for generation at each step. If bothk
andp
are enabled,p
acts afterk
. Defaults to0.75
. min value of0.01
, max value of0.99
.Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
seed:
typing.Optional[int]
If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
stop_sequences:
typing.Optional[typing.Sequence[str]]
A list of up to 5 strings that the model will use to stop generation. If the model generates a string that matches any of the strings in the list, it will stop generating tokens and return the generated text up to that point not including the stop sequence.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
frequency_penalty:
typing.Optional[float]
Defaults to
0.0
, min value of0.0
, max value of1.0
.Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
presence_penalty:
typing.Optional[float]
Defaults to
0.0
, min value of0.0
, max value of1.0
.Used to reduce repetitiveness of generated tokens. Similar to
frequency_penalty
, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
raw_prompting:
typing.Optional[bool]
When enabled, the user's prompt will be sent to the model without any pre-processing.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
return_prompt:
typing.Optional[bool]
— The prompt is returned in theprompt
response field when this is enabled.
-
tools:
typing.Optional[typing.Sequence[Tool]]
A list of available tools (functions) that the model may suggest invoking before producing a text response.
When tools is passed (without tool_results), the text field in the response will be "" and the tool_calls field in the response will be populated with a list of tool calls that need to be made. If no calls need to be made, the tool_calls array will be empty.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
tool_results:
typing.Optional[typing.Sequence[ToolResult]]
A list of results from invoking tools recommended by the model in the previous chat turn. Results are used to produce a text response and will be referenced in citations. When using tool_results, tools must be passed as well. Each tool_result contains information about how it was invoked, as well as a list of outputs in the form of dictionaries.
Note: outputs must be a list of objects. If your tool returns a single object (eg {"status": 200}), make sure to wrap it in a list.
tool_results = [
  {
    "call": {
      "name": <tool name>,
      "parameters": {
        <param name>: <param value>
      }
    },
    "outputs": [{
      <key>: <value>
    }]
  },
  ...
]
Note: Chat calls with tool_results should not be included in the Chat history to avoid duplication of the message text.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
force_single_step:
typing.Optional[bool]
— Forces the chat to be single step. Defaults tofalse
.
-
response_format:
typing.Optional[ResponseFormat]
-
safety_mode:
typing.Optional[ChatRequestSafetyMode]
Used to select the safety instruction inserted into the prompt. Defaults to CONTEXTUAL. When NONE is specified, the safety instruction will be omitted.
Safety modes are not yet configurable in combination with the tools, tool_results and documents parameters.
Note: This parameter is only compatible with models Command R 08-2024, Command R+ 08-2024 and newer.
Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.generate_stream(...)
-
-
- This API is marked as "Legacy" and is no longer maintained. Follow the [migration guide](https://docs.cohere.com/docs/migrating-from-cogenerate-to-cochat) to start using the Chat API. Generates realistic text conditioned on a given input.
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
response = client.generate_stream(
    prompt="string",
    model="string",
    num_generations=1,
    max_tokens=1,
    truncate="NONE",
    temperature=1.1,
    seed=1,
    preset="string",
    end_sequences=["string"],
    stop_sequences=["string"],
    k=1,
    p=1.1,
    frequency_penalty=1.1,
    presence_penalty=1.1,
    return_likelihoods="GENERATION",
    raw_prompting=True,
)
# Iterate over the streamed chunks as they arrive.
for chunk in response:
    print(chunk)
-
-
-
prompt:
str
The input text that serves as the starting point for generating the response. Note: The prompt will be pre-processed and modified before reaching the model.
-
model:
typing.Optional[str]
The identifier of the model to generate with. Currently available models are command (default), command-nightly (experimental), command-light, and command-light-nightly (experimental). Smaller, "light" models are faster, while larger models will perform better. Custom models can also be supplied with their full ID.
-
num_generations:
typing.Optional[int]
— The maximum number of generations that will be returned. Defaults to1
, min value of1
, max value of5
.
-
max_tokens:
typing.Optional[int]
The maximum number of tokens the model will generate as part of the response. Note: Setting a low value may result in incomplete generations.
This parameter is off by default, and if it's not specified, the model will continue generating until it emits an EOS completion token. See BPE Tokens for more details.
Can only be set to 0 if return_likelihoods is set to ALL to get the likelihood of the prompt.
-
truncate:
typing.Optional[GenerateStreamRequestTruncate]
One of NONE|START|END to specify how the API will handle inputs longer than the maximum token length.
Passing START will discard the start of the input. END will discard the end of the input. In both cases, input is discarded until the remaining input is exactly the maximum input token length for the model.
If NONE is selected, when the input exceeds the maximum input token length an error will be returned.
-
temperature:
typing.Optional[float]
A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations. See Temperature for more details. Defaults to
0.75
, min value of0.0
, max value of5.0
.
-
seed:
typing.Optional[int]
If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed. Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
preset:
typing.Optional[str]
Identifier of a custom preset. A preset is a combination of parameters, such as prompt, temperature etc. You can create presets in the playground. When a preset is specified, the
prompt
parameter becomes optional, and any included parameters will override the preset's parameters.
-
end_sequences:
typing.Optional[typing.Sequence[str]]
— The generated text will be cut at the beginning of the earliest occurrence of an end sequence. The sequence will be excluded from the text.
-
stop_sequences:
typing.Optional[typing.Sequence[str]]
— The generated text will be cut at the end of the earliest occurrence of a stop sequence. The sequence will be included in the text.
-
k:
typing.Optional[int]
Ensures only the top
k
most likely tokens are considered for generation at each step. Defaults to0
, min value of0
, max value of500
.
-
p:
typing.Optional[float]
Ensures that only the most likely tokens, with total probability mass of
p
, are considered for generation at each step. If bothk
andp
are enabled,p
acts afterk
. Defaults to0.75
. min value of0.01
, max value of0.99
.
-
frequency_penalty:
typing.Optional[float]
Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
Using
frequency_penalty
in combination withpresence_penalty
is not supported on newer models.
-
presence_penalty:
typing.Optional[float]
Defaults to
0.0
, min value of0.0
, max value of1.0
.Can be used to reduce repetitiveness of generated tokens. Similar to
frequency_penalty
, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.Using
frequency_penalty
in combination withpresence_penalty
is not supported on newer models.
-
return_likelihoods:
typing.Optional[GenerateStreamRequestReturnLikelihoods]
One of GENERATION|ALL|NONE to specify how and if the token likelihoods are returned with the response. Defaults to NONE.
If GENERATION is selected, the token likelihoods will only be provided for generated text.
If ALL is selected, the token likelihoods will be provided both for the prompt and the generated text.
-
raw_prompting:
typing.Optional[bool]
— When enabled, the user's prompt will be sent to the model without any pre-processing.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.generate(...)
-
-
- This API is marked as "Legacy" and is no longer maintained. Follow the [migration guide](https://docs.cohere.com/docs/migrating-from-cogenerate-to-cochat) to start using the Chat API. Generates realistic text conditioned on a given input.
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.generate(
    prompt="Please explain to me how LLMs work",
)
-
-
-
prompt:
str
The input text that serves as the starting point for generating the response. Note: The prompt will be pre-processed and modified before reaching the model.
-
model:
typing.Optional[str]
The identifier of the model to generate with. Currently available models are
command
(default),command-nightly
(experimental),command-light
, andcommand-light-nightly
(experimental). Smaller, "light" models are faster, while larger models will perform better. Custom models can also be supplied with their full ID.
-
num_generations:
typing.Optional[int]
— The maximum number of generations that will be returned. Defaults to1
, min value of1
, max value of5
.
-
max_tokens:
typing.Optional[int]
The maximum number of tokens the model will generate as part of the response. Note: Setting a low value may result in incomplete generations.
This parameter is off by default, and if it's not specified, the model will continue generating until it emits an EOS completion token. See BPE Tokens for more details.
Can only be set to
0
ifreturn_likelihoods
is set toALL
to get the likelihood of the prompt.
-
truncate:
typing.Optional[GenerateRequestTruncate]
One of NONE|START|END to specify how the API will handle inputs longer than the maximum token length.
Passing START will discard the start of the input. END will discard the end of the input. In both cases, input is discarded until the remaining input is exactly the maximum input token length for the model.
If NONE is selected, when the input exceeds the maximum input token length an error will be returned.
-
temperature:
typing.Optional[float]
A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations. See Temperature for more details. Defaults to
0.75
, min value of0.0
, max value of5.0
.
-
seed:
typing.Optional[int]
If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed. Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
preset:
typing.Optional[str]
Identifier of a custom preset. A preset is a combination of parameters, such as prompt, temperature etc. You can create presets in the playground. When a preset is specified, the
prompt
parameter becomes optional, and any included parameters will override the preset's parameters.
-
end_sequences:
typing.Optional[typing.Sequence[str]]
— The generated text will be cut at the beginning of the earliest occurrence of an end sequence. The sequence will be excluded from the text.
-
stop_sequences:
typing.Optional[typing.Sequence[str]]
— The generated text will be cut at the end of the earliest occurrence of a stop sequence. The sequence will be included in the text.
-
k:
typing.Optional[int]
Ensures only the top
k
most likely tokens are considered for generation at each step. Defaults to0
, min value of0
, max value of500
.
-
p:
typing.Optional[float]
Ensures that only the most likely tokens, with total probability mass of
p
, are considered for generation at each step. If bothk
andp
are enabled,p
acts afterk
. Defaults to0.75
. min value of0.01
, max value of0.99
.
-
frequency_penalty:
typing.Optional[float]
Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
Using
frequency_penalty
in combination withpresence_penalty
is not supported on newer models.
-
presence_penalty:
typing.Optional[float]
Defaults to
0.0
, min value of0.0
, max value of1.0
.Can be used to reduce repetitiveness of generated tokens. Similar to
frequency_penalty
, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.Using
frequency_penalty
in combination withpresence_penalty
is not supported on newer models.
-
return_likelihoods:
typing.Optional[GenerateRequestReturnLikelihoods]
One of GENERATION|ALL|NONE to specify how and if the token likelihoods are returned with the response. Defaults to NONE.
If GENERATION is selected, the token likelihoods will only be provided for generated text.
If ALL is selected, the token likelihoods will be provided both for the prompt and the generated text.
-
raw_prompting:
typing.Optional[bool]
— When enabled, the user's prompt will be sent to the model without any pre-processing.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.embed(...)
-
-
-
This endpoint returns text and image embeddings. An embedding is a list of floating point numbers that captures semantic information about the content that it represents.
Embeddings can be used to create classifiers as well as empower semantic search. To learn more about embeddings, see the embedding page.
If you want to learn more about how to use the embedding model, have a look at the Semantic Search Guide.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.embed()
-
-
-
texts:
typing.Optional[typing.Sequence[str]]
— An array of strings for the model to embed. Maximum number of texts per call is 96. We recommend reducing the length of each text to be under 512 tokens for optimal quality.
-
images:
typing.Optional[typing.Sequence[str]]
An array of image data URIs for the model to embed. Maximum number of images per call is 1.
The image must be a valid data URI, in either image/jpeg or image/png format, with a maximum size of 5MB.
-
model:
typing.Optional[str]
Defaults to embed-english-v2.0
The identifier of the model. Smaller "light" models are faster, while larger models will perform better. Custom models can also be supplied with their full ID.
Available models and corresponding embedding dimensions:
- embed-english-v3.0: 1024
- embed-multilingual-v3.0: 1024
- embed-english-light-v3.0: 384
- embed-multilingual-light-v3.0: 384
- embed-english-v2.0: 4096
- embed-english-light-v2.0: 1024
- embed-multilingual-v2.0: 768
-
-
input_type:
typing.Optional[EmbedInputType]
-
embedding_types:
typing.Optional[typing.Sequence[EmbeddingType]]
Specifies the types of embeddings you want to get back. Not required and default is None, which returns the Embed Floats response type. Can be one or more of the following types.
- "float": Use this when you want to get back the default float embeddings. Valid for all models.
- "int8": Use this when you want to get back signed int8 embeddings. Valid for only v3 models.
- "uint8": Use this when you want to get back unsigned int8 embeddings. Valid for only v3 models.
- "binary": Use this when you want to get back signed binary embeddings. Valid for only v3 models.
- "ubinary": Use this when you want to get back unsigned binary embeddings. Valid for only v3 models.
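A minimal sketch requesting two embedding types from a v3 model (the model name and response shape are assumptions):

from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

response = client.embed(
    texts=["hello", "goodbye"],
    model="embed-english-v3.0",
    input_type="search_document",
    embedding_types=["float", "int8"],
)
print(response.embeddings)  # one set of vectors per requested embedding type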
-
truncate:
typing.Optional[EmbedRequestTruncate]
One of NONE|START|END to specify how the API will handle inputs longer than the maximum token length.
Passing START will discard the start of the input. END will discard the end of the input. In both cases, input is discarded until the remaining input is exactly the maximum input token length for the model.
If NONE is selected, when the input exceeds the maximum input token length an error will be returned.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.rerank(...)
-
-
-
This endpoint takes in a query and a list of texts and produces an ordered array with each text assigned a relevance score.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.rerank(
    query="query",
    documents=["documents"],
)
-
-
-
query:
str
— The search query
-
documents:
typing.Sequence[RerankRequestDocumentsItem]
A list of document objects or strings to rerank. If a document is provided, the text field is required and all other fields will be preserved in the response.
The total max chunks (length of documents * max_chunks_per_doc) must be less than 10000.
We recommend a maximum of 1,000 documents for optimal endpoint performance.
-
model:
typing.Optional[str]
— The identifier of the model to use, one of: rerank-english-v3.0, rerank-multilingual-v3.0, rerank-english-v2.0, rerank-multilingual-v2.0
-
top_n:
typing.Optional[int]
— The number of most relevant documents or indices to return, defaults to the length of the documents
-
rank_fields:
typing.Optional[typing.Sequence[str]]
— If a JSON object is provided, you can specify which keys you would like to have considered for reranking. The model will rerank based on order of the fields passed in (i.e. rank_fields=['title','author','text'] will rerank using the values in title, author, text sequentially. If the length of title, author, and text exceeds the context length of the model, the chunking will not re-consider earlier fields). If not provided, the model will use the default text field for ranking.
-
return_documents:
typing.Optional[bool]
- If false, returns results without the doc text - the api will return a list of {index, relevance score} where index is inferred from the list passed into the request.
- If true, returns results with the doc text passed in - the api will return an ordered list of {index, text, relevance score} where index + text refers to the list passed into the request.
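A short sketch of reranking a handful of strings and reading back indices and scores (the results attribute access is an assumption):

from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

docs = [
    "Carson City is the capital city of the American state of Nevada.",
    "The Commonwealth of the Northern Mariana Islands is a group of islands in the Pacific Ocean.",
    "Washington, D.C. is the capital of the United States.",
]
response = client.rerank(
    model="rerank-english-v3.0",
    query="What is the capital of the United States?",
    documents=docs,
    top_n=2,
    return_documents=False,  # only {index, relevance_score} come back
)
for result in response.results:
    print(result.index, result.relevance_score)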
-
max_chunks_per_doc:
typing.Optional[int]
— The maximum number of chunks to produce internally from a document
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.classify(...)
-
-
-
This endpoint makes a prediction about which label fits the specified text inputs best. To make a prediction, Classify uses the provided examples of text + label pairs as a reference. Note: Fine-tuned models trained on classification examples don't require the examples parameter to be passed in explicitly.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.classify(
    inputs=["inputs"],
)
-
-
-
inputs:
typing.Sequence[str]
A list of up to 96 texts to be classified. Each one must be a non-empty string. There is, however, no consistent, universal limit to the length a particular input can be. We perform classification on the first x tokens of each input, and x varies depending on which underlying model is powering classification. The maximum token length for each model is listed in the "max tokens" column here.
Note: by default the truncate parameter is set to END, so tokens exceeding the limit will be automatically dropped. This behavior can be disabled by setting truncate to NONE, which will result in validation errors for longer texts.
-
examples:
typing.Optional[typing.Sequence[ClassifyExample]]
An array of examples to provide context to the model. Each example is a text string and its associated label/class. Each unique label requires at least 2 examples associated with it; the maximum number of examples is 2500, and each example has a maximum length of 512 tokens. The values should be structured as {text: "...", label: "..."}. Note: Fine-tuned models trained on classification examples don't require the examples parameter to be passed in explicitly.
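A minimal sketch with two labels and two examples per label (the labels and texts are illustrative, and the classifications/prediction attributes on the response are assumed):

from cohere import ClassifyExample, Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

response = client.classify(
    inputs=["Confirm your account now", "Are we still on for lunch tomorrow?"],
    examples=[
        ClassifyExample(text="Claim your prize today", label="Spam"),
        ClassifyExample(text="You have won a free cruise", label="Spam"),
        ClassifyExample(text="See you at the meeting", label="Not spam"),
        ClassifyExample(text="Can you send me the report?", label="Not spam"),
    ],
)
for c in response.classifications:
    print(c.input, "->", c.prediction)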
-
model:
typing.Optional[str]
— The identifier of the model. Currently available models are embed-multilingual-v2.0, embed-english-light-v2.0, and embed-english-v2.0 (default). Smaller "light" models are faster, while larger models will perform better. Fine-tuned models can also be supplied with their full ID.
-
preset:
typing.Optional[str]
— The ID of a custom playground preset. You can create presets in the playground. If you use a preset, all other parameters become optional, and any included parameters will override the preset's parameters.
-
truncate:
typing.Optional[ClassifyRequestTruncate]
One of NONE|START|END to specify how the API will handle inputs longer than the maximum token length. Passing START will discard the start of the input. END will discard the end of the input. In both cases, input is discarded until the remaining input is exactly the maximum input token length for the model. If NONE is selected, when the input exceeds the maximum input token length an error will be returned.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.summarize(...)
-
-
- This API is marked as "Legacy" and is no longer maintained. Follow the [migration guide](https://docs.cohere.com/docs/migrating-from-cogenerate-to-cochat) to start using the Chat API. Generates a summary in English for a given text.
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.summarize(
    text="text",
)
-
-
-
text:
str
— The text to generate a summary for. Can be up to 100,000 characters long. Currently the only supported language is English.
-
length:
typing.Optional[SummarizeRequestLength]
— One of short, medium, long, or auto, defaults to auto. Indicates the approximate length of the summary. If auto is selected, the best option will be picked based on the input text.
-
format:
typing.Optional[SummarizeRequestFormat]
— One of paragraph, bullets, or auto, defaults to auto. Indicates the style in which the summary will be delivered - in a free form paragraph or in bullet points. If auto is selected, the best option will be picked based on the input text.
-
model:
typing.Optional[str]
— The identifier of the model to generate the summary with. Currently available models are command (default), command-nightly (experimental), command-light, and command-light-nightly (experimental). Smaller, "light" models are faster, while larger models will perform better.
-
extractiveness:
typing.Optional[SummarizeRequestExtractiveness]
— One of low, medium, high, or auto, defaults to auto. Controls how close to the original text the summary is. high extractiveness summaries will lean towards reusing sentences verbatim, while low extractiveness summaries will tend to paraphrase more. If auto is selected, the best option will be picked based on the input text.
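A minimal sketch of the legacy Summarize call (the parameter choices are illustrative, and the summary attribute on the response is assumed):

from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

article = (
    "Ice cream is a frozen dessert typically made from milk or cream, sweetened and flavoured. "
) * 5  # Summarize expects a reasonably long English text

response = client.summarize(
    text=article,
    length="short",
    format="bullets",
    extractiveness="low",
)
print(response.summary)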
-
temperature:
typing.Optional[float]
— Ranges from 0 to 5. Controls the randomness of the output. Lower values tend to generate more “predictable” output, while higher values tend to generate more “creative” output. The sweet spot is typically between 0 and 1.
-
additional_command:
typing.Optional[str]
— A free-form instruction for modifying how the summaries get generated. Should complete the sentence "Generate a summary _". Eg. "focusing on the next steps" or "written by Yoda"
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.tokenize(...)
-
-
-
This endpoint splits input text into smaller units called tokens using byte-pair encoding (BPE). To learn more about tokenization and byte pair encoding, see the tokens page.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.tokenize(
    text="tokenize me! :D",
    model="command",
)
-
-
-
text:
str
— The string to be tokenized, the minimum text length is 1 character, and the maximum text length is 65536 characters.
-
model:
str
— An optional parameter to provide the model name. This will ensure that the tokenization uses the tokenizer used by that model.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.detokenize(...)
-
-
-
This endpoint takes tokens using byte-pair encoding and returns their text representation. To learn more about tokenization and byte pair encoding, see the tokens page.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.detokenize(
    tokens=[1],
    model="model",
)
-
-
-
tokens:
typing.Sequence[int]
— The list of tokens to be detokenized.
-
model:
str
— An optional parameter to provide the model name. This will ensure that the detokenization is done by the tokenizer used by that model.
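A small round-trip sketch: tokenize a string, then detokenize the resulting token IDs (the tokens/text attributes on the responses are assumed):

from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

tokens = client.tokenize(text="tokenize me! :D", model="command").tokens
text = client.detokenize(tokens=tokens, model="command").text
print(tokens)
print(text)  # should round-trip back to the original string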
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.check_api_key()
-
-
-
Checks that the api key in the Authorization header is valid and active
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.check_api_key()
-
-
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.v2.chat_stream(...)
-
-
-
Generates a text response to a user message and streams it down, token by token. To learn how to use the Chat API with streaming follow our Text Generation guides.
Follow the Migration Guide for instructions on moving from API v1 to API v2.
-
-
-
from cohere import (
    CitationOptions,
    Client,
    TextResponseFormatV2,
    ToolV2,
    ToolV2Function,
    UserChatMessageV2,
)

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
response = client.v2.chat_stream(
    model="string",
    messages=[UserChatMessageV2(content="string")],
    tools=[
        ToolV2(
            function=ToolV2Function(
                name="string",
                description="string",
                parameters={"string": {"key": "value"}},
            ),
        )
    ],
    documents=["string"],
    citation_options=CitationOptions(mode="FAST"),
    response_format=TextResponseFormatV2(),
    safety_mode="CONTEXTUAL",
    max_tokens=1,
    stop_sequences=["string"],
    temperature=1.1,
    seed=1,
    frequency_penalty=1.1,
    presence_penalty=1.1,
    k=1.1,
    p=1.1,
    return_prompt=True,
    logprobs=True,
)
# Iterate over the streamed events as they arrive.
for chunk in response:
    print(chunk)
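A usage sketch for consuming the v2 stream; the content-delta event shape follows Cohere's v2 streaming guide and should be treated as an assumption, as is the model name:

from cohere import Client, UserChatMessageV2

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

stream = client.v2.chat_stream(
    model="command-r-plus-08-2024",
    messages=[UserChatMessageV2(content="Tell me a short joke.")],
)
for event in stream:
    # content-delta events carry incremental text for the assistant message
    if event.type == "content-delta":
        print(event.delta.message.content.text, end="")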
-
-
-
model:
str
— The name of a compatible Cohere model (such as command-r or command-r-plus) or the ID of a fine-tuned model.
-
messages:
ChatMessages
-
tools:
typing.Optional[typing.Sequence[ToolV2]]
A list of available tools (functions) that the model may suggest invoking before producing a text response.
When tools is passed (without tool_results), the text content in the response will be empty and the tool_calls field in the response will be populated with a list of tool calls that need to be made. If no calls need to be made, the tool_calls array will be empty.
-
documents:
typing.Optional[typing.Sequence[V2ChatStreamRequestDocumentsItem]]
— A list of relevant documents that the model can cite to generate a more accurate reply. Each document is either a string or document object with content and metadata.
-
citation_options:
typing.Optional[CitationOptions]
-
response_format:
typing.Optional[ResponseFormatV2]
-
safety_mode:
typing.Optional[V2ChatStreamRequestSafetyMode]
Used to select the safety instruction inserted into the prompt. Defaults to CONTEXTUAL. When OFF is specified, the safety instruction will be omitted.
Safety modes are not yet configurable in combination with the tools, tool_results and documents parameters.
Note: This parameter is only compatible with models Command R 08-2024, Command R+ 08-2024 and newer.
-
max_tokens:
typing.Optional[int]
The maximum number of tokens the model will generate as part of the response.
Note: Setting a low value may result in incomplete generations.
-
stop_sequences:
typing.Optional[typing.Sequence[str]]
— A list of up to 5 strings that the model will use to stop generation. If the model generates a string that matches any of the strings in the list, it will stop generating tokens and return the generated text up to that point not including the stop sequence.
-
temperature:
typing.Optional[float]
Defaults to
0.3
.A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations, and higher temperatures mean more random generations.
Randomness can be further maximized by increasing the value of the
p
parameter.
-
seed:
typing.Optional[int]
If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed.
-
frequency_penalty:
typing.Optional[float]
Defaults to
0.0
, min value of0.0
, max value of1.0
. Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
-
presence_penalty:
typing.Optional[float]
Defaults to
0.0
, min value of0.0
, max value of1.0
. Used to reduce repetitiveness of generated tokens. Similar tofrequency_penalty
, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.
-
k:
typing.Optional[float]
Ensures that only the top k most likely tokens are considered for generation at each step. When k is set to 0, k-sampling is disabled. Defaults to 0, min value of 0, max value of 500.
-
p:
typing.Optional[float]
Ensures that only the most likely tokens, with total probability mass of
p
, are considered for generation at each step. If bothk
andp
are enabled,p
acts afterk
. Defaults to0.75
. min value of0.01
, max value of0.99
.
-
return_prompt:
typing.Optional[bool]
— Whether to return the prompt in the response.
-
logprobs:
typing.Optional[bool]
— Whether to return the log probabilities of the generated tokens. Defaults to false.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.v2.chat(...)
-
-
-
Generates a text response to a user message. To learn how to use the Chat API and RAG follow our Text Generation guides.
Follow the Migration Guide for instructions on moving from API v1 to API v2.
-
-
-
from cohere import Client, ToolChatMessageV2

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.v2.chat(
    model="model",
    messages=[
        ToolChatMessageV2(
            tool_call_id="messages",
            content="messages",
        )
    ],
)
-
-
-
model:
str
— The name of a compatible Cohere model (such as command-r or command-r-plus) or the ID of a fine-tuned model.
-
messages:
ChatMessages
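A minimal non-streaming v2 sketch (the model name and the response.message.content access are assumptions):

from cohere import Client, UserChatMessageV2

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

response = client.v2.chat(
    model="command-r-plus-08-2024",
    messages=[UserChatMessageV2(content="Hello! How are you?")],
)
print(response.message.content[0].text)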
-
tools:
typing.Optional[typing.Sequence[ToolV2]]
A list of available tools (functions) that the model may suggest invoking before producing a text response.
When
tools
is passed (withouttool_results
), thetext
content in the response will be empty and thetool_calls
field in the response will be populated with a list of tool calls that need to be made. If no calls need to be made, thetool_calls
array will be empty.
-
documents:
typing.Optional[typing.Sequence[V2ChatRequestDocumentsItem]]
— A list of relevant documents that the model can cite to generate a more accurate reply. Each document is either a string or document object with content and metadata.
-
citation_options:
typing.Optional[CitationOptions]
-
response_format:
typing.Optional[ResponseFormatV2]
-
safety_mode:
typing.Optional[V2ChatRequestSafetyMode]
Used to select the safety instruction inserted into the prompt. Defaults to CONTEXTUAL. When OFF is specified, the safety instruction will be omitted.
Safety modes are not yet configurable in combination with the tools, tool_results and documents parameters.
Note: This parameter is only compatible with models Command R 08-2024, Command R+ 08-2024 and newer.
-
max_tokens:
typing.Optional[int]
The maximum number of tokens the model will generate as part of the response.
Note: Setting a low value may result in incomplete generations.
-
stop_sequences:
typing.Optional[typing.Sequence[str]]
— A list of up to 5 strings that the model will use to stop generation. If the model generates a string that matches any of the strings in the list, it will stop generating tokens and return the generated text up to that point not including the stop sequence.
-
temperature:
typing.Optional[float]
Defaults to
0.3
.A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations, and higher temperatures mean more random generations.
Randomness can be further maximized by increasing the value of the
p
parameter.
-
seed:
typing.Optional[int]
If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed.
-
frequency_penalty:
typing.Optional[float]
Defaults to
0.0
, min value of0.0
, max value of1.0
. Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
-
presence_penalty:
typing.Optional[float]
Defaults to
0.0
, min value of0.0
, max value of1.0
. Used to reduce repetitiveness of generated tokens. Similar tofrequency_penalty
, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.
-
k:
typing.Optional[float]
Ensures that only the top
k
most likely tokens are considered for generation at each step. Whenk
is set to0
, k-sampling is disabled. Defaults to0
, min value of0
, max value of500
.
-
p:
typing.Optional[float]
Ensures that only the most likely tokens, with total probability mass of
p
, are considered for generation at each step. If bothk
andp
are enabled,p
acts afterk
. Defaults to0.75
. min value of0.01
, max value of0.99
.
-
return_prompt:
typing.Optional[bool]
— Whether to return the prompt in the response.
-
logprobs:
typing.Optional[bool]
— Whether to return the log probabilities of the generated tokens. Defaults to false.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.v2.embed(...)
-
-
-
This endpoint returns text embeddings. An embedding is a list of floating point numbers that captures semantic information about the text that it represents.
Embeddings can be used to create text classifiers as well as empower semantic search. To learn more about embeddings, see the embedding page.
If you want to learn more about how to use the embedding model, have a look at the Semantic Search Guide.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.v2.embed(
    model="model",
    input_type="search_document",
    embedding_types=["float"],
)
-
-
-
model:
str
Defaults to embed-english-v2.0
The identifier of the model. Smaller "light" models are faster, while larger models will perform better. Custom models can also be supplied with their full ID.
Available models and corresponding embedding dimensions:
- embed-english-v3.0: 1024
- embed-multilingual-v3.0: 1024
- embed-english-light-v3.0: 384
- embed-multilingual-light-v3.0: 384
- embed-english-v2.0: 4096
- embed-english-light-v2.0: 1024
- embed-multilingual-v2.0: 768
-
-
input_type:
EmbedInputType
-
embedding_types:
typing.Sequence[EmbeddingType]
Specifies the types of embeddings you want to get back. Not required and default is None, which returns the Embed Floats response type. Can be one or more of the following types.
- "float": Use this when you want to get back the default float embeddings. Valid for all models.
- "int8": Use this when you want to get back signed int8 embeddings. Valid for only v3 models.
- "uint8": Use this when you want to get back unsigned int8 embeddings. Valid for only v3 models.
- "binary": Use this when you want to get back signed binary embeddings. Valid for only v3 models.
- "ubinary": Use this when you want to get back unsigned binary embeddings. Valid for only v3 models.
-
texts:
typing.Optional[typing.Sequence[str]]
— An array of strings for the model to embed. Maximum number of texts per call is 96. We recommend reducing the length of each text to be under 512 tokens for optimal quality.
-
images:
typing.Optional[typing.Sequence[str]]
An array of image data URIs for the model to embed. Maximum number of images per call is 1.
The image must be a valid data URI, in either image/jpeg or image/png format, with a maximum size of 5MB.
-
truncate:
typing.Optional[V2EmbedRequestTruncate]
One of NONE|START|END to specify how the API will handle inputs longer than the maximum token length.
Passing START will discard the start of the input. END will discard the end of the input. In both cases, input is discarded until the remaining input is exactly the maximum input token length for the model.
If NONE is selected, an error will be returned when the input exceeds the maximum input token length.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
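Putting the parameters above together, a sketch of an embed call that requests both float and int8 embeddings for a small batch of texts, with END truncation, might look like the following. The model choice and sample texts are illustrative assumptions; the response simply contains one embedding per input text for each requested type.

from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

response = client.v2.embed(
    model="embed-english-v3.0",         # 1024-dimensional, per the table above
    input_type="search_document",       # embeddings destined for a search index
    texts=["hello", "goodbye"],         # up to 96 texts per call
    embedding_types=["float", "int8"],  # request two embedding representations
    truncate="END",                     # discard the end of over-long inputs
)

# One embedding per input text, grouped by requested embedding type.
print(response)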
-
-
client.v2.rerank(...)
-
-
-
This endpoint takes in a query and a list of texts and produces an ordered array with each text assigned a relevance score.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.v2.rerank(
    model="model",
    query="query",
    documents=["documents"],
)
-
-
-
model:
str
— The identifier of the model to use, one of: rerank-english-v3.0, rerank-multilingual-v3.0, rerank-english-v2.0, rerank-multilingual-v2.0
-
query:
str
— The search query
-
documents:
typing.Sequence[V2RerankRequestDocumentsItem]
A list of document objects or strings to rerank. If a document object is provided, the text field is required and all other fields will be preserved in the response.
The total number of chunks (length of documents * max_chunks_per_doc) must be less than 10000.
We recommend a maximum of 1,000 documents for optimal endpoint performance.
-
top_n:
typing.Optional[int]
— The number of most relevant documents or indices to return; defaults to the length of the documents
-
rank_fields:
typing.Optional[typing.Sequence[str]]
— If a JSON object is provided, you can specify which keys you would like to have considered for reranking. The model will rerank based on the order of the fields passed in (e.g. rank_fields=['title','author','text'] will rerank using the values in title, author, and text sequentially; if the combined length of title, author, and text exceeds the context length of the model, the chunking will not re-consider earlier fields). If not provided, the model will use the default text field for ranking.
-
return_documents:
typing.Optional[bool]
- If false, returns results without the document text: the API will return a list of {index, relevance score}, where index is inferred from the list passed into the request.
- If true, returns results with the document text passed in: the API will return an ordered list of {index, text, relevance score}, where index + text refers to the list passed into the request.
-
max_chunks_per_doc:
typing.Optional[int]
— The maximum number of chunks to produce internally from a document
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
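To make the rerank parameters concrete, here is a sketch that ranks a handful of passages against a query and keeps only the two most relevant ones without echoing the document text back. The query, the documents, and the results/index/relevance_score attribute names are illustrative assumptions.

from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

docs = [
    "Carson City is the capital city of the American state of Nevada.",
    "Washington, D.C. is the capital of the United States.",
    "Capitalization in English grammar is the use of a capital letter at the start of a word.",
]

response = client.v2.rerank(
    model="rerank-english-v3.0",
    query="What is the capital of the United States?",
    documents=docs,
    top_n=2,                 # return only the two most relevant documents
    return_documents=False,  # results carry {index, relevance score} only
)

for result in response.results:  # attribute names assumed
    print(result.index, result.relevance_score)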
-
-
client.embed_jobs.list()
-
-
-
The list embed job endpoint allows users to view the embed job history for that specific user.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.embed_jobs.list()
-
-
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.embed_jobs.create(...)
-
-
-
This API launches an async Embed job for a Dataset of type embed-input. The result of a completed embed job is a new Dataset of type embed-output, which contains the original text entries and the corresponding embeddings.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.embed_jobs.create(
    model="model",
    dataset_id="dataset_id",
    input_type="search_document",
)
-
-
-
model:
str
ID of the embedding model.
Available models and corresponding embedding dimensions:
- embed-english-v3.0: 1024
- embed-multilingual-v3.0: 1024
- embed-english-light-v3.0: 384
- embed-multilingual-light-v3.0: 384
-
dataset_id:
str
— ID of a Dataset. The Dataset must be of type embed-input and must have a validation status of Validated.
-
input_type:
EmbedInputType
-
name:
typing.Optional[str]
— The name of the embed job.
-
embedding_types:
typing.Optional[typing.Sequence[EmbeddingType]]
Specifies the types of embeddings you want to get back. Not required, and the default is None, which returns the Embed Floats response type. Can be one or more of the following types:
- "float": Use this when you want to get back the default float embeddings. Valid for all models.
- "int8": Use this when you want to get back signed int8 embeddings. Valid only for v3 models.
- "uint8": Use this when you want to get back unsigned int8 embeddings. Valid only for v3 models.
- "binary": Use this when you want to get back signed binary embeddings. Valid only for v3 models.
- "ubinary": Use this when you want to get back unsigned binary embeddings. Valid only for v3 models.
-
truncate:
typing.Optional[CreateEmbedJobRequestTruncate]
One of START|END to specify how the API will handle inputs longer than the maximum token length.
Passing START will discard the start of the input. END will discard the end of the input. In both cases, input is discarded until the remaining input is exactly the maximum input token length for the model.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
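A sketch of launching an embed job against an already-validated embed-input Dataset, then looking the job up again with client.embed_jobs.get(), might look like this. The dataset ID is a placeholder, and the job_id/status attributes on the responses are assumptions.

from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

job = client.embed_jobs.create(
    model="embed-english-v3.0",
    dataset_id="my-embed-input-dataset-id",  # placeholder: must be a Validated embed-input Dataset
    input_type="search_document",
    embedding_types=["float"],
    truncate="END",
)

# Later: retrieve the job by ID to check whether it has finished.
job_details = client.embed_jobs.get(id=job.job_id)  # job_id attribute assumed
print(job_details.status)                           # status attribute assumed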
-
-
client.embed_jobs.get(...)
-
-
-
This API retrieves the details about an embed job started by the same user.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.embed_jobs.get(
    id="id",
)
-
-
-
id:
str
— The ID of the embed job to retrieve.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.embed_jobs.cancel(...)
-
-
-
This API allows users to cancel an active embed job. Once invoked, the embedding process will be terminated, and users will be charged for the embeddings processed up to the cancellation point. It's important to note that partial results will not be available to users after cancellation.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.embed_jobs.cancel(
    id="id",
)
-
-
-
id:
str
— The ID of the embed job to cancel.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.datasets.list(...)
-
-
-
List datasets that have been created.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.datasets.list()
-
-
-
dataset_type:
typing.Optional[str]
— Optional filter by dataset type.
-
before:
typing.Optional[dt.datetime]
— Optional filter for datasets created before this date.
-
after:
typing.Optional[dt.datetime]
— Optional filter for datasets created after this date.
-
limit:
typing.Optional[float]
— Optional limit on the number of results.
-
offset:
typing.Optional[float]
— Optional offset to the start of results.
-
validation_status:
typing.Optional[DatasetValidationStatus]
— Optional filter by validation status.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
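As an illustration of combining the filters above, a sketch that lists only validated embed-input datasets created within the last week could look like the following. The validation status value's casing, and the reading of before/after as creation-date filters, are assumptions.

import datetime as dt

from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

one_week_ago = dt.datetime.now(dt.timezone.utc) - dt.timedelta(days=7)

response = client.datasets.list(
    dataset_type="embed-input",
    after=one_week_ago,             # only datasets from after this date
    validation_status="validated",  # value casing assumed; see DatasetValidationStatus
    limit=10,
)
print(response)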
-
-
client.datasets.create(...)
-
-
-
Create a dataset by uploading a file. See 'Dataset Creation' for more information.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.datasets.create(
    name="name",
    type="embed-input",
)
-
-
-
name:
str
— The name of the uploaded dataset.
-
type:
DatasetType
— The dataset type, which is used to validate the data. Valid types are embed-input, reranker-finetune-input, single-label-classification-finetune-input, chat-finetune-input, and multi-label-classification-finetune-input.
-
data:
core.File
— See core.File for more documentation.
-
keep_original_file:
typing.Optional[bool]
— Indicates if the original file should be stored.
-
skip_malformed_input:
typing.Optional[bool]
— Indicates whether rows with malformed input should be dropped (instead of failing the validation check). Dropped rows will be returned in the warnings field.
-
keep_fields:
typing.Optional[typing.Union[str, typing.Sequence[str]]]
— List of names of fields that will be persisted in the Dataset. By default the Dataset will retain only the required fields indicated in the schema for the corresponding Dataset type. For example, datasets of type embed-input will drop all fields other than the required text field. If any of the fields in keep_fields are missing from the uploaded file, Dataset validation will fail.
-
optional_fields:
typing.Optional[typing.Union[str, typing.Sequence[str]]]
— List of names of fields that will be persisted in the Dataset. By default the Dataset will retain only the required fields indicated in the schema for the corresponding Dataset type. For example, Datasets of type embed-input will drop all fields other than the required text field. If any of the fields in optional_fields are missing from the uploaded file, Dataset validation will pass.
-
text_separator:
typing.Optional[str]
— Raw .txt uploads will be split into entries using the text_separator value.
-
csv_delimiter:
typing.Optional[str]
— The delimiter used for .csv uploads.
-
dry_run:
typing.Optional[bool]
— Flag to enable dry_run mode.
-
eval_data:
typing.Optional[core.File]
— See core.File for more documentation.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
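A sketch of uploading a local file as an embed-input dataset, assuming a JSONL file where each line carries the required text field, might look like this. The file path and dataset name are placeholders.

from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

with open("embed_input.jsonl", "rb") as f:  # placeholder path
    response = client.datasets.create(
        name="my-embed-dataset",
        type="embed-input",
        data=f,                       # core.File accepts a file-like object
        keep_original_file=True,
        skip_malformed_input=True,    # drop malformed rows instead of failing validation
    )
print(response)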
-
-
client.datasets.get_usage()
-
-
-
View the dataset storage usage for your Organization. Each Organization can have up to 10GB of storage across all of its users.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.datasets.get_usage()
-
-
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.datasets.get(...)
-
-
-
Retrieve a dataset by ID. See 'Datasets' for more information.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.datasets.get(
    id="id",
)
-
-
-
id:
str
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.datasets.delete(...)
-
-
-
Delete a dataset by ID. Datasets are automatically deleted after 30 days, but they can also be deleted manually.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.datasets.delete(
    id="id",
)
-
-
-
id:
str
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.connectors.list(...)
-
-
-
Returns a list of connectors ordered by descending creation date (newer first). See 'Managing your Connector' for more information.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.connectors.list()
-
-
-
limit:
typing.Optional[float]
— Maximum number of connectors to return [0, 100].
-
offset:
typing.Optional[float]
— Number of connectors to skip before returning results [0, inf].
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.connectors.create(...)
-
-
-
Creates a new connector. The connector is tested during registration, and registration is cancelled if the test is unsuccessful. See 'Creating and Deploying a Connector' for more information.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.connectors.create(
    name="name",
    url="url",
)
-
-
-
name:
str
— A human-readable name for the connector.
-
url:
str
— The URL of the connector that will be used to search for documents.
-
description:
typing.Optional[str]
— A description of the connector.
-
excludes:
typing.Optional[typing.Sequence[str]]
— A list of fields to exclude from the prompt (fields remain in the document).
-
oauth:
typing.Optional[CreateConnectorOAuth]
— The OAuth 2.0 configuration for the connector. Cannot be specified if service_auth is specified.
-
active:
typing.Optional[bool]
— Whether the connector is active or not.
-
continue_on_failure:
typing.Optional[bool]
— Whether a chat request should continue or not if the request to this connector fails.
-
service_auth:
typing.Optional[CreateConnectorServiceAuth]
— The service-to-service authentication configuration for the connector. Cannot be specified if oauth is specified.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
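A sketch of registering a connector that Chat can query for documents follows. The URL is a placeholder; because the connector is tested during registration, the endpoint would need to be reachable for the call to succeed.

from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

connector = client.connectors.create(
    name="Internal wiki",
    url="https://connector.example.com/search",  # placeholder connector endpoint
    description="Searches the internal wiki for relevant pages.",
    excludes=["internal_id"],  # keep this document field out of the prompt
    continue_on_failure=True,  # do not fail chat requests if this connector errors
    active=True,
)
print(connector)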
-
-
client.connectors.get(...)
-
-
-
Retrieve a connector by ID. See 'Connectors' for more information.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.connectors.get(
    id="id",
)
-
-
-
id:
str
— The ID of the connector to retrieve.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.connectors.delete(...)
-
-
-
Delete a connector by ID. See 'Connectors' for more information.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.connectors.delete(
    id="id",
)
-
-
-
id:
str
— The ID of the connector to delete.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.connectors.update(...)
-
-
-
Update a connector by ID. Omitted fields will not be updated. See 'Managing your Connector' for more information.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.connectors.update(
    id="id",
)
-
-
-
id:
str
— The ID of the connector to update.
-
name:
typing.Optional[str]
— A human-readable name for the connector.
-
url:
typing.Optional[str]
— The URL of the connector that will be used to search for documents.
-
excludes:
typing.Optional[typing.Sequence[str]]
— A list of fields to exclude from the prompt (fields remain in the document).
-
oauth:
typing.Optional[CreateConnectorOAuth]
— The OAuth 2.0 configuration for the connector. Cannot be specified if service_auth is specified.
-
active:
typing.Optional[bool]
-
continue_on_failure:
typing.Optional[bool]
-
service_auth:
typing.Optional[CreateConnectorServiceAuth]
— The service-to-service authentication configuration for the connector. Cannot be specified if oauth is specified.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.connectors.o_auth_authorize(...)
-
-
-
Authorize the connector with the given ID for the connector OAuth app. See 'Connector Authentication' for more information.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.connectors.o_auth_authorize(
    id="id",
)
-
-
-
id:
str
— The ID of the connector to authorize.
-
after_token_redirect:
typing.Optional[str]
— The URL to redirect to after the connector has been authorized.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.models.get(...)
-
-
-
Returns the details of a model, provided its name.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.models.get(
    model="command-r",
)
-
-
-
model:
str
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.models.list(...)
-
-
-
Returns a list of models available for use. The list contains models from Cohere as well as your fine-tuned models.
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.models.list()
-
-
-
page_size:
typing.Optional[float]
Maximum number of models to include in a page. Defaults to 20, min value of 1, max value of 1000.
-
page_token:
typing.Optional[str]
— Page token provided in the next_page_token field of a previous response.
-
endpoint:
typing.Optional[CompatibleEndpoint]
— When provided, filters the list of models to only those that are compatible with the specified endpoint.
-
default_only:
typing.Optional[bool]
— When provided, filters the list of models to only the default model for the endpoint. This parameter is only valid when endpoint is provided.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
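Since the model list is paginated via page_token/next_page_token, a sketch of walking through every chat-compatible model could look like the following. The models and next_page_token response field names are assumptions.

from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

page_token = None
while True:
    page = client.models.list(
        page_size=20,
        page_token=page_token,
        endpoint="chat",         # only models compatible with the Chat endpoint
    )
    for model in page.models:    # response field name assumed
        print(model.name)
    page_token = page.next_page_token
    if not page_token:
        break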
-
-
client.finetuning.list_finetuned_models(...)
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.finetuning.list_finetuned_models()
-
-
-
page_size:
typing.Optional[int]
— Maximum number of results to be returned by the server. If 0, defaults to 50.
-
page_token:
typing.Optional[str]
— Request a specific page of the list results.
-
order_by:
typing.Optional[str]
Comma-separated list of fields. For example: "created_at,name". The default sorting order is ascending. To specify descending order for a field, append " desc" to the field name. For example: "created_at desc,name".
Supported sorting fields:
- created_at (default)
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
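A sketch of listing fine-tuned models newest-first with the order_by syntax described above might look like this; the finetuned_models response field name is an assumption.

from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

response = client.finetuning.list_finetuned_models(
    page_size=10,
    order_by="created_at desc",  # append " desc" for descending order
)

for model in response.finetuned_models:  # response field name assumed
    print(model.name, model.status)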
-
-
client.finetuning.create_finetuned_model(...)
-
-
-
from cohere import Client
from cohere.finetuning.finetuning import BaseModel, FinetunedModel, Settings

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.finetuning.create_finetuned_model(
    request=FinetunedModel(
        name="api-test",
        settings=Settings(
            base_model=BaseModel(
                base_type="BASE_TYPE_CHAT",
            ),
            dataset_id="my-dataset-id",
        ),
    ),
)
-
-
-
request:
FinetunedModel
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.finetuning.get_finetuned_model(...)
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.finetuning.get_finetuned_model(
    id="id",
)
-
-
-
id:
str
— The fine-tuned model ID.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.finetuning.delete_finetuned_model(...)
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.finetuning.delete_finetuned_model(
    id="id",
)
-
-
-
id:
str
— The fine-tuned model ID.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.finetuning.update_finetuned_model(...)
-
-
-
from cohere import Client
from cohere.finetuning.finetuning import BaseModel, Settings

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.finetuning.update_finetuned_model(
    id="id",
    name="name",
    settings=Settings(
        base_model=BaseModel(
            base_type="BASE_TYPE_UNSPECIFIED",
        ),
        dataset_id="dataset_id",
    ),
)
-
-
-
id:
str
— FinetunedModel ID.
-
name:
str
— FinetunedModel name (e.g. foobar).
-
settings:
Settings
— FinetunedModel settings such as dataset, hyperparameters...
-
creator_id:
typing.Optional[str]
— User ID of the creator.
-
organization_id:
typing.Optional[str]
— Organization ID.
-
status:
typing.Optional[Status]
— Current stage in the life-cycle of the fine-tuned model.
-
created_at:
typing.Optional[dt.datetime]
— Creation timestamp.
-
updated_at:
typing.Optional[dt.datetime]
— Latest update timestamp.
-
completed_at:
typing.Optional[dt.datetime]
— Timestamp for the completed fine-tuning.
-
last_used:
typing.Optional[dt.datetime]
— Timestamp for the latest request to this fine-tuned model.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.finetuning.list_events(...)
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.finetuning.list_events(
    finetuned_model_id="finetuned_model_id",
)
-
-
-
finetuned_model_id:
str
— The parent fine-tuned model ID.
-
page_size:
typing.Optional[int]
— Maximum number of results to be returned by the server. If 0, defaults to 50.
-
page_token:
typing.Optional[str]
— Request a specific page of the list results.
-
order_by:
typing.Optional[str]
Comma-separated list of fields. For example: "created_at,name". The default sorting order is ascending. To specify descending order for a field, append " desc" to the field name. For example: "created_at desc,name".
Supported sorting fields:
- created_at (default)
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
-
-
client.finetuning.list_training_step_metrics(...)
-
-
-
from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.finetuning.list_training_step_metrics(
    finetuned_model_id="finetuned_model_id",
)
-
-
-
finetuned_model_id:
str
— The parent fine-tuned model ID.
-
page_size:
typing.Optional[int]
— Maximum number of results to be returned by the server. If 0, defaults to 50.
-
page_token:
typing.Optional[str]
— Request a specific page of the list results.
-
request_options:
typing.Optional[RequestOptions]
— Request-specific configuration.
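Finally, a sketch of paging through a fine-tune's training-step metrics with page_token could look like the following; the step_metrics and next_page_token response field names are assumptions.

from cohere import Client

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

page_token = None
while True:
    page = client.finetuning.list_training_step_metrics(
        finetuned_model_id="finetuned_model_id",  # placeholder ID
        page_size=50,
        page_token=page_token,
    )
    for metrics in page.step_metrics:  # response field name assumed
        print(metrics)
    page_token = page.next_page_token
    if not page_token:
        break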
-
-