Releases: Aleph-Alpha/aleph-alpha-client
v3.1.0
What's Changed
Features
New .explain() method 🎉
Better understand how a completion came about, and specifically how much each section of the prompt impacted the completion.
To get started, simply pass in the prompt you used with a model together with the completion the model returned, and generate an explanation:
```python
import os

from aleph_alpha_client import Client, CompletionRequest, ExplanationRequest, Prompt

client = Client(token=os.environ["AA_TOKEN"])
prompt = Prompt.from_text("An apple a day, ")
model_name = "luminous-extended"

# create a completion request
request = CompletionRequest(prompt=prompt, maximum_tokens=32)
response = client.complete(request, model=model_name)

# generate an explanation for the returned completion
request = ExplanationRequest(prompt=prompt, target=response.completions[0].completion)
response = client.explain(request, model=model_name)
```
To see the results visually, you can also use this in our Playground.
We also have more documentation and examples available for you to read.
AtMan (Attention Manipulation)
Under the hood, we leverage the method from our AtMan paper to generate these explanations. We've also exposed these controls anywhere you can submit a prompt, so if you have other use cases for attention manipulation, you can pass AtMan controls as part of your prompt items.
```python
from aleph_alpha_client import Image, ImageControl, Prompt, Text, TextControl

Prompt([
    # suppress attention on "Hello" (the first 5 characters of the text item)
    Text("Hello, World!", controls=[TextControl(start=0, length=5, factor=0.5)]),
    # amplify attention on the center region of the image
    Image.from_url(
        "https://cdn-images-1.medium.com/max/1200/1*HunNdlTmoPj8EKpl-jqvBA.png",
        controls=[ImageControl(top=0.25, left=0.25, height=0.5, width=0.5, factor=2.0)],
    ),
])
```
For more information, check out our documentation and examples.
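A control-annotated prompt is submitted like any other prompt. As a minimal sketch (the prompt text, control range, and model name are illustrative placeholders, not from the release notes), it can be passed to a regular completion request:

```python
import os

from aleph_alpha_client import Client, CompletionRequest, Prompt, Text, TextControl

client = Client(token=os.environ["AA_TOKEN"])

# down-weight the greeting so the model pays more attention to the rest of the text
prompt = Prompt([
    Text("Hello, World! An apple a day, ", controls=[TextControl(start=0, length=13, factor=0.1)]),
])

request = CompletionRequest(prompt=prompt, maximum_tokens=32)
response = client.complete(request, model="luminous-extended")
print(response.completions[0].completion)
```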
Full Changelog: v3.0.0...v3.1.0
v3.0.0
What's Changed
Breaking Changes
- Removed deprecated AlephAlphaClient and AlephAlphaModel. Use Client or AsyncClient instead.
- Removed deprecated ImagePrompt. Import Image instead for image prompt items.
- New Q&A interface. We've improved the Q&A implementation, and most parameters are no longer needed.
  - You only need to specify your documents, a query, and (optionally) the maximum number of answers you want to receive (see the sketch after this list).
  - You no longer specify a model.
- Removed "model" parameter from summarize method
- Removed "model_version" from SummarizationResponse
Full Changelog: v2.17.0...v3.0.0
v2.17.0
Features
- Allow specifying token overlap behavior in AtMan by @benbrandt in #106
Bug Fixes
- Better handle case when Prompt is supplied a string instead of a list by @benbrandt in #107
v2.16.1: Update Readme and align http proxy behavior for AsyncClient
What's Changed
- Readme update by @capsenz in #95
- Respect http(s)_proxy env vars in AsyncClient (like in Client) by @L3viathan in #96
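As a minimal sketch of the proxy behavior (the proxy address is a placeholder, and you would normally set the variable in your shell rather than in-process):

```python
import asyncio
import os

from aleph_alpha_client import AsyncClient

# AsyncClient now honors the same proxy environment variables as Client
os.environ["https_proxy"] = "http://proxy.example.com:3128"

async def main():
    async with AsyncClient(token=os.environ["AA_TOKEN"]) as client:
        print(await client.models())

asyncio.run(main())
```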
New Contributors
- @L3viathan made their first contribution in #96
Full Changelog: v2.16.0...v2.16.1
v2.16.0
- Add Image.from_image_source to create an Image from a variety of sources
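As an illustrative sketch (the URL and file path are placeholders; see the method's docstring for the exact set of accepted source types):

```python
from aleph_alpha_client import Image

# from a URL (placeholder address)
image_from_url = Image.from_image_source("https://example.com/apple.png")

# from a local file path (placeholder path)
image_from_file = Image.from_image_source("apple.png")
```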
Full Changelog: v2.15.0...v2.16.0
v2.15.0
- Add completion parameters:
  - repetition_penalties_include_completion
  - raw_completion
  See the respective documentation for details (a short sketch follows after this list).
- Make deserialization of response JSON forward compatible (i.e. ignore additional fields)
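A minimal sketch of the two new completion parameters (prompt and model name are placeholders; see the parameter documentation for the exact semantics):

```python
import os

from aleph_alpha_client import Client, CompletionRequest, Prompt

client = Client(token=os.environ["AA_TOKEN"])

request = CompletionRequest(
    prompt=Prompt.from_text("An apple a day, "),
    maximum_tokens=32,
    # whether repetition penalties also take already generated completion tokens into account
    repetition_penalties_include_completion=True,
    # additionally return the model's raw, unprocessed completion
    raw_completion=True,
)
response = client.complete(request, model="luminous-extended")
print(response.completions[0].completion)
```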
Full Changelog: v2.14.0...v2.15.0
v2.14.0: Attention Manipulation for Images
Full Changelog: v2.13.0...v2.14.0
v2.13.0
- Add support for text attention manipulation
Full Changelog: v2.12.0...v2.13.0
v2.12.0: Introduce instantiation of offline tokenizer
What's Changed
- Introduce offline tokenizer (see the sketch after this list)
- Add method models to Client and AsyncClient to list available models
- Fix docstrings for complete methods with respect to Prompt construction
- Minor docstring fix for evaluate methods
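A minimal sketch of the offline tokenizer and the models method, assuming the returned tokenizer behaves like a Hugging Face tokenizers.Tokenizer and that luminous-extended is available to your account:

```python
import os

from aleph_alpha_client import Client

client = Client(token=os.environ["AA_TOKEN"])

# list the models currently available to your account
for model in client.models():
    print(model)

# fetch the tokenizer once, then tokenize and detokenize locally without further API calls
tokenizer = client.tokenizer("luminous-extended")
encoding = tokenizer.encode("An apple a day")
print(encoding.ids)
print(tokenizer.decode(encoding.ids))
```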
Full Changelog: v2.11.1...v2.12.0
v2.11.1
Fix for deprecated AlephAlphaClient.complete
Full Changelog: v2.11.0...v2.11.1