A Utility Library for Go Developers
Install the package:

```sh
go get github.com/shaharia-lab/guti
```

Then import it and call any helper:
```go
import (
    "github.com/shaharia-lab/guti"
)

guti.ContainsAll()
```
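As a quick taste, here is a hypothetical call. The exact signature of `ContainsAll` may differ from this sketch, so check the package documentation on pkg.go.dev before copying it:

```go
package main

import (
    "fmt"

    "github.com/shaharia-lab/guti"
)

func main() {
    // Hypothetical usage: assumes ContainsAll reports whether the first
    // slice contains every element of the second. Verify the real
    // signature in the package docs before relying on this sketch.
    ok := guti.ContainsAll([]int{1, 2, 3, 4}, []int{2, 4})
    fmt.Println(ok)
}
```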
The `ai` package provides a flexible interface for interacting with large language models (LLMs). It currently supports OpenAI's GPT models and exposes an extensible interface for adding other providers.
```go
package main

import (
    "fmt"
    "log"

    "github.com/shaharia-lab/guti/ai"
)

func main() {
    // Create an OpenAI provider
    provider := ai.NewOpenAILLMProvider(ai.OpenAIProviderConfig{
        APIKey: "your-api-key",
        Model:  "gpt-3.5-turbo", // Optional, defaults to gpt-3.5-turbo
    })

    // Create a request with the default configuration
    request := ai.NewLLMRequest(ai.NewRequestConfig())

    // Generate a response
    response, err := request.Generate([]ai.LLMMessage{{Role: "user", Text: "What is the capital of France?"}}, provider)
    if err != nil {
        log.Fatal(err)
    }

    fmt.Printf("Response: %s\n", response.Text)
    fmt.Printf("Input tokens: %d\n", response.TotalInputToken)
    fmt.Printf("Output tokens: %d\n", response.TotalOutputToken)
    fmt.Printf("Completion time: %.2f seconds\n", response.CompletionTime)
}
```
You can customize the LLM request configuration using the functional options pattern:
```go
// Use specific configuration options
config := ai.NewRequestConfig(
    ai.WithMaxToken(2000),
    ai.WithTemperature(0.8),
    ai.WithTopP(0.95),
    ai.WithTopK(100),
)

request := ai.NewLLMRequest(config)
```
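Because these are functional options, they compose: you can override a single knob and keep the documented defaults for everything else. A minimal sketch:

```go
// Only raise the temperature; MaxToken, TopP, and TopK keep their defaults
config := ai.NewRequestConfig(ai.WithTemperature(1.2))
request := ai.NewLLMRequest(config)
```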
The package also supports templated prompts:
```go
template := &ai.LLMPromptTemplate{
    Template: "Hello {{.Name}}! Please tell me about {{.Topic}}.",
    Data: map[string]interface{}{
        "Name":  "Alice",
        "Topic": "artificial intelligence",
    },
}

prompt, err := template.Parse()
if err != nil {
    log.Fatal(err)
}

// Parse returns a plain string, so wrap it in an LLMMessage before sending
response, err := request.Generate([]ai.LLMMessage{{Role: "user", Text: prompt}}, provider)
```
Available configuration options:

| Option | Default | Description |
|---|---|---|
| `MaxToken` | 1000 | Maximum number of tokens to generate |
| `TopP` | 0.9 | Nucleus sampling parameter (0-1) |
| `Temperature` | 0.7 | Randomness in output (0-2) |
| `TopK` | 50 | Top-k sampling parameter |
The package provides structured error handling:
```go
response, err := request.Generate(messages, provider)
if err != nil {
    if llmErr, ok := err.(*ai.LLMError); ok {
        fmt.Printf("LLM Error %d: %s\n", llmErr.Code, llmErr.Message)
    } else {
        fmt.Printf("Error: %v\n", err)
    }
}
```
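If the error may be wrapped by intervening layers, `errors.As` is the more defensive check. A small sketch, assuming the same `Code` and `Message` fields shown above:

```go
// requires "errors" from the standard library
var llmErr *ai.LLMError
if errors.As(err, &llmErr) {
    fmt.Printf("LLM Error %d: %s\n", llmErr.Code, llmErr.Message)
}
```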
To add support for additional LLM providers, implement the `LLMProvider` interface:
```go
type LLMProvider interface {
    GetResponse(messages []LLMMessage, config LLMRequestConfig) (LLMResponse, error)
}
```
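For instance, a trivial provider that echoes the last message back could look like the following sketch. It relies only on the `LLMMessage` and `LLMResponse` fields used in the examples above; everything else is illustrative rather than a prescribed pattern:

```go
package myprovider

import (
    "fmt"

    "github.com/shaharia-lab/guti/ai"
)

// EchoProvider is an illustrative provider that echoes the last message
// back instead of calling a real model. It exists only to show the shape
// of an LLMProvider implementation.
type EchoProvider struct{}

func (p *EchoProvider) GetResponse(messages []ai.LLMMessage, config ai.LLMRequestConfig) (ai.LLMResponse, error) {
    if len(messages) == 0 {
        return ai.LLMResponse{}, fmt.Errorf("no messages provided")
    }
    last := messages[len(messages)-1]
    return ai.LLMResponse{Text: "echo: " + last.Text}, nil
}
```

Any value satisfying the interface can then be passed wherever the OpenAI provider was used, e.g. `request.Generate(messages, &EchoProvider{})`.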
Full documentation is available at [pkg.go.dev/github.com/shaharia-lab/guti](https://pkg.go.dev/github.com/shaharia-lab/guti).
This project is licensed under the MIT License - see the LICENSE file for details.