
LangChain support for Orchestration Client #176

Open
kay-schmitteckert opened this issue Sep 25, 2024 · 1 comment
Labels
feature request New feature or request

Comments


kay-schmitteckert commented Sep 25, 2024

Describe the Problem

The Orchestration Client already simplifies developing and kickstarting GenAI projects and communicating with foundation models. LangChain support is now available for OpenAI; instead of writing a separate wrapper for each vendor, the idea is to provide a LangChain wrapper for the Orchestration Client, which would cover all supported models at once.

Propose a Solution

A LangChain wrapper for the Orchestration Client, e.g.:

import { LLM, type BaseLLMParams } from "@langchain/core/language_models/llms";
import type { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";

import { OrchestrationClient, OrchestrationModuleConfig } from "@sap-ai-sdk/orchestration";

export interface CustomLLMInput extends BaseLLMParams {
    deploymentId: string;
    resourceGroup?: string;
    modelName: string;
    modelParams?: Record<string, unknown>;
    modelVersion?: string;
}

export class GenerativeAIHubCompletion extends LLM {
    deploymentId: string;
    resourceGroup: string;
    modelName: string;
    modelParams: Record<string, unknown>;
    modelVersion: string;

    constructor(fields: CustomLLMInput) {
        super(fields);
        this.deploymentId = fields.deploymentId;
        this.resourceGroup = fields.resourceGroup || "default";
        this.modelName = fields.modelName;
        this.modelParams = fields.modelParams || {};
        this.modelVersion = fields.modelVersion || "latest";
    }

    _llmType() {
        return "Generative AI Hub - Orchestration Service";
    }

    async _call(
        prompt: string,
        options: this["ParsedCallOptions"],
        runManager?: CallbackManagerForLLMRun
    ): Promise<string> {

        // Model configuration for the orchestration service.
        const llmConfig = {
            model_name: this.modelName,
            model_params: this.modelParams,
            model_version: this.modelVersion
        };

        // Templating module: the prompt is injected via the {{?prompt}} placeholder.
        const config: OrchestrationModuleConfig = {
            templating: {
                template: [{ role: "user", content: "{{?prompt}}" }]
            },
            llm: llmConfig
        };

        // Orchestration Client
        const orchestrationClient = new OrchestrationClient(config, {
            resourceGroup: this.resourceGroup
            //deploymentId: this.deploymentId
        });

        // Call the orchestration service.
        const response = await orchestrationClient.chatCompletion({
            inputParams: { prompt }
        });
        // Access the response content.
        return response.getContent();
    }
}
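
Such a wrapper would then plug into the standard LangChain interfaces. A minimal usage sketch, assuming the class above (the model name and deployment ID below are placeholders):

// Hypothetical usage of the proposed wrapper; all values are placeholders.
const llm = new GenerativeAIHubCompletion({
    deploymentId: "<deployment-id>",
    modelName: "gpt-4o",
    modelParams: { max_tokens: 256 }
});

// invoke() comes from LangChain's Runnable interface, so the wrapper
// would compose with prompts, chains and agents out of the box.
const answer = await llm.invoke("What is SAP AI Core?");
console.log(answer);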

Describe Alternatives

No response

Affected Development Phase

Getting Started

Impact

Inconvenience

Timeline

No response

Additional Context

No response

ZhongpinWang added the feature request label on Sep 25, 2024
jjtang1985 (Contributor) commented
Thank you very much for raising this feature request.
The orchestration service runs the following modules in a pipeline:

  • LLM access itself, via the templating module
    • The harmonised API allows using LLMs from different vendors
  • Content filtering
  • Data masking
  • Grounding (coming soon)
  • ...

Therefore, it's more than a service for LLM access.

If we build an adapter like the one in our LangChain package, users would be able to use the original LangChain APIs with SAP GenAI Hub.

However, LangChain knows nothing about the orchestration modules beyond LLM access, so we would need some API extensions (see the sketch after this list), e.g.:

  • configuring content filtering
  • response/error handling for content filtering, with a new API design, which might not look like a LangChain client?
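
A hypothetical sketch of what such an extension could look like, assuming the wrapper from above simply forwards orchestration-only modules into its OrchestrationModuleConfig. The property names are derived from the orchestration config type; everything else here is illustrative, not an actual API proposal:

import type { OrchestrationModuleConfig } from "@sap-ai-sdk/orchestration";

// Hypothetical input type: forward orchestration-only modules (content
// filtering, data masking, ...) through the LangChain wrapper untouched.
export interface OrchestrationLLMInput extends CustomLLMInput {
    filtering?: OrchestrationModuleConfig["filtering"];
    masking?: OrchestrationModuleConfig["masking"];
}

// Inside _call, the wrapper would then merge these modules into its config:
// const config: OrchestrationModuleConfig = {
//     templating: { template: [{ role: "user", content: "{{?prompt}}" }] },
//     llm: llmConfig,
//     filtering: this.filtering,
//     masking: this.masking
// };

The open question from the second bullet remains: when a content filter blocks a request, surfacing that as a LangChain result rather than a plain error would need a response-handling design of its own.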

This seems to be a big epic.
I would like to understand the use cases in more detail, so that we can split the task.
