Version: v1.x

LangChain LangServe

LangChain is a popular framework for building backend services powered by large language models (LLMs). It enables context-aware applications by connecting a language model to sources of context such as databases, APIs, and other services, and by relying on the model to reason about that context.

LangServe is a library that ships with LangChain and allows developers to build RESTful APIs for interacting with language models. A LangServe API provides a set of pre-defined endpoints and an input/output schema that can be used to interact with the language model.

This documentation page is about the standard adapter provided by NLUX for APIs built using LangServe.

Installation

npm install @nlux/langchain-react

Usage

import {useChatAdapter} from '@nlux/langchain-react';
import {AiChat} from '@nlux/react';

export default function App() {
    const adapter = useChatAdapter({
        url: 'https://<Your LangServe Runnable URL>',
    });

    return <AiChat adapter={adapter} />;
}

Configuration

You can configure the LangServe adapter in React by passing a config object to the useChatAdapter() function.
The config object has the following properties:


Runnable URL

This is the URL of the LangServe API.
Example: https://my-langserver-api.example.com/parrot-speaks

Do NOT append a specific LangServe endpoint (such as /invoke, /stream, or /input_schema) to the URL. The adapter automatically appends the appropriate endpoint to the URL based on the rest of the configuration.

  • Property: url
  • Type: string
  • Required: true
  • Usage:
const adapter = useChatAdapter({
    url: 'https://<Your LangServe Runnable URL>'
});

Data Transfer Mode

The data transfer mode to use when communicating with the LangServe runnable. The NLUX LangServe adapter supports stream and fetch modes.

When the adapter is configured to use stream mode, the /stream endpoint of the LangServe API will be used to communicate with the runnable. When the adapter is configured to use fetch mode, the /invoke endpoint will be used.

  • Property: dataTransferMode
  • Type: 'stream' | 'fetch'
  • Required: false
  • Default: 'stream'
  • Usage:
const adapter = useChatAdapter({
    url: 'https://<Your LangServe Runnable URL>',
    dataTransferMode: 'fetch'
});
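The mode-to-endpoint mapping described above can be sketched as follows. This is a simplified illustration, not the adapter's actual internals, and endpointFor is a hypothetical helper used only for this example:

```typescript
type DataTransferMode = 'stream' | 'fetch';

// Sketch: derive the endpoint from the base runnable URL and the mode.
// 'stream' → POST <url>/stream (chunked streaming response)
// 'fetch'  → POST <url>/invoke (single JSON response)
function endpointFor(runnableUrl: string, mode: DataTransferMode): string {
    const base = runnableUrl.replace(/\/$/, '');
    return mode === 'stream' ? `${base}/stream` : `${base}/invoke`;
}

console.log(endpointFor('https://my-langserve-api.example.com/parrot-speaks', 'stream'));
// https://my-langserve-api.example.com/parrot-speaks/stream
```

This is also why the url option must point at the base runnable URL only: the adapter picks the endpoint for you.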

Input Pre-Processor

The input pre-processor is a function that is called before sending the input to the LangServe API. It can be used to transform the input before sending it to the runnable.

Whatever this function returns will be sent to the runnable under the input property.

Example: If your runnable expects an object with user_prompt and additional_data properties, you can use an input pre-processor such as the one in the example below to transform the input.

  • Property: inputPreProcessor
  • Type: LangServeInputPreProcessor
  • Required: false
  • Usage:
const myInputPreProcessor: LangServeInputPreProcessor = (
    input: string,
    conversationHistory?: readonly ConversationItem[],
): any => {
    return {
        user_prompt: input,
        // Additional data can be added here
        additional_data: {
            value_1: getDataDynamically(),
            value_2: 'some string'
        }
    };
};

const adapter = useChatAdapter({
    url: 'https://<Your LangServe Runnable URL>',
    inputPreProcessor: myInputPreProcessor
});
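To make the "returned under the input property" rule concrete, here is a sketch of the request body that would result from a pre-processor like the one above. The shape is a simplified assumption for illustration, with a hard-coded value standing in for getDataDynamically():

```typescript
// Sketch: whatever the input pre-processor returns is wrapped under the
// `input` property of the JSON body POSTed to the runnable endpoint.
const preProcessed = {
    user_prompt: 'Hello, parrot!',
    additional_data: {value_1: 42, value_2: 'some string'},
};

const requestBody = JSON.stringify({input: preProcessed});
console.log(requestBody);
```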

Output Pre-Processor

The output pre-processor is a function that is called after receiving a response from the LangServe API. Its output parameter receives the part of the runnable's response returned under the output property.

This function returns a string that will be displayed in the NLUX conversational UI.

Example: If your runnable returns an object with value and base properties, you can use an output pre-processor such as the one in the example below to transform the output into a string.

  • Property: outputPreProcessor
  • Type: LangServeOutputPreProcessor
  • Required: false
  • Usage:
// Example: We check for a number and format it as a string before returning it to the user
const myOutputPreProcessor: LangServeOutputPreProcessor = (output: any) => {
    if (output.value > 1000000000) {
        return 'Too big!';
    }

    return output.value.toString(
        output.base === 2 ? 2 : 10
    );
};

const adapter = useChatAdapter({
    url: 'https://<Your LangServe Runnable URL>',
    outputPreProcessor: myOutputPreProcessor
});
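For illustration, here is how the extraction described above plays out end to end, using an assumed response shape and a plain formatting function equivalent to the pre-processor logic shown earlier:

```typescript
// Sketch (assumed response shape): the runnable's JSON response nests the
// result under `output`; the adapter hands that value to the pre-processor.
const response = {output: {value: 5, base: 2}};

const formatValue = (output: any): string =>
    output.value > 1000000000
        ? 'Too big!'
        : output.value.toString(output.base === 2 ? 2 : 10);

console.log(formatValue(response.output)); // '101' (5 in binary)
```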

Use Input Schema

When no inputPreProcessor is provided, the LangServe adapter will attempt to call the /input_schema endpoint of the LangServe runnable and build the input according to the retrieved schema. Set this option to false to disable this behavior.

If disabled, the input will be sent to the runnable as a string without any transformation.

  • Property: useInputSchema
  • Type: boolean
  • Required: false
  • Default: true
  • Usage:
const adapter = useChatAdapter({
    url: 'https://<My LangServe Runnable URL>',
    useInputSchema: false
});
});
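A sketch of the fallback described above, in which the prompt is sent as a plain string. This is a simplified assumption about the request shape, not the adapter's actual internals:

```typescript
// Sketch: with useInputSchema set to false and no inputPreProcessor,
// the prompt string is sent to the runnable as-is under `input`,
// without any schema-driven transformation.
function buildRequestBody(prompt: string): string {
    return JSON.stringify({input: prompt});
}

console.log(buildRequestBody('Hello, parrot!'));
// {"input":"Hello, parrot!"}
```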