Version: v2.x

LangChain LangServe

LangChain is a popular framework for building backend services powered by large language models. It enables context-aware applications by connecting a language model to sources of context such as databases, APIs, and other services, and it relies on the language model to reason about that context.

LangServe is a library that comes with LangChain and allows developers to build RESTful APIs for interacting with language models. A LangServe API provides a set of pre-defined endpoints and an input/output schema that can be used to interact with the language model.

This documentation page is about the standard adapter provided by NLUX for APIs built using LangServe.


Installation

npm install @nlux/langchain-react

Usage

import {useChatAdapter} from '@nlux/langchain-react';
import {AiChat} from '@nlux/react';

const App = () => {
  const adapter = useChatAdapter({
    url: 'https://<Your LangServe Runnable URL>',
    useInputSchema: true,
    dataTransferMode: 'stream'
  });

  return <AiChat adapter={adapter} />;
};

export default App;

Response Type and Generics

By default, the LangServe adapter assumes that the parsed response from the LangServe API is a string.
If you provide an output pre-processor that returns a different type, you can specify the type of the response using generics, so that it can be used in custom renderers and other parts of the NLUX library.

import {useChatAdapter} from '@nlux/langchain-react';
import {AiChat} from '@nlux/react';

export type CustomDataType = {
  response: string;
  accuracy: number;
};

const App = () => {
  const adapter = useChatAdapter<CustomDataType>({
    url: 'https://<Your LangServe Runnable URL>',
  });

  return <AiChat<CustomDataType> adapter={adapter} />;
};

export default App;

Configuration


Runnable URL

This is the URL of the LangServe API.
Example: https://my-langserver-api.example.com/parrot-speaks

Do NOT append a specific LangServe endpoint (such as /invoke, /stream, or /input_schema) to the URL. The adapter will automatically append the appropriate endpoint to the URL based on the rest of the configuration.

  • Property: url
  • Type: string
  • Required: true
  • Usage:
const adapter = useChatAdapter({url: 'https://<Your LangServe Runnable URL>'});

Data Transfer Mode

The data transfer mode to use when communicating with the LangServe runnable. The NLUX LangServe adapter supports stream and batch modes.

When the adapter is configured to use stream mode, the /stream endpoint of the LangServe API will be used to communicate with the runnable. When the adapter is configured to use batch mode, the /invoke endpoint will be used.

  • Property: dataTransferMode
  • Type: 'stream' | 'batch'
  • Default: 'stream'
  • Usage:
const adapter = useChatAdapter({dataTransferMode: 'batch'});
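The mode-to-endpoint mapping described above can be sketched as a small helper. This is purely for illustration; `endpointFor` is not part of the NLUX API:

```typescript
// Sketch (not part of the NLUX API): mapping the configured
// dataTransferMode to the LangServe endpoint the adapter calls.
const endpointFor = (url: string, mode: 'stream' | 'batch'): string =>
  mode === 'stream' ? `${url}/stream` : `${url}/invoke`;

console.log(endpointFor('https://my-langserver-api.example.com/parrot-speaks', 'batch'));
// https://my-langserver-api.example.com/parrot-speaks/invoke
```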

Input Pre-Processor

The input pre-processor is a function that is called before sending the input to the LangServe API. It can be used to transform the input before sending it to the runnable.

Whatever this function returns will be sent to the runnable under the input property.

Example: If your runnable expects an object with user_prompt and additional_data properties, you can use an input pre-processor like the one in the example below to transform the input.

  • Property: inputPreProcessor
  • Type: LangServeInputPreProcessor
  • Usage:
const myInputPreProcessor: LangServeInputPreProcessor = (
  input: string,
  conversationHistory?: readonly ConversationItem[],
): any => {
  return {
    user_prompt: input,
    // Additional data can be added here
    additional_data: {
      value_1: getDataDynamically(),
      value_2: 'some string'
    }
  };
};

const adapter = useChatAdapter({
  url: 'https://<Your LangServe Runnable URL>',
  inputPreProcessor: myInputPreProcessor
});
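To illustrate the wrapping described above, here is a minimal sketch of how a pre-processor's return value ends up under the request body's input property. The body shape is an assumption based on the description above, not code taken from the NLUX source:

```typescript
// Sketch (assumed request shape): the pre-processor's return value
// is wrapped under the `input` property of the request body.
const preProcess = (input: string) => ({ user_prompt: input });

const requestBody = JSON.stringify({ input: preProcess('Hello, parrot!') });

console.log(requestBody);
// {"input":{"user_prompt":"Hello, parrot!"}}
```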

Output Pre-Processor

The output pre-processor is a function that is called after receiving a response from the LangServe API. Its output parameter receives the part of the runnable's response returned under the output property.

This function returns a string that will be displayed in the NLUX conversational UI.

Example: If your runnable returns an object with value and base properties, you can use the output pre-processor in the example below to transform the output into a string.

  • Property: outputPreProcessor
  • Type: LangServeOutputPreProcessor
  • Usage:
// Example: We check the number's size and format it before returning it to the user as a string
const myOutputPreProcessor: LangServeOutputPreProcessor = (output: any) => {
  if (output.value > 1000000000) {
    return 'Too big!';
  }

  return output.value.toString(
    output.base === 2 ? 2 : 10
  );
};

const adapter = useChatAdapter({
  url: 'https://<Your LangServe Runnable URL>',
  outputPreProcessor: myOutputPreProcessor
});
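For illustration, here is how a pre-processor with the same logic behaves on a sample runnable response. The envelope with an output property follows the description above; the sample values, and the `toDisplayString` name, are made up so the sketch is self-contained:

```typescript
// Sample response envelope: the runnable's result arrives under `output`.
const parsed = JSON.parse('{"output":{"value":5,"base":2}}') as {
  output: { value: number; base: number };
};

// Same logic as the pre-processor above, restated for a self-contained sketch.
const toDisplayString = (output: { value: number; base: number }): string =>
  output.value > 1000000000
    ? 'Too big!'
    : output.value.toString(output.base === 2 ? 2 : 10);

console.log(toDisplayString(parsed.output));
// 101
```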

LangServe Config

The LangServe config object is a set of key-value pairs that will be sent to the LangServe runnable as part of the request body.

Example: If your runnable expects session_id and requester properties, you can use the config in the example below to send these properties to the runnable.

  • Property: config
  • Type: LangServeConfig
  • Usage:
const adapter = useChatAdapter({
  url: 'https://<LangServe Runnable URL>',
  config: {
    session_id: 'sesh1244',
    requester: 'Alan Turing',
  }
});
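As a sketch of what this produces on the wire, the config key-value pairs travel alongside input in the request body. The exact body shape is an assumption based on the description above:

```typescript
// Sketch (assumed body shape): config key-value pairs are sent
// alongside `input` in the request body.
const body = JSON.stringify({
  input: 'Hello, parrot!',
  config: {
    session_id: 'sesh1244',
    requester: 'Alan Turing',
  },
});

console.log(body);
// {"input":"Hello, parrot!","config":{"session_id":"sesh1244","requester":"Alan Turing"}}
```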

Use Input Schema

When no inputPreProcessor is provided, the LangServe adapter will attempt to call the /input_schema endpoint on the LangServe runnable and build the input according to the schema retrieved. Set this option to false to disable this behavior.

If disabled, the input will be sent to the runnable as a string without any transformation.

  • Property: useInputSchema
  • Type: boolean
  • Default: true
  • Usage:
const adapter = useChatAdapter({
  url: 'https://<My LangServe Runnable URL>',
  useInputSchema: false
});
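To make the schema-driven behavior concrete, here is a hypothetical /input_schema response (a JSON Schema) and a sketch of how a prompt could be shaped to match it. Both the schema contents and the `buildInput` helper are illustrative assumptions, not the adapter's actual implementation:

```typescript
// Hypothetical JSON Schema returned by /input_schema for a runnable
// expecting an object with a single `question` property.
const inputSchema = {
  title: 'RunnableInput',
  type: 'object',
  properties: { question: { type: 'string' } },
};

// Sketch: wrap the user prompt into an object matching the schema's
// first declared property; pass it through as-is otherwise.
const buildInput = (prompt: string): unknown =>
  inputSchema.type === 'object'
    ? { [Object.keys(inputSchema.properties)[0]]: prompt }
    : prompt;

console.log(JSON.stringify(buildInput('What do parrots eat?')));
// {"question":"What do parrots eat?"}
```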