LangChain LangServe
LangChain is a popular framework for building backend services powered by large language models. It enables context-aware applications by connecting a language model to sources of context such as databases, APIs, and other services, and by relying on the model to reason about that context.
LangServe is a library that ships with LangChain and allows developers to expose LangChain runnables as RESTful APIs. A LangServe API provides a set of pre-defined endpoints and an input/output schema that can be used to interact with the language model.
This documentation page covers the standard adapter provided by NLUX for APIs built with LangServe.
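For context, here is roughly what calling a LangServe runnable directly looks like. The URL is the illustrative example used later on this page, and the payload shape assumes a simple string-in, string-out chain; the adapter handles calls like this for you:

// Illustrative only: a direct call to a LangServe runnable's /invoke endpoint.
const response = await fetch('https://my-langserver-api.example.com/parrot-speaks/invoke', {
    method: 'POST',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify({input: 'Hello parrot!'}), // LangServe expects the prompt under `input`
});

const {output} = await response.json(); // the runnable's result comes back under `output`
console.log(output);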
Installation
- React JS ⚛️
  - NPM: npm install @nlux/langchain-react
  - Yarn: yarn add @nlux/langchain-react
  - PNPM: pnpm add @nlux/langchain-react
- JavaScript 🟨
  - NPM: npm install @nlux/langchain
  - Yarn: yarn add @nlux/langchain
  - PNPM: pnpm add @nlux/langchain
Usage
- React JS ⚛️
import {useChatAdapter} from '@nlux/langchain-react';
import {AiChat} from '@nlux/react';

const App = () => {
    const adapter = useChatAdapter({
        url: 'https://<Your LangServe Runnable URL>',
        useInputSchema: true,
        dataTransferMode: 'stream'
    });

    return <AiChat adapter={adapter} />;
};

export default App;

- JavaScript 🟨
import {createChatAdapter} from '@nlux/langchain';
import {createAiChat} from '@nlux/core';

const adapter = createChatAdapter().withUrl('https://<Your LangServe Runnable URL>');
const aiChat = createAiChat().withAdapter(adapter);
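In the vanilla JavaScript setup, the aiChat instance still needs to be attached to the DOM to render. A minimal sketch, assuming the standard NLUX mount call and a hypothetical chat-root container element:

const chatRoot = document.getElementById('chat-root'); // hypothetical container element in your page
if (chatRoot) {
    aiChat.mount(chatRoot);
}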
Response Type and Generics
By default, the LangServe adapter assumes that the parsed response from the LangServe API is a string.
If you provide an output pre-processor that returns a different type, you can specify the type of the response using generics, so that it can be used in custom renderers and other parts of the NLUX library.
- React JS ⚛️
import {useChatAdapter} from '@nlux/langchain-react';
import {AiChat} from '@nlux/react';

export type CustomDataType = {
    response: string;
    accuracy: number;
};

const App = () => {
    const adapter = useChatAdapter<CustomDataType>({
        url: 'https://<Your LangServe Runnable URL>',
    });

    return <AiChat<CustomDataType> adapter={adapter} />;
};

export default App;

- JavaScript 🟨
import {createChatAdapter} from '@nlux/langchain';
import {createAiChat} from '@nlux/core';

export type CustomDataType = {
    response: string;
    accuracy: number;
};

const adapter = createChatAdapter<CustomDataType>()
    .withUrl('https://<Your LangServe Runnable URL>');

const aiChat = createAiChat<CustomDataType>().withAdapter(adapter);
Configuration
Runnable URL
This is the URL of the LangServe API.
Example: https://my-langserver-api.example.com/parrot-speaks
Do NOT append a specific LangServe endpoint (such as /invoke, /stream, or /input_schema) to the URL.
The adapter will automatically append the appropriate endpoint to the URL based on the rest of the configuration.
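For example, using the illustrative URL above, you configure only the base runnable URL and the adapter derives the endpoint URLs itself:

import {createChatAdapter} from '@nlux/langchain';

// Configure only the base runnable URL:
const adapter = createChatAdapter().withUrl('https://my-langserver-api.example.com/parrot-speaks');
// The adapter then calls endpoints such as:
//   https://my-langserver-api.example.com/parrot-speaks/stream        (stream mode)
//   https://my-langserver-api.example.com/parrot-speaks/invoke        (batch mode)
//   https://my-langserver-api.example.com/parrot-speaks/input_schema  (input schema lookup)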
- React JS ⚛️
  - Property: url
  - Type: string
  - Required: true
  - Usage:
    const adapter = useChatAdapter({url: 'https://<Your LangServe Runnable URL>'});
- JavaScript 🟨
  - Method: withUrl(runnableUrl)
  - Type: string
  - Required: true
  - Usage:
    const adapter = createChatAdapter().withUrl('https://<Your LangServe Runnable URL>');
Data Transfer Mode
The data transfer mode to use when communicating with the LangServe runnable.
The NLUX LangServe adapter supports stream and batch modes.
When the adapter is configured to use stream mode, the /stream endpoint of the LangServe API is used to communicate with the runnable. When it is configured to use batch mode, the /invoke endpoint is used.
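To make the difference concrete, here is a rough sketch of what the adapter receives in each mode. The payloads are illustrative only and assume a simple string-output chain; the exact format depends on your runnable:

// Batch mode (/invoke): a single JSON response, e.g.
//   {"output": "Polly wants a cracker!", "metadata": {...}}
//
// Stream mode (/stream): a server-sent events stream delivering the output in chunks, e.g.
//   event: data
//   data: "Polly "
//   event: data
//   data: "wants a cracker!"
//   event: end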
- React JS ⚛️
  - Property: dataTransferMode
  - Type: 'stream' | 'batch'
  - Default: 'stream'
  - Usage:
    const adapter = useChatAdapter({dataTransferMode: 'batch'});
- JavaScript 🟨
  - Method: withDataTransferMode(mode)
  - Type: 'stream' | 'batch'
  - Default: 'stream'
  - Usage:
    const adapter = createChatAdapter().withDataTransferMode('batch');
Input Pre-Processor
The input pre-processor is a function that is called before sending the input to the LangServe API. It can be used to transform the input before it is sent to the runnable.
Whatever this function returns is sent to the runnable under the input property.
Example: If your runnable expects an object with user_prompt and additional_data properties, you can use an input pre-processor like the one in the usage examples below to build that object (a sketch of the resulting request body follows the examples).
- React JS ⚛️
  - Property: inputPreProcessor
  - Type: LangServeInputPreProcessor
  - Usage:
    const myInputPreProcessor: LangServeInputPreProcessor = (
        input: string,
        conversationHistory?: readonly ConversationItem[],
    ): any => {
        return {
            user_prompt: input,
            // Additional data can be added here
            additional_data: {
                value_1: getDataDynamically(),
                value_2: 'some string'
            }
        };
    };

    const adapter = useChatAdapter({
        url: 'https://<Your LangServe Runnable URL>',
        inputPreProcessor: myInputPreProcessor
    });
- JavaScript 🟨
  - Method: withInputPreProcessor(inputPreProcessor)
  - Type: LangServeInputPreProcessor
  - Usage:
    const myInputPreProcessor: LangServeInputPreProcessor = (
        input: string,
        conversationHistory?: readonly ConversationItem[],
    ): any => {
        return {
            user_prompt: input,
            // Additional data can be added here
            additional_data: {
                value_1: getDataDynamically(),
                value_2: 'some string'
            }
        };
    };

    const adapter = createChatAdapter()
        .withUrl('https://<My LangServe Runnable URL>')
        .withInputPreProcessor(myInputPreProcessor);
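With a pre-processor like the one above, the body sent to the runnable would look roughly like the sketch below. The envelope is simplified, the user message is made up, and value_1 stands for whatever getDataDynamically() returned:

// Illustrative request body: the pre-processor's return value is placed under `input`
const requestBody = {
    input: {
        user_prompt: 'Hello parrot!',   // the user's message
        additional_data: {
            value_1: '...',             // result of getDataDynamically() in the example
            value_2: 'some string'
        }
    }
};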
Output Pre-Processor
The output pre-processor is a function that is called after receiving the output from the LangServe API.
The output parameter of this function receives the part of the runnable's response that is returned under the output property.
The function returns a string, which is what gets displayed in the NLUX conversational UI.
Example: If your runnable returns an object with value and base properties, you can use an output pre-processor like the one in the usage examples below to transform that object into a string (a sketch of the full flow follows the examples).
- React JS ⚛️
  - Property: outputPreProcessor
  - Type: LangServeOutputPreProcessor
  - Usage:
    // Example: we check the number and format it before returning it to the user as a string
    const myOutputPreProcessor: LangServeOutputPreProcessor = (output: any) => {
        if (output.value > 1000000000) {
            return 'Too big!';
        }

        return output.value.toString(
            output.base === 2 ? 2 : 10
        );
    };

    const adapter = useChatAdapter({
        url: 'https://<Your LangServe Runnable URL>',
        outputPreProcessor: myOutputPreProcessor
    });
- JavaScript 🟨
  - Method: withOutputPreProcessor(outputPreProcessor)
  - Type: LangServeOutputPreProcessor
  - Usage:
    // Example: we check the number and format it before returning it to the user as a string
    const myOutputPreProcessor: LangServeOutputPreProcessor = (output: any) => {
        if (output.value > 1000000000) {
            return 'Too big!';
        }

        return output.value.toString(
            output.base === 2 ? 2 : 10
        );
    };

    const adapter = createChatAdapter()
        .withUrl('https://<My LangServe Runnable URL>')
        .withOutputPreProcessor(myOutputPreProcessor);
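To illustrate the flow with the example above (the response values are made up and the envelope is simplified): the runnable responds, the adapter extracts the output property and passes it to the pre-processor, and the returned string is displayed in the chat:

// Illustrative response body from the runnable:
//   { "output": { "value": 42, "base": 10 }, "metadata": { ... } }
// The adapter extracts the `output` property and calls the pre-processor with it:
const displayed = myOutputPreProcessor({value: 42, base: 10}); // => '42', shown in the chat UI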
LangServe Config
The LangServe config object is a set of key-value pairs that will be sent to the LangServe runnable as part of the request body.
Example: If your runnable expects session_id and requester properties in its config, you can use the config in the usage examples below to send these properties to the runnable.
- React JS ⚛️
  - Property: config
  - Type: LangServeConfig
  - Usage:
    const adapter = useChatAdapter({
        url: 'https://<LangServe Runnable URL>',
        config: {
            session_id: 'sesh1244',
            requester: 'Alan Turing',
        }
    });
- JavaScript 🟨
  - Method: withConfig(config)
  - Type: LangServeConfig
  - Usage:
    const adapter = createChatAdapter()
        .withUrl('https://<LangServe Runnable URL>')
        .withConfig({
            session_id: 'sesh1244',
            requester: 'Alan Turing',
        });
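As a rough sketch of what this means on the wire, assuming the values are forwarded under the standard LangServe config field of the request body (the exact placement is handled by the adapter, and the user message below is made up):

// Illustrative request body combining the user input and the configured values
const requestBody = {
    input: 'Hello parrot!',   // the (possibly pre-processed) user message
    config: {
        session_id: 'sesh1244',
        requester: 'Alan Turing',
    }
};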
Use Input Schema
When no inputPreProcessor is provided, the LangServe adapter will attempt to call the /input_schema endpoint of the LangServe runnable and build the input according to the schema it retrieves.
Set this option to false to disable this behavior.
If disabled, the input will be sent to the runnable as a string, without any transformation.
- React JS ⚛️
  - Property: useInputSchema
  - Type: boolean
  - Default: true
  - Usage:
    const adapter = useChatAdapter({
        url: 'https://<My LangServe Runnable URL>',
        useInputSchema: false
    });
- JavaScript 🟨
  - Method: withInputSchema(value)
  - Argument Type: boolean
  - Default: true
  - Usage:
    const adapter = createChatAdapter()
        .withUrl('https://<My LangServe Runnable URL>')
        .withInputSchema(false);
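For reference, the /input_schema endpoint returns a JSON Schema describing what the runnable expects as input. The schema below is illustrative only, as it depends entirely on your chain:

// Illustrative /input_schema response for a runnable that expects an object input
const exampleInputSchema = {
    title: 'PromptInput',
    type: 'object',
    properties: {
        user_prompt: {title: 'User Prompt', type: 'string'}
    },
    required: ['user_prompt']
};
// With useInputSchema enabled and no inputPreProcessor, the adapter uses a schema like this
// to decide how to wrap the user's message; with useInputSchema set to false, the message
// is sent to the runnable as a plain string instead (see above).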