
Custom Adapters

If you're building your own APIs and you would like to use NLUX as the UI for your own AI assistant, you can do so by creating a custom adapter.

There are 2 types of custom adapters that you can create:

  • Stream Adapters
  • Batch Adapters

Stream Adapters



Stream adapters are used when the API sends responses in a stream (e.g. Server-Sent Events, WebSockets).

The advantage of using a stream adapter is that the chat UI will be updated in real-time while the LLM is still generating text. This is particularly useful if the API takes a long time to process a request and sends responses in a stream. Most major LLM providers (e.g. OpenAI, Llama Index, Hugging Face) support streaming responses.

A stream adapter forwards the response to the chat UI as a stream of chunks, as they are received from the API, which results in a more responsive chat experience.

In order to implement a custom streaming adapter for NLUX, you can use the following hook:

import {useAsStreamAdapter} from '@nlux/react';
const adapter = useAsStreamAdapter<AiMsg = string>(send: StreamSend<AiMsg>)

Where send is a function that developers should implement and pass as a parameter to the hook. It's responsible for sending the prompt to the API and receiving the responses as a stream of chunks.

The send function has the following signature:

export type StreamSend<AiMsg> = (
    message: string,
    observer: StreamingAdapterObserver<AiMsg>,
    extras: ChatAdapterExtras<AiMsg>,
) => void;

It takes 3 parameters:

  • message - The prompt message typed by the user, to be sent to the API.
  • observer - An observer that will receive the responses from the API and pass them to NLUX.
  • extras - An object containing additional information that the adapter might need.

Below is the definition of the StreamingAdapterObserver interface:

interface StreamingAdapterObserver<ChunkType = string> {
    next: (chunk: ChunkType) => void;
    error: (error: Error) => void;
    complete: () => void;
}

When implementing your send function, call observer.next(chunk) each time data is received from the API.
Call observer.complete() once the API has finished sending responses,
and observer.error() if an error occurs.
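
For illustration, here is a minimal sketch of a send function that streams plain-text chunks from a hypothetical https://example.com/chat-stream endpoint. The URL, the request payload, and the plain-text streaming format are assumptions about your API, not part of NLUX:

import type {StreamingAdapterObserver} from '@nlux/react';

// Hypothetical endpoint that streams its response body as plain-text chunks
const apiUrl = 'https://example.com/chat-stream';

const send = async (prompt: string, observer: StreamingAdapterObserver) => {
    try {
        const response = await fetch(apiUrl, {
            method: 'POST',
            headers: {'Content-Type': 'application/json'},
            body: JSON.stringify({prompt}),
        });

        if (!response.ok || !response.body) {
            observer.error(new Error(`Request failed with status ${response.status}`));
            return;
        }

        // Read the body chunk by chunk and forward each chunk to NLUX as it arrives
        const reader = response.body.getReader();
        const textDecoder = new TextDecoder();

        while (true) {
            const {value, done} = await reader.read();
            if (done) {
                break;
            }
            if (value) {
                observer.next(textDecoder.decode(value, {stream: true}));
            }
        }

        observer.complete();
    } catch (error) {
        observer.error(error instanceof Error ? error : new Error(String(error)));
    }
};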

When the adapter is used in the <AiChat adapter={adapter} /> component, NLUX will take care of calling the send function with the appropriate parameters, updating the chat UI as chunks are received, and displaying an error message if observer.error() is called.
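
A minimal sketch of that wiring, reusing the send function from the example above:

import {AiChat, useAsStreamAdapter} from '@nlux/react';

export const MyAssistant = () => {
    // Wrap the send function into an adapter that AiChat understands
    const adapter = useAsStreamAdapter(send);
    return <AiChat adapter={adapter} />;
};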


Batch Adapters

Batch adapters can be used when the API returns the entire response in a single payload, without streaming.
They are simpler to implement than stream adapters.

In order to implement a custom batch adapter for NLUX, you can use the following hook:

import {useAsBatchAdapter} from '@nlux/react';
const adapter = useAsBatchAdapter(send: BatchSend<AiMsg>)

Where send is a function that developers should implement and pass as a parameter to the hook. It's responsible for sending the prompt to the API and receiving the response.

The send function has the following signature:

export type BatchSend<AiMsg> = (
    message: string,
    extras: ChatAdapterExtras<AiMsg>,
) => Promise<AiMsg>;

The send function takes 2 parameters:

  • message - The prompt message typed by the user, to be sent to the API.
  • extras - An object containing additional information that the adapter might need.

The send function should return a promise that resolves to the response from the API.
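
As a sketch, a batch send function could call a hypothetical https://example.com/chat endpoint that returns its full answer as JSON (the URL and the {response: string} shape are assumptions about your API), and be wrapped with the hook inside a component:

import {AiChat, useAsBatchAdapter} from '@nlux/react';

// Hypothetical endpoint returning the full answer as JSON: {"response": "..."}
const apiUrl = 'https://example.com/chat';

const send = async (prompt: string): Promise<string> => {
    const response = await fetch(apiUrl, {
        method: 'POST',
        headers: {'Content-Type': 'application/json'},
        body: JSON.stringify({prompt}),
    });

    if (!response.ok) {
        // Rejecting the promise makes NLUX display an error message in the chat UI
        throw new Error(`Request failed with status ${response.status}`);
    }

    const {response: aiMessage} = await response.json();
    return aiMessage;
};

export const MyAssistant = () => {
    const adapter = useAsBatchAdapter(send);
    return <AiChat adapter={adapter} />;
};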

When the adapter is used in the <AiChat adapter={adapter} /> component, NLUX will take care of calling the send function with the appropriate parameters and updating the chat UI with the response, or displaying an error message if the promise is rejected.


Adapter Extras

The ChatAdapterExtras object is passed as the last parameter to the send function, and it contains information that the adapter might need. It has the following properties:

interface ChatAdapterExtras<AiMsg = string> {
    // The props that were passed to the AiChat component
    aiChatProps: AiChatPropsInEvents<AiMsg>;

    // The conversation history
    // Available when `conversationOptions.historyPayloadSize`
    // is set to a value greater than 0 or to `'max'`
    conversationHistory?: ChatItem<AiMsg>[];

    // Headers that implementers can attach to API requests,
    // such as authentication headers
    headers?: Record<string, string>;
}
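
For example, a send function could forward the conversation history and any adapter headers to the API. The endpoint and payload shape below are assumptions, not part of NLUX:

import type {ChatAdapterExtras} from '@nlux/react';

const send = async (prompt: string, extras: ChatAdapterExtras): Promise<string> => {
    const response = await fetch('https://example.com/chat', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            // Forward any headers exposed via the adapter, e.g. authentication headers
            ...extras.headers,
        },
        body: JSON.stringify({
            prompt,
            // Give the API the previous messages as conversation context
            history: extras.conversationHistory ?? [],
        }),
    });

    const {response: aiMessage} = await response.json();
    return aiMessage;
};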

The ChatItem type is defined as follows:

export type ChatItem<AiMsg = string> = {
    role: 'assistant';
    message: AiMsg;
    serverResponse?: string | object | undefined;
} | {
    role: 'user';
    message: string;
} | {
    role: 'system';
    message: string;
};
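
As an illustration, an adapter could map this structure to the {role, content} message format used by many LLM APIs (the target shape here is only an assumption about your API):

import type {ChatItem} from '@nlux/react';

// Map NLUX chat items to a generic {role, content} message array
const toApiMessages = (history: ChatItem[]) =>
    history.map(({role, message}) => ({role, content: message}));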