
Hugging Face

Hugging Face is a popular platform for hosting and accessing machine learning models. It provides a unified interface and an Inference API for interacting with many state-of-the-art open-source models, such as LLAMA3, DeciLM, and more.

This documentation page covers the standard adapter provided by NLUX for the Hugging Face Inference API.


Installation

npm install @nlux/hf-react

Usage

import {useChatAdapter} from '@nlux/hf-react';
import {AiChat} from '@nlux/react';

export default function App() {
    const adapter = useChatAdapter({
        model: '<Your Hugging Face Model ID>', // either model or endpoint is required
        // Other config options
    });

    return <AiChat adapter={adapter} />;
}

Response Type and Generics

By default, the Hugging Face adapter assumes that the parsed response returned by the model is a string.
If you provide an output pre-processor that returns a different type, you can specify that type using generics, so it can be used in custom renderers and other parts of the NLUX library.

import {useChatAdapter} from '@nlux/hf-react';
import {AiChat} from '@nlux/react';

export type CustomDataType = {
    response: string;
    accuracy: number;
};

export default function App() {
    const adapter = useChatAdapter<CustomDataType>({
        model: '<Your Hugging Face Model ID>',
        // Other config options
    });

    return <AiChat<CustomDataType> adapter={adapter} />;
}
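
For illustration, here is a sketch of how the two pieces fit together, using an output pre-processor (documented below) that builds a CustomDataType from the raw model response. The parsing logic and the accuracy value are placeholders for this example, not part of the NLUX API:

const outputPreProcessor = (output: string): CustomDataType => {
    return {
        response: output,
        accuracy: 1, // placeholder value, for illustration only
    };
};

const adapter = useChatAdapter<CustomDataType>({
    model: '<Your Hugging Face Model ID>',
    preProcessors: {
        output: outputPreProcessor
    }
});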

Configuration

You can configure the Hugging Face adapter in React by passing a config object to the useChatAdapter() function.
The config object supports the properties documented below.
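
For a quick overview, here is a sketch combining several of these options (all values are placeholders):

const adapter = useChatAdapter({
    model: '<The ID Of A Model Hosted On Hugging Face>',
    authToken: '<Your Hugging Face Auth Token>',
    dataTransferMode: 'stream',
    systemMessage: 'I want you to act as a customer support agent.',
    maxNewTokens: 800
});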


Model

  • Property: model
  • Type: string
  • Required: Either this property or an Endpoint must be provided
  • Usage:
const adapter = useChatAdapter({
    model: '<The ID Of A Model Hosted On Hugging Face>'
});

Endpoint

  • Property: endpoint
  • Type: string
  • Required: Either this property or a Model must be provided
  • Usage:
const adapter = useChatAdapter({
    endpoint: 'ENDPOINT_URL'
});

Auth Token

This is the authorization token to use with the Hugging Face Inference API. When a token is provided, it is passed in the Authorization header of the HTTP request, as in this example:

"Authorization": "Bearer {AUTH_TOKEN}"

If no token is provided, the request is sent without an Authorization header. Public models on Hugging Face do not require an authorization token, but if your model is private, you will need to provide one.

  • Property: authToken
  • Type: string
  • Usage:
const adapter = useChatAdapter({
    authToken: 'AUTH_TOKEN'
});

Data Transfer Mode

This option controls how the response is delivered to the UI: with 'stream', the response is rendered token by token as it is generated; with 'batch', it is displayed once the full response has been received.

  • Property: dataTransferMode
  • Type: 'stream' | 'batch'
  • Default: 'stream'
  • Usage:
const adapter = useChatAdapter({
    dataTransferMode: 'stream'
});

System Message

The initial system message to send to the Hugging Face Inference API. It is used during the pre-processing step to construct the payload that is sent to the API.

  • Property: systemMessage
  • Type: string
  • Usage:
const adapter = useChatAdapter({
    systemMessage: 'I want you to act as a customer support agent.'
});

Input Pre-Processor

The inputPreProcessor is a function that takes the user input and some additional options, and returns the string that will be sent to the Hugging Face Inference API. It is used to transform the user input into the format expected by the model.

  • Property: preProcessors.input
  • Type: HfInputPreProcessor
  • Usage:
const inputPreProcessor: HfInputPreProcessor = (
    input: string,
    adapterOptions: Readonly<HfAdapterOptions>,
) => {
    // Pre-process the user input, for example by applying
    // the prompt template expected by the model
    const inputReadyToSend = input;
    return inputReadyToSend;
};

const adapter = useChatAdapter({
    preProcessors: {
        input: inputPreProcessor
    }
});

NLUX provides a default input pre-processor that can be used with the LLAMA2 model on Hugging Face:

import {llama2InputPreProcessor} from '@nlux/hf-react';

const adapter = useChatAdapter({
    preProcessors: {
        input: llama2InputPreProcessor
    }
});

Output Pre-Processor

The outputPreProcessor is a function that takes the raw response from the model and returns the string that will be displayed to the user.

  • Property: preProcessors.output
  • Type: HfOutputPreProcessor
  • Usage:
const outputPreProcessor: HfOutputPreProcessor = (output: string) => {
    // Pre-process the LLM output, for example by removing
    // special tokens from the raw response
    const outputReadyToDisplay = output;
    return outputReadyToDisplay;
};

const adapter = useChatAdapter({
    preProcessors: {
        output: outputPreProcessor
    }
});

NLUX provides a default output pre-processor that can be used with the LLAMA2 model on Hugging Face:

import {llama2OutputPreProcessor} from '@nlux/hf-react';

const adapter = useChatAdapter({
    preProcessors: {
        output: llama2OutputPreProcessor
    }
});
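
When targeting a LLAMA2 model, you would typically combine both default pre-processors. A sketch (the model ID is a placeholder):

const adapter = useChatAdapter({
    model: '<The ID Of A LLAMA2 Model On Hugging Face>',
    preProcessors: {
        input: llama2InputPreProcessor,
        output: llama2OutputPreProcessor
    }
});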

Max New Tokens

The maximum number of new tokens that the Hugging Face Inference API can generate for a single response.
This is useful if you want to cap the length of the generated output.

  • Property: maxNewTokens
  • Type: number
  • Usage:
const adapter = useChatAdapter({
    maxNewTokens: 800
});