Hugging Face
Hugging Face is a popular platform for hosting and accessing machine learning models. It provides a unified interface and an Inference API for interacting with many state-of-the-art open-source models such as LLAMA2, DeciLM, and more.
This documentation page covers the standard adapter provided by NLUX for the Hugging Face Inference API.
- React JS ⚛️
- JavaScript 🟨
Installation
- react-js
- javascript
- NPM
- Yarn
- PNPM
npm install @nlux/hf-react
yarn add @nlux/hf-react
pnpm add @nlux/hf-react
- NPM
- Yarn
- PNPM
npm install @nlux/hf
yarn add @nlux/hf
pnpm add @nlux/hf
Usage
- react-js
- javascript
import {useChatAdapter} from '@nlux/hf-react';
import {AiChat} from '@nlux/react';
export default function App() {
    const adapter = useChatAdapter({
        // Config options
    });
    return <AiChat adapter={adapter} />;
}
import {createChatAdapter} from '@nlux/hf';
import {createAiChat} from '@nlux/core';
const adapter = createChatAdapter()
    .withModel('<The ID Of A Model Hosted On Hugging Face>'); // or .withEndpoint(...)
const aiChat = createAiChat().withAdapter(adapter);
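For a complete vanilla JavaScript setup, the adapter needs a model (or an endpoint) before the chat component is mounted into the page. A minimal sketch, assuming an existing DOM element with id `root` and a placeholder model ID (the `mount()` call comes from the `@nlux/core` API):

```typescript
import {createChatAdapter} from '@nlux/hf';
import {createAiChat} from '@nlux/core';

// Configure the adapter with a model hosted on Hugging Face.
const adapter = createChatAdapter()
    .withModel('<The ID Of A Model Hosted On Hugging Face>');

// Create the chat component and mount it into an existing element.
const aiChat = createAiChat().withAdapter(adapter);
aiChat.mount(document.getElementById('root'));
```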
Configuration
- react-js
- javascript
You can configure the Hugging Face adapter in React by passing a config object to the useChatAdapter()
function.
The config object has the following properties:
You can configure the Hugging Face adapter by chaining the following methods:
Model
- react-js
- javascript
- Property:
model
- Type:
string
- Required: Either this property or an Endpoint must be provided
- Usage:
const adapter = useChatAdapter({
    model: '<The ID Of A Model Hosted On Hugging Face>'
});
- Method:
withModel(model)
- Argument Type:
string
- Required: Either this argument or an Endpoint must be provided
- Usage:
const adapter = createChatAdapter()
    .withModel('<The ID Of A Model Hosted On Hugging Face>');
Endpoint
- react-js
- javascript
- Property:
endpoint
- Type:
string
- Required: Either this property or a Model must be provided
- Usage:
const adapter = useChatAdapter({
    endpoint: 'ENDPOINT_URL'
});
- Method:
withEndpoint(endpoint)
- Argument Type:
string
- Required: Either this argument or a Model must be provided
- Usage:
const adapter = createChatAdapter()
.withEndpoint(endpointUrl);
Auth Token
This is the authorization token to use for the Hugging Face Inference API. When provided, it is passed in the Authorization
header of the HTTP request, as in this example:
"Authorization": "Bearer {AUTH_TOKEN}"
If no token is provided, the request is sent without an Authorization
header.
Public models on Hugging Face do not require an authorization token, but if your model is private, you will need to provide one.
- react-js
- javascript
- Property:
authToken
- Type:
string
- Required:
false
- Usage:
const adapter = useChatAdapter({
    authToken: 'AUTH_TOKEN'
});
- Method:
withAuthToken(authToken)
- Argument Type:
string
- Required:
false
- Usage:
const adapter = createChatAdapter()
.withAuthToken(authToken);
Data Transfer Mode
- react-js
- javascript
- Property:
dataTransferMode
- Type:
'stream' | 'fetch'
- Required:
false
- Default:
'stream'
- Usage:
const adapter = useChatAdapter({
    dataTransferMode: 'stream'
});
- Method:
withDataTransferMode(mode)
- Argument Type:
'stream' | 'fetch'
- Required:
false
- Default:
'stream'
- Usage:
const adapter = createChatAdapter()
.withDataTransferMode('stream');
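The two modes differ in how the response reaches the UI: with 'stream', tokens are rendered as they arrive from the Inference API, while with 'fetch', the full response is retrieved before anything is displayed. A minimal sketch switching to 'fetch', on the assumption that the target model or endpoint does not support streamed responses (verify this against your deployment):

```typescript
import {createChatAdapter} from '@nlux/hf';

// Fall back to single-request mode for models or endpoints
// that do not support streamed responses.
const adapter = createChatAdapter()
    .withModel('<The ID Of A Model Hosted On Hugging Face>')
    .withDataTransferMode('fetch');
```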
System Message
The initial system message to send to the Hugging Face Inference API. It will be used during the pre-processing step to construct the payload that is sent to the API.
- react-js
- javascript
- Property:
systemMessage
- Type:
string
- Required:
false
- Usage:
const adapter = useChatAdapter({
    systemMessage: 'I want you to act as a customer support agent.'
});
- Method:
withSystemMessage(systemMessage)
- Argument Type:
string
- Required:
false
- Usage:
const adapter = createChatAdapter()
.withSystemMessage('I want you to act as a customer support agent.');
Input Pre-Processor
The inputPreProcessor
is a function that takes the user input and some additional options, and returns the string that will be sent to the Hugging Face Inference API. It is used to transform the user input into the format expected by the model.
- react-js
- javascript
- Property:
preProcessors.input
- Type:
HfInputPreProcessor
- Required:
false
- Usage:
const inputPreProcessor: HfInputPreProcessor = (
    input: string,
    adapterOptions: Readonly<HfAdapterOptions>,
) => {
    // Pre-process the user input and return the string to send
    const inputReadyToSend = input; // replace with your own transformation
    return inputReadyToSend;
};

const adapter = useChatAdapter({
    preProcessors: {
        input: inputPreProcessor
    }
});
NLUX provides a default input pre-processor that can be used with the LLAMA2 model on Hugging Face:
import {llama2InputPreProcessor} from '@nlux/hf-react';

const adapter = useChatAdapter({
    preProcessors: {
        input: llama2InputPreProcessor
    }
});
- Method:
withInputPreProcessor(preProcessorFunction)
- Type:
HfInputPreProcessor
- Required:
false
- Usage:
const myInputPreProcessor: HfInputPreProcessor = (
    input: string,
    adapterOptions: Readonly<HfAdapterOptions>,
) => {
    // Pre-process the user input and return the string to send
    const inputReadyToSend = input; // replace with your own transformation
    return inputReadyToSend;
};

const adapter = createChatAdapter()
    .withInputPreProcessor(myInputPreProcessor);
NLUX provides a default input pre-processor that can be used with the LLAMA2 model on Hugging Face:
import {llama2InputPreProcessor} from '@nlux/hf';

const adapter = createChatAdapter()
    .withInputPreProcessor(llama2InputPreProcessor);
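As an illustration of writing a custom pre-processor, the sketch below wraps the user input in the LLAMA2 chat prompt template, prepending the system message from the adapter options. The `HfAdapterOptions` type is reduced here to the single field used (in a real project it comes from `@nlux/hf`), and the template string should be verified against the model card of the model you target:

```typescript
// A hypothetical custom input pre-processor for LLAMA2-style chat models.
// The adapter options type is sketched locally for this example.
type HfAdapterOptions = {systemMessage?: string};

const llama2StylePreProcessor = (
    input: string,
    adapterOptions: Readonly<HfAdapterOptions>,
): string => {
    // Prepend the system message, if any, using the <<SYS>> delimiters,
    // then wrap the user input in [INST] ... [/INST] tags.
    const systemBlock = adapterOptions.systemMessage
        ? `<<SYS>>\n${adapterOptions.systemMessage}\n<</SYS>>\n\n`
        : '';
    return `<s>[INST] ${systemBlock}${input} [/INST]`;
};

// Example: building a prompt that includes a system message.
const prompt = llama2StylePreProcessor(
    'Where is my order?',
    {systemMessage: 'You are a customer support agent.'},
);
```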
Output Pre-Processor
The outputPreProcessor
is a function that takes the response from the model and returns a string that
will be displayed to the user.
- react-js
- javascript
- Property:
preProcessors.output
- Type:
HfOutputPreProcessor
- Required:
false
- Usage:
const outputPreProcessor: HfOutputPreProcessor = (output: string) => {
    // Pre-process the LLM output and return the string to display
    const outputReadyToDisplay = output; // replace with your own transformation
    return outputReadyToDisplay;
};

const adapter = useChatAdapter({
    preProcessors: {
        output: outputPreProcessor
    }
});
NLUX provides a default output pre-processor that can be used with the LLAMA2 model on Hugging Face:
import {llama2OutputPreProcessor} from '@nlux/hf-react';

const adapter = useChatAdapter({
    preProcessors: {
        output: llama2OutputPreProcessor
    }
});
- Method:
withOutputPreProcessor(preProcessorFunction)
- Type:
HfOutputPreProcessor
- Required:
false
- Usage:
const myOutputPreProcessor: HfOutputPreProcessor = (output: string) => {
    // Pre-process the LLM output and return the string to display
    const outputReadyToDisplay = output; // replace with your own transformation
    return outputReadyToDisplay;
};

const adapter = createChatAdapter()
    .withOutputPreProcessor(myOutputPreProcessor);
NLUX provides a default output pre-processor that can be used with the LLAMA2 model on Hugging Face:
import {llama2OutputPreProcessor} from '@nlux/hf';

const adapter = createChatAdapter()
    .withOutputPreProcessor(llama2OutputPreProcessor);
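For a concrete example of a custom output pre-processor, the sketch below removes a trailing end-of-sequence token before the text is displayed. The `</s>` token is an assumption specific to LLAMA2-style models; adapt it to whatever your model emits:

```typescript
// A hypothetical output pre-processor that strips a trailing
// end-of-sequence token (assumed here to be "</s>") and trims whitespace.
const stripEosPreProcessor = (output: string): string =>
    output.replace(/<\/s>\s*$/, '').trim();

// Example: cleaning a raw model response before display.
const cleaned = stripEosPreProcessor('  Your order ships tomorrow.</s>  ');
```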
Max New Tokens
The maximum number of new tokens that the Hugging Face Inference API is allowed to generate for each response.
This is useful when you want to limit the length (and cost) of the generated responses.
- react-js
- javascript
- Property:
maxNewTokens
- Type:
number
- Required:
false
- Usage:
const adapter = useChatAdapter({
    maxNewTokens: 800
});
- Method:
withMaxNewTokens(maxNewTokens)
- Argument Type:
number
- Required:
false
- Usage:
const adapter = createChatAdapter()
    .withMaxNewTokens(800);