Get Started ― NLUX And Hugging Face Inference API
Hugging Face is a popular platform for sharing and using pre-trained AI models. It provides a wide range of models for various tasks, including text generation, which enables building chatbots and other conversational AI applications.
This guide focuses on the NLUX UI and assumes some familiarity with Hugging Face's hosted inference API. If you are new to it, check out the official documentation for more information.
The advantage of Hugging Face's inference API is that it allows you to access a wide range of models and use them for inference without having to worry about the underlying infrastructure or the model's implementation details.
1. Set up the Hugging Face Inference API
In this guide, we will use Llama 2, a state-of-the-art open-access large language model released by Meta.
To set up Llama 2 on Hugging Face Inference Endpoints, follow these steps:
- Log in to Hugging Face and go to the Inference Endpoints page.
- Click on New Endpoint.
- Select meta-llama/Llama-2-7b-chat-hf in the Model Repository field.
- Select your desired instance configuration.
- For Endpoint security level, select Protected or Public based on your requirements.
- Click on Create Endpoint.
Wait for the instance to initialize. Once the instance is ready, you can use the endpoint to make inference requests.
Before moving to the next step, make sure to note down the Endpoint URL from the instance details page. If you set the Endpoint security level to Protected, you will also need to generate a user access token to authenticate your requests. You can do that on the Access Tokens page.
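If you want to sanity-check the endpoint before wiring it into NLUX, you can send it a raw request. The sketch below assumes the standard Hugging Face Inference JSON payload ({"inputs": ...}) and a Protected endpoint; the URL and token are placeholders for your own values.

```typescript
// Minimal sketch: send a raw inference request to the endpoint.
// Assumes the standard Hugging Face Inference payload format ({ inputs: ... }).
const endpointUrl = '<YOUR ENDPOINT URL>';
const authToken = '<YOUR AUTH TOKEN FOR PROTECTED ENDPOINT>';

async function testEndpoint() {
    const response = await fetch(endpointUrl, {
        method: 'POST',
        headers: {
            'Authorization': `Bearer ${authToken}`, // omit for a Public endpoint
            'Content-Type': 'application/json',
        },
        body: JSON.stringify({inputs: 'Hello! Who are you?'}),
    });
    console.log(await response.json());
}

testEndpoint();
```

If the response contains generated text, the endpoint is ready to use with NLUX.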
NLUX is available as a React JS component and hooks, or as a JavaScript library.
The features are identical for both platforms.
Use the version that best suits your needs.
NLUX + React JS
This guide will walk you through the steps to add NLUX conversational capabilities to a React JS app.
It assumes that you already have a React JS app set up.
If you don't have a React JS app set up yet and you are looking for a quick way to get started, you can use Vite's react-ts template.
Set up a React JS project with Vite
Use the following npm commands to set up a React JS app with TypeScript using Vite's react-ts template:
npm create vite@latest my-ai-chat-app -- --template react-ts
cd my-ai-chat-app
npm install
npm run dev
The last command will start the development server and open the app in your default browser.
NLUX + JavaScript
This guide will walk you through the steps to add NLUX conversational capabilities to a vanilla JavaScript app.
It assumes that you already have a JavaScript development environment set up, with support for ES6 and npm modules.
If you don't have a development environment set up and you are looking for a quick way to get started, you can use Vite's vanilla-ts template, which comes with support for ES6 and npm modules.
Set up a TypeScript project with Vite
Use the following npm commands to set up a new project with the vanilla-ts template:
npm create vite@latest my-ai-chat-app -- --template vanilla-ts
cd my-ai-chat-app
npm install
npm run dev
The last command will start the development server and open the app in your default browser.
2. Install NLUX Packages
React JS:
You can start by adding NLUX to your React JS app using your favorite package manager. At the root of your project, run one of the following commands:
- NPM: npm install @nlux/react @nlux/hf-react
- Yarn: yarn add @nlux/react @nlux/hf-react
- PNPM: pnpm add @nlux/react @nlux/hf-react
This will install the @nlux/react and @nlux/hf-react packages.
JavaScript:
You can start by adding NLUX to your TypeScript project using your favorite package manager. At the root of your project, run one of the following commands:
- NPM: npm install @nlux/core @nlux/hf
- Yarn: yarn add @nlux/core @nlux/hf
- PNPM: pnpm add @nlux/core @nlux/hf
This will install the @nlux/core and @nlux/hf packages.
3. Import Component And Hook
React JS:
Import the useChatAdapter hook and the AiChat component in your JSX file:
import {AiChat} from '@nlux/react';
import {useChatAdapter} from '@nlux/hf-react';
The AiChat component is the main chat component that you will use to display the chat UI.
The useChatAdapter hook is used to create an adapter for the Hugging Face Inference API.
JavaScript:
Import the createAiChat and createChatAdapter functions from the @nlux/core and @nlux/hf packages:
import {createAiChat} from '@nlux/core';
import {createChatAdapter} from '@nlux/hf';
The createAiChat function will create the main chat component that you will use to display the chat UI.
The createChatAdapter function is used to create an adapter for the Hugging Face Inference API.
4. Create Hugging Face Adapter
React JS:
You can use the useChatAdapter hook to create a Hugging Face Inference API adapter. You can optionally import ChatAdapterOptions from @nlux/hf-react to define the type of the options object.
import {
    ChatAdapterOptions,
    useChatAdapter,
    llama2InputPreProcessor,
    llama2OutputPreProcessor,
} from '@nlux/hf-react';

const adapterOptions: ChatAdapterOptions = {
    endpoint: '<YOUR ENDPOINT URL>',
    authToken: '<YOUR AUTH TOKEN FOR PROTECTED ENDPOINT>',
    preProcessors: {
        input: llama2InputPreProcessor,
        output: llama2OutputPreProcessor,
    }
};

export const App = () => {
    const hfAdapter = useChatAdapter(adapterOptions);
    // The adapter will be passed to the AiChat component in the next step.
}
Please note that the authToken is optional and only required if the endpoint is protected.
The preProcessors object is also optional and should only be used for models that require input and output pre-processing. The llama2InputPreProcessor and llama2OutputPreProcessor are provided by NLUX and should be used with the Llama 2 model.
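To give a sense of what an input pre-processor does, here is a sketch of a Llama-2-style pre-processor written as a plain function. The [INST] and <<SYS>> markers come from Llama 2's documented chat prompt format, but the function signature and the system prompt below are illustrative assumptions; check the NLUX reference documentation for the exact pre-processor types before writing your own.

```typescript
// Illustrative sketch only: wraps a raw user message in Llama 2's chat
// prompt format before it is sent to the endpoint. The signature NLUX
// expects may differ; see the reference documentation for the exact types.
const myLlama2StyleInputPreProcessor = (input: string): string => {
    // Hypothetical system prompt; replace with your own instructions.
    const systemPrompt = 'You are a helpful assistant.';
    return `<s>[INST] <<SYS>>\n${systemPrompt}\n<</SYS>>\n\n${input} [/INST]`;
};
```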
The useChatAdapter hook takes config parameters and returns an adapter object. Please refer to the reference documentation for more information on the available options.
JavaScript:
You can use the createChatAdapter function to create a Hugging Face Inference API adapter.
const hfAdapter = createChatAdapter()
    .withEndpoint('<YOUR ENDPOINT URL>');
If your Hugging Face endpoint is protected and requires a token, you can pass it as follows:
const hfAdapter = createChatAdapter()
    .withEndpoint('<YOUR ENDPOINT URL>')
    .withAuthToken('<YOUR TOKEN>');
And because the user input and the AI output require transformation, we will use input and output pre-processors specific to the model that we are using:
import {createChatAdapter, llama2InputPreProcessor, llama2OutputPreProcessor} from '@nlux/hf';

const hfAdapter = createChatAdapter()
    .withEndpoint('<YOUR ENDPOINT URL>')
    .withAuthToken('<YOUR TOKEN>')
    .withInputPreProcessor(llama2InputPreProcessor)
    .withOutputPreProcessor(llama2OutputPreProcessor);
The createChatAdapter function returns an adapter builder that can be configured by chaining methods. Please refer to the reference documentation for more information on the available methods.
5. Create Chat Component
React JS:
Now that we have the HF Inference API adapter, we will create the chat component and pass the adapter to it.
import {AiChat} from '@nlux/react';
import {useChatAdapter, ChatAdapterOptions} from '@nlux/hf-react';

const adapterOptions: ChatAdapterOptions = {
    endpoint: '<YOUR ENDPOINT URL>',
    authToken: '<YOUR AUTH TOKEN FOR PROTECTED ENDPOINT>',
};

export const App = () => {
    const hfAdapter = useChatAdapter(adapterOptions);
    return (
        <AiChat
            adapter={hfAdapter}
            composerOptions={{
                placeholder: 'How can I help you today?'
            }}
        />
    );
};
The AiChat component can take several parameters:
- The first parameter, adapter, is the only required one; it is the adapter that we created earlier.
- The second parameter that we provide here is an object containing the composer options. In this case, we are passing a placeholder text to customize the composer.
For full documentation on how to customize the AiChat component, please refer to the AiChat documentation.
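As an example of further customization, you can give the assistant a persona. The sketch below assumes the personaOptions prop described in the AiChat documentation; the name, avatar URL, and tagline are placeholder values.

```tsx
// A sketch of further customization, assuming the personaOptions prop
// covered in the AiChat documentation. All values below are placeholders.
<AiChat
    adapter={hfAdapter}
    composerOptions={{
        placeholder: 'How can I help you today?'
    }}
    personaOptions={{
        assistant: {
            name: 'LlamaBot',                               // hypothetical name
            avatar: 'https://example.com/llama-avatar.png', // placeholder URL
            tagline: 'Your Llama 2 powered assistant',
        }
    }}
/>
```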
JavaScript:
Now that we have the Hugging Face Inference API adapter, we will create the chat component, pass the adapter to it, and mount it to the DOM.
const hfAdapter = createChatAdapter()
    .withEndpoint('<YOUR ENDPOINT URL>')
    .withAuthToken('<YOUR TOKEN>');

const aiChat = createAiChat().withAdapter(hfAdapter);

document.addEventListener('DOMContentLoaded', () => {
    const chatContainer = document.getElementById('chat-container');
    aiChat.mount(chatContainer!);
});
The function createAiChat() returns a component builder that allows you to configure the chat component by chaining method calls. The withAdapter() method sets the adapter to be used by the chat component.
Note that aiChat.mount(<domElement>) should only be called after the DOM has been loaded.
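The mount code above also expects an element with the id chat-container to exist in the page; that id is our choice in this guide, not something NLUX requires. If you started from Vite's vanilla-ts template, add a matching element to your index.html, for example:

```html
<!-- Container for the chat UI; the id must match the one passed
     to document.getElementById in the mount code above. -->
<div id="chat-container"></div>
```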
For full documentation on how to customize the chat component, please refer to the AiChat documentation.
6. Add CSS Styles
React JS:
NLUX comes with a default CSS theme that you can use to style the chat UI. There are 2 ways to import the stylesheet, depending on your setup.
Using JSX Bundler
You can import it in your JSX component file by installing the @nlux/themes package:
- NPM: npm install @nlux/themes
- Yarn: yarn add @nlux/themes
- PNPM: pnpm add @nlux/themes
Then import the default theme nova.css in your React component:
import '@nlux/themes/nova.css';
This requires a bundler such as Vite or Webpack that is configured to handle CSS imports for global styles. Most modern bundlers handle CSS imports out of the box.
Using HTML Link Tag
Alternatively, you can include the CSS stylesheet in your HTML file.
We provide a CDN link that you can use to include the stylesheet in your HTML file:
<link rel="stylesheet" href="https://content.nlkit.com/nlux/themes/nova.css" />
This CDN link is not meant for production use, and it is only provided for convenience. Make sure you replace it with the latest version of the stylesheet before deploying your app to production.
JavaScript:
NLUX comes with a default CSS theme that you can use to style the chat UI. There are 2 ways to import the stylesheet, depending on your setup.
Using JS Bundler
You can import it in your JS module by installing the @nlux/themes package:
- NPM: npm install @nlux/themes
- Yarn: yarn add @nlux/themes
- PNPM: pnpm add @nlux/themes
Then import the default theme nova.css in your JavaScript or TypeScript file:
import '@nlux/themes/nova.css';
This requires a bundler such as Vite or Webpack that is configured to handle CSS imports for global styles. Most modern bundlers handle CSS imports out of the box.
Using HTML Link Tag
Alternatively, you can include the CSS stylesheet in your HTML file.
We provide a CDN link that you can use to include the stylesheet in your HTML file:
<link rel="stylesheet" href="https://content.nlkit.com/nlux/themes/nova.css" />
This CDN link is not meant for production use, and it is only provided for convenience. Make sure you replace it with the latest version of the stylesheet before deploying your app to production.
7. Run Your App
React JS:
Once you have configured all of the above, your code will look like this:
import {AiChat} from '@nlux/react';
import {
    ChatAdapterOptions,
    useChatAdapter,
    llama2InputPreProcessor,
    llama2OutputPreProcessor,
} from '@nlux/hf-react';
import '@nlux/themes/nova.css';

const adapterOptions: ChatAdapterOptions = {
    endpoint: '<YOUR ENDPOINT URL>',
    authToken: '<YOUR AUTH TOKEN FOR PROTECTED ENDPOINT>',
    preProcessors: {
        input: llama2InputPreProcessor,
        output: llama2OutputPreProcessor,
    }
};

export const App = () => {
    const hfAdapter = useChatAdapter(adapterOptions);
    return (
        <AiChat
            adapter={hfAdapter}
            composerOptions={{
                placeholder: 'How can I help you today?'
            }}
        />
    );
};
You can now run your app and test the chatbot. The result is a fully functional chatbot UI, with NLUX handling all the UI interactions and the communication with the Hugging Face Inference API.
JavaScript:
Once you have configured all of the above, your code will look like this:
import {createAiChat} from '@nlux/core';
import {createChatAdapter, llama2InputPreProcessor, llama2OutputPreProcessor} from '@nlux/hf';
import '@nlux/themes/nova.css';

const hfAdapter = createChatAdapter()
    .withEndpoint('<YOUR ENDPOINT URL>')
    .withAuthToken('<YOUR TOKEN>')
    .withInputPreProcessor(llama2InputPreProcessor)
    .withOutputPreProcessor(llama2OutputPreProcessor);

const aiChat = createAiChat().withAdapter(hfAdapter);

document.addEventListener('DOMContentLoaded', () => {
    const chatContainer = document.getElementById('chat-container');
    aiChat.mount(chatContainer!);
});
You can now run your app and test the chatbot. The result is a fully functional chatbot UI, with NLUX handling all the UI interactions and the communication with the Hugging Face Inference API.