Get Started ― NLUX And Hugging Face Inference API

Hugging Face is a popular platform for sharing and using pre-trained AI models. It provides a wide range of models for various tasks, including text generation, which enables building chatbots and other conversational AI applications.

This guide focuses on NLUX UI and assumes some familiarity with Hugging Face's hosted inference API. If you are new to Hugging Face's inference API, you can check out the official documentation for more information.

The advantage of Hugging Face's inference API is that it allows you to access a wide range of models and use them for inference without having to worry about the underlying infrastructure or the model's implementation details.


1. Set up the Hugging Face Inference API

In this guide, we will use Llama 2, a family of state-of-the-art open-access large language models released by Meta.
To set up Llama 2 on Hugging Face Inference Endpoints, follow these steps:

  1. Log in to Hugging Face and go to the Inference Endpoints page.
  2. Click on New Endpoint.
  3. Select meta-llama/Llama-2-7b-chat-hf in the Model Repository field.
  4. Select your desired instance configuration.
  5. For Endpoint security level, select Protected or Public based on your requirements.
  6. Click on Create Endpoint.

Wait for the instance to initialize. Once the instance is ready, you can use the endpoint to make inference requests.

  • Before moving to the next step, make sure to note down the Endpoint URL from the instance details page.
  • And if you have set the Endpoint security level to Protected, you will need to generate a user access token to authenticate your requests. You can do that in the Access Tokens page. Both values are used in the quick check below and in the adapter configuration later in this guide.
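
Optionally, you can verify that the endpoint responds before wiring it into NLUX. The following is a minimal sketch, assuming a standard text-generation endpoint that accepts an inputs field and a runtime with a built-in fetch (for example Node 18+); replace the placeholders with your own values:

// Quick sanity check of the Hugging Face endpoint.
const endpoint = '<YOUR ENDPOINT URL>'; // from the instance details page
const authToken = '<YOUR AUTH TOKEN>';  // only needed for Protected endpoints

const response = await fetch(endpoint, {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${authToken}`,
    },
    body: JSON.stringify({inputs: 'Hello, Llama!'}),
});

// A text-generation endpoint typically replies with [{generated_text: '...'}].
console.log(await response.json());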

NLUX is available as a React JS component and hooks, or as a JavaScript library.
The features are identical for both platforms. Use the version that best suits your needs. The rest of this guide uses the React flavor; a rough sketch of the vanilla JavaScript flavor is shown below.
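
For reference, here is a rough sketch of the vanilla JavaScript flavor. Treat the names as assumptions to check against the reference documentation: the @nlux/core and @nlux/hf packages, the builder-style createChatAdapter, and the withPromptBoxOptions call are based on the v1.x API but are not verified here.

import {createAiChat} from '@nlux/core';
import {createChatAdapter} from '@nlux/hf';

// Builder-style adapter configuration (assumed API; check the reference docs).
const adapter = createChatAdapter()
    .withEndpoint('<YOUR ENDPOINT URL>')
    .withAuthToken('<YOUR AUTH TOKEN FOR PROTECTED ENDPOINT>');

const aiChat = createAiChat()
    .withAdapter(adapter)
    .withPromptBoxOptions({placeholder: 'How can I help you today?'});

// Attach the chat UI to any element in the page.
aiChat.mount(document.getElementById('chat-root'));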


NLUX + React JS

This guide will walk you through the steps to add NLUX conversational capabilities to a React JS app.
It assumes that you already have a React JS app set up.

If you don't have a React JS app set up yet, and you are looking for a quick way to get started, you can use Vite's react-ts template to quickly set up a React JS app.

Set up a React JS project with Vite

Use the following npm commands to set up a React JS app with TypeScript using Vite's react-ts template:

npm create vite@latest my-ai-chat-app -- --template react-ts
cd my-ai-chat-app
npm install
npm run dev

The last command starts the development server. Open the local URL it prints (by default http://localhost:5173) in your browser.


2. Install NLUX Packages

You can start by adding NLUX to your React JS app using your favorite package manager. At the root of your project, run the following command:

npm install @nlux/react @nlux/hf-react

This will install the @nlux/react and @nlux/hf-react packages.


3. Import Component And Hook

Import the useChatAdapter hook and the AiChat component in your JSX file:

import {AiChat} from '@nlux/react';
import {useChatAdapter} from '@nlux/hf-react';

The AiChat component is the main chat component that you will use to display the chat UI.
The useChatAdapter hook is used to create an adapter for the Hugging Face Inference API.


4. Create Hugging Face Adapter

You can use the useChatAdapter hook to create a Hugging Face Inference API adapter.
You can optionally import ChatAdapterOptions from @nlux/hf-react to define the type of the options object.

import {
    ChatAdapterOptions,
    useChatAdapter,
    llama2InputPreProcessor,
    llama2OutputPreProcessor,
} from '@nlux/hf-react';

const adapterOptions: ChatAdapterOptions = {
    endpoint: '<YOUR ENDPOINT URL>',
    authToken: '<YOUR AUTH TOKEN FOR PROTECTED ENDPOINT>',
    preProcessors: {
        input: llama2InputPreProcessor,
        output: llama2OutputPreProcessor,
    },
};

export const App = () => {
    const hfAdapter = useChatAdapter(adapterOptions);
    // The adapter is passed to the AiChat component in the next step.
};

Please note that the authToken is optional and only required if the endpoint is protected.

The preProcessors object is optional and should only be used for models that require input and output pre-processing. The llama2InputPreProcessor and llama2OutputPreProcessor are provided by NLUX and should be used for the Llama 2 model, as illustrated below.
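
To make the role of pre-processors concrete, here is a hypothetical illustration of what an input pre-processor conceptually does. The function below is not the library's actual implementation or signature; llama2InputPreProcessor handles this for you.

// Hypothetical sketch: Llama 2 chat checkpoints expect prompts wrapped
// in [INST] ... [/INST] tags, so an input pre-processor conceptually
// maps the raw user message to the string sent to the model.
const wrapForLlama2 = (userMessage: string): string =>
    `<s>[INST] ${userMessage} [/INST]`;

// The output pre-processor does the reverse: it strips echoed prompt text
// and special tokens from the raw model output before display in the chat.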

The useChatAdapter hook takes config parameters and returns an adapter object. Please refer to the reference documentation for more information on the available options.


5. Create Chat Component

Now that we have the HF Inference API adapter, we will create the chat component and pass the adapter to it.

import {AiChat} from '@nlux/react';
import {useChatAdapter, ChatAdapterOptions} from '@nlux/hf-react';

const adapterOptions: ChatAdapterOptions = {
    endpoint: '<YOUR ENDPOINT URL>',
    authToken: '<YOUR AUTH TOKEN FOR PROTECTED ENDPOINT>',
};

export const App = () => {
    const hfAdapter = useChatAdapter(adapterOptions);

    return (
        <AiChat
            adapter={hfAdapter}
            promptBoxOptions={{
                placeholder: 'How can I help you today?'
            }}
        />
    );
};

The AiChat component can take several parameters:

  • The first parameter, adapter, is the only required one: it is the adapter that we created earlier.
  • The second parameter that we provide here is an object containing the prompt box options. In this case, we pass a placeholder text to customize the prompt box.

For full documentation on how to customize the AiChat component, please refer to the AiChat documentation.


6. Add CSS Styles

NLUX comes with a default CSS theme that you can use to style the chat UI. There are two ways to import the stylesheet, depending on your setup.

Using JSX Bundler

You can bundle the theme with your app. First, install the @nlux/themes package:

npm install @nlux/themes

Then import the default theme nova.css in your React component:

import '@nlux/themes/nova.css';

This requires a CSS bundler such as Vite or Webpack, configured to handle CSS imports for global styles. Most modern bundlers handle CSS imports out of the box.

Using HTML Link Tag

Alternatively, you can include the CSS stylesheet in your HTML file.
We provide a CDN link that you can use to include the stylesheet in your HTML file:

<link rel="stylesheet" href="https://themes.nlux.ai/v1.0.0/nova.css" />

This CDN link is not meant for production use, and it is only provided for convenience. Make sure you replace it with the latest version of the stylesheet before deploying your app to production.


7. Run Your App

Once you have configured all of the above, your code will look like this:

import {AiChat} from '@nlux/react';
import {
    ChatAdapterOptions,
    useChatAdapter,
    llama2InputPreProcessor,
    llama2OutputPreProcessor,
} from '@nlux/hf-react';
import '@nlux/themes/nova.css';

const adapterOptions: ChatAdapterOptions = {
    endpoint: '<YOUR ENDPOINT URL>',
    authToken: '<YOUR AUTH TOKEN FOR PROTECTED ENDPOINT>',
    preProcessors: {
        input: llama2InputPreProcessor,
        output: llama2OutputPreProcessor,
    },
};

export const App = () => {
    const hfAdapter = useChatAdapter(adapterOptions);

    return (
        <AiChat
            adapter={hfAdapter}
            promptBoxOptions={{
                placeholder: 'How can I help you today?'
            }}
        />
    );
};

You can now run your app and test the chatbot.
The result is a fully functional chatbot UI.
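
If you used the Vite setup from step 1, start the development server again to see it in action:

npm run dev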

NLUX handles all the UI interactions and the communication with the Hugging Face Inference API.