Building an AI Chatbot with Llama and NLUX
In this section, we'll show you how to build an AI chatbot with Llama 2 (hosted on Hugging Face) and NLUX. The guide covers two setups, depending on your stack:
- React JS ⚛️ — using the @nlux/hf-react package
- JavaScript 🟨 — using the @nlux/hf package
How to connect to Llama 2 on Hugging Face with React JS ⚛️
Installation
- NPM: npm install @nlux/hf-react
- Yarn: yarn add @nlux/hf-react
- PNPM: pnpm add @nlux/hf-react
Usage
import {AiChat} from '@nlux/react';
import {
    useChatAdapter,
    llama2InputPreProcessor,
    llama2OutputPreProcessor,
} from '@nlux/hf-react';

export const Component = () => {
    // Create a Hugging Face chat adapter configured for Llama 2
    const adapter = useChatAdapter({
        // Stream tokens to the UI as they are generated
        dataTransferMode: 'stream',
        model: '<MODEL NAME OR URL>',
        systemMessage: '<SYSTEM MESSAGE FOR LLAMA 2>',
        // Convert messages to and from the Llama 2 prompt format
        preProcessors: {
            input: llama2InputPreProcessor,
            output: llama2OutputPreProcessor,
        },
        // Maximum number of tokens the model may generate per response
        maxNewTokens: 800,
    });

    // <AiChat> is the chat UI component from the @nlux/react package
    return (
        <AiChat adapter={adapter} />
    );
};
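To see the chat in action, render the component at the root of a small React app. The sketch below assumes the Component above is exported from a file named Component.tsx and that your index.html contains an element with the id 'root'; the file path and element id are assumptions for this example, not part of NLUX.

// main.tsx — a minimal entry-point sketch that renders the chat component.
// The './Component' path and the 'root' element id are illustrative.
import {createRoot} from 'react-dom/client';
import {Component} from './Component';

const container = document.getElementById('root');
if (container) {
    createRoot(container).render(<Component />);
}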
How to connect to Llama 2 on Hugging Face with JavaScript 🟨
Installation
- NPM: npm install @nlux/hf
- Yarn: yarn add @nlux/hf
- PNPM: pnpm add @nlux/hf
Usage
import {createAiChat} from '@nlux/core';
import {
    createAdapter,
    llama2InputPreProcessor,
    llama2OutputPreProcessor,
} from '@nlux/hf';

// Build a Hugging Face chat adapter configured for Llama 2
const adapter = createAdapter()
    .withDataTransferMode('stream')
    .withModel('<HUGGING FACE MODEL NAME>')
    .withMaxNewTokens(800)
    .withInputPreProcessor(llama2InputPreProcessor)
    .withOutputPreProcessor(llama2OutputPreProcessor);

// Create the chat UI and mount it into an element on the page.
// The element id below is an example; use whichever element hosts the chat.
const rootElement = document.getElementById('nlux-chat-root');
const aiChat = createAiChat().withAdapter(adapter);
aiChat.mount(rootElement);
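The two pre-processors exist because Llama 2 chat models expect a specific prompt template rather than raw text: the user message is wrapped in [INST] ... [/INST] markers, with the system message enclosed in <<SYS>> ... <</SYS>>. The snippet below is only a rough illustration of that template; it is not the actual NLUX implementation, and the helper name is made up for this example.

// Illustration only: roughly the shape of prompt that Llama 2 chat models expect.
// formatLlama2Prompt is a hypothetical helper, not part of NLUX.
const formatLlama2Prompt = (systemMessage: string, userMessage: string): string =>
    `<s>[INST] <<SYS>>\n${systemMessage}\n<</SYS>>\n\n${userMessage} [/INST]`;

// Example: the model receives the wrapped prompt, not the raw user input.
console.log(formatLlama2Prompt(
    'You are a helpful assistant.',
    'What is NLUX?',
));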