# LangServe Endpoints And The Data Transfer Mode

## LangServe Endpoints
When a runnable chain is served via LangServe, several endpoints become available for interacting with it. As of version v0.0.41, a total of 8 endpoints are available for each runnable chain: 5 endpoints for invoking and streaming the runnable, and 3 endpoints for getting its schema.
Those endpoints are not always exposed. Developers can opt to expose only the endpoints they need for their use case. For more information on how LangServe works and how to expose endpoints, please refer to the LangServe documentation.
## Endpoints Consumed by NLUX
In order to build an AI chatbot with NLUX, your LangServe API should expose one (or both) of the following endpoints:

- `POST /my_runnable/invoke` ― Invoke the runnable on a single input and return the output in a single response.
- `POST /my_runnable/stream` ― Invoke the runnable on a single input and stream the output as it is generated.

The following endpoint can be used by NLUX when it's available, but it's not mandatory:

- `GET /my_runnable/input_schema` ― JSON schema for input to the runnable.
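For reference, a direct call to the `invoke` endpoint can be sketched as below. This assumes the standard LangServe request shape, where the payload is wrapped in an `input` field and the reply carries an `output` field; `buildInvokeRequest` is a hypothetical helper for illustration, not part of NLUX.

```typescript
// Hypothetical helper: builds the request that would be sent to
// POST /my_runnable/invoke. LangServe expects the payload wrapped in an
// "input" field and replies with an "output" field.
type InvokeRequest = {
    url: string;
    method: 'POST';
    headers: Record<string, string>;
    body: string;
};

const buildInvokeRequest = (runnableUrl: string, input: unknown): InvokeRequest => ({
    url: `${runnableUrl}/invoke`,
    method: 'POST',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify({input}),
});

// Example usage with fetch:
// const req = buildInvokeRequest('https://pynlux.api.nlux.ai/pirate-speak', 'Hello!');
// const res = await fetch(req.url, req);
// const {output} = await res.json();
```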
## Runnable URL Configuration

The `url` property is the only mandatory property for the LangServe adapter. It should be the URL of your LangServe runnable API, without any endpoint path at the end.
Example:

- Good ✅ ― `https://pynlux.api.nlux.ai/pirate-speak`
- Bad ❌ ― `https://pynlux.api.nlux.ai/pirate-speak/invoke`
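Since the adapter appends the endpoint path itself, a URL that already ends with an endpoint segment will not work. If you receive runnable URLs from configuration, a small helper (hypothetical, not part of NLUX) can normalize them before passing them to the adapter:

```typescript
// Hypothetical helper: strips a trailing "/invoke" or "/stream" segment so
// the adapter receives the bare runnable URL it expects.
const normalizeRunnableUrl = (url: string): string =>
    url.replace(/\/(invoke|stream)\/?$/, '');

// normalizeRunnableUrl('https://pynlux.api.nlux.ai/pirate-speak/invoke')
//   returns 'https://pynlux.api.nlux.ai/pirate-speak'
```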
React JS ⚛️:

```tsx
import {useChatAdapter} from '@nlux/langchain-react';

const adapter = useChatAdapter({
    url: 'https://pynlux.api.nlux.ai/pirate-speak'
});
```

JavaScript 🟨:

```js
import {createAdapter} from '@nlux/langchain';

const adapter = createAdapter().withUrl('https://pynlux.api.nlux.ai/pirate-speak');
```
## Data Transfer Mode
The default data transfer mode for the NLUX LangServe adapter is `stream`. This means that the output is streamed as it is generated by the runnable, which allows for faster response times and a better user experience, as the user can see the output while it is being generated.

If your API only exposes the `POST /my_runnable/invoke` endpoint, you can still use the NLUX LangServe adapter, but you will need to set the data transfer mode to `fetch`.

The table below summarizes the data transfer modes, the endpoints that should be exposed, and the expected behavior:
| Data Transfer Mode | LangServe Endpoint | Expected Behavior |
|---|---|---|
| `stream` (default) | `POST /my_runnable/stream` | Output is streamed as it is generated by the runnable. |
| `fetch` | `POST /my_runnable/invoke` | Output is returned in a single response once the runnable has finished processing the input. |
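The mapping in the table above can be expressed as a small lookup. This is an illustrative sketch of the relationship between mode and endpoint, not actual NLUX internals:

```typescript
type DataTransferMode = 'stream' | 'fetch';

// Illustrative: given a runnable URL and a data transfer mode, return the
// LangServe endpoint the adapter needs (mirrors the table above).
const endpointFor = (runnableUrl: string, mode: DataTransferMode): string =>
    mode === 'stream'
        ? `${runnableUrl}/stream`   // output streamed as it is generated
        : `${runnableUrl}/invoke`;  // output returned in a single response
```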
In order for the NLUX LangServe adapter to work in `stream` mode, the LangServe API should expose the `POST /my_runnable/stream` endpoint. If your LangServe API only exposes the `POST /my_runnable/invoke` endpoint, you should set the data transfer mode to `fetch`.
## Example: LangServe Adapter With `fetch` Data Transfer Mode
The example below shows how to use the NLUX LangServe adapter to connect to a LangServe API.

- LangServe Runnable URL used: `https://pynlux.api.nlux.ai/pirate-speak`
- Data transfer mode used: `fetch` ― The reply is returned in a single response once the runnable has finished processing the input (longer response time).
```tsx
import {AiChat} from '@nlux/react';
import {useChatAdapter} from '@nlux/langchain-react';
import '@nlux/themes/nova.css';

export default () => {
    // LangServe adapter that connects to a demo LangChain Runnable API
    const adapter = useChatAdapter({
        url: 'https://pynlux.api.nlux.ai/pirate-speak',
        dataTransferMode: 'fetch'
    });

    return (
        <AiChat
            adapter={adapter}
            personaOptions={{
                bot: {
                    name: 'FeatherBot',
                    picture: 'https://nlux.ai/images/demos/persona-feather-bot.png',
                    tagline: 'Yer AI First Mate!',
                },
                user: {
                    name: 'Alex',
                    picture: 'https://nlux.ai/images/demos/persona-user.jpeg'
                }
            }}
            layoutOptions={{
                height: 320,
                maxWidth: 600
            }}
        />
    );
};
```
You can edit the live code of the example above to switch the `dataTransferMode` to `stream` and notice the difference.