ChatFriendli

Friendli enhances AI application performance and optimizes cost savings with scalable, efficient deployment options, tailored for high-demand AI workloads.

This tutorial guides you through integrating ChatFriendli with LangChain for chat applications. ChatFriendli offers a flexible approach to generating conversational AI responses, supporting both synchronous and asynchronous calls.

Setup

Ensure that langchain_community and friendli-client are installed.

pip install -U langchain-community friendli-client

Sign in to Friendli Suite to create a Personal Access Token, and set it as the FRIENDLI_TOKEN environment variable.

import getpass
import os

if "FRIENDLI_TOKEN" not in os.environ:
os.environ["FRIENDLI_TOKEN"] = getpass.getpass("Friendi Personal Access Token: ")

You can initialize a Friendli chat model and select the model you want to use. The default model is mixtral-8x7b-instruct-v0-1. You can check the available models at docs.friendli.ai.

from langchain_community.chat_models.friendli import ChatFriendli

chat = ChatFriendli(model="llama-2-13b-chat", max_tokens=100, temperature=0)
API Reference: ChatFriendli
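
Alternatively, you can pass the token directly when constructing the model instead of relying on the FRIENDLI_TOKEN environment variable. A minimal sketch, assuming the friendli_token constructor parameter (check the API reference above to confirm it for your installed version):

# Pass the token explicitly; otherwise it is read from the
# FRIENDLI_TOKEN environment variable.
chat = ChatFriendli(
    model="llama-2-13b-chat",
    friendli_token="YOUR_PERSONAL_ACCESS_TOKEN",  # hypothetical placeholder
)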

Usage

ChatFriendli supports all the methods of ChatModel, including async APIs.

You can also use the functionality of invoke, batch, generate, and stream.

from langchain_core.messages.human import HumanMessage
from langchain_core.messages.system import SystemMessage

system_message = SystemMessage(content="Answer questions as short as you can.")
human_message = HumanMessage(content="Tell me a joke.")
messages = [system_message, human_message]

chat.invoke(messages)
AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!")
chat.batch([messages, messages])
[AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!"),
AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!")]
chat.generate([messages, messages])
LLMResult(generations=[[ChatGeneration(text=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!", message=AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!"))], [ChatGeneration(text=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!", message=AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!"))]], llm_output={}, run=[RunInfo(run_id=UUID('a0c2d733-6971-4ae7-beea-653856f4e57c')), RunInfo(run_id=UUID('f3d35e44-ac9a-459a-9e4b-b8e3a73a91e1'))])
for chunk in chat.stream(messages):
    print(chunk.content, end="", flush=True)
 Knock, knock!
Who's there?
Cows go.
Cows go who?
MOO!
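
Because ChatFriendli implements LangChain's standard Runnable interface, you can also compose it with other components such as prompt templates using the LCEL pipe operator. A minimal sketch (the prompt text here is illustrative):

from langchain_core.prompts import ChatPromptTemplate

# Build a reusable prompt and pipe it into the chat model.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Answer questions as short as you can."),
        ("human", "{question}"),
    ]
)
chain = prompt | chat

# The resulting chain exposes the same invoke/batch/stream methods.
chain.invoke({"question": "Tell me a joke."})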

You can also use all the functionality of the async APIs: ainvoke, abatch, agenerate, and astream.

await chat.ainvoke(messages)
AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!")
await chat.abatch([messages, messages])
[AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!"),
AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!")]
await chat.agenerate([messages, messages])
LLMResult(generations=[[ChatGeneration(text=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!", message=AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!"))], [ChatGeneration(text=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!", message=AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!"))]], llm_output={}, run=[RunInfo(run_id=UUID('f2255321-2d8e-41cc-adbd-3f4facec7573')), RunInfo(run_id=UUID('fcc297d0-6ca9-48cb-9d86-e6f78cade8ee'))])
async for chunk in chat.astream(messages):
    print(chunk.content, end="", flush=True)
 Knock, knock!
Who's there?
Cows go.
Cows go who?
MOO!
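
Note that top-level await as used above works in environments such as Jupyter notebooks. In a plain Python script, you would drive the async API through an event loop instead. A minimal sketch using asyncio:

import asyncio

async def main():
    # Await the async call inside a coroutine and run it on an event loop.
    response = await chat.ainvoke(messages)
    print(response.content)

asyncio.run(main())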
