
LlamaEdge

LlamaEdge lets you chat with LLMs in GGUF format, both locally and through a chat service.

  • LlamaEdgeChatService provides developers with an OpenAI-API-compatible service for chatting with LLMs over HTTP requests.

  • LlamaEdgeChatLocal enables developers to chat with LLMs locally (coming soon).

Both LlamaEdgeChatService and LlamaEdgeChatLocal run on infrastructure powered by WasmEdge Runtime, which provides a lightweight and portable WebAssembly container environment for LLM inference tasks.
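The integration ships in the langchain-community package (with the message classes in langchain-core); a typical install, assuming a pip-based environment:

pip install -qU langchain-community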

Chat via API service

LlamaEdgeChatService works on llama-api-server. By following the steps in the llama-api-server quick start, you can host your own API service, letting you chat with any model you like on any device and from anywhere, as long as an internet connection is available.
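Since llama-api-server exposes an OpenAI-compatible API, you can sanity-check a hosted endpoint with a plain HTTP request before wiring it into LangChain. A minimal sketch, assuming the standard OpenAI-style /v1/chat/completions route and the requests library:

import requests

# replace with your own service url
service_url = "https://b008-54-186-154-209.ngrok-free.app"

# POST an OpenAI-style chat completion request to the hosted server
response = requests.post(
    f"{service_url}/v1/chat/completions",
    json={
        "messages": [
            {"role": "system", "content": "You are an AI assistant"},
            {"role": "user", "content": "What is the capital of France?"},
        ]
    },
    timeout=60,
)

# the response follows the OpenAI chat-completions schema
print(response.json()["choices"][0]["message"]["content"])

Once the endpoint responds, the same service URL can be handed to LlamaEdgeChatService below.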

from langchain_community.chat_models.llama_edge import LlamaEdgeChatService
from langchain_core.messages import HumanMessage, SystemMessage

Chat with LLMs in non-streaming mode

# service url
service_url = "https://b008-54-186-154-209.ngrok-free.app"

# create wasm-chat service instance
chat = LlamaEdgeChatService(service_url=service_url)

# create message sequence
system_message = SystemMessage(content="You are an AI assistant")
user_message = HumanMessage(content="What is the capital of France?")
messages = [system_message, user_message]

# chat with wasm-chat service
response = chat.invoke(messages)

print(f"[Bot] {response.content}")
[Bot] Hello! The capital of France is Paris.
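Because invoke returns an AIMessage, you can carry the conversation forward by appending the reply and a new question to the same message list. A minimal sketch; the follow-up question is illustrative:

# continue the conversation: append the bot's reply and a follow-up question
messages.append(response)
messages.append(HumanMessage(content="And what is the population of that city?"))

# send the extended history back to the same service instance
follow_up = chat.invoke(messages)
print(f"[Bot] {follow_up.content}")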

Chat with LLMs in streaming mode

# service url
service_url = "https://b008-54-186-154-209.ngrok-free.app"

# create wasm-chat service instance
chat = LlamaEdgeChatService(service_url=service_url, streaming=True)

# create message sequence
system_message = SystemMessage(content="You are an AI assistant")
user_message = HumanMessage(content="What is the capital of Norway?")
messages = [
    system_message,
    user_message,
]

output = ""
for chunk in chat.stream(messages):
    # print(chunk.content, end="", flush=True)
    output += chunk.content

print(f"[Bot] {output}")
[Bot]   Hello! I'm happy to help you with your question. The capital of Norway is Oslo.
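Uncommenting the print call inside the loop writes each chunk to the terminal as it arrives (flush=True avoids output buffering), rather than accumulating the full reply in output first.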
