
How to use callbacks in async environments

Prerequisites

This guide assumes familiarity with the following concepts:

Callbacks
Custom callback handlers

If you are planning to use the async APIs, it is recommended to use and extend AsyncCallbackHandler to avoid blocking the event loop.

Warning

If you use a sync CallbackHandler while running your LLM / Chain / Tool / Agent with an async method, it will still work. However, under the hood it will be invoked via run_in_executor, which can cause problems if your CallbackHandler is not thread-safe.
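If you do use a sync handler with async methods, make sure any shared state it mutates is protected, for example with a lock. The following is a minimal sketch; the ThreadSafeCountingHandler name and its counting logic are illustrative and not part of LangChain:

import threading

from langchain_core.callbacks import BaseCallbackHandler


class ThreadSafeCountingHandler(BaseCallbackHandler):
    """Illustrative sync handler that guards shared state with a lock."""

    def __init__(self) -> None:
        self.token_count = 0
        self._lock = threading.Lock()

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # When used with async methods, this runs inside a thread pool,
        # so mutate shared state only while holding the lock.
        with self._lock:
            self.token_count += 1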

Danger

If you are on python<=3.10, you need to remember to propagate config or callbacks when invoking other runnables from within a RunnableLambda, RunnableGenerator, or @tool. If you do not do this, the callbacks will not be propagated to the child runnables being invoked.
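A minimal sketch of that propagation pattern, assuming illustrative parent/child runnables and handler that are not part of this guide:

from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.runnables import RunnableConfig, RunnableLambda


class ChildStartHandler(BaseCallbackHandler):
    """Illustrative handler that reports whenever a chain starts."""

    def on_chain_start(self, serialized, inputs, **kwargs) -> None:
        print("chain start:", inputs)


# Child runnable whose callback events we want to observe.
child = RunnableLambda(lambda x: x.upper())


def call_child(value: str, config: RunnableConfig) -> str:
    # On python<=3.10, explicitly forward `config` (which carries the
    # callbacks) so they reach the child runnable.
    return child.invoke(value, config=config)


parent = RunnableLambda(call_child)
parent.invoke("hello", config={"callbacks": [ChildStartHandler()]})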

import asyncio
from typing import Any, Dict, List

from langchain_anthropic import ChatAnthropic
from langchain_core.callbacks import AsyncCallbackHandler, BaseCallbackHandler
from langchain_core.messages import HumanMessage
from langchain_core.outputs import LLMResult


class MyCustomSyncHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(f"Sync handler being called in a `thread_pool_executor`: token: {token}")


class MyCustomAsyncHandler(AsyncCallbackHandler):
    """Async callback handler that can be used to handle callbacks from langchain."""

    async def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        """Run when the LLM starts running."""
        print("zzzz....")
        await asyncio.sleep(0.3)
        class_name = serialized["name"]
        print("Hi! I just woke up. Your llm is starting")

    async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        """Run when the LLM ends running."""
        print("zzzz....")
        await asyncio.sleep(0.3)
        print("Hi! I just woke up. Your llm is ending")


# To enable streaming, we pass in `streaming=True` to the ChatModel constructor
# Additionally, we pass in a list with our custom handlers
chat = ChatAnthropic(
    model="claude-3-sonnet-20240229",
    max_tokens=25,
    streaming=True,
    callbacks=[MyCustomSyncHandler(), MyCustomAsyncHandler()],
)

await chat.agenerate([[HumanMessage(content="Tell me a joke")]])
zzzz....
Hi! I just woke up. Your llm is starting
Sync handler being called in a `thread_pool_executor`: token: Here
Sync handler being called in a `thread_pool_executor`: token: 's
Sync handler being called in a `thread_pool_executor`: token: a
Sync handler being called in a `thread_pool_executor`: token: little
Sync handler being called in a `thread_pool_executor`: token: joke
Sync handler being called in a `thread_pool_executor`: token: for
Sync handler being called in a `thread_pool_executor`: token: you
Sync handler being called in a `thread_pool_executor`: token: :
Sync handler being called in a `thread_pool_executor`: token:

Why
Sync handler being called in a `thread_pool_executor`: token: can
Sync handler being called in a `thread_pool_executor`: token: 't
Sync handler being called in a `thread_pool_executor`: token: a
Sync handler being called in a `thread_pool_executor`: token: bicycle
Sync handler being called in a `thread_pool_executor`: token: stan
Sync handler being called in a `thread_pool_executor`: token: d up
Sync handler being called in a `thread_pool_executor`: token: by
Sync handler being called in a `thread_pool_executor`: token: itself
Sync handler being called in a `thread_pool_executor`: token: ?
Sync handler being called in a `thread_pool_executor`: token: Because
Sync handler being called in a `thread_pool_executor`: token: it
Sync handler being called in a `thread_pool_executor`: token: 's
Sync handler being called in a `thread_pool_executor`: token: two
Sync handler being called in a `thread_pool_executor`: token: -
Sync handler being called in a `thread_pool_executor`: token: tire
zzzz....
Hi! I just woke up. Your llm is ending
LLMResult(generations=[[ChatGeneration(text="Here's a little joke for you:\n\nWhy can't a bicycle stand up by itself? Because it's two-tire", message=AIMessage(content="Here's a little joke for you:\n\nWhy can't a bicycle stand up by itself? Because it's two-tire", id='run-8afc89e8-02c0-4522-8480-d96977240bd4-0'))]], llm_output={}, run=[RunInfo(run_id=UUID('8afc89e8-02c0-4522-8480-d96977240bd4'))])

Next steps

You've now learned how to create your own custom callback handlers.

Next, check out the other how-to guides in this section, such as how to attach callbacks to a runnable.

