Migrating from LLMRouterChain

LLMRouterChain routes an input query to one of multiple destinations: given an input query, it uses an LLM to select from a list of destination chains and passes the input to the selected chain.

LLMRouterChain does not support common chat model features, such as message roles and tool calling. Under the hood, LLMRouterChain routes a query by instructing the LLM to generate JSON-formatted text and parsing out the intended destination.
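As a minimal illustration of that parsing step, the sketch below feeds a hypothetical LLM response through the RouterOutputParser that LLMRouterChain applies to the raw model output (the sample string is made up to match the JSON format requested by the router prompt shown next):

from langchain.chains.router.llm_router import RouterOutputParser

# Hypothetical raw LLM output in the requested JSON format:
sample_output = """```json
{
    "destination": "vegetables",
    "next_inputs": "What color are carrots?"
}
```"""

parsed = RouterOutputParser().parse(sample_output)
print(parsed["destination"])  # vegetables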

Consider an example from MultiPromptChain, which used LLMRouterChain. Below is a (default) router prompt:

from langchain.chains.router.multi_prompt import MULTI_PROMPT_ROUTER_TEMPLATE

destinations = """
animals: prompt for animal expert
vegetables: prompt for a vegetable expert
"""

router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations)

print(router_template.replace("`", "'")) # for rendering purposes
Given a raw text input to a language model select the model prompt best suited for the input. You will be given the names of the available prompts and a description of what the prompt is best suited for. You may also revise the original input if you think that revising it will ultimately lead to a better response from the language model.

<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
'''json
{{
    "destination": string \ name of the prompt to use or "DEFAULT"
    "next_inputs": string \ a potentially modified version of the original input
}}
'''

REMEMBER: "destination" MUST be one of the candidate prompt names specified below OR it can be "DEFAULT" if the input is not well suited for any of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input if you don't think any modifications are needed.

<< CANDIDATE PROMPTS >>

animals: prompt for animal expert
vegetables: prompt for a vegetable expert


<< INPUT >>
{input}

<< OUTPUT (must include '''json at the start of the response) >>
<< OUTPUT (must end with ''') >>

Most of the behavior is determined via a single natural language prompt. Chat models that support tool calling features confer a number of advantages for this task:

  • Supports chat prompt templates, including messages with system and other roles;
  • Tool-calling models are fine-tuned to generate structured output;
  • Support for runnable methods like streaming and async operations (see the sketch after the LCEL example below).

Now let's look at LLMRouterChain side-by-side with an LCEL implementation that uses tool calling. Note that for this guide we will use langchain-openai >= 0.1.20:

%pip install -qU langchain-core langchain-openai
import os
from getpass import getpass

if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass()

Legacy

from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

router_prompt = PromptTemplate(
    # Note: here we use the prompt template from above. Generally this would need
    # to be customized.
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)

chain = LLMRouterChain.from_llm(llm, router_prompt)
result = chain.invoke({"input": "What color are carrots?"})

print(result["destination"])
vegetables
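Note that alongside "destination", the legacy chain also returns "next_inputs", a potentially revised version of the query that MultiPromptChain would forward to the selected destination chain:

print(result["next_inputs"])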

LCEL

from operator import itemgetter
from typing import Literal

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI
from typing_extensions import TypedDict

llm = ChatOpenAI(model="gpt-4o-mini")

route_system = "Route the user's query to either the animal or vegetable expert."
route_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", route_system),
        ("human", "{input}"),
    ]
)


# Define schema for output:
class RouteQuery(TypedDict):
    """Route query to destination expert."""

    destination: Literal["animal", "vegetable"]


# Instead of writing formatting instructions into the prompt, we
# leverage .with_structured_output to coerce the output into a simple
# schema.
chain = route_prompt | llm.with_structured_output(RouteQuery)
result = chain.invoke({"input": "What color are carrots?"})

print(result["destination"])
vegetable
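Because the LCEL chain is a standard runnable, methods like batching, streaming, and async invocation come for free. A minimal sketch, reusing the chain defined above:

# Batch multiple queries in a single call:
results = chain.batch(
    [
        {"input": "What color are carrots?"},
        {"input": "How many legs does a spider have?"},
    ]
)
print([r["destination"] for r in results])

# Async variants are also available, e.g.:
# result = await chain.ainvoke({"input": "How many legs does a spider have?"})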

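To mirror how MultiPromptChain forwarded the query to the selected chain, you can dispatch on the routing result. Below is a hedged sketch; the two destination chains are hypothetical stand-ins for your own:

from langchain_core.prompts import ChatPromptTemplate

# Hypothetical destination chains for illustration:
animal_chain = (
    ChatPromptTemplate.from_template("You are an animal expert. {input}") | llm
)
vegetable_chain = (
    ChatPromptTemplate.from_template("You are a vegetable expert. {input}") | llm
)


def route_and_answer(inputs: dict):
    """Route the query, then forward it to the selected destination chain."""
    destination = chain.invoke(inputs)["destination"]
    target = animal_chain if destination == "animal" else vegetable_chain
    return target.invoke(inputs)


print(route_and_answer({"input": "What color are carrots?"}).content)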
Next steps

See this tutorial for more detail on building with prompt templates, LLMs, and output parsers.

See the LCEL conceptual docs for more background information.

