
Migrating from MultiPromptChain

MultiPromptChain routed an input query to one of multiple LLMChains: given an input query, it used a LLM to select from a list of prompts, formatted the query into the selected prompt, and generated a response.

MultiPromptChain does not support common chat model features, such as message roles and tool calling.

A LangGraph implementation confers a number of advantages for this problem:

  • Supports chat prompt templates, including messages with system and other roles;
  • Supports the use of tool calling in the routing step;
  • Supports streaming of both individual steps and output tokens (a streaming sketch appears near the end of this guide).

Let's now look at them side by side. Note that for this guide we will use langchain-openai >= 0.1.20:

%pip install -qU langchain-core langchain-openai
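Since the guide requires langchain-openai >= 0.1.20, you can optionally confirm the installed version; a small sketch using only the standard library:

from importlib.metadata import version

print(version("langchain-openai"))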
import os
from getpass import getpass

if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass()

Legacy

Details
from langchain.chains.router.multi_prompt import MultiPromptChain
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

prompt_1_template = """
You are an expert on animals. Please answer the below query:

{input}
"""

prompt_2_template = """
You are an expert on vegetables. Please answer the below query:

{input}
"""

prompt_infos = [
    {
        "name": "animals",
        "description": "prompt for an animal expert",
        "prompt_template": prompt_1_template,
    },
    {
        "name": "vegetables",
        "description": "prompt for a vegetable expert",
        "prompt_template": prompt_2_template,
    },
]

chain = MultiPromptChain.from_prompts(llm, prompt_infos)
chain.invoke({"input": "What color are carrots?"})
{'input': 'What color are carrots?',
 'text': 'Carrots are most commonly orange, but they can also be found in a variety of other colors including purple, yellow, white, and red. The orange variety is the most popular and widely recognized.'}

In the LangSmith trace we can see the two steps of this process, including the prompt that routes the query and the final selected prompt.

LangGraph

Details
%pip install -qU langgraph
from typing import Literal

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph
from typing_extensions import TypedDict

llm = ChatOpenAI(model="gpt-4o-mini")

# Define the prompts we will route to
prompt_1 = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an expert on animals."),
        ("human", "{input}"),
    ]
)
prompt_2 = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an expert on vegetables."),
        ("human", "{input}"),
    ]
)

# Construct the chains we will route to. These format the input query
# into the respective prompt, run it through a chat model, and cast
# the result to a string.
chain_1 = prompt_1 | llm | StrOutputParser()
chain_2 = prompt_2 | llm | StrOutputParser()


# Next: define the chain that selects which branch to route to.
# Here we will take advantage of tool-calling features to force
# the output to select one of two desired branches.
route_system = "Route the user's query to either the animal or vegetable expert."
route_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", route_system),
        ("human", "{input}"),
    ]
)


# Define schema for output:
class RouteQuery(TypedDict):
    """Route query to destination expert."""

    destination: Literal["animal", "vegetable"]


route_chain = route_prompt | llm.with_structured_output(RouteQuery)
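# (Optional) Sanity-check the router in isolation. with_structured_output
# on a TypedDict returns a dict matching the RouteQuery schema; an
# illustrative result:
# route_chain.invoke({"input": "What color are carrots?"})
# -> {'destination': 'vegetable'}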


# For LangGraph, we will define the state of the graph to hold the query,
# destination, and final answer.
class State(TypedDict):
    query: str
    destination: RouteQuery
    answer: str


# We define functions for each node, including routing the query:
async def route_query(state: State, config: RunnableConfig):
    destination = await route_chain.ainvoke(state["query"], config)
    return {"destination": destination}


# And one node for each prompt
async def prompt_1(state: State, config: RunnableConfig):
    return {"answer": await chain_1.ainvoke(state["query"], config)}


async def prompt_2(state: State, config: RunnableConfig):
    return {"answer": await chain_2.ainvoke(state["query"], config)}


# We then define logic that selects the prompt based on the classification
def select_node(state: State) -> Literal["prompt_1", "prompt_2"]:
    # state["destination"] holds a RouteQuery dict, so we read its
    # "destination" key before comparing:
    if state["destination"]["destination"] == "animal":
        return "prompt_1"
    else:
        return "prompt_2"


# Finally, assemble the multi-prompt chain. This is a sequence of two steps:
# 1) Select "animal" or "vegetable" via the route_chain, and collect the answer
# alongside the input query.
# 2) Route the input query to chain_1 or chain_2, based on the
# selection.
graph = StateGraph(State)
graph.add_node("route_query", route_query)
graph.add_node("prompt_1", prompt_1)
graph.add_node("prompt_2", prompt_2)

graph.add_edge(START, "route_query")
graph.add_conditional_edges("route_query", select_node)
graph.add_edge("prompt_1", END)
graph.add_edge("prompt_2", END)
app = graph.compile()
from IPython.display import Image

Image(app.get_graph().draw_mermaid_png())

We can invoke the chain as follows:

state = await app.ainvoke({"query": "what color are carrots"})
print(state["destination"])
print(state["answer"])
{'destination': 'vegetable'}
Carrots are most commonly orange, but they can also come in a variety of other colors, including purple, red, yellow, and white. The different colors often indicate varying flavors and nutritional profiles. For example, purple carrots contain anthocyanins, while orange carrots are rich in beta-carotene, which is converted to vitamin A in the body.

In the LangSmith trace we can see the tool call that routed the query and the prompt that was selected to generate the answer.
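As noted above, a further advantage of the LangGraph implementation is streaming. Below is a minimal sketch reusing the app compiled above; stream_mode="updates" emits each node's state update as it completes, and stream_mode="messages" can be used instead to stream individual output tokens:

async for chunk in app.astream({"query": "what color are carrots"}, stream_mode="updates"):
    print(chunk)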

Overview:

  • Under the hood, MultiPromptChain routed the query by instructing the LLM to generate JSON-formatted text and parsing out the intended destination. It took a registry of string prompt templates as input.
  • The LangGraph implementation, built above via lower-level primitives, uses tool calling to route to arbitrary chains. In this example, the chains consist of chat prompt templates and chat models.
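To make the contrast concrete, here is a rough sketch of JSON-based routing in the legacy style. The prompt wording below is illustrative, not MultiPromptChain's actual router prompt:

from langchain_core.output_parsers import JsonOutputParser

# The destination is parsed out of the raw generated text, rather than
# extracted from a structured tool call:
legacy_route_prompt = ChatPromptTemplate.from_template(
    'Return a JSON object like {{"destination": "animals"}}, choosing the best '
    "prompt for the query from: animals, vegetables.\n\nQuery: {input}"
)
legacy_router = legacy_route_prompt | llm | JsonOutputParser()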

Next steps:

See this tutorial for more detail on building with prompt templates, LLMs, and output parsers.

Check out the LangGraph documentation for detail on building with LangGraph.

