How to use few-shot prompting with tool calling
For more complex tool use it is very useful to add few-shot examples to the prompt. We can do this by adding AIMessages with ToolCalls and corresponding ToolMessages to our prompt.
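Concretely, each fabricated example is just an ordinary message sequence. Here is a minimal sketch of the pattern with made-up values (the real examples for this guide are built further below):

from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

# One fabricated turn: the "assistant" requests a tool call, a ToolMessage with the
# matching tool_call_id supplies the result, and the assistant then answers in text.
pattern = [
    HumanMessage("What is 2 plus 3?"),
    AIMessage("", tool_calls=[{"name": "add", "args": {"a": 2, "b": 3}, "id": "ex1"}]),
    ToolMessage("5", tool_call_id="ex1"),
    AIMessage("2 plus 3 is 5."),
]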
First, let's define our tools and model.
from langchain_core.tools import tool


@tool
def add(a: int, b: int) -> int:
    """Adds a and b."""
    return a + b


@tool
def multiply(a: int, b: int) -> int:
    """Multiplies a and b."""
    return a * b


tools = [add, multiply]
API Reference: tool
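As a quick sanity check (an aside, not part of the original walkthrough), tools created with the @tool decorator can be invoked directly with a dict of arguments:

multiply.invoke({"a": 119, "b": 8})  # -> 952
add.invoke({"a": 952, "b": -20})  # -> 932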
import os
from getpass import getpass

from langchain_openai import ChatOpenAI

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass()

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
llm_with_tools = llm.bind_tools(tools)
API Reference: ChatOpenAI
Let's run our model, where we can see that even with some special instructions it can get tripped up by the order of operations.
llm_with_tools.invoke(
    "Whats 119 times 8 minus 20. Don't do any math yourself, only use tools for math. Respect order of operations"
).tool_calls

[{'name': 'Multiply',
  'args': {'a': 119, 'b': 8},
  'id': 'call_T88XN6ECucTgbXXkyDeC2CQj'},
 {'name': 'Add',
  'args': {'a': 952, 'b': -20},
  'id': 'call_licdlmGsRqzup8rhqJSb1yZ4'}]
The model shouldn't be trying to add anything yet, since it technically can't know the result of 119 * 8 yet.
By adding a prompt with some examples we can correct this behavior:
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

examples = [
    HumanMessage(
        "What's the product of 317253 and 128472 plus four", name="example_user"
    ),
    AIMessage(
        "",
        name="example_assistant",
        tool_calls=[
            {"name": "Multiply", "args": {"x": 317253, "y": 128472}, "id": "1"}
        ],
    ),
    ToolMessage("16505054784", tool_call_id="1"),
    AIMessage(
        "",
        name="example_assistant",
        tool_calls=[{"name": "Add", "args": {"x": 16505054784, "y": 4}, "id": "2"}],
    ),
    ToolMessage("16505054788", tool_call_id="2"),
    AIMessage(
        "The product of 317253 and 128472 plus four is 16505054788",
        name="example_assistant",
    ),
]
system = """You are bad at math but are an expert at using a calculator.
Use past tool usage as an example of how to correctly use the tools."""
few_shot_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        *examples,
        ("human", "{query}"),
    ]
)
chain = {"query": RunnablePassthrough()} | few_shot_prompt | llm_with_tools
chain.invoke("Whats 119 times 8 minus 20").tool_calls
[{'name': 'Multiply',
  'args': {'a': 119, 'b': 8},
  'id': 'call_9MvuwQqg7dlJupJcoTWiEsDo'}]
And we get the correct output this time.
Here's what the LangSmith trace looks like.
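If you want to carry the example through to a final answer, the sketch below (an illustration that assumes you execute the tool calls yourself; it is not part of the original guide) runs each requested tool, feeds the results back as ToolMessages, and re-invokes the model until it stops asking for tools:

from langchain_core.messages import ToolMessage

query = "Whats 119 times 8 minus 20"
# Start from the same few-shot prompt the chain uses, rendered to a list of messages.
messages = few_shot_prompt.invoke({"query": query}).to_messages()
tool_map = {"add": add, "multiply": multiply}

ai_msg = llm_with_tools.invoke(messages)
while ai_msg.tool_calls:
    messages.append(ai_msg)
    for tool_call in ai_msg.tool_calls:
        # Run the requested tool and report its output under the same tool_call_id.
        selected_tool = tool_map[tool_call["name"].lower()]
        tool_output = selected_tool.invoke(tool_call["args"])
        messages.append(ToolMessage(str(tool_output), tool_call_id=tool_call["id"]))
    ai_msg = llm_with_tools.invoke(messages)

print(ai_msg.content)  # a plain-text answer; 119 * 8 - 20 is 932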