
Migrating off ConversationBufferMemory or ConversationStringBufferMemory

ConversationBufferMemory and ConversationStringBufferMemory were used to keep track of a conversation between a human and an AI assistant without any additional processing.

Note

ConversationStringBufferMemory is equivalent to ConversationBufferMemory but was aimed at LLMs that were not chat models.

The methods for handling conversation history using existing modern primitives are:

  1. Using LangGraph persistence along with appropriate processing of the message history
  2. Using LCEL with RunnableWithMessageHistory combined with appropriate processing of the message history.

Most users will find LangGraph persistence both easier to use and configure than the equivalent LCEL, especially for more advanced use cases.

Setup

%%capture --no-stderr
%pip install --upgrade --quiet langchain-openai langchain
import os
from getpass import getpass

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass()

Usage with LLMChain / ConversationChain

This section shows how to migrate off ConversationBufferMemory or ConversationStringBufferMemory that's used together with either an LLMChain or a ConversationChain.

Legacy

Below is an example of ConversationBufferMemory used with an LLMChain or an equivalent ConversationChain.

Details
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
from langchain_core.messages import SystemMessage
from langchain_core.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
)
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate(
    [
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{text}"),
    ]
)

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

legacy_chain = LLMChain(
    llm=ChatOpenAI(),
    prompt=prompt,
    memory=memory,
)

legacy_result = legacy_chain.invoke({"text": "my name is bob"})
print(legacy_result)

legacy_result = legacy_chain.invoke({"text": "what was my name"})
{'text': 'Hello Bob! How can I assist you today?', 'chat_history': [HumanMessage(content='my name is bob', additional_kwargs={}, response_metadata={}), AIMessage(content='Hello Bob! How can I assist you today?', additional_kwargs={}, response_metadata={})]}
legacy_result["text"]
'Your name is Bob. How can I assist you today, Bob?'
Note

Note that there is no support for separating conversation threads in a single memory object.

LangGraph

The example below shows how to use LangGraph to implement a ConversationChain or LLMChain with ConversationBufferMemory.

This example assumes that you're already somewhat familiar with LangGraph. If you're not, then please see the LangGraph Quickstart Guide for more details.

LangGraph offers a lot of additional functionality (e.g., time-travel and interrupts) and will work well for other more complex (and realistic) architectures.

Details
import uuid

from IPython.display import Image, display
from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, MessagesState, StateGraph

# Define a new graph
workflow = StateGraph(state_schema=MessagesState)

# Define a chat model
model = ChatOpenAI()


# Define the function that calls the model
def call_model(state: MessagesState):
    response = model.invoke(state["messages"])
    # We return a list, because this will get added to the existing list
    return {"messages": response}


# Define the two nodes we will cycle between
workflow.add_edge(START, "model")
workflow.add_node("model", call_model)


# Adding memory is straightforward in langgraph!
memory = MemorySaver()

app = workflow.compile(checkpointer=memory)


# The thread id is a unique key that identifies
# this particular conversation.
# We'll just generate a random uuid here.
# This enables a single application to manage conversations among multiple users.
thread_id = uuid.uuid4()
config = {"configurable": {"thread_id": thread_id}}


input_message = HumanMessage(content="hi! I'm bob")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()

# Here, let's confirm that the AI remembers our name!
input_message = HumanMessage(content="what was my name?")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
================================ Human Message =================================

hi! I'm bob
================================== Ai Message ==================================

Hello Bob! How can I assist you today?
================================ Human Message =================================

what was my name?
================================== Ai Message ==================================

Your name is Bob. How can I help you today, Bob?

LCEL: RunnableWithMessageHistory

Alternatively, if you have a simple chain, you can wrap the chat model of the chain within a RunnableWithMessageHistory.

Please refer to the following migration guide for more information.

Usage with a pre-built agent

This example shows how to use an Agent Executor with a pre-built agent constructed using the create_tool_calling_agent function.

If you are using one of the old LangChain pre-built agents, you should be able to replace that code with the new langgraph pre-built agent, which leverages native tool-calling capabilities of chat models and will likely work better out of the box.

Legacy usage

Details
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.memory import ConversationBufferMemory
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

model = ChatOpenAI(temperature=0)


@tool
def get_user_age(name: str) -> str:
    """Use this tool to find the user's age."""
    # This is a placeholder for the actual implementation
    if "bob" in name.lower():
        return "42 years old"
    return "41 years old"


tools = [get_user_age]

prompt = ChatPromptTemplate.from_messages(
    [
        ("placeholder", "{chat_history}"),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)

# Construct the Tools agent
agent = create_tool_calling_agent(model, tools, prompt)
# Instantiate memory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    memory=memory,  # Pass the memory to the executor
)

# Verify that the agent can use tools
print(agent_executor.invoke({"input": "hi! my name is bob what is my age?"}))
print()
# Verify that the agent has access to conversation history.
# The agent should be able to answer that the user's name is bob.
print(agent_executor.invoke({"input": "do you remember my name?"}))
{'input': 'hi! my name is bob what is my age?', 'chat_history': [HumanMessage(content='hi! my name is bob what is my age?', additional_kwargs={}, response_metadata={}), AIMessage(content='Bob, you are 42 years old.', additional_kwargs={}, response_metadata={})], 'output': 'Bob, you are 42 years old.'}

{'input': 'do you remember my name?', 'chat_history': [HumanMessage(content='hi! my name is bob what is my age?', additional_kwargs={}, response_metadata={}), AIMessage(content='Bob, you are 42 years old.', additional_kwargs={}, response_metadata={}), HumanMessage(content='do you remember my name?', additional_kwargs={}, response_metadata={}), AIMessage(content='Yes, your name is Bob.', additional_kwargs={}, response_metadata={})], 'output': 'Yes, your name is Bob.'}

LangGraph

You can follow the standard LangChain tutorial for building an agent for an in-depth explanation of how this works.

This example is shown here explicitly to make it easier for users to compare the legacy implementation vs. the corresponding langgraph implementation.

This example shows how to add memory to the pre-built ReAct agent in langgraph.

For more details, please see the How to add memory to the prebuilt ReAct agent guide in langgraph.

Details
import uuid

from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent


@tool
def get_user_age(name: str) -> str:
    """Use this tool to find the user's age."""
    # This is a placeholder for the actual implementation
    if "bob" in name.lower():
        return "42 years old"
    return "41 years old"


memory = MemorySaver()
model = ChatOpenAI()
app = create_react_agent(
    model,
    tools=[get_user_age],
    checkpointer=memory,
)

# The thread id is a unique key that identifies
# this particular conversation.
# We'll just generate a random uuid here.
# This enables a single application to manage conversations among multiple users.
thread_id = uuid.uuid4()
config = {"configurable": {"thread_id": thread_id}}

# Tell the AI that our name is Bob, and ask it to use a tool to confirm
# that it's capable of working like an agent.
input_message = HumanMessage(content="hi! I'm bob. What is my age?")

for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()

# Confirm that the chat bot has access to previous conversation
# and can respond to the user saying that the user's name is Bob.
input_message = HumanMessage(content="do you remember my name?")

for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
================================ Human Message =================================

hi! I'm bob. What is my age?
================================== Ai Message ==================================
Tool Calls:
get_user_age (call_oEDwEbIDNdokwqhAV6Azn47c)
Call ID: call_oEDwEbIDNdokwqhAV6Azn47c
Args:
name: bob
================================= Tool Message =================================
Name: get_user_age

42 years old
================================== Ai Message ==================================

Bob, you are 42 years old! If you need any more assistance or information, feel free to ask.
================================ Human Message =================================

do you remember my name?
================================== Ai Message ==================================

Yes, your name is Bob. If you have any other questions or need assistance, feel free to ask!

If we use a different thread ID, it'll start a new conversation and the bot will not know our name!

config = {"configurable": {"thread_id": "123456789"}}

input_message = HumanMessage(content="hi! do you remember my name?")

for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
================================ Human Message =================================

hi! do you remember my name?
================================== Ai Message ==================================

Hello! Yes, I remember your name. It's great to see you again! How can I assist you today?

Next steps

Explore persistence with LangGraph

Add persistence with simple LCEL (favor langgraph for more complex use cases)

Working with message histories

