
How to parse text from message objects

Prerequisites

LangChain message objects support content in a variety of formats, including text, multimodal data, and lists of content-block dictionaries.

The format of a chat model's response content can depend on the provider. For example, Anthropic's chat models will return string content for a typical string input:

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-haiku-latest")

response = llm.invoke("Hello")
response.content
API Reference: ChatAnthropic
'Hi there! How are you doing today? Is there anything I can help you with?'

When tool calls are generated, however, the response content is structured into content blocks that convey the model's reasoning process:

from langchain_core.tools import tool


@tool
def get_weather(location: str) -> str:
    """Get the weather from a location."""
    return "Sunny."


llm_with_tools = llm.bind_tools([get_weather])

response = llm_with_tools.invoke("What's the weather in San Francisco, CA?")
response.content
API Reference: tool
[{'text': "I'll help you get the current weather for San Francisco, California. Let me check that for you right away.",
'type': 'text'},
{'id': 'toolu_015PwwcKxWYctKfY3pruHFyy',
'input': {'location': 'San Francisco, CA'},
'name': 'get_weather',
'type': 'tool_use'}]
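To pull the text out of such a list by hand, one could filter for blocks of type "text". Below is a minimal sketch (the helper name `extract_text` is our own, not a LangChain API) that handles both string content and block-list content:

```python
def extract_text(content):
    # Message content may be a plain string or a list of block dicts,
    # so handle both shapes before filtering for text blocks.
    if isinstance(content, str):
        return content
    return "".join(
        block["text"]
        for block in content
        if isinstance(block, dict) and block.get("type") == "text"
    )


# Mirrors the content-block output shown above.
blocks = [
    {"text": "I'll help you get the current weather.", "type": "text"},
    {
        "id": "toolu_015PwwcKxWYctKfY3pruHFyy",
        "input": {"location": "San Francisco, CA"},
        "name": "get_weather",
        "type": "tool_use",
    },
]
print(extract_text(blocks))  # I'll help you get the current weather.
print(extract_text("Hello"))  # Hello
```

Rather than maintaining a helper like this for each provider's block format, LangChain offers a built-in solution, shown next.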

To automatically parse text from message objects regardless of the format of the underlying content, we can use StrOutputParser. We can compose it with a chat model as follows:

from langchain_core.output_parsers import StrOutputParser

chain = llm_with_tools | StrOutputParser()
API Reference: StrOutputParser

StrOutputParser simplifies extracting text from message objects:

response = chain.invoke("What's the weather in San Francisco, CA?")
print(response)
I'll help you check the weather in San Francisco, CA right away.

This is particularly useful in streaming contexts:

for chunk in chain.stream("What's the weather in San Francisco, CA?"):
print(chunk, end="|")
|I'll| help| you get| the current| weather for| San Francisco, California|. Let| me retrieve| that| information for you.||||||||||
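The `|` separators above just mark chunk boundaries. Since StrOutputParser yields plain strings, the complete response is simply the concatenation of the streamed chunks; a minimal illustration with hypothetical chunk values:

```python
# Hypothetical chunks, standing in for what chain.stream() might yield;
# joining them reconstructs the complete response text.
chunks = ["I'll", " help", " you get", " the current", " weather."]
full_text = "".join(chunks)
print(full_text)  # I'll help you get the current weather.
```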

See the API reference for more information.
