
ChatAnthropic

This notebook provides a quick overview for getting started with Anthropic chat models. For detailed documentation of all ChatAnthropic features and configurations, head to the API reference.

Anthropic has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the Anthropic docs.

AWS Bedrock and Google VertexAI

Note that certain Anthropic models can also be accessed via AWS Bedrock and Google VertexAI. See the ChatBedrock and ChatVertexAI integrations to use Anthropic models via those services.

Overview

Integration details

Class: ChatAnthropic
Package: langchain-anthropic
Serializable: beta
Package downloads and latest version: PyPI

Model features

Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs

Setup

To access Anthropic models, you'll need to create an Anthropic account, get an API key, and install the langchain-anthropic integration package.

Credentials

Head to https://console.anthropic.com/ to sign up for Anthropic and generate an API key. Once you've done this, set the ANTHROPIC_API_KEY environment variable:

import getpass
import os

if "ANTHROPIC_API_KEY" not in os.environ:
os.environ["ANTHROPIC_API_KEY"] = getpass.getpass("Enter your Anthropic API key: ")

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting below:

# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"

Installation

The LangChain Anthropic integration lives in the langchain-anthropic package:

%pip install -qU langchain-anthropic
This guide requires langchain-anthropic>=0.3.10.
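
To confirm the installed version, a quick check using the standard library:

from importlib.metadata import version

print(version("langchain-anthropic"))  # expect 0.3.10 or newer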

Instantiation

Now we can instantiate our model object and generate chat completions:

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-3-5-sonnet-20240620",
    temperature=0,
    max_tokens=1024,
    timeout=None,
    max_retries=2,
    # other params...
)
API Reference: ChatAnthropic

Invocation

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg
AIMessage(content="J'adore la programmation.", response_metadata={'id': 'msg_018Nnu76krRPq8HvgKLW4F8T', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 29, 'output_tokens': 11}}, id='run-57e9295f-db8a-48dc-9619-babd2bedd891-0', usage_metadata={'input_tokens': 29, 'output_tokens': 11, 'total_tokens': 40})
print(ai_msg.content)
J'adore la programmation.
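
ChatAnthropic also supports token-level streaming via the standard Runnable interface. A minimal sketch, reusing the messages defined above:

for chunk in llm.stream(messages):
    # Each chunk is an AIMessageChunk; depending on the package version, its
    # content may be a plain string or a list of content blocks.
    print(chunk.content, end="|")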

Chaining

We can chain our model with a prompt template like so:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
API Reference: ChatPromptTemplate
AIMessage(content="Here's the German translation:\n\nIch liebe Programmieren.", response_metadata={'id': 'msg_01GhkRtQZUkA5Ge9hqmD8HGY', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 23, 'output_tokens': 18}}, id='run-da5906b4-b200-4e08-b81a-64d4453643b6-0', usage_metadata={'input_tokens': 23, 'output_tokens': 18, 'total_tokens': 41})

Content blocks

Content from a single Anthropic AI message can either be a single string or a list of content blocks. For example, when an Anthropic model invokes a tool, the tool invocation is part of the message content (as well as being exposed in the standardized AIMessage.tool_calls):

from pydantic import BaseModel, Field


class GetWeather(BaseModel):
    """Get the current weather in a given location"""

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


llm_with_tools = llm.bind_tools([GetWeather])
ai_msg = llm_with_tools.invoke("Which city is hotter today: LA or NY?")
ai_msg.content
[{'text': "To answer this question, we'll need to check the current weather in both Los Angeles (LA) and New York (NY). I'll use the GetWeather function to retrieve this information for both cities.",
'type': 'text'},
{'id': 'toolu_01Ddzj5PkuZkrjF4tafzu54A',
'input': {'location': 'Los Angeles, CA'},
'name': 'GetWeather',
'type': 'tool_use'},
{'id': 'toolu_012kz4qHZQqD4qg8sFPeKqpP',
'input': {'location': 'New York, NY'},
'name': 'GetWeather',
'type': 'tool_use'}]
ai_msg.tool_calls
[{'name': 'GetWeather',
'args': {'location': 'Los Angeles, CA'},
'id': 'toolu_01Ddzj5PkuZkrjF4tafzu54A'},
{'name': 'GetWeather',
'args': {'location': 'New York, NY'},
'id': 'toolu_012kz4qHZQqD4qg8sFPeKqpP'}]
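
To complete the tool-calling loop, you can execute the requested tools yourself and pass the results back as ToolMessage objects. A minimal sketch with a stubbed weather lookup (the temperature values are illustrative, not from a real API):

from langchain_core.messages import HumanMessage, ToolMessage

messages = [HumanMessage("Which city is hotter today: LA or NY?"), ai_msg]
for tool_call in ai_msg.tool_calls:
    # Stubbed result for illustration; a real app would query a weather service.
    result = f"It's 75°F and sunny in {tool_call['args']['location']}."
    messages.append(ToolMessage(content=result, tool_call_id=tool_call["id"]))

final_response = llm_with_tools.invoke(messages)
print(final_response.content)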

Extended thinking

Claude 3.7 Sonnet supports an extended thinking feature, which will output the step-by-step reasoning process that led to its final answer.

To use it, specify the thinking parameter when initializing ChatAnthropic. It can also be passed in as a kwarg during invocation; a sketch of that variant follows the example below.

You will need to specify a token budget to use the feature. See the usage example below:

import json

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-3-7-sonnet-latest",
    max_tokens=5000,
    thinking={"type": "enabled", "budget_tokens": 2000},
)

response = llm.invoke("What is the cube root of 50.653?")
print(json.dumps(response.content, indent=2))
API Reference: ChatAnthropic
[
  {
    "signature": "ErUBCkYIARgCIkCx7bIPj35jGPHpoVOB2y5hvPF8MN4lVK75CYGftmVNlI4axz2+bBbSexofWsN1O/prwNv8yPXnIXQmwT6zrJsKEgwJzvks0yVRZtaGBScaDOm9xcpOxbuhku1zViIw9WDgil/KZL8DsqWrhVpC6TzM0RQNCcsHcmgmyxbgG9g8PR0eJGLxCcGoEw8zMQu1Kh1hQ1/03hZ2JCOgigpByR9aNPTwwpl64fQUe6WwIw==",
    "thinking": "To find the cube root of 50.653, I need to find the value of $x$ such that $x^3 = 50.653$.\n\nI can try to estimate this first. \n$3^3 = 27$\n$4^3 = 64$\n\nSo the cube root of 50.653 will be somewhere between 3 and 4, but closer to 4.\n\nLet me try to compute this more precisely. I can use the cube root function:\n\ncube root of 50.653 = 50.653^(1/3)\n\nLet me calculate this:\n50.653^(1/3) \u2248 3.6998\n\nLet me verify:\n3.6998^3 \u2248 50.6533\n\nThat's very close to 50.653, so I'm confident that the cube root of 50.653 is approximately 3.6998.\n\nActually, let me compute this more precisely:\n50.653^(1/3) \u2248 3.69981\n\nLet me verify once more:\n3.69981^3 \u2248 50.652998\n\nThat's extremely close to 50.653, so I'll say that the cube root of 50.653 is approximately 3.69981.",
    "type": "thinking"
  },
  {
    "text": "The cube root of 50.653 is approximately 3.6998.\n\nTo verify: 3.6998\u00b3 = 50.6530, which is very close to our original number.",
    "type": "text"
  }
]
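
As noted above, the thinking parameter can also be supplied per-invocation rather than at initialization. A sketch of that variant:

llm = ChatAnthropic(model="claude-3-7-sonnet-latest", max_tokens=5000)

response = llm.invoke(
    "What is the cube root of 50.653?",
    thinking={"type": "enabled", "budget_tokens": 2000},
)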

Prompt caching

Anthropic supports caching of elements of your prompts, including messages, tool definitions, tool results, images and documents. This allows you to reuse large documents, instructions, few-shot documents, and other data to reduce latency and costs.

To enable caching on an element of a prompt, mark its associated content block using the cache_control key. See examples below:

Messages

import requests
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-7-sonnet-20250219")

# Pull LangChain readme
get_response = requests.get(
    "https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md"
)
readme = get_response.text

messages = [
    {
        "role": "system",
        "content": [
            {
                "type": "text",
                "text": "You are a technology expert.",
            },
            {
                "type": "text",
                "text": f"{readme}",
                "cache_control": {"type": "ephemeral"},
            },
        ],
    },
    {
        "role": "user",
        "content": "What's LangChain, according to its README?",
    },
]

response_1 = llm.invoke(messages)
response_2 = llm.invoke(messages)

usage_1 = response_1.usage_metadata["input_token_details"]
usage_2 = response_2.usage_metadata["input_token_details"]

print(f"First invocation:\n{usage_1}")
print(f"\nSecond:\n{usage_2}")
API Reference: ChatAnthropic
First invocation:
{'cache_read': 0, 'cache_creation': 1458}

Second:
{'cache_read': 1458, 'cache_creation': 0}

Tools

from langchain_anthropic import convert_to_anthropic_tool
from langchain_core.tools import tool

# For demonstration purposes, we artificially expand the
# tool description.
description = (
    f"Get the weather at a location. By the way, check out this readme: {readme}"
)


@tool(description=description)
def get_weather(location: str) -> str:
    return "It's sunny."


# Enable caching on the tool
weather_tool = convert_to_anthropic_tool(get_weather)
weather_tool["cache_control"] = {"type": "ephemeral"}

llm = ChatAnthropic(model="claude-3-7-sonnet-20250219")
llm_with_tools = llm.bind_tools([weather_tool])
query = "What's the weather in San Francisco?"

response_1 = llm_with_tools.invoke(query)
response_2 = llm_with_tools.invoke(query)

usage_1 = response_1.usage_metadata["input_token_details"]
usage_2 = response_2.usage_metadata["input_token_details"]

print(f"First invocation:\n{usage_1}")
print(f"\nSecond:\n{usage_2}")
First invocation:
{'cache_read': 0, 'cache_creation': 1809}

Second:
{'cache_read': 1809, 'cache_creation': 0}

Incremental caching in conversational applications

Prompt caching can be used in multi-turn conversations to maintain context from earlier messages without redundant processing.

We can enable incremental caching by marking the final message with cache_control. Claude will automatically use the longest previously-cached prefix for follow-up messages.

Below, we implement a simple chatbot that incorporates this feature. We follow the LangChain chatbot tutorial, but add a custom reducer that automatically marks the last content block in each user message with cache_control. See below:

import requests
from langchain_anthropic import ChatAnthropic
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, StateGraph, add_messages
from typing_extensions import Annotated, TypedDict

llm = ChatAnthropic(model="claude-3-7-sonnet-20250219")

# Pull LangChain readme
get_response = requests.get(
    "https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md"
)
readme = get_response.text


def messages_reducer(left: list, right: list) -> list:
    # Update last user message
    for i in range(len(right) - 1, -1, -1):
        if right[i].type == "human":
            right[i].content[-1]["cache_control"] = {"type": "ephemeral"}
            break

    return add_messages(left, right)


class State(TypedDict):
    messages: Annotated[list, messages_reducer]


workflow = StateGraph(state_schema=State)


# Define the function that calls the model
def call_model(state: State):
    response = llm.invoke(state["messages"])
    return {"messages": [response]}


# Define the (single) node in the graph
workflow.add_edge(START, "model")
workflow.add_node("model", call_model)

# Add memory
memory = MemorySaver()
app = workflow.compile(checkpointer=memory)

from langchain_core.messages import HumanMessage

config = {"configurable": {"thread_id": "abc123"}}

query = "Hi! I'm Bob."

input_message = HumanMessage([{"type": "text", "text": query}])
output = app.invoke({"messages": [input_message]}, config)
output["messages"][-1].pretty_print()
print(f'\n{output["messages"][-1].usage_metadata["input_token_details"]}')
API Reference: HumanMessage
================================== Ai Message ==================================

Hello, Bob! It's nice to meet you. How are you doing today? Is there something I can help you with?

{'cache_read': 0, 'cache_creation': 0}
query = f"Check out this readme: {readme}"

input_message = HumanMessage([{"type": "text", "text": query}])
output = app.invoke({"messages": [input_message]}, config)
output["messages"][-1].pretty_print()
print(f'\n{output["messages"][-1].usage_metadata["input_token_details"]}')
================================== Ai Message ==================================

I can see you've shared the README from the LangChain GitHub repository. This is the documentation for LangChain, which is a popular framework for building applications powered by Large Language Models (LLMs). Here's a summary of what the README contains:

LangChain is:
- A framework for developing LLM-powered applications
- Helps chain together components and integrations to simplify AI application development
- Provides a standard interface for models, embeddings, vector stores, etc.

Key features/benefits:
- Real-time data augmentation (connect LLMs to diverse data sources)
- Model interoperability (swap models easily as needed)
- Large ecosystem of integrations

The LangChain ecosystem includes:
- LangSmith - For evaluations and observability
- LangGraph - For building complex agents with customizable architecture
- LangGraph Platform - For deployment and scaling of agents

The README also mentions installation instructions (`pip install -U langchain`) and links to various resources including tutorials, how-to guides, conceptual guides, and API references.

Is there anything specific about LangChain you'd like to know more about, Bob?

{'cache_read': 0, 'cache_creation': 1498}
query = "What was my name again?"

input_message = HumanMessage([{"type": "text", "text": query}])
output = app.invoke({"messages": [input_message]}, config)
output["messages"][-1].pretty_print()
print(f'\n{output["messages"][-1].usage_metadata["input_token_details"]}')
================================== Ai Message ==================================

Your name is Bob. You introduced yourself at the beginning of our conversation.

{'cache_read': 1498, 'cache_creation': 269}

LangSmith 追蹤中,切換「原始輸出」將顯示確切發送到聊天模型的訊息,包括 cache_control 鍵。

Token-efficient tool use

Anthropic supports a (beta) token-efficient tool use feature. To use it, specify the relevant beta headers when instantiating the model.

from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool

llm = ChatAnthropic(
    model="claude-3-7-sonnet-20250219",
    temperature=0,
    model_kwargs={
        "extra_headers": {"anthropic-beta": "token-efficient-tools-2025-02-19"}
    },
)


@tool
def get_weather(location: str) -> str:
    """Get the weather at a location."""
    return "It's sunny."


llm_with_tools = llm.bind_tools([get_weather])
response = llm_with_tools.invoke("What's the weather in San Francisco?")
print(response.tool_calls)
print(f'\nTotal tokens: {response.usage_metadata["total_tokens"]}')
API Reference: ChatAnthropic | tool
[{'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'toolu_01EoeE1qYaePcmNbUvMsWtmA', 'type': 'tool_call'}]

Total tokens: 408

Citations

Anthropic supports a citations feature that lets Claude attach context to its answers based on source documents supplied by the user. When document content blocks with "citations": {"enabled": True} are included in a query, Claude may generate citations in its response.

Simple example

In this example we pass a plain-text document. In the background, Claude automatically chunks the input text into sentences, which are used when generating citations.

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-haiku-latest")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "document",
                "source": {
                    "type": "text",
                    "media_type": "text/plain",
                    "data": "The grass is green. The sky is blue.",
                },
                "title": "My Document",
                "context": "This is a trustworthy document.",
                "citations": {"enabled": True},
            },
            {"type": "text", "text": "What color is the grass and sky?"},
        ],
    }
]
response = llm.invoke(messages)
response.content
API Reference: ChatAnthropic
[{'text': 'Based on the document, ', 'type': 'text'},
{'text': 'the grass is green',
'type': 'text',
'citations': [{'type': 'char_location',
'cited_text': 'The grass is green. ',
'document_index': 0,
'document_title': 'My Document',
'start_char_index': 0,
'end_char_index': 20}]},
{'text': ', and ', 'type': 'text'},
{'text': 'the sky is blue',
'type': 'text',
'citations': [{'type': 'char_location',
'cited_text': 'The sky is blue.',
'document_index': 0,
'document_title': 'My Document',
'start_char_index': 20,
'end_char_index': 36}]},
{'text': '.', 'type': 'text'}]
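
Because citations live inside the individual content blocks, extracting them is plain list processing. A small helper sketch over the output above:

for block in response.content:
    if isinstance(block, dict) and block.get("citations"):
        for citation in block["citations"]:
            print(f"{block['text']!r} is supported by {citation['cited_text']!r}")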

Use with text splitters

Anthropic also lets you specify your own splits using custom document types. LangChain text splitters can be used to generate meaningful splits for this purpose. See the below example, where we split the LangChain README (a markdown document) and pass it to Claude as context:

import requests
from langchain_anthropic import ChatAnthropic
from langchain_text_splitters import MarkdownTextSplitter


def format_to_anthropic_documents(documents: list[str]):
    return {
        "type": "document",
        "source": {
            "type": "content",
            "content": [{"type": "text", "text": document} for document in documents],
        },
        "citations": {"enabled": True},
    }


# Pull readme
get_response = requests.get(
    "https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md"
)
readme = get_response.text

# Split into chunks
splitter = MarkdownTextSplitter(
    chunk_overlap=0,
    chunk_size=50,
)
documents = splitter.split_text(readme)

# Construct message
message = {
    "role": "user",
    "content": [
        format_to_anthropic_documents(documents),
        {"type": "text", "text": "Give me a link to LangChain's tutorials."},
    ],
}

# Query LLM
llm = ChatAnthropic(model="claude-3-5-haiku-latest")
response = llm.invoke([message])

response.content
[{'text': "You can find LangChain's tutorials at https://langchain-python.dev.org.tw/docs/tutorials/\n\nThe tutorials section is recommended for those looking to build something specific or who prefer a hands-on learning approach. It's considered the best place to get started with LangChain.",
'type': 'text',
'citations': [{'type': 'content_block_location',
'cited_text': "[Tutorials](https://langchain-python.dev.org.tw/docs/tutorials/):If you're looking to build something specific orare more of a hands-on learner, check out ourtutorials. This is the best place to get started.",
'document_index': 0,
'document_title': None,
'start_block_index': 243,
'end_block_index': 248}]}]

Built-in tools

Anthropic supports a variety of built-in tools, which can be bound to the model in the usual way. Claude will generate tool calls adhering to its internal schema for the tool:

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-7-sonnet-20250219")

tool = {"type": "text_editor_20250124", "name": "str_replace_editor"}
llm_with_tools = llm.bind_tools([tool])

response = llm_with_tools.invoke(
    "There's a syntax error in my primes.py file. Can you help me fix it?"
)
print(response.text())
response.tool_calls
API Reference: ChatAnthropic
I'd be happy to help you fix the syntax error in your primes.py file. First, let's look at the current content of the file to identify the error.
[{'name': 'str_replace_editor',
'args': {'command': 'view', 'path': '/repo/primes.py'},
'id': 'toolu_01VdNgt1YV7kGfj9LFLm6HyQ',
'type': 'tool_call'}]
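
Note that built-in tools execute client-side: you run the requested command and return its output as a tool result. A minimal sketch of one turn of that loop, assuming a primes.py file exists locally:

from langchain_core.messages import HumanMessage, ToolMessage

# Run the requested "view" command ourselves and return the file contents.
with open("primes.py") as f:
    file_contents = f.read()

messages = [
    HumanMessage("There's a syntax error in my primes.py file. Can you help me fix it?"),
    response,
    ToolMessage(content=file_contents, tool_call_id=response.tool_calls[0]["id"]),
]
follow_up = llm_with_tools.invoke(messages)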

API reference

For detailed documentation of all ChatAnthropic features and configurations, head to the API reference: https://langchain-python.dev.org.tw/api_reference/anthropic/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html

