Aim

Aim makes it super easy to visualize and debug LangChain executions. Aim tracks the inputs and outputs of LLMs and tools, as well as the actions of agents.

With Aim, you can easily debug and examine an individual execution.

Additionally, you have the option to compare multiple executions side by side.

Aim is fully open source; learn more about Aim on GitHub.

Let's move forward and see how to enable and configure the Aim callback.

Tracking LangChain Executions with Aim

In this notebook we will explore three usage scenarios. To start off, we will install the necessary packages and import certain modules. Subsequently, we will configure two environment variables, which can be set either within the Python script or through the terminal.

%pip install --upgrade --quiet aim
%pip install --upgrade --quiet langchain
%pip install --upgrade --quiet langchain-openai
%pip install --upgrade --quiet google-search-results
import os
from datetime import datetime

from langchain_community.callbacks import AimCallbackHandler
from langchain_core.callbacks import StdOutCallbackHandler
from langchain_openai import OpenAI

Our examples use a GPT model as the LLM, and OpenAI offers an API for this purpose. You can obtain a key from the following link: https://platform.openai.com/account/api-keys

We will use SerpApi to retrieve search results from Google. To acquire a SerpApi key, please go to https://serpapi.com/manage-api-key

os.environ["OPENAI_API_KEY"] = "..."
os.environ["SERPAPI_API_KEY"] = "..."
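Alternatively, the same keys can be exported from the terminal before launching the notebook (a shell equivalent of the Python assignments above; substitute your own key values):

```shell
# Equivalent terminal setup: export the keys before starting Python.
export OPENAI_API_KEY="..."
export SERPAPI_API_KEY="..."
```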

The event methods of AimCallbackHandler accept the LangChain module or agent as input and log at least the prompts and generated results, along with the serialized version of the LangChain module, to the designated Aim run.

session_group = datetime.now().strftime("%m.%d.%Y_%H.%M.%S")
aim_callback = AimCallbackHandler(
    repo=".",
    experiment_name="scenario 1: OpenAI LLM",
)

callbacks = [StdOutCallbackHandler(), aim_callback]
llm = OpenAI(temperature=0, callbacks=callbacks)

The flush_tracker function is used to record LangChain assets on Aim. By default, the session is reset rather than being terminated outright.

Scenario 1

In the first scenario, we will use an OpenAI LLM.

# scenario 1 - LLM
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"] * 3)
aim_callback.flush_tracker(
    langchain_asset=llm,
    experiment_name="scenario 2: Chain with multiple SubChains on multiple generations",
)

Scenario 2

Scenario two involves chaining with multiple SubChains across multiple generations.

from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
API Reference: LLMChain | PromptTemplate
# scenario 2 - Chain
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)

test_prompts = [
    {
        "title": "documentary about good video games that push the boundary of game design"
    },
    {"title": "the phenomenon behind the remarkable speed of cheetahs"},
    {"title": "the best in class mlops tooling"},
]
synopsis_chain.apply(test_prompts)
aim_callback.flush_tracker(
    langchain_asset=synopsis_chain, experiment_name="scenario 3: Agent with Tools"
)
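As an aside, the `{title}` substitution that PromptTemplate performs for each test prompt can be illustrated with plain Python string formatting (a minimal sketch; no LangChain required):

```python
# Stand-in for PromptTemplate: fill the {title} placeholder with str.format.
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""

filled = template.format(title="the best in class mlops tooling")
print(filled)
```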

Scenario 3

The third scenario involves an agent with tools.

from langchain.agents import AgentType, initialize_agent, load_tools
API Reference: AgentType | initialize_agent | load_tools
# scenario 3 - Agent with Tools
tools = load_tools(["serpapi", "llm-math"], llm=llm, callbacks=callbacks)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    callbacks=callbacks,
)
agent.run(
    "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
)
aim_callback.flush_tracker(langchain_asset=agent, reset=False, finish=True)


> Entering new AgentExecutor chain...
 I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.
Action: Search
Action Input: "Leo DiCaprio girlfriend"
Observation: Leonardo DiCaprio seemed to prove a long-held theory about his love life right after splitting from girlfriend Camila Morrone just months ...
Thought: I need to find out Camila Morrone's age
Action: Search
Action Input: "Camila Morrone age"
Observation: 25 years
Thought: I need to calculate 25 raised to the 0.43 power
Action: Calculator
Action Input: 25^0.43
Observation: Answer: 3.991298452658078

Thought: I now know the final answer
Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078.

> Finished chain.
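The Calculator step in the trace above is ordinary arithmetic and can be verified with plain Python:

```python
# Check the agent's Calculator observation: 25 raised to the 0.43 power.
result = 25 ** 0.43
print(result)  # ≈ 3.9913
```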
