
iMessage

This notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations into LangChain chat messages.

On macOS, iMessage stores conversations in a sqlite database at ~/Library/Messages/chat.db (at least for macOS Ventura 13.4). The IMessageChatLoader loads from this database file.

  1. Create the IMessageChatLoader with the file path pointed to the chat.db database you'd like to process.
  2. Call loader.load() (or loader.lazy_load()) to perform the conversion. Optionally use merge_chat_runs to combine consecutive messages from the same sender, and/or map_ai_messages to convert messages from the specified sender to the "AIMessage" class.

1. Access Chat DB

It's likely your terminal is denied access to ~/Library/Messages. To use this class, you can copy the DB to an accessible directory (e.g. Documents) and load it from there. Alternatively (and not recommended), you can grant full disk access to your terminal emulator in System Settings > Security and Privacy > Full Disk Access.
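The copy step can be scripted. Below is a minimal sketch, assuming the default macOS database location and a Documents destination; the copy_chat_db helper is an illustration, not part of the library:

```python
import shutil
from pathlib import Path


def copy_chat_db(src: Path, dst: Path) -> bool:
    """Copy the Messages database to an accessible location, if it exists."""
    if not src.exists():
        return False
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy(src, dst)
    return True


# Default macOS location; copy into Documents so the loader can read it
copy_chat_db(
    Path.home() / "Library/Messages/chat.db",
    Path.home() / "Documents/chat.db",
)
```

The function returns False when the source file is missing, so it is safe to run on machines without a Messages history.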

We've created an example database you can use at this linked drive file.

# This uses some example data
import requests


def download_drive_file(url: str, output_path: str = "chat.db") -> None:
    file_id = url.split("/")[-2]
    download_url = f"https://drive.google.com/uc?export=download&id={file_id}"

    response = requests.get(download_url)
    if response.status_code != 200:
        print("Failed to download the file.")
        return

    with open(output_path, "wb") as file:
        file.write(response.content)
    print(f"File {output_path} downloaded.")


url = (
    "https://drive.google.com/file/d/1NebNKqTA2NXApCmeH6mu0unJD2tANZzo/view?usp=sharing"
)

# Download file to chat.db
download_drive_file(url)
File chat.db downloaded.

2. Create the Chat Loader

Provide the loader with the file path to the chat.db database. You can optionally specify the user ID that maps to the AI messages, as well as configure whether to merge message runs.

from langchain_community.chat_loaders.imessage import IMessageChatLoader

API Reference: IMessageChatLoader

loader = IMessageChatLoader(
    path="./chat.db",
)

3. Load Messages

The load() (or lazy_load) methods return a list of "ChatSessions" that currently just contain a list of messages per loaded conversation. All messages are mapped to "HumanMessage" objects to start.

You can optionally choose to merge message "runs" (consecutive messages from the same sender) and select a sender to represent the "AI". The fine-tuned LLM will learn to generate these AI messages.
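To see what run-merging does conceptually, here is a toy, pure-Python illustration (not the library's implementation) that collapses each run of consecutive messages from one sender into a single message:

```python
from itertools import groupby

# Toy messages as (sender, text) pairs
messages = [
    ("Tortoise", "Slow"),
    ("Tortoise", "and steady"),
    ("Hare", "Speed is key!"),
]

# Merge each run of consecutive same-sender messages into one message
merged = [
    (sender, " ".join(text for _, text in run))
    for sender, run in groupby(messages, key=lambda m: m[0])
]
print(merged)  # [('Tortoise', 'Slow and steady'), ('Hare', 'Speed is key!')]
```

merge_chat_runs performs the analogous operation on LangChain message objects.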

from typing import List

from langchain_community.chat_loaders.utils import (
    map_ai_messages,
    merge_chat_runs,
)
from langchain_core.chat_sessions import ChatSession

raw_messages = loader.lazy_load()
# Merge consecutive messages from the same sender into a single message
merged_messages = merge_chat_runs(raw_messages)
# Convert messages from "Tortoise" to AI messages. Do you have a guess who these conversations are between?
chat_sessions: List[ChatSession] = list(
    map_ai_messages(merged_messages, sender="Tortoise")
)
# Now all of the Tortoise's messages will take the AI message class
# which maps to the 'assistant' role in OpenAI's training format
chat_sessions[0]["messages"][:3]
[AIMessage(content="Slow and steady, that's my motto.", additional_kwargs={'message_time': 1693182723, 'sender': 'Tortoise'}, example=False),
HumanMessage(content='Speed is key!', additional_kwargs={'message_time': 1693182753, 'sender': 'Hare'}, example=False),
AIMessage(content='A balanced approach is more reliable.', additional_kwargs={'message_time': 1693182783, 'sender': 'Tortoise'}, example=False)]

4. Prepare for fine-tuning

Now it's time to convert our chat messages to OpenAI dictionaries. We can use the convert_messages_for_finetuning utility to do so.

from langchain_community.adapters.openai import convert_messages_for_finetuning

training_data = convert_messages_for_finetuning(chat_sessions)
print(f"Prepared {len(training_data)} dialogues for training")
Prepared 10 dialogues for training
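Each entry in training_data is a list of message dictionaries in OpenAI's chat fine-tuning format. One record serialized as a JSONL line looks roughly like this (the contents below are illustrative; real records come from your chat sessions):

```python
import json

# Illustrative record in the chat fine-tuning format
record = {
    "messages": [
        {"role": "user", "content": "Speed is key!"},
        {"role": "assistant", "content": "Slow and steady, that's my motto."},
    ]
}
line = json.dumps(record)
print(line)
```

The next step writes one such line per dialogue into an in-memory JSONL file.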

5. Fine-tune the model

It's time to fine-tune the model. Make sure you have openai installed and have set your OPENAI_API_KEY appropriately.

%pip install --upgrade --quiet  langchain-openai
import json
import time
from io import BytesIO

import openai

# We will write the jsonl file in memory
my_file = BytesIO()
for m in training_data:
    my_file.write((json.dumps({"messages": m}) + "\n").encode("utf-8"))

my_file.seek(0)
training_file = openai.files.create(file=my_file, purpose="fine-tune")

# OpenAI audits each training file for compliance reasons.
# This may take a few minutes
status = openai.files.retrieve(training_file.id).status
start_time = time.time()
while status != "processed":
    print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True)
    time.sleep(5)
    status = openai.files.retrieve(training_file.id).status
print(f"File {training_file.id} ready after {time.time() - start_time:.2f} seconds.")
File file-zHIgf4r8LltZG3RFpkGd4Sjf ready after 10.19 seconds.

With the file ready, it's time to kick off a training job.

job = openai.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

Grab a cup of tea while your model is being prepared. This may take some time!

status = openai.fine_tuning.jobs.retrieve(job.id).status
start_time = time.time()
while status != "succeeded":
    print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True)
    time.sleep(5)
    job = openai.fine_tuning.jobs.retrieve(job.id)
    status = job.status
Status=[running]... 524.95s
print(job.fine_tuned_model)
ft:gpt-3.5-turbo-0613:personal::7sKoRdlz

6. Use in LangChain

You can use the resulting model ID directly in the ChatOpenAI model class.

from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    model=job.fine_tuned_model,
    temperature=1,
)

API Reference: ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are speaking to hare."),
        ("human", "{input}"),
    ]
)

chain = prompt | model | StrOutputParser()

for tok in chain.stream({"input": "What's the golden thread?"}):
    print(tok, end="", flush=True)
A symbol of interconnectedness.
