
IBM watsonx.ai

WatsonxLLM is a wrapper for IBM watsonx.ai foundation models.

This example shows how to communicate with watsonx.ai models using LangChain.

Overview

Integration details

| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: |
| WatsonxLLM | langchain-ibm | ❌ | ❌ | ✅ | PyPI - Downloads | PyPI - Version |

Setup

To access IBM watsonx.ai models, you'll need to create an IBM watsonx.ai account, get an API key, and install the langchain-ibm integration package.

Credentials

The cell below defines the credentials required to work with watsonx Foundation Model inferencing.

Action: Provide the IBM Cloud user API key. For details, see Managing user API keys.

import os
from getpass import getpass

watsonx_api_key = getpass()
os.environ["WATSONX_APIKEY"] = watsonx_api_key

Additionally, you can pass other secrets as environment variables.

import os

os.environ["WATSONX_URL"] = "your service instance url"
os.environ["WATSONX_TOKEN"] = "your token for accessing the CPD cluster"
os.environ["WATSONX_PASSWORD"] = "your password for accessing the CPD cluster"
os.environ["WATSONX_USERNAME"] = "your username for accessing the CPD cluster"
os.environ["WATSONX_INSTANCE_ID"] = "your instance_id for accessing the CPD cluster"

Installation

The LangChain IBM integration lives in the langchain-ibm package:

!pip install -qU langchain-ibm

Instantiation

You might need to adjust model parameters for different models or tasks. For details, refer to the documentation.

parameters = {
    "decoding_method": "sample",
    "max_new_tokens": 100,
    "min_new_tokens": 1,
    "temperature": 0.5,
    "top_k": 50,
    "top_p": 1,
}

Initialize the WatsonxLLM class with the previously set parameters.

Note:

  • To provide context for the API call, you must add project_id or space_id (a space_id variant is sketched after the first example below). For more information, see the documentation.
  • Depending on the region of your provisioned service instance, use one of the URLs described here.

In this example, we'll use the project_id and the Dallas URL.

You need to specify the model_id that will be used for inferencing. You can find all available models in the documentation.

from langchain_ibm import WatsonxLLM

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)
API Reference: WatsonxLLM
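
As noted in the bullets above, a space_id can be supplied in place of a project_id, for example when working inside a deployment space. A minimal sketch of that variant (the placeholder value is hypothetical):

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",
    url="https://us-south.ml.cloud.ibm.com",
    space_id="PASTE YOUR SPACE_ID HERE",  # deployment space instead of a project
    params=parameters,
)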

Alternatively, you can use Cloud Pak for Data credentials. For details, see the documentation.

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",
    url="PASTE YOUR URL HERE",
    username="PASTE YOUR USERNAME HERE",
    password="PASTE YOUR PASSWORD HERE",
    instance_id="openshift",
    version="4.8",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)

Instead of model_id, you can also pass the deployment_id of a previously tuned model. The entire model tuning workflow is described in Working with TuneExperiment and PromptTuner.

watsonx_llm = WatsonxLLM(
    deployment_id="PASTE YOUR DEPLOYMENT_ID HERE",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)

For certain requirements, there is also the option to pass IBM's APIClient object into the WatsonxLLM class.

from ibm_watsonx_ai import APIClient

api_client = APIClient(...)

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",
    watsonx_client=api_client,
)
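
The APIClient constructor arguments are elided above. One plausible way to build the client, sketched here with the Credentials helper from ibm-watsonx-ai (the URL, key, and project ID values are placeholders):

from ibm_watsonx_ai import APIClient, Credentials

# A sketch: authenticate against IBM Cloud with a user API key.
credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",
    api_key="PASTE YOUR API KEY HERE",
)
api_client = APIClient(credentials, project_id="PASTE YOUR PROJECT_ID HERE")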

You can also pass IBM's ModelInference object into the WatsonxLLM class.

from ibm_watsonx_ai.foundation_models import ModelInference

model = ModelInference(...)

watsonx_llm = WatsonxLLM(watsonx_model=model)
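
Again, the arguments are elided above. A minimal sketch of how such an object might be created, assuming the same Granite model, the Dallas endpoint, and the parameters dictionary defined earlier:

from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

# A sketch: bind the model, credentials, and project together.
model = ModelInference(
    model_id="ibm/granite-13b-instruct-v2",
    credentials=Credentials(
        url="https://us-south.ml.cloud.ibm.com",
        api_key="PASTE YOUR API KEY HERE",
    ),
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)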

Invocation

To obtain completions, you can call the model directly using a string prompt.

# Calling a single prompt

watsonx_llm.invoke("Who is man's best friend?")
"Man's best friend is his dog. Dogs are man's best friend because they are always there for you, they never judge you, and they love you unconditionally. Dogs are also great companions and can help reduce stress levels. "
# Calling multiple prompts

watsonx_llm.generate(
    [
        "The fastest dog in the world?",
        "Describe your chosen dog breed",
    ]
)
LLMResult(generations=[[Generation(text='The fastest dog in the world is the greyhound. Greyhounds can run up to 45 mph, which is about the same speed as a Usain Bolt.', generation_info={'finish_reason': 'eos_token'})], [Generation(text='The Labrador Retriever is a breed of retriever that was bred for hunting. They are a very smart breed and are very easy to train. They are also very loyal and will make great companions. ', generation_info={'finish_reason': 'eos_token'})]], llm_output={'token_usage': {'generated_token_count': 82, 'input_token_count': 13}, 'model_id': 'ibm/granite-13b-instruct-v2', 'deployment_id': None}, run=[RunInfo(run_id=UUID('750b8a0f-8846-456d-93d0-e039e95b1276')), RunInfo(run_id=UUID('aa4c2a1c-5b08-4fcf-87aa-50228de46db5'))], type='LLMResult')
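
generate returns an LLMResult; a quick sketch of pulling the generated texts and token usage back out of it (attribute names follow LangChain's LLMResult class):

result = watsonx_llm.generate(
    [
        "The fastest dog in the world?",
        "Describe your chosen dog breed",
    ]
)

# One inner list of Generation objects per input prompt.
for generations in result.generations:
    print(generations[0].text)

# Provider metadata such as token counts lands in llm_output.
print(result.llm_output["token_usage"])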

Streaming the model output

You can stream the model output.

for chunk in watsonx_llm.stream(
    "Describe your favorite breed of dog and why it is your favorite."
):
    print(chunk, end="")
My favorite breed of dog is a Labrador Retriever. They are my favorite breed because they are my favorite color, yellow. They are also very smart and easy to train.
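
Since WatsonxLLM implements LangChain's standard Runnable interface, an async variant should also be available via astream. A minimal sketch, assuming you are running outside an existing event loop:

import asyncio

async def stream_answer() -> None:
    # astream yields chunks as they arrive, like stream but non-blocking.
    async for chunk in watsonx_llm.astream(
        "Describe your favorite breed of dog and why it is your favorite."
    ):
        print(chunk, end="")

asyncio.run(stream_answer())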

Chaining

Create a PromptTemplate object, which will be responsible for creating a random question.

from langchain_core.prompts import PromptTemplate

template = "Generate a random question about {topic}: Question: "

prompt = PromptTemplate.from_template(template)
API Reference: PromptTemplate

Provide a topic and run the chain.

llm_chain = prompt | watsonx_llm

topic = "dog"

llm_chain.invoke(topic)
'What is the origin of the name "Pomeranian"?'
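
The same chain can also be run over several topics at once with the standard Runnable batch method; a quick sketch (it returns one completion string per topic):

# batch runs the chain for each input, in parallel where possible.
llm_chain.batch(["dog", "cat", "horse"])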

API reference

For detailed documentation of all WatsonxLLM features and configurations, head to the API reference.

