
TextGen

GitHub: oobabooga/text-generation-webui, a Gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.

This example explains how to use LangChain to interact with LLM models via the text-generation-webui API integration.

Make sure you have text-generation-webui configured and an LLM installed. Installing via the one-click installer appropriate for your OS is recommended.

Once text-generation-webui is installed and confirmed to be working via the web interface, enable the api option either through the web model configuration tab, or by adding the runtime flag --api to your start command.
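For example, when launching from the text-generation-webui directory, a typical start command (using the project's standard server.py entry script) looks like:

python server.py --api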

Set model_url and run the example:

model_url = "https://127.0.0.1:5000"
from langchain.chains import LLMChain
from langchain.globals import set_debug
from langchain_community.llms import TextGen
from langchain_core.prompts import PromptTemplate

set_debug(True)

template = """Question: {question}

Answer: Let's think step by step."""


prompt = PromptTemplate.from_template(template)
llm = TextGen(model_url=model_url)

# Wire the prompt and model into a chain, then run a single question
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
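Note that LLMChain is deprecated in recent LangChain releases. As a minimal sketch, the same call can be expressed with the runnable composition syntax, reusing the prompt, llm, and question defined above:

chain = prompt | llm
print(chain.invoke({"question": question}))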

Streaming Version

You should install websocket-client to use this feature: pip install websocket-client

model_url = "ws://127.0.0.1:5005"
from langchain.chains import LLMChain
from langchain.globals import set_debug
from langchain_community.llms import TextGen
from langchain_core.callbacks import StreamingStdOutCallbackHandler
from langchain_core.prompts import PromptTemplate

set_debug(True)

template = """Question: {question}

Answer: Let's think step by step."""


prompt = PromptTemplate.from_template(template)
llm = TextGen(
    model_url=model_url, streaming=True, callbacks=[StreamingStdOutCallbackHandler()]
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
You can also consume chunks directly with llm.stream:

llm = TextGen(model_url=model_url, streaming=True)
for chunk in llm.stream("Ask 'Hi, how are you?' like a pirate:'", stop=["'", "\n"]):
    print(chunk, end="", flush=True)
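The base LLM interface also exposes an asynchronous astream method. A minimal sketch for async code, assuming the same websocket model_url as above:

import asyncio

async def main():
    llm = TextGen(model_url=model_url, streaming=True)
    # Chunks are yielded as they arrive over the websocket
    async for chunk in llm.astream("Tell me a joke.", stop=["\n"]):
        print(chunk, end="", flush=True)

asyncio.run(main())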
