
Llama2Chat

This notebook shows how to augment Llama-2 LLMs with the Llama2Chat wrapper to support the Llama-2 chat prompt format. Several LLM implementations in LangChain can be used as an interface to Llama-2 chat models. These include ChatHuggingFace, LlamaCpp, and GPT4All, to name a few examples.

Llama2Chat is a generic wrapper that implements BaseChatModel and can therefore be used in applications as a chat model. Llama2Chat converts a list of messages into the required chat prompt format and forwards the formatted prompt as a str to the wrapped LLM.
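For illustration, this is roughly what the Llama-2 chat format looks like for a single turn (a simplified sketch; the actual multi-turn conversion is handled internally by Llama2Chat):

# Simplified, single-turn sketch of the Llama-2 chat prompt format.
# Llama2Chat performs the full multi-turn conversion internally.
def to_llama2_prompt(system_msg: str, user_msg: str) -> str:
    return f"<s>[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n{user_msg} [/INST]"

print(to_llama2_prompt("You are a helpful assistant.", "Hi, how are you?"))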

from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
from langchain_experimental.chat_models import Llama2Chat

For the chat application examples below, we'll use the following chat prompt_template:

from langchain_core.messages import SystemMessage
from langchain_core.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
)

template_messages = [
    SystemMessage(content="You are a helpful assistant."),
    MessagesPlaceholder(variable_name="chat_history"),
    HumanMessagePromptTemplate.from_template("{text}"),
]
prompt_template = ChatPromptTemplate.from_messages(template_messages)
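To verify the wiring, you can render the template with an empty history; format_messages returns the message list that Llama2Chat will later convert into a Llama-2 prompt:

# Render the template once with an empty history as a quick check.
messages = prompt_template.format_messages(chat_history=[], text="Hi!")
for message in messages:
    print(type(message).__name__, "->", message.content)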

Chat with Llama-2 via HuggingFaceTextGenInference LLM

A HuggingFaceTextGenInference LLM encapsulates access to a text-generation-inference server. In the following example, the inference server serves a meta-llama/Llama-2-13b-chat-hf model. It can be started locally with:

docker run \
    --rm \
    --gpus all \
    --ipc=host \
    -p 8080:80 \
    -v ~/.cache/huggingface/hub:/data \
    -e HF_API_TOKEN=${HF_API_TOKEN} \
    ghcr.io/huggingface/text-generation-inference:0.9 \
    --hostname 0.0.0.0 \
    --model-id meta-llama/Llama-2-13b-chat-hf \
    --quantize bitsandbytes \
    --num-shard 4

This works on a machine with 4 RTX 3080ti cards, for example. Adjust the --num-shard value to the number of GPUs available. The HF_API_TOKEN environment variable holds the Hugging Face API token.

# !pip3 install text-generation
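Once the server is up, you can optionally verify that it responds by calling it with the text-generation client directly (a minimal connectivity check; the prompt and token count are arbitrary):

from text_generation import Client

# Send a short request to the local inference server as a smoke test.
client = Client("http://127.0.0.1:8080")
print(client.generate("What is Deep Learning?", max_new_tokens=20).generated_text)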

Create a HuggingFaceTextGenInference instance that connects to the local inference server and wrap it into Llama2Chat.

from langchain_community.llms import HuggingFaceTextGenInference

llm = HuggingFaceTextGenInference(
    inference_server_url="http://127.0.0.1:8080/",
    max_new_tokens=512,
    top_k=50,
    temperature=0.1,
    repetition_penalty=1.03,
)

model = Llama2Chat(llm=llm)
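Before building a chain, you can sanity-check the wrapped model directly with a single message (a minimal example; any HumanMessage will do):

from langchain_core.messages import HumanMessage

# Llama2Chat formats the message list into a Llama-2 prompt and
# forwards it as a string to the wrapped LLM.
response = model.invoke([HumanMessage(content="What is the capital of Austria?")])
print(response.content)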

Then you are ready to use the chat model together with prompt_template and conversation memory in an LLMChain.

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chain = LLMChain(llm=model, prompt=prompt_template, memory=memory)
print(
    chain.run(
        text="What can I see in Vienna? Propose a few locations. Names only, no details."
    )
)
 Sure, I'd be happy to help! Here are a few popular locations to consider visiting in Vienna:

1. Schönbrunn Palace
2. St. Stephen's Cathedral
3. Hofburg Palace
4. Belvedere Palace
5. Prater Park
6. Vienna State Opera
7. Albertina Museum
8. Museum of Natural History
9. Kunsthistorisches Museum
10. Ringstrasse
print(chain.run(text="Tell me more about #2."))
 Certainly! St. Stephen's Cathedral (Stephansdom) is one of the most recognizable landmarks in Vienna and a must-see attraction for visitors. This stunning Gothic cathedral is located in the heart of the city and is known for its intricate stone carvings, colorful stained glass windows, and impressive dome.

The cathedral was built in the 12th century and has been the site of many important events throughout history, including the coronation of Holy Roman emperors and the funeral of Mozart. Today, it is still an active place of worship and offers guided tours, concerts, and special events. Visitors can climb up the south tower for panoramic views of the city or attend a service to experience the beautiful music and chanting.
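The follow-up question about "#2" works because ConversationBufferMemory carries the earlier turns; you can inspect the accumulated history directly:

# Each completed turn is stored as a message object;
# MessagesPlaceholder("chat_history") re-injects them on the next call.
for message in memory.chat_memory.messages:
    print(f"{type(message).__name__}: {message.content[:60]}...")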

Chat with Llama-2 via LlamaCPP LLM

For using a Llama-2 chat model with a LlamaCPP LLM, install the llama-cpp-python library using these installation instructions. The following example uses a quantized llama-2-7b-chat.Q4_0.gguf model stored locally at ~/Models/llama-2-7b-chat.Q4_0.gguf.
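For a CPU-only build, a plain pip install is usually sufficient (see the linked instructions for GPU-enabled builds):

# !pip3 install llama-cpp-python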

After creating a LlamaCpp instance, the llm is again wrapped into Llama2Chat

from os.path import expanduser

from langchain_community.llms import LlamaCpp

model_path = expanduser("~/Models/llama-2-7b-chat.Q4_0.gguf")

llm = LlamaCpp(
    model_path=model_path,
    streaming=False,
)
model = Llama2Chat(llm=llm)
API Reference: LlamaCpp

and used in the same way as in the previous example.

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chain = LLMChain(llm=model, prompt=prompt_template, memory=memory)
print(
    chain.run(
        text="What can I see in Vienna? Propose a few locations. Names only, no details."
    )
)
  Of course! Vienna is a beautiful city with a rich history and culture. Here are some of the top tourist attractions you might want to consider visiting:
1. Schönbrunn Palace
2. St. Stephen's Cathedral
3. Hofburg Palace
4. Belvedere Palace
5. Prater Park
6. MuseumsQuartier
7. Ringstrasse
8. Vienna State Opera
9. Kunsthistorisches Museum
10. Imperial Palace

These are just a few of the many amazing places to see in Vienna. Each one has its own unique history and charm, so I hope you enjoy exploring this beautiful city!

llama_print_timings: load time = 250.46 ms
llama_print_timings: sample time = 56.40 ms / 144 runs ( 0.39 ms per token, 2553.37 tokens per second)
llama_print_timings: prompt eval time = 1444.25 ms / 47 tokens ( 30.73 ms per token, 32.54 tokens per second)
llama_print_timings: eval time = 8832.02 ms / 143 runs ( 61.76 ms per token, 16.19 tokens per second)
llama_print_timings: total time = 10645.94 ms
print(chain.run(text="Tell me more about #2."))
Llama.generate: prefix-match hit
Of course! St. Stephen's Cathedral (also known as Stephansdom) is a stunning Gothic-style cathedral located in the heart of Vienna, Austria. It is one of the most recognizable landmarks in the city and is considered a symbol of Vienna.
Here are some interesting facts about St. Stephen's Cathedral:
1. History: The construction of St. Stephen's Cathedral began in the 12th century on the site of a former Romanesque church, and it took over 600 years to complete. The cathedral has been renovated and expanded several times throughout its history, with the most significant renovation taking place in the 19th century.
2. Architecture: St. Stephen's Cathedral is built in the Gothic style, characterized by its tall spires, pointed arches, and intricate stone carvings. The cathedral features a mix of Romanesque, Gothic, and Baroque elements, making it a unique blend of styles.
3. Design: The cathedral's design is based on the plan of a cross with a long nave and two shorter arms extending from it. The main altar is

llama_print_timings: load time = 250.46 ms
llama_print_timings: sample time = 100.60 ms / 256 runs ( 0.39 ms per token, 2544.73 tokens per second)
llama_print_timings: prompt eval time = 5128.71 ms / 160 tokens ( 32.05 ms per token, 31.20 tokens per second)
llama_print_timings: eval time = 16193.02 ms / 255 runs ( 63.50 ms per token, 15.75 tokens per second)
llama_print_timings: total time = 21988.57 ms
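The LlamaCpp configuration used above is deliberately minimal. For longer conversations or GPU offloading you can pass additional parameters; the values below are illustrative assumptions rather than recommendations, with streamed output shown via a stdout callback:

from langchain_core.callbacks import StreamingStdOutCallbackHandler

# Variant configuration (assumed values, adjust to your hardware):
llm = LlamaCpp(
    model_path=model_path,
    n_ctx=4096,          # larger context window for multi-turn chats
    n_gpu_layers=-1,     # offload all layers if built with GPU support
    max_tokens=512,      # cap on tokens generated per call
    temperature=0.1,
    streaming=True,      # stream tokens as they are generated
    callbacks=[StreamingStdOutCallbackHandler()],
)
model = Llama2Chat(llm=llm)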
