
LangChain Decorators ✨

Disclaimer: `LangChain decorators` is not created by the LangChain team and is not supported by it.

`LangChain decorators` is a layer on top of LangChain that provides syntactic sugar 🍭 for writing custom langchain prompts and chains.

For feedback, issues, or contributions - please raise an issue here: ju-bezdek/langchain-decorators

Main principles and benefits

  • a more `pythonic` way of writing code
  • write multiline prompts that won't break your code flow with indentation
  • make use of IDE built-in support for hinting, type checking, and documentation popups, to quickly peek at a function and see its prompt, the parameters it consumes, etc.
  • leverage all the power of the 🦜🔗 LangChain ecosystem
  • support for optional parameters
  • easily share parameters between prompts by binding them to one class

Here is a simple example of code written with **LangChain Decorators ✨**


@llm_prompt
def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers")->str:
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    return

# run it naturally
write_me_short_post(topic="starwars")
# or
write_me_short_post(topic="starwars", platform="reddit")

Quick start

Installation

pip install langchain_decorators

Examples

A good way to start is to review the examples (see the colab notebook linked under More examples at the end of this page).

Defining other parameters

Here we are just marking a function as a prompt with the `llm_prompt` decorator, effectively turning it into an LLMChain, instead of running it directly.

A standard LLMChain takes many more init parameters than just inputs_variables and prompt... this implementation detail is hidden by the decorator. Here is how it works:

  1. Using global settings
# define global settings for all prompts (if not set - chatGPT is the current default)
from langchain_openai import ChatOpenAI
from langchain_decorators import GlobalSettings

GlobalSettings.define_settings(
    default_llm=ChatOpenAI(temperature=0.0),  # this is the default... you can change it here globally
    default_streaming_llm=ChatOpenAI(temperature=0.0, streaming=True),  # this is the default... will be used for streaming
)
  2. Using predefined prompt types
# You can change the default prompt types
from langchain_openai import ChatOpenAI
from langchain_decorators import PromptTypes, PromptTypeSettings, llm_prompt

PromptTypes.AGENT_REASONING.llm = ChatOpenAI()

# Or you can just define your own ones:
class MyCustomPromptTypes(PromptTypes):
    GPT4 = PromptTypeSettings(llm=ChatOpenAI(model="gpt-4"))

@llm_prompt(prompt_type=MyCustomPromptTypes.GPT4)
def write_a_complicated_code(app_idea:str)->str:
    ...

  3. Defining the settings directly in the decorator
from langchain_openai import OpenAI
from langchain_decorators import llm_prompt

@llm_prompt(
    llm=OpenAI(temperature=0.7),
    stop_tokens=["\nObservation"],
    # ... any other settings
)
def creative_writer(book_title:str)->str:
    ...

Passing a memory and/or callbacks:

To pass any of these, just declare them in the function (or use kwargs to pass anything you want)


from langchain.memory import SimpleMemory

@llm_prompt()
async def write_me_short_post(topic:str, platform:str="twitter", memory:SimpleMemory = None):
    """
    {history_key}
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    pass

await write_me_short_post(topic="old movies")
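For example, a call supplying a memory might look like this (a minimal sketch; `SimpleMemory` is LangChain's in-memory store, and the `memories` key is assumed here to line up with the `{history_key}` placeholder above):

from langchain.memory import SimpleMemory

# assumption: the memory key matches the {history_key} placeholder in the prompt
memory = SimpleMemory(memories={"history_key": "Previously we talked about westerns."})

await write_me_short_post(topic="old movies", memory=memory)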

Simplified streaming

If we want to leverage streaming:

  • we need to define the prompt as an async function
  • turn on streaming on the decorator, or define a PromptType with streaming enabled
  • capture the stream using StreamingContext

This way we just mark which prompt should be streamed, without needing to change which LLM we use or to create and distribute streaming handlers into a particular part of our chain... we just turn streaming on/off on the prompt/prompt type...

Streaming will happen only if we call the prompt inside a streaming context... there we can define a simple function to handle the stream

# this code example is complete and should run as it is

from langchain_decorators import StreamingContext, llm_prompt

# this will mark the prompt for streaming (useful if we want to stream just some prompts in our app... but don't want to pass/distribute the callback handlers)
# note that only async functions can be streamed (you will get an error if it's not)
@llm_prompt(capture_stream=True)
async def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers"):
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    pass


# just an arbitrary function to demonstrate the streaming... will be some websockets code in the real world
tokens=[]
def capture_stream_func(new_token:str):
    tokens.append(new_token)

# if we want to capture the stream, we need to wrap the execution into StreamingContext...
# this will allow us to capture the stream even if the prompt call is hidden inside a higher level method
# only the prompts marked with capture_stream will be captured here
with StreamingContext(stream_to_stdout=True, callback=capture_stream_func):
    result = await write_me_short_post(topic="old movies")
    print("Stream finished ... we can distinguish tokens thanks to alternating colors")


print("\nWe've captured",len(tokens),"tokens🎉\n")
print("Here is the result:")
print(result)
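The top-level `await` above works in a notebook; in a plain script, you would wrap the calls in an async function and drive it with `asyncio.run` (a minimal sketch reusing the names defined above):

import asyncio

async def main():
    # the StreamingContext works the same way inside an async function
    with StreamingContext(stream_to_stdout=True, callback=capture_stream_func):
        result = await write_me_short_post(topic="old movies")
    print(result)

asyncio.run(main())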

Prompt declarations

By default, the prompt is the whole function docstring, unless you mark your prompt explicitly.

Documenting your prompt

We can specify which part of our docstring is the prompt definition by marking a code block with the `<prompt>` language tag

@llm_prompt
def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers"):
    """
    Here is a good way to write a prompt as part of a function docstring, with additional documentation for devs.

    It needs to be a code block, marked as a `<prompt>` language
    ```<prompt>
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    ```

    Now only the code block above will be used as the prompt, and the rest of the docstring will serve as a description for developers.
    (It also has the nice benefit that an IDE like VS Code will display the prompt properly, since it won't try to parse it as markdown and mangle the newlines)
    """
    return


Chat messages prompt

For chat models it is very useful to define the prompt as a set of message templates... here is how to do it:

@llm_prompt
def simulate_conversation(human_input:str, agent_role:str="a pirate"):
    """
    ## System message
    - note the `:system` suffix inside the <prompt:_role_> tag

    ```<prompt:system>
    You are a {agent_role} hacker. You must act like one.
    You reply always in code, using python or javascript code block...
    for example:

    ... do not reply with anything else... just with code - respecting your role.
    ```

    # human message
    (we are using the real roles that are enforced by the LLM - GPT supports system, assistant, user)
    ```<prompt:user>
    Hello, who are you
    ```
    a reply:
    ```<prompt:assistant>
    \``` python <<- escaping the inner code block with \ that should be part of the prompt
    def hello():
        print("Argh... hello you pesky pirate")
    \```
    ```

    we can also add some history using a placeholder
    ```<prompt:placeholder>
    {history}
    ```
    ```<prompt:user>
    {human_input}
    ```

    Now only the code blocks above will be used as the prompt, and the rest of the docstring will serve as a description for developers.
    (It also has the nice benefit that an IDE like VS Code will display the prompt properly, since it won't try to parse it as markdown and mangle the newlines)
    """
    pass


The roles here are model-native roles (assistant, user, system for chatGPT).
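A call could then look like this (a hypothetical invocation; whether the `{history}` placeholder may be left unfilled depends on how the prompt is wired to a memory):

# hypothetical usage - each <prompt:_role_> block above becomes one chat message
simulate_conversation(human_input="Can you write me a hello world in python?")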



Optional sections

  • you can define whole sections of your prompt that should be optional
  • if any input in the section is missing, the whole section won't be rendered

The syntax for this is as follows (a concrete rendering example follows the snippet below):

@llm_prompt
def prompt_with_optional_partials():
    """
    this text will be rendered always, but

    {? anything inside this block will be rendered only if all the {value}s parameters are not empty (None | "") ?}

    you can also place it in between the words
    this too will be rendered{? , but
    this block will be rendered only if {this_value} and {this_value}
    is not empty?} !
    """

Output parsers

  • the `llm_prompt` decorator natively tries to detect the best output parser based on the output type (if not set, it returns the raw string)
  • list, dict and pydantic outputs are also supported natively (automatically)
# this code example is complete and should run as it is

from langchain_decorators import llm_prompt

@llm_prompt
def write_name_suggestions(company_business:str, count:int)->list:
    """ Write me {count} good name suggestions for a company that {company_business}
    """
    pass

write_name_suggestions(company_business="sells cookies", count=5)

More complex structures

For dict / pydantic outputs you need to specify the formatting instructions... this can be tedious, which is why you can let the output parser generate the instructions for you, based on the (pydantic) model.

from langchain_decorators import llm_prompt
from pydantic import BaseModel, Field


class TheOutputStructureWeExpect(BaseModel):
    name:str = Field(description="The name of the company")
    headline:str = Field(description="The description of the company (for landing page)")
    employees:list[str] = Field(description="5-8 fake employee names with their positions")

@llm_prompt()
def fake_company_generator(company_business:str)->TheOutputStructureWeExpect:
    """ Generate a fake company that {company_business}
    {FORMAT_INSTRUCTIONS}
    """
    return

company = fake_company_generator(company_business="sells cookies")

# print the result nicely formatted
print("Company name: ",company.name)
print("company headline: ",company.headline)
print("company employees: ",company.employees)

Binding the prompt to an object

from pydantic import BaseModel
from langchain_decorators import llm_prompt

class AssistantPersonality(BaseModel):
    assistant_name:str
    assistant_role:str
    field:str

    @property
    def a_property(self):
        return "whatever"

    def hello_world(self, function_kwarg:str=None):
        """
        We can reference any {field} or {a_property} inside our prompt... and combine it with {function_kwarg} in the method
        """

    @llm_prompt
    def introduce_your_self(self)->str:
        """
        ```<prompt:system>
        You are an assistant named {assistant_name}.
        Your role is to act as {assistant_role}
        ```
        ```<prompt:user>
        Introduce your self (in less than 20 words)
        ```
        """


personality = AssistantPersonality(assistant_name="John", assistant_role="a pirate")

print(personality.introduce_your_self(personality))
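Because the prompts are bound to the instance, any other `@llm_prompt` method on the same class can reuse the same fields; a hypothetical second method (`say_goodbye` is an illustration, not part of the library) would share {assistant_name} and {assistant_role} without re-declaring them:

# hypothetical extension of the AssistantPersonality class defined above
class AssistantPersonalityWithGoodbye(AssistantPersonality):

    @llm_prompt
    def say_goodbye(self)->str:
        """
        Say goodbye as {assistant_name}, staying in the role of {assistant_role}. (one sentence)
        """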



More examples:

  • these and a few more examples are also available in the [colab notebook here](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=N4cf__D0E2Yk)
  • including the [ReAct Agent re-implementation](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=3bID5fryE2Yp) using purely langchain decorators
