Eden AI

Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock boundless possibilities and tap into the true potential of artificial intelligence. With an all-in-one, comprehensive, and hassle-free platform, it lets users deploy AI features to production at lightning speed, providing effortless access to the full breadth of AI capabilities through a single API. (website: https://edenai.co/)

This example goes over how to use LangChain to interact with Eden AI models.


Accessing the EDENAI API requires an API key, which you can get by creating an account (https://app.edenai.run/user/register) and heading here: https://app.edenai.run/admin/account/settings

Once we have a key, we'll want to set it as an environment variable by running:

export EDENAI_API_KEY="..."
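Alternatively, you can set the variable from within Python at runtime; a minimal sketch using only the standard library (prompting for the key interactively rather than hard-coding it):

import os
from getpass import getpass

# Prompt for the key interactively and expose it to the EdenAI client
# through the environment, mirroring the shell export above.
os.environ["EDENAI_API_KEY"] = getpass("Eden AI API key: ")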

If you'd rather not set an environment variable, you can pass the key in directly via the edenai_api_key named parameter when initializing the EdenAI LLM class:

from langchain_community.llms import EdenAI
API Reference: EdenAI
llm = EdenAI(edenai_api_key="...", provider="openai", temperature=0.2, max_tokens=250)

Calling a model

The EdenAI API brings together various providers, each offering multiple models.

To access a specific model, you can simply add 'model' during instantiation.

For instance, let's explore the models provided by OpenAI, such as GPT3.5.

Text generation

from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate

llm = EdenAI(
    feature="text",
    provider="openai",
    model="gpt-3.5-turbo-instruct",
    temperature=0.2,
    max_tokens=250,
)

prompt = """
User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?
Assistant:
"""

llm.invoke(prompt)
API Reference: LLMChain | PromptTemplate
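Since LLMChain and PromptTemplate are already imported above, the same llm can also be driven through a prompt template; a minimal sketch (the qa_prompt wording and variable names here are illustrative, not from the Eden AI docs):

# Illustrative sketch: wrap the EdenAI llm defined above in an LLMChain.
qa_prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer the following yes/no question by reasoning step by step. {question}",
)
qa_chain = LLMChain(llm=llm, prompt=qa_prompt)
print(qa_chain.run("Can a dog drive a car?"))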

Image generation

import base64
from io import BytesIO

from PIL import Image


def print_base64_image(base64_string):
    # Decode the base64 string into binary data
    decoded_data = base64.b64decode(base64_string)

    # Create an in-memory stream to read the binary data
    image_stream = BytesIO(decoded_data)

    # Open the image using PIL
    image = Image.open(image_stream)

    # Display the image
    image.show()

text2image = EdenAI(feature="image", provider="openai", resolution="512x512")
image_output = text2image.invoke("A cat riding a motorcycle by Picasso")
print_base64_image(image_output)
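If you are working in a headless environment where image.show() cannot open a viewer, a small variant (the save_base64_image name is ours) writes the decoded bytes to disk instead:

def save_base64_image(base64_string, path="output.png"):
    # Decode the base64 payload and write the raw image bytes to a file.
    with open(path, "wb") as f:
        f.write(base64.b64decode(base64_string))


save_base64_image(image_output, "cat_motorcycle.png")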

Text generation with callbacks

from langchain_community.llms import EdenAI
from langchain_core.callbacks import StreamingStdOutCallbackHandler

llm = EdenAI(
    callbacks=[StreamingStdOutCallbackHandler()],
    feature="text",
    provider="openai",
    temperature=0.2,
    max_tokens=250,
)
prompt = """
User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?
Assistant:
"""
print(llm.invoke(prompt))
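The callback above prints tokens to stdout as they are generated; if you prefer to consume the output yourself, you can iterate over the standard .stream() interface instead (a sketch that assumes the chosen provider supports token streaming; otherwise the whole completion arrives as a single chunk):

llm = EdenAI(feature="text", provider="openai", temperature=0.2, max_tokens=250)
for chunk in llm.stream(prompt):
    # Each chunk is a piece of the generated text, printed as it arrives.
    print(chunk, end="", flush=True)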

Chaining calls

from langchain.chains import LLMChain, SimpleSequentialChain
from langchain_core.prompts import PromptTemplate

llm = EdenAI(feature="text", provider="openai", temperature=0.2, max_tokens=250)
text2image = EdenAI(feature="image", provider="openai", resolution="512x512")

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

chain = LLMChain(llm=llm, prompt=prompt)

second_prompt = PromptTemplate(
    input_variables=["company_name"],
    template="Write a description of a logo for this company: {company_name}, the logo should not contain text at all ",
)
chain_two = LLMChain(llm=llm, prompt=second_prompt)

third_prompt = PromptTemplate(
    input_variables=["company_logo_description"],
    template="{company_logo_description}",
)
chain_three = LLMChain(llm=text2image, prompt=third_prompt)

# Run the chain specifying only the input variable for the first chain.
overall_chain = SimpleSequentialChain(
    chains=[chain, chain_two, chain_three], verbose=True
)
output = overall_chain.run("hats")

# print the image
print_base64_image(output)
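LLMChain and SimpleSequentialChain are legacy interfaces; the same three-step pipeline can also be expressed with LCEL composition, reusing the prompts and models defined above (a sketch, not the official Eden AI example):

# LCEL sketch: pipe prompts and models together; plain lambdas are coerced
# into runnables and reshape each intermediate string into the next input.
lcel_chain = (
    prompt
    | llm
    | (lambda name: {"company_name": name})
    | second_prompt
    | llm
    | (lambda desc: {"company_logo_description": desc})
    | third_prompt
    | text2image
)
print_base64_image(lcel_chain.invoke({"product": "hats"}))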
