ModelScopeEmbeddings
ModelScope (Home | GitHub) is built upon the notion of "Model-as-a-Service" (MaaS). It seeks to bring together the most advanced machine learning models from the AI community and to streamline the process of leveraging AI models in real-world applications. The core ModelScope library open-sourced in this repository provides the interfaces and implementations that allow developers to perform model inference, training, and evaluation.
This will help you get started with ModelScope embedding models using LangChain.
Overview
Integration details
| Provider | Package |
|---|---|
| ModelScope | langchain-modelscope-integration |
Setup
To access ModelScope embedding models you'll need to create a ModelScope account, get an API key, and install the langchain-modelscope-integration integration package.
Credentials
Head to ModelScope to sign up for ModelScope.
import getpass
import os
if not os.getenv("MODELSCOPE_SDK_TOKEN"):
    os.environ["MODELSCOPE_SDK_TOKEN"] = getpass.getpass(
        "Enter your ModelScope SDK token: "
    )
Installation
The LangChain ModelScope integration lives in the langchain-modelscope-integration package:
%pip install -qU langchain-modelscope-integration
Instantiation
Now we can instantiate our model object:
from langchain_modelscope import ModelScopeEmbeddings
embeddings = ModelScopeEmbeddings(
    model_id="damo/nlp_corom_sentence-embedding_english-base",
)
Downloading Model to directory: /root/.cache/modelscope/hub/damo/nlp_corom_sentence-embedding_english-base
2024-12-27 16:15:11,175 - modelscope - WARNING - Model revision not specified, use revision: v1.0.0
2024-12-27 16:15:11,443 - modelscope - INFO - initiate model from /root/.cache/modelscope/hub/damo/nlp_corom_sentence-embedding_english-base
2024-12-27 16:15:11,444 - modelscope - INFO - initiate model from location /root/.cache/modelscope/hub/damo/nlp_corom_sentence-embedding_english-base.
2024-12-27 16:15:11,445 - modelscope - INFO - initialize model from /root/.cache/modelscope/hub/damo/nlp_corom_sentence-embedding_english-base
2024-12-27 16:15:12,115 - modelscope - WARNING - No preprocessor field found in cfg.
2024-12-27 16:15:12,116 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
2024-12-27 16:15:12,116 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': '/root/.cache/modelscope/hub/damo/nlp_corom_sentence-embedding_english-base'}. trying to build by task and model information.
2024-12-27 16:15:12,318 - modelscope - WARNING - No preprocessor field found in cfg.
2024-12-27 16:15:12,319 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
2024-12-27 16:15:12,319 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': '/root/.cache/modelscope/hub/damo/nlp_corom_sentence-embedding_english-base', 'sequence_length': 128}. trying to build by task and model information.
Indexing and Retrieval
Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data and for retrieving it later. For more detailed instructions, please see our RAG tutorials.
Below, see how to index and retrieve data using the embeddings object we initialized above. In this example, we will index and retrieve a sample document in the InMemoryVectorStore.
# Create a vector store with a sample text
from langchain_core.vectorstores import InMemoryVectorStore
text = "LangChain is the framework for building context-aware reasoning applications"
vectorstore = InMemoryVectorStore.from_texts(
    [text],
    embedding=embeddings,
)
# Use the vectorstore as a retriever
retriever = vectorstore.as_retriever()
# Retrieve the most similar text
retrieved_documents = retriever.invoke("What is LangChain?")
# show the retrieved document's content
retrieved_documents[0].page_content
/root/miniconda3/envs/langchain/lib/python3.10/site-packages/transformers/modeling_utils.py:1113: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
warnings.warn(
/root/miniconda3/envs/langchain/lib/python3.10/site-packages/transformers/modeling_utils.py:1113: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
warnings.warn(
'LangChain is the framework for building context-aware reasoning applications'
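If you want more control over retrieval, you can pass search parameters when building the retriever or query the vector store directly. The snippet below is a minimal sketch that reuses the vectorstore created above; the choice of k=1 is purely illustrative.
# Minimal sketch (reusing the `vectorstore` from above): return only the single
# best match, then fetch (document, score) pairs directly from the store.
retriever = vectorstore.as_retriever(search_kwargs={"k": 1})
print(retriever.invoke("What is LangChain?")[0].page_content)

for doc, score in vectorstore.similarity_search_with_score("What is LangChain?", k=1):
    print(f"{score:.4f}", doc.page_content)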
Direct Usage
Under the hood, the vectorstore and retriever implementations are calling embeddings.embed_documents(...) and embeddings.embed_query(...) to create embeddings for the text(s) used in from_texts and the retrieval invoke operations, respectively.
You can directly call these methods to get embeddings for your own use cases.
Embed single texts
You can embed single texts or documents with embed_query:
single_vector = embeddings.embed_query(text)
print(str(single_vector)[:100]) # Show the first 100 characters of the vector
[-0.6046376824378967, -0.3595953583717346, 0.11333226412534714, -0.030444221571087837, 0.23397332429
Embed multiple texts
You can embed multiple texts with embed_documents:
text2 = (
    "LangGraph is a library for building stateful, multi-actor applications with LLMs"
)
two_vectors = embeddings.embed_documents([text, text2])
for vector in two_vectors:
    print(str(vector)[:100]) # Show the first 100 characters of the vector
[-0.6046381592750549, -0.3595949709415436, 0.11333223432302475, -0.030444379895925522, 0.23397321999
[-0.36103254556655884, -0.7602502107620239, 0.6505364775657654, 0.000658963865134865, 1.185304522514
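Since these vectors are plain Python lists of floats, you can work with them directly. Below is a minimal sketch (assuming numpy is installed) that compares a query embedding against the two document embeddings using cosine similarity; the query text is an illustrative choice.
# Minimal sketch: rank the two documents against a query by cosine similarity.
# Assumes numpy is available; reuses `embeddings`, `text`, `text2`, `two_vectors`.
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query_vector = embeddings.embed_query("What is LangGraph?")
for doc_text, doc_vector in zip([text, text2], two_vectors):
    print(f"{cosine_similarity(query_vector, doc_vector):.4f}", doc_text)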
API Reference
For detailed documentation on ModelScopeEmbeddings features and configuration options, please refer to the API reference.