Intel's Visual Data Management System (VDMS)
This notebook covers how to get started with VDMS as a vector store.
Intel's Visual Data Management System (VDMS) is a storage solution for efficient access of big-"visual"-data. It aims to achieve cloud scale by searching for relevant visual data via visual metadata stored as a graph, and by enabling machine-friendly enhancements to visual data for faster access. VDMS is licensed under MIT. For more information on VDMS, visit this page, and find the LangChain API reference here.
VDMS supports:
- K nearest neighbor search
- Euclidean distance (L2) and inner product (IP)
- Libraries for indexing and computing distances: FaissFlat (default), FaissHNSWFlat, FaissIVFFlat, Flinng, TileDBDense, TileDBSparse
- Embeddings for text, images, and videos
- Vector and metadata searches
Setup
To access the VDMS vector store, you'll need to install the langchain-vdms integration package and deploy a VDMS server via the publicly available Docker image. For simplicity, this notebook deploys a VDMS server on localhost using port 55555.
%pip install -qU "langchain-vdms>=0.1.3"
!docker run --no-healthcheck --rm -d -p 55555:55555 --name vdms_vs_test_nb intellabs/vdms:latest
!sleep 5
Note: you may need to restart the kernel to use updated packages.
c464076e292613df27241765184a673b00c775cecb7792ef058591c2cbf0bde8
Credentials
You can use VDMS without any credentials.
If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"
Initialization
Use the VDMS Client to connect to a VDMS vector store, using the FAISS IndexFlat index (default) and Euclidean distance (default) as the distance metric for similarity search.
pip install -qU langchain-openai
import getpass
import os
if not os.environ.get("OPENAI_API_KEY"):
os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter API key for OpenAI: ")
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
from langchain_vdms.vectorstores import VDMS, VDMS_Client
collection_name = "test_collection_faiss_L2"
vdms_client = VDMS_Client(host="localhost", port=55555)
vector_store = VDMS(
client=vdms_client,
embedding=embeddings,
collection_name=collection_name,
engine="FaissFlat",
distance_strategy="L2",
)
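As a quick sanity check after connecting, you can confirm the collection is reachable. A minimal sketch, assuming the collection is created when the VDMS object is constructed; the count helper is also used later in this notebook:
# A fresh collection has no entries yet; count() returns the number of stored documents.
print(vector_store.count())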
Manage vector store
Add items to vector store
import logging
logging.basicConfig()
logging.getLogger("langchain_vdms.vectorstores").setLevel(logging.INFO)
from langchain_core.documents import Document
document_1 = Document(
page_content="I had chocolate chip pancakes and scrambled eggs for breakfast this morning.",
metadata={"source": "tweet"},
id=1,
)
document_2 = Document(
page_content="The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees.",
metadata={"source": "news"},
id=2,
)
document_3 = Document(
page_content="Building an exciting new project with LangChain - come check it out!",
metadata={"source": "tweet"},
id=3,
)
document_4 = Document(
page_content="Robbers broke into the city bank and stole $1 million in cash.",
metadata={"source": "news"},
id=4,
)
document_5 = Document(
page_content="Wow! That was an amazing movie. I can't wait to see it again.",
metadata={"source": "tweet"},
id=5,
)
document_6 = Document(
page_content="Is the new iPhone worth the price? Read this review to find out.",
metadata={"source": "website"},
id=6,
)
document_7 = Document(
page_content="The top 10 soccer players in the world right now.",
metadata={"source": "website"},
id=7,
)
document_8 = Document(
page_content="LangGraph is the best framework for building stateful, agentic applications!",
metadata={"source": "tweet"},
id=8,
)
document_9 = Document(
page_content="The stock market is down 500 points today due to fears of a recession.",
metadata={"source": "news"},
id=9,
)
document_10 = Document(
page_content="I have a bad feeling I am going to get deleted :(",
metadata={"source": "tweet"},
id=10,
)
documents = [
document_1,
document_2,
document_3,
document_4,
document_5,
document_6,
document_7,
document_8,
document_9,
document_10,
]
doc_ids = [str(i) for i in range(1, 11)]
vector_store.add_documents(documents=documents, ids=doc_ids)
['1', '2', '3', '4', '5', '6', '7', '8', '9', '10']
If an ID is provided multiple times, add_documents does not check whether the IDs are unique. Therefore, use upsert, which deletes any existing entries for an ID before adding.
vector_store.upsert(documents, ids=doc_ids)
{'succeeded': ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10'],
'failed': []}
Update items in vector store
updated_document_1 = Document(
page_content="I had chocolate chip pancakes and fried eggs for breakfast this morning.",
metadata={"source": "tweet"},
id=1,
)
updated_document_2 = Document(
page_content="The weather forecast for tomorrow is sunny and warm, with a high of 82 degrees.",
metadata={"source": "news"},
id=2,
)
vector_store.update_documents(
ids=doc_ids[:2],
documents=[updated_document_1, updated_document_2],
batch_size=2,
)
Delete items from vector store
vector_store.delete(ids=doc_ids[-1])
True
Query vector store
Once your vector store has been created and the relevant documents have been added, you will most likely wish to query it during the running of your chain or agent.
Query directly
A simple similarity search can be performed as follows:
results = vector_store.similarity_search(
"LangChain provides abstractions to make working with LLMs easy",
k=2,
filter={"source": ["==", "tweet"]},
)
for doc in results:
print(f"* ID={doc.id}: {doc.page_content} [{doc.metadata}]")
INFO:langchain_vdms.vectorstores:VDMS similarity search took 0.0063 seconds
``````output
* ID=3: Building an exciting new project with LangChain - come check it out! [{'source': 'tweet'}]
* ID=8: LangGraph is the best framework for building stateful, agentic applications! [{'source': 'tweet'}]
If you want to execute a similarity search and receive the corresponding scores, you can run:
results = vector_store.similarity_search_with_score(
"Will it be hot tomorrow?", k=1, filter={"source": ["==", "news"]}
)
for doc, score in results:
print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")
INFO:langchain_vdms.vectorstores:VDMS similarity search took 0.0460 seconds
``````output
* [SIM=0.753577] The weather forecast for tomorrow is sunny and warm, with a high of 82 degrees. [{'source': 'news'}]
If you want to execute a similarity search using an embedding, you can run:
results = vector_store.similarity_search_by_vector(
embedding=embeddings.embed_query("I love green eggs and ham!"), k=1
)
for doc in results:
print(f"* {doc.page_content} [{doc.metadata}]")
INFO:langchain_vdms.vectorstores:VDMS similarity search took 0.0044 seconds
``````output
* The weather forecast for tomorrow is sunny and warm, with a high of 82 degrees. [{'source': 'news'}]
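LangChain vector stores also expose async counterparts of these methods, inherited from the base VectorStore class, which can be awaited directly in a notebook cell. A minimal sketch, assuming the default async wrappers:
# Async variant of the similarity search above; by default the base class runs the sync call in an executor.
results = await vector_store.asimilarity_search(
    "LangChain provides abstractions to make working with LLMs easy",
    k=2,
    filter={"source": ["==", "tweet"]},
)
for doc in results:
    print(f"* {doc.page_content} [{doc.metadata}]")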
Query by turning into retriever
You can also transform the vector store into a retriever for easier usage in your chains.
retriever = vector_store.as_retriever(
search_type="similarity",
search_kwargs={"k": 3},
)
results = retriever.invoke("Stealing from the bank is a crime")
for doc in results:
print(f"* {doc.page_content} [{doc.metadata}]")
INFO:langchain_vdms.vectorstores:VDMS similarity search took 0.0042 seconds
``````output
* Robbers broke into the city bank and stole $1 million in cash. [{'source': 'news'}]
* The stock market is down 500 points today due to fears of a recession. [{'source': 'news'}]
* Is the new iPhone worth the price? Read this review to find out. [{'source': 'website'}]
retriever = vector_store.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={
"k": 1,
"score_threshold": 0.0, # >= score_threshold
},
)
results = retriever.invoke("Stealing from the bank is a crime")
for doc in results:
print(f"* {doc.page_content} [{doc.metadata}]")
INFO:langchain_vdms.vectorstores:VDMS similarity search took 0.0042 seconds
``````output
* Robbers broke into the city bank and stole $1 million in cash. [{'source': 'news'}]
retriever = vector_store.as_retriever(
search_type="mmr",
search_kwargs={"k": 1, "fetch_k": 10},
)
results = retriever.invoke(
"Stealing from the bank is a crime", filter={"source": ["==", "news"]}
)
for doc in results:
print(f"* {doc.page_content} [{doc.metadata}]")
INFO:langchain_vdms.vectorstores:VDMS mmr search took 0.0042 secs
``````output
* Robbers broke into the city bank and stole $1 million in cash. [{'source': 'news'}]
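MMR search also exposes a lambda_mult setting, the standard LangChain knob for trading off relevance against diversity. A sketch, assuming VDMS passes this parameter through from the base max_marginal_relevance_search interface:
retriever = vector_store.as_retriever(
    search_type="mmr",
    # lambda_mult near 1 favors relevance to the query; near 0 favors diversity among results.
    search_kwargs={"k": 2, "fetch_k": 10, "lambda_mult": 0.5},
)
results = retriever.invoke("Stealing from the bank is a crime")
for doc in results:
    print(f"* {doc.page_content} [{doc.metadata}]")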
Delete collection
Previously, we removed documents based on their id. Here, all documents are removed because no ID is provided.
print("Documents before deletion: ", vector_store.count())
vector_store.delete(collection_name=collection_name)
print("Documents after deletion: ", vector_store.count())
Documents before deletion: 10
Documents after deletion: 0
Usage for retrieval-augmented generation
For guides on how to use this vector store for retrieval-augmented generation (RAG), refer to the RAG tutorials and how-to guides in the LangChain documentation.
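As a minimal illustration of the pattern, the retriever created above can be dropped into a simple LCEL chain. This is a sketch: the prompt text and the gpt-4o-mini model name are illustrative choices, not part of the VDMS integration.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

# Illustrative prompt: answer only from the retrieved context.
prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
)
llm = ChatOpenAI(model="gpt-4o-mini")


def format_docs(docs):
    # Join the retrieved documents into a single context string for the prompt.
    return "\n\n".join(doc.page_content for doc in docs)


rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

rag_chain.invoke("What is LangGraph good for?")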
Similarity search using other engines
VDMS supports various libraries for indexing and computing distances: FaissFlat (default), FaissHNSWFlat, FaissIVFFlat, Flinng, TileDBDense, and TileDBSparse. By default, the vector store uses FaissFlat. Below we show a few examples using the other engines.
Similarity search using Faiss HNSWFlat and Euclidean distance
Here, we add the documents to VDMS using the Faiss IndexHNSWFlat index and L2 as the distance metric for similarity search. We search for three documents (k=3) related to the query and also return the score along with each document.
db_FaissHNSWFlat = VDMS.from_documents(
documents,
client=vdms_client,
ids=doc_ids,
collection_name="my_collection_FaissHNSWFlat_L2",
embedding=embeddings,
engine="FaissHNSWFlat",
distance_strategy="L2",
)
# Query
k = 3
query = "LangChain provides abstractions to make working with LLMs easy"
docs_with_score = db_FaissHNSWFlat.similarity_search_with_score(query, k=k, filter=None)
for res, score in docs_with_score:
print(f"* [SIM={score:3f}] {res.page_content} [{res.metadata}]")
INFO:langchain_vdms.vectorstores:Descriptor set my_collection_FaissHNSWFlat_L2 created
INFO:langchain_vdms.vectorstores:VDMS similarity search took 0.1272 seconds
``````output
* [SIM=0.716791] Building an exciting new project with LangChain - come check it out! [{'source': 'tweet'}]
* [SIM=0.936718] LangGraph is the best framework for building stateful, agentic applications! [{'source': 'tweet'}]
* [SIM=1.834110] Is the new iPhone worth the price? Read this review to find out. [{'source': 'website'}]
Similarity search using Faiss IVFFlat and inner product (IP) distance
We add the documents to VDMS using the Faiss IndexIVFFlat index and IP as the distance metric for similarity search. We search for three documents (k=3) related to the query and also return the score along with each document.
db_FaissIVFFlat = VDMS.from_documents(
documents,
client=vdms_client,
ids=doc_ids,
collection_name="my_collection_FaissIVFFlat_IP",
embedding=embeddings,
engine="FaissIVFFlat",
distance_strategy="IP",
)
k = 3
query = "LangChain provides abstractions to make working with LLMs easy"
docs_with_score = db_FaissIVFFlat.similarity_search_with_score(query, k=k, filter=None)
for res, score in docs_with_score:
print(f"* [SIM={score:3f}] {res.page_content} [{res.metadata}]")
INFO:langchain_vdms.vectorstores:Descriptor set my_collection_FaissIVFFlat_IP created
INFO:langchain_vdms.vectorstores:VDMS similarity search took 0.0052 seconds
``````output
* [SIM=0.641605] Building an exciting new project with LangChain - come check it out! [{'source': 'tweet'}]
* [SIM=0.531641] LangGraph is the best framework for building stateful, agentic applications! [{'source': 'tweet'}]
* [SIM=0.082945] Is the new iPhone worth the price? Read this review to find out. [{'source': 'website'}]
Similarity search using FLINNG and IP distance
In this section, we add the documents to VDMS using the Filters to Identify Near-Neighbor Groups (FLINNG) index and IP as the distance metric for similarity search. We search for three documents (k=3) related to the query and also return the score along with each document.
db_Flinng = VDMS.from_documents(
documents,
client=vdms_client,
ids=doc_ids,
collection_name="my_collection_Flinng_IP",
embedding=embeddings,
engine="Flinng",
distance_strategy="IP",
)
# Query
k = 3
query = "LangChain provides abstractions to make working with LLMs easy"
docs_with_score = db_Flinng.similarity_search_with_score(query, k=k, filter=None)
for res, score in docs_with_score:
print(f"* [SIM={score:3f}] {res.page_content} [{res.metadata}]")
INFO:langchain_vdms.vectorstores:Descriptor set my_collection_Flinng_IP created
INFO:langchain_vdms.vectorstores:VDMS similarity search took 0.0042 seconds
``````output
* [SIM=0.000000] I had chocolate chip pancakes and scrambled eggs for breakfast this morning. [{'source': 'tweet'}]
* [SIM=0.000000] I had chocolate chip pancakes and scrambled eggs for breakfast this morning. [{'source': 'tweet'}]
* [SIM=0.000000] I had chocolate chip pancakes and scrambled eggs for breakfast this morning. [{'source': 'tweet'}]
Filtering on metadata
It can be helpful to narrow down the collection before working with it.
For example, collections can be filtered on metadata using the get_by_constraints method; a dictionary is used to specify the metadata constraints. Here, we retrieve the document where langchain_id = "2" and remove it from the vector store.
Note: id was generated as an integer in the additional metadata, while langchain_id (the internal ID) is a unique string for each entry.
response, response_array = db_FaissIVFFlat.get_by_constraints(
db_FaissIVFFlat.collection_name,
limit=1,
include=["metadata", "embeddings"],
constraints={"langchain_id": ["==", "2"]},
)
# Delete id=2
db_FaissIVFFlat.delete(collection_name=db_FaissIVFFlat.collection_name, ids=["2"])
print("Deleted entry:")
for doc in response:
print(f"* ID={doc.id}: {doc.page_content} [{doc.metadata}]")
Deleted entry:
* ID=2: The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees. [{'source': 'news'}]
response, response_array = db_FaissIVFFlat.get_by_constraints(
db_FaissIVFFlat.collection_name,
include=["metadata"],
)
for doc in response:
print(f"* ID={doc.id}: {doc.page_content} [{doc.metadata}]")
* ID=10: I have a bad feeling I am going to get deleted :( [{'source': 'tweet'}]
* ID=9: The stock market is down 500 points today due to fears of a recession. [{'source': 'news'}]
* ID=8: LangGraph is the best framework for building stateful, agentic applications! [{'source': 'tweet'}]
* ID=7: The top 10 soccer players in the world right now. [{'source': 'website'}]
* ID=6: Is the new iPhone worth the price? Read this review to find out. [{'source': 'website'}]
* ID=5: Wow! That was an amazing movie. I can't wait to see it again. [{'source': 'tweet'}]
* ID=4: Robbers broke into the city bank and stole $1 million in cash. [{'source': 'news'}]
* ID=3: Building an exciting new project with LangChain - come check it out! [{'source': 'tweet'}]
* ID=1: I had chocolate chip pancakes and scrambled eggs for breakfast this morning. [{'source': 'tweet'}]
Constraints can also target other metadata fields. Here, we filter the collection on the source field to retrieve only the entries where source is "news".
response, response_array = db_FaissIVFFlat.get_by_constraints(
db_FaissIVFFlat.collection_name,
include=["metadata", "embeddings"],
constraints={"source": ["==", "news"]},
)
for doc in response:
print(f"* ID={doc.id}: {doc.page_content} [{doc.metadata}]")
* ID=9: The stock market is down 500 points today due to fears of a recession. [{'source': 'news'}]
* ID=4: Robbers broke into the city bank and stole $1 million in cash. [{'source': 'news'}]
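Constraint dictionaries are not limited to a single field; multiple constraints on an entry are combined with a logical AND in VDMS. A hypothetical sketch combining the source filter with a range condition on the integer id metadata (assuming the comparison operators are passed through to VDMS unchanged):
# Hypothetical combined filter: source must be "news" AND the integer id must be >= 5.
response, response_array = db_FaissIVFFlat.get_by_constraints(
    db_FaissIVFFlat.collection_name,
    include=["metadata"],
    constraints={"source": ["==", "news"], "id": [">=", 5]},
)
for doc in response:
    print(f"* ID={doc.id}: {doc.page_content} [{doc.metadata}]")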
Stop VDMS Server
!docker kill vdms_vs_test_nb
vdms_vs_test_nb
API reference
TODO: Add API reference