
PyMuPDFLoader

This notebook provides a quick overview for getting started with the PyMuPDF document loader. For detailed documentation of all PyMuPDFLoader features and configurations, head to the API reference.

Overview

Integration details

| Class | Package | Local | Serializable | JS support |
| :--- | :--- | :---: | :---: | :---: |
| PyMuPDFLoader | langchain_community | | | |

Loader features

| Source | Document Lazy Loading | Native Async Support | Extract Images | Extract Tables |
| :--- | :---: | :---: | :---: | :---: |
| PyMuPDFLoader | | | | |

Setup

Credentials

No credentials are required to use PyMuPDFLoader.

If you want automated, best-in-class tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"

Installation

Install langchain_community and pymupdf.

%pip install -qU langchain_community pymupdf
Note: you may need to restart the kernel to use updated packages.

Initialization

Now we can instantiate our loader object and load documents:

from langchain_community.document_loaders import PyMuPDFLoader

file_path = "./example_data/layout-parser-paper.pdf"
loader = PyMuPDFLoader(file_path)
API Reference: PyMuPDFLoader

Load

docs = loader.load()
docs[0]
Document(metadata={'producer': 'pdfTeX-1.40.21', 'creator': 'LaTeX with hyperref', 'creationdate': '2021-06-22T01:27:10+00:00', 'source': './example_data/layout-parser-paper.pdf', 'file_path': './example_data/layout-parser-paper.pdf', 'total_pages': 16, 'format': 'PDF 1.5', 'title': '', 'author': '', 'subject': '', 'keywords': '', 'moddate': '2021-06-22T01:27:10+00:00', 'trapped': '', 'page': 0}, page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1 (\x00), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1 Allen Institute for AI\nshannons@allenai.org\n2 Brown University\nruochen zhang@brown.edu\n3 Harvard University\n{melissadell,jacob carlson}@fas.harvard.edu\n4 University of Washington\nbcgl@cs.washington.edu\n5 University of Waterloo\nw422li@uwaterloo.ca\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model configurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\nefforts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser, an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io.\nKeywords: Document Image Analysis · Deep Learning · Layout Analysis\n· Character Recognition · Open Source library · Toolkit.\n1\nIntroduction\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classification [11,\narXiv:2103.15348v2  [cs.CV]  21 Jun 2021')
import pprint

pprint.pp(docs[0].metadata)
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'source': './example_data/layout-parser-paper.pdf',
'file_path': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'format': 'PDF 1.5',
'title': '',
'author': '',
'subject': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'trapped': '',
'page': 0}

Lazy Load

pages = []
for doc in loader.lazy_load():
    pages.append(doc)
    if len(pages) >= 10:
        # do some paged operation, e.g.
        # index.upsert(page)

        pages = []
len(pages)
6
print(pages[0].page_content[:100])
pprint.pp(pages[0].metadata)
LayoutParser: A Unified Toolkit for DL-Based DIA
11
focuses on precision, efficiency, and robustness. T
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'source': './example_data/layout-parser-paper.pdf',
'file_path': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'format': 'PDF 1.5',
'title': '',
'author': '',
'subject': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'trapped': '',
'page': 10}

The metadata attribute contains at least the following keys:

  • source
  • page (if in mode page)
  • total_pages
  • creationdate
  • creator
  • producer

The remaining metadata is specific to each parser. These pieces of information can be useful (for example, to classify your PDFs).
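
For example, a minimal sketch of classifying the documents loaded above by their producer metadata (purely illustrative; every page of this sample PDF shares the same producer):

from collections import defaultdict

# Group the loaded pages by the tool that produced the PDF.
docs_by_producer = defaultdict(list)
for doc in docs:
    docs_by_producer[doc.metadata.get("producer", "unknown")].append(doc)

for producer, grouped in docs_by_producer.items():
    print(f"{producer}: {len(grouped)} page(s)")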

Splitting mode & custom pages delimiter

When loading the PDF file, you can split it in two different ways:

  • By page
  • As a single text flow

By default, PyMuPDFLoader splits the PDF by page.

Extract the PDF by page. Each page is extracted as a langchain Document object:

loader = PyMuPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
)
docs = loader.load()
print(len(docs))
pprint.pp(docs[0].metadata)
16
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'source': './example_data/layout-parser-paper.pdf',
'file_path': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'format': 'PDF 1.5',
'title': '',
'author': '',
'subject': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'trapped': '',
'page': 0}

In this mode the PDF is split by page, and the resulting Documents' metadata contains the page number. But in some cases we may want to process the PDF as a single text flow (so that paragraphs are not cut in half). In that case you can use single mode:

Extract the whole PDF as a single langchain Document object:

loader = PyMuPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="single",
)
docs = loader.load()
print(len(docs))
pprint.pp(docs[0].metadata)
1
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'source': './example_data/layout-parser-paper.pdf',
'file_path': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'format': 'PDF 1.5',
'title': '',
'author': '',
'subject': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'trapped': ''}

Logically, in this mode the 'page' metadata disappears. Here is how to clearly identify where pages end in the text flow:

Add a custom pages_delimiter to identify where pages end in single mode:

loader = PyMuPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="single",
    pages_delimiter="\n-------THIS IS A CUSTOM END OF PAGE-------\n",
)
docs = loader.load()
print(docs[0].page_content[:5780])

This could simply be \n, or \f to clearly indicate a page change, or <!-- PAGE BREAK --> for seamless injection in a Markdown viewer without any visual effect.
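
As a minimal sketch, the same delimiter string can then be used to recover per-page chunks from the single Document returned above:

# Split the single-mode document back into page-sized chunks
# using the custom delimiter configured above.
delimiter = "\n-------THIS IS A CUSTOM END OF PAGE-------\n"
page_texts = docs[0].page_content.split(delimiter)
print(f"Recovered {len(page_texts)} page chunks")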

Extract images from the PDF

You can extract images from your PDFs with a choice of three different solutions:

  • rapidOCR (lightweight Optical Character Recognition tool)
  • Tesseract (OCR tool with high precision)
  • Multimodal language model

You can tune these functions to choose the output format of the extracted images among html, markdown or text.

The result is inserted between the last and the second-to-last paragraphs of text of the page.

Extract images from the PDF with rapidOCR:

%pip install -qU rapidocr-onnxruntime
Note: you may need to restart the kernel to use updated packages.
from langchain_community.document_loaders.parsers import RapidOCRBlobParser

loader = PyMuPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="markdown-img",
    images_parser=RapidOCRBlobParser(),
)
docs = loader.load()

print(docs[5].page_content)
API Reference: RapidOCRBlobParser

Be careful: RapidOCR is designed to work with Chinese and English, not other languages.

Extract images from the PDF with Tesseract:

%pip install -qU pytesseract
Note: you may need to restart the kernel to use updated packages.
from langchain_community.document_loaders.parsers import TesseractBlobParser

loader = PyMuPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="html-img",
    images_parser=TesseractBlobParser(),
)
docs = loader.load()
print(docs[5].page_content)
API Reference: TesseractBlobParser

Extract images from the PDF with a multimodal model:

%pip install -qU langchain_openai
Note: you may need to restart the kernel to use updated packages.
import os

from dotenv import load_dotenv

load_dotenv()
True
from getpass import getpass

if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key =")
from langchain_community.document_loaders.parsers import LLMImageBlobParser
from langchain_openai import ChatOpenAI

loader = PyMuPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="markdown-img",
    images_parser=LLMImageBlobParser(model=ChatOpenAI(model="gpt-4o", max_tokens=1024)),
)
docs = loader.load()
print(docs[5].page_content)
API Reference: LLMImageBlobParser | ChatOpenAI

Extract tables from the PDF

With PyMuPDF you can extract tables from your PDFs in html, markdown or csv format:

loader = PyMuPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    extract_tables="markdown",
)
docs = loader.load()
print(docs[4].page_content)
LayoutParser: A Unified Toolkit for DL-Based DIA
5
Table 1: Current layout detection models in the LayoutParser model zoo
Dataset
Base Model1 Large Model
Notes
PubLayNet [38]
F / M
M
Layouts of modern scientific documents
PRImA [3]
M
-
Layouts of scanned modern magazines and scientific reports
Newspaper [17]
F
-
Layouts of scanned US newspapers from the 20th century
TableBank [18]
F
F
Table region on modern scientific and business document
HJDataset [31]
F / M
-
Layouts of history Japanese documents
1 For each dataset, we train several models of different sizes for different needs (the trade-offbetween accuracy
vs. computational cost). For “base model” and “large model”, we refer to using the ResNet 50 or ResNet 101
backbones [13], respectively. One can train models of different architectures, like Faster R-CNN [28] (F) and Mask
R-CNN [12] (M). For example, an F in the Large Model column indicates it has a Faster R-CNN model trained
using the ResNet 101 backbone. The platform is maintained and a number of additions will be made to the model
zoo in coming months.
layout data structures, which are optimized for efficiency and versatility. 3) When
necessary, users can employ existing or customized OCR models via the unified
API provided in the OCR module. 4) LayoutParser comes with a set of utility
functions for the visualization and storage of the layout data. 5) LayoutParser
is also highly customizable, via its integration with functions for layout data
annotation and model training. We now provide detailed descriptions for each
component.
3.1
Layout Detection Models
In LayoutParser, a layout model takes a document image as an input and
generates a list of rectangular boxes for the target content regions. Different
from traditional methods, it relies on deep convolutional neural networks rather
than manually curated rules to identify content regions. It is formulated as an
object detection problem and state-of-the-art models like Faster R-CNN [28] and
Mask R-CNN [12] are used. This yields prediction results of high accuracy and
makes it possible to build a concise, generalized interface for layout detection.
LayoutParser, built upon Detectron2 [35], provides a minimal API that can
perform layout detection with only four lines of code in Python:
1 import
layoutparser as lp
2 image = cv2.imread("image_file") # load
images
3 model = lp. Detectron2LayoutModel (
4
"lp:// PubLayNet/ faster_rcnn_R_50_FPN_3x /config")
5 layout = model.detect(image)
LayoutParser provides a wealth of pre-trained model weights using various
datasets covering different languages, time periods, and document types. Due to
domain shift [7], the prediction performance can notably drop when models are ap-
plied to target samples that are significantly different from the training dataset. As
document structures and layouts vary greatly in different domains, it is important
to select models trained on a dataset similar to the test samples. A semantic syntax
is used for initializing the model weights in LayoutParser, using both the dataset
name and model name lp://<dataset-name>/<model-architecture-name>.


|Dataset|Base Model1|Large Model|Notes|
|---|---|---|---|
|PubLayNet [38] PRImA [3] Newspaper [17] TableBank [18] HJDataset [31]|F / M M F F F / M|M &amp;#45; &amp;#45; F &amp;#45;|Layouts of modern scientific documents Layouts of scanned modern magazines and scientific reports Layouts of scanned US newspapers from the 20th century Table region on modern scientific and business document Layouts of history Japanese documents|

Working with Files

Many document loaders involve parsing files. The difference between such loaders usually stems from how the file is parsed, rather than how the file is loaded. For example, you can use open to read the binary content of either a PDF or a markdown file, but you need different parsing logic to convert that binary data into text.

As a result, it can be helpful to decouple the parsing logic from the loading logic, which makes it easier to re-use a given parser regardless of how the data was loaded. You can use this strategy to analyze different files with the same parsing parameters.

from langchain_community.document_loaders import FileSystemBlobLoader
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import PyMuPDFParser

loader = GenericLoader(
    blob_loader=FileSystemBlobLoader(
        path="./example_data/",
        glob="*.pdf",
    ),
    blob_parser=PyMuPDFParser(),
)
docs = loader.load()
print(docs[0].page_content)
pprint.pp(docs[0].metadata)
LayoutParser: A Unified Toolkit for Deep
Learning Based Document Image Analysis
Zejiang Shen1 (�), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain
Lee4, Jacob Carlson3, and Weining Li5
1 Allen Institute for AI
shannons@allenai.org
2 Brown University
ruochen zhang@brown.edu
3 Harvard University
{melissadell,jacob carlson}@fas.harvard.edu
4 University of Washington
bcgl@cs.washington.edu
5 University of Waterloo
w422li@uwaterloo.ca
Abstract. Recent advances in document image analysis (DIA) have been
primarily driven by the application of neural networks. Ideally, research
outcomes could be easily deployed in production and extended for further
investigation. However, various factors like loosely organized codebases
and sophisticated model configurations complicate the easy reuse of im-
portant innovations by a wide audience. Though there have been on-going
efforts to improve reusability and simplify deep learning (DL) model
development in disciplines like natural language processing and computer
vision, none of them are optimized for challenges in the domain of DIA.
This represents a major gap in the existing toolkit, as DIA is central to
academic research across a wide range of disciplines in the social sciences
and humanities. This paper introduces LayoutParser, an open-source
library for streamlining the usage of DL in DIA research and applica-
tions. The core LayoutParser library comes with a set of simple and
intuitive interfaces for applying and customizing DL models for layout de-
tection, character recognition, and many other document processing tasks.
To promote extensibility, LayoutParser also incorporates a community
platform for sharing both pre-trained models and full document digiti-
zation pipelines. We demonstrate that LayoutParser is helpful for both
lightweight and large-scale digitization pipelines in real-word use cases.
The library is publicly available at https://layout-parser.github.io.
Keywords: Document Image Analysis · Deep Learning · Layout Analysis
· Character Recognition · Open Source library · Toolkit.
1
Introduction
Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of
document image analysis (DIA) tasks including document image classification [11,
arXiv:2103.15348v2 [cs.CV] 21 Jun 2021
{'source': 'example_data/layout-parser-paper.pdf',
'file_path': 'example_data/layout-parser-paper.pdf',
'total_pages': 16,
'format': 'PDF 1.5',
'title': '',
'author': '',
'subject': '',
'keywords': '',
'creator': 'LaTeX with hyperref',
'producer': 'pdfTeX-1.40.21',
'creationdate': '2021-06-22T01:27:10+00:00',
'moddate': '2021-06-22T01:27:10+00:00',
'trapped': '',
'page': 0}

It is possible to work with files from cloud storage.

from langchain_community.document_loaders import CloudBlobLoader
from langchain_community.document_loaders.generic import GenericLoader

loader = GenericLoader(
    blob_loader=CloudBlobLoader(
        url="s3://mybucket",  # Supports s3://, az://, gs://, file:// schemes.
        glob="*.pdf",
    ),
    blob_parser=PyMuPDFParser(),
)
docs = loader.load()
print(docs[0].page_content)
pprint.pp(docs[0].metadata)
API Reference: CloudBlobLoader | GenericLoader

API reference

For detailed documentation of all PyMuPDFLoader features and configurations, head to the API reference: https://langchain-python.dev.org.tw/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyMuPDFLoader.html

