Grobid
GROBID is a machine learning library for extracting, parsing, and restructuring raw documents.
It is designed and expected to be used to parse academic papers, where it works particularly well. Note: if the articles supplied to Grobid are large documents (e.g., dissertations) exceeding a certain number of elements, they might not be processed.
This loader uses Grobid to parse PDFs into Documents that retain metadata associated with each section of text.
The best approach is to install Grobid via Docker; see https://grobid.readthedocs.io/en/latest/Grobid-docker/. Once Grobid is up and running, you can interact with it as described below.
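Before loading anything, it is worth confirming that the service is reachable. A minimal sketch, assuming Grobid's default port (8070) and its standard /api/isalive health-check endpoint:

import requests

# Grobid listens on port 8070 by default; adjust the URL if your
# container maps a different host port.
response = requests.get("http://localhost:8070/api/isalive")
print(response.status_code, response.text)  # expect 200 and "true"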
Now, we can use the data loader.
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import GrobidParser
API Reference: GenericLoader | GrobidParser
# Parse every PDF under ../Papers/ with Grobid.
loader = GenericLoader.from_filesystem(
    "../Papers/",
    glob="*",
    suffixes=[".pdf"],
    parser=GrobidParser(segment_sentences=False),
)
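With segment_sentences=False, each paragraph of the paper becomes one Document. If sentence-level granularity is preferred, the same loader can be built with segment_sentences=True; a minimal sketch:

# Same setup, but Grobid's sentence segmentation splits each
# paragraph so that every sentence becomes its own Document.
sentence_loader = GenericLoader.from_filesystem(
    "../Papers/",
    glob="*",
    suffixes=[".pdf"],
    parser=GrobidParser(segment_sentences=True),
)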
docs = loader.load()
docs[3].page_content
'Unlike Chinchilla, PaLM, or GPT-3, we only use publicly available data, making our work compatible with open-sourcing, while most existing models rely on data which is either not publicly available or undocumented (e.g."Books -2TB" or "Social media conversations").There exist some exceptions, notably OPT (Zhang et al., 2022), GPT-NeoX (Black et al., 2022), BLOOM (Scao et al., 2022) and GLM (Zeng et al., 2022), but none that are competitive with PaLM-62B or Chinchilla.'
docs[3].metadata
{'text': 'Unlike Chinchilla, PaLM, or GPT-3, we only use publicly available data, making our work compatible with open-sourcing, while most existing models rely on data which is either not publicly available or undocumented (e.g."Books -2TB" or "Social media conversations").There exist some exceptions, notably OPT (Zhang et al., 2022), GPT-NeoX (Black et al., 2022), BLOOM (Scao et al., 2022) and GLM (Zeng et al., 2022), but none that are competitive with PaLM-62B or Chinchilla.',
'para': '2',
'bboxes': "[[{'page': '1', 'x': '317.05', 'y': '509.17', 'h': '207.73', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '522.72', 'h': '220.08', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '536.27', 'h': '218.27', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '549.82', 'h': '218.65', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '563.37', 'h': '136.98', 'w': '9.46'}], [{'page': '1', 'x': '446.49', 'y': '563.37', 'h': '78.11', 'w': '9.46'}, {'page': '1', 'x': '304.69', 'y': '576.92', 'h': '138.32', 'w': '9.46'}], [{'page': '1', 'x': '447.75', 'y': '576.92', 'h': '76.66', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '590.47', 'h': '219.63', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '604.02', 'h': '218.27', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '617.56', 'h': '218.27', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '631.11', 'h': '220.18', 'w': '9.46'}]]",
'pages': "('1', '1')",
'section_title': 'Introduction',
'section_number': '1',
'paper_title': 'LLaMA: Open and Efficient Foundation Language Models',
'file_path': '/Users/31treehaus/Desktop/Papers/2302.13971.pdf'}
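Because each Document keeps its section metadata, the parsed output can be filtered directly. A minimal sketch that keeps only the chunks whose section_title is "Introduction", reusing the docs list loaded above:

# Select only the Documents that came from the Introduction section.
intro_docs = [d for d in docs if d.metadata.get("section_title") == "Introduction"]
print(len(intro_docs))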