PyMuPDFLoader
This notebook provides a quick overview for getting started with the PyMuPDF document loader. For detailed documentation of all PyMuPDFLoader features and configurations, head to the API reference.
Overview
Integration details
| Class | Package | Local | Serializable | JS support |
| --- | --- | --- | --- | --- |
| PyMuPDFLoader | langchain_community | ✅ | ❌ | ❌ |
Loader features
| Source | Document Lazy Loading | Native Async Support | Extract Images | Extract Tables |
| --- | --- | --- | --- | --- |
| PyMuPDFLoader | ✅ | ❌ | ✅ | ✅ |
Setup
Credentials
No credentials are required to use PyMuPDFLoader.
If you want to get automated best-in-class tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"
Installation
Install langchain_community, along with pymupdf and pillow.
%pip install -qU langchain_community pymupdf pillow
Note: you may need to restart the kernel to use updated packages.
Initialization
Now we can instantiate our loader and load documents:
from langchain_community.document_loaders import PyMuPDFLoader
file_path = "./example_data/layout-parser-paper.pdf"
loader = PyMuPDFLoader(file_path)
Load
docs = loader.load()
docs[0]
Document(metadata={'source': './example_data/layout-parser-paper.pdf', 'file_path': './example_data/layout-parser-paper.pdf', 'total_pages': 16, 'format': 'PDF 1.5', 'title': '', 'author': '', 'subject': '', 'keywords': '', 'creator': 'LaTeX with hyperref', 'producer': 'pdfTeX-1.40.21', 'creationdate': '2021-06-22T01:27:10+00:00', 'moddate': '2021-06-22T01:27:10+00:00', 'trapped': '', 'page': 0}, page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1 (\x00), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1 Allen Institute for AI\nshannons@allenai.org\n2 Brown University\nruochen zhang@brown.edu\n3 Harvard University\n{melissadell,jacob carlson}@fas.harvard.edu\n4 University of Washington\nbcgl@cs.washington.edu\n5 University of Waterloo\nw422li@uwaterloo.ca\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model configurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\nefforts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser, an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io.\nKeywords: Document Image Analysis · Deep Learning · Layout Analysis\n· Character Recognition · Open Source library · Toolkit.\n1\nIntroduction\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classification [11,\narXiv:2103.15348v2 [cs.CV] 21 Jun 2021')
import pprint
pprint.pp(docs[0].metadata)
{'source': './example_data/layout-parser-paper.pdf',
'file_path': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'format': 'PDF 1.5',
'title': '',
'author': '',
'subject': '',
'keywords': '',
'creator': 'LaTeX with hyperref',
'producer': 'pdfTeX-1.40.21',
'creationdate': '2021-06-22T01:27:10+00:00',
'moddate': '2021-06-22T01:27:10+00:00',
'trapped': '',
'page': 0}
Lazy Load
pages = []
for doc in loader.lazy_load():
    pages.append(doc)
    if len(pages) >= 10:
        # do some paged operation, e.g.
        # index.upsert(pages)
        pages = []

len(pages)
6
Here, the first batch of 10 pages was processed inside the loop, leaving 6 of the 16 pages in pages. The metadata attribute now contains several pieces of information about the file, in addition to the total number of pages.
Why is this important? If we want to reference a document, we need to determine whether it is relevant. A reference is only useful if it helps the user quickly locate the fragment within the document (using the page number and/or a chunk excerpt). If the URL points to a PDF file without a page number (for whatever reason) and the file has a large number of pages, the reference does not assist the user: there is no point in citing a 100-page document as a whole. The total_pages metadata can be used to filter out such references, as sketched below.
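As an illustration (a minimal sketch, not part of the loader's API; the MAX_PAGES threshold and make_reference helper are hypothetical names), a retrieval pipeline could use this metadata to drop unhelpful references:
MAX_PAGES = 10  # hypothetical threshold above which a page number is required


def make_reference(doc):
    """Return a reference for a retrieved Document, or None if it would
    not help the user locate the fragment."""
    page = doc.metadata.get("page")
    if page is not None:
        # A page-level reference is always useful.
        return f"{doc.metadata['source']}#page={page + 1}"
    if doc.metadata.get("total_pages", 0) <= MAX_PAGES:
        # Short document: referencing the whole file is acceptable.
        return doc.metadata["source"]
    # Long document without a page number: drop the reference.
    return None


print(make_reference(docs[0]))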
pprint.pp(pages[0].page_content[:100])
pprint.pp(pages[0].metadata)
('LayoutParser: A Unified Toolkit for DL-Based DIA\n'
'11\n'
'focuses on precision, efficiency, and robustness. T')
{'source': './example_data/layout-parser-paper.pdf',
'file_path': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'format': 'PDF 1.5',
'title': '',
'author': '',
'subject': '',
'keywords': '',
'creator': 'LaTeX with hyperref',
'producer': 'pdfTeX-1.40.21',
'creationdate': '2021-06-22T01:27:10+00:00',
'moddate': '2021-06-22T01:27:10+00:00',
'trapped': '',
'page': 10}
Extract the PDF by page. Each page is extracted as a langchain Document object:
loader = PyMuPDFLoader(
"./example_data/layout-parser-paper.pdf",
mode="page",
)
docs = loader.load()
print(len(docs))
16
Extract the whole PDF as a single langchain Document object:
loader = PyMuPDFLoader(
"./example_data/layout-parser-paper.pdf",
mode="single",
)
docs = loader.load()
print(len(docs))
1
Add a custom pages_delimitor to identify where pages end in single mode:
loader = PyMuPDFLoader(
"./example_data/layout-parser-paper.pdf",
mode="single",
pages_delimitor="\n-------THIS IS A CUSTOM END OF PAGE-------\n",
)
docs = loader.load()
print(docs[0].page_content[:5780])
LayoutParser: A Unified Toolkit for Deep
Learning Based Document Image Analysis
Zejiang Shen1 (�), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain
Lee4, Jacob Carlson3, and Weining Li5
1 Allen Institute for AI
shannons@allenai.org
2 Brown University
ruochen zhang@brown.edu
3 Harvard University
{melissadell,jacob carlson}@fas.harvard.edu
4 University of Washington
bcgl@cs.washington.edu
5 University of Waterloo
w422li@uwaterloo.ca
Abstract. Recent advances in document image analysis (DIA) have been
primarily driven by the application of neural networks. Ideally, research
outcomes could be easily deployed in production and extended for further
investigation. However, various factors like loosely organized codebases
and sophisticated model configurations complicate the easy reuse of im-
portant innovations by a wide audience. Though there have been on-going
efforts to improve reusability and simplify deep learning (DL) model
development in disciplines like natural language processing and computer
vision, none of them are optimized for challenges in the domain of DIA.
This represents a major gap in the existing toolkit, as DIA is central to
academic research across a wide range of disciplines in the social sciences
and humanities. This paper introduces LayoutParser, an open-source
library for streamlining the usage of DL in DIA research and applica-
tions. The core LayoutParser library comes with a set of simple and
intuitive interfaces for applying and customizing DL models for layout de-
tection, character recognition, and many other document processing tasks.
To promote extensibility, LayoutParser also incorporates a community
platform for sharing both pre-trained models and full document digiti-
zation pipelines. We demonstrate that LayoutParser is helpful for both
lightweight and large-scale digitization pipelines in real-word use cases.
The library is publicly available at https://layout-parser.github.io.
Keywords: Document Image Analysis · Deep Learning · Layout Analysis
· Character Recognition · Open Source library · Toolkit.
1
Introduction
Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of
document image analysis (DIA) tasks including document image classification [11,
arXiv:2103.15348v2 [cs.CV] 21 Jun 2021
-------THIS IS A CUSTOM END OF PAGE-------
2
Z. Shen et al.
37], layout detection [38, 22], table detection [26], and scene text detection [4].
A generalized learning-based framework dramatically reduces the need for the
manual specification of complicated rules, which is the status quo with traditional
methods. DL has the potential to transform DIA pipelines and benefit a broad
spectrum of large-scale document digitization projects.
However, there are several practical difficulties for taking advantages of re-
cent advances in DL-based methods: 1) DL models are notoriously convoluted
for reuse and extension. Existing models are developed using distinct frame-
works like TensorFlow [1] or PyTorch [24], and the high-level parameters can
be obfuscated by implementation details [8]. It can be a time-consuming and
frustrating experience to debug, reproduce, and adapt existing models for DIA,
and many researchers who would benefit the most from using these methods lack
the technical background to implement them from scratch. 2) Document images
contain diverse and disparate patterns across domains, and customized training
is often required to achieve a desirable detection accuracy. Currently there is no
full-fledged infrastructure for easily curating the target document image datasets
and fine-tuning or re-training the models. 3) DIA usually requires a sequence of
models and other processing to obtain the final outputs. Often research teams use
DL models and then perform further document analyses in separate processes,
and these pipelines are not documented in any central location (and often not
documented at all). This makes it difficult for research teams to learn about how
full pipelines are implemented and leads them to invest significant resources in
reinventing the DIA wheel.
LayoutParser provides a unified toolkit to support DL-based document image
analysis and processing. To address the aforementioned challenges, LayoutParser
is built with the following components:
1. An off-the-shelf toolkit for applying DL models for layout detection, character
recognition, and other DIA tasks (Section 3)
2. A rich repository of pre-trained neural network models (Model Zoo) that
underlies the off-the-shelf usage
3. Comprehensive tools for efficient document image data annotation and model
tuning to support different levels of customization
4. A DL model hub and community platform for the easy sharing, distribu-
tion, and discussion of DIA models and pipelines, to promote reusability,
reproducibility, and extensibility (Section 4)
The library implements simple and intuitive Python APIs without sacrificing
generalizability and versatility, and can be easily installed via pip. Its convenient
functions for handling document image data can be seamlessly integrated with
existing DIA pipelines. With detailed documentations and carefully curated
tutorials, we hope this tool will benefit a variety of end-users, and will lead to
advances in applications in both industry and academic research.
LayoutParser is well aligned with recent efforts for improving DL model
reusability in other disciplines like natural language processing [8, 34] and com-
puter vision [35], but with a focus on unique challenges in DIA. We show
LayoutParser can be applied in sophisticated and large-scale digitization projects
-------THIS IS A CUSTOM END OF PAGE-------
LayoutParser: A Unified Toolkit for DL-Based DIA
3
that require precision, efficiency, and robustness, as well as simple and light-
weigh
This could simply be \n, or \f to clearly indicate a page change, or <!-- PAGE BREAK --> for seamless injection in a Markdown viewer without a visual effect.
Why is it important to identify page breaks when retrieving the full document flow? Because we generally want to provide a URL with the chunk’s location when the LLM answers. While it is possible to reference the entire PDF, this is not practical when it is more than two pages long; it is better to indicate the specific page to display in the URL. Chunking algorithms therefore need help to attach the page metadata to each chunk, and a recognizable delimiter lets them detect page boundaries, as sketched below.
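For illustration only (a minimal sketch, not the loader's API), the delimiter makes it possible to recover the page number for any position in the single concatenated Document:
delimiter = "\n-------THIS IS A CUSTOM END OF PAGE-------\n"
text = docs[0].page_content

# Split the single-mode text back into pages; a chunking algorithm could
# use the same technique to tag each chunk with the page it starts on.
for page_number, page_text in enumerate(text.split(delimiter)):
    print(page_number, repr(page_text[:60]))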
Extract images from the PDF
In LangChain, the OCR process used by the parsers involves asking the parser for the text of a page, then retrieving the page's images and applying OCR to them.
In the previous implementation, the text extracted from images was appended to the end of the page text. In a RAG context, this means that a paragraph spread across two pages in the original document would be cut in half by the OCR process, which placed the image text in between, degrading the RAG model's performance.
To avoid this, we modified the strategy for injecting OCR results: the text is now inserted between the second-to-last and last paragraphs of text (\n\n or \n) of the page.
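A minimal sketch of the idea (not the library's actual implementation), using \n\n as the paragraph separator:
def insert_image_text(page_text: str, image_text: str) -> str:
    """Insert OCR output before the last paragraph of a page, so a
    paragraph that continues on the next page stays adjacent to it."""
    head, sep, last_paragraph = page_text.rpartition("\n\n")
    if not sep:
        # Single paragraph: fall back to appending at the end.
        return page_text + "\n\n" + image_text
    return head + "\n\n" + image_text + "\n\n" + last_paragraph


page = "First paragraph.\n\nA paragraph that continues\non the next page"
print(insert_image_text(page, "<text extracted from an image>"))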
Extract images from the PDF in html format (can also be markdown or text) with RapidOCR:
%pip install -qU rapidocr-onnxruntime
Note: you may need to restart the kernel to use updated packages.
from langchain_community.document_loaders.parsers.pdf import (
convert_images_to_text_with_rapidocr,
)
loader = PyMuPDFLoader(
"./example_data/layout-parser-paper.pdf",
mode="page",
extract_images=True,
images_to_text=convert_images_to_text_with_rapidocr(format="html"),
)
docs = loader.load()
print(docs[5].page_content)
6
Z. Shen et al.
Fig. 2: The relationship between the three types of layout data structures.
Coordinate supports three kinds of variation; TextBlock consists of the co-
ordinate information and extra features like block text, types, and reading orders;
a Layout object is a list of all possible layout elements, including other Layout
objects. They all support the same set of transformation and operation APIs for
maximum flexibility.
Shown in Table 1, LayoutParser currently hosts 9 pre-trained models trained
on 5 different datasets. Description of the training dataset is provided alongside
with the trained models such that users can quickly identify the most suitable
models for their tasks. Additionally, when such a model is not readily available,
LayoutParser also supports training customized layout models and community
sharing of the models (detailed in Section 3.5).
3.2
Layout Data Structures
A critical feature of LayoutParser is the implementation of a series of data
structures and operations that can be used to efficiently process and manipulate
the layout elements. In document image analysis pipelines, various post-processing
on the layout analysis model outputs is usually required to obtain the final
outputs. Traditionally, this requires exporting DL model outputs and then loading
the results into other pipelines. All model outputs from LayoutParser will be
stored in carefully engineered data types optimized for further processing, which
makes it possible to build an end-to-end document digitization pipeline within
LayoutParser. There are three key components in the data structure, namely
the Coordinate system, the TextBlock, and the Layout. They provide different
levels of abstraction for the layout data, and a set of APIs are supported for
transformations or operations on these classes.
<img alt="Coordinate
(x1, y1)
(X1, y1)
(x2,y2)
APIS
x-interval
tart
end
Quadrilateral
operation
Rectangle
y-interval
ena
(x2, y2)
(x4, y4)
(x3, y3)
and
textblock
Coordinate
transformation
+
Block
Block
Reading
Extra features
Text
Type
Order
coordinatel
textblockl
layout
same
textblock2
layoutl
The
A list of the layout elements" />
RapidOCR is designed to work with Chinese and English, not other languages. Since the implementation uses a function rather than a method, its behavior cannot be modified. We have therefore updated the various parsers to allow selecting another algorithm for analyzing images: it is now also possible to use Tesseract, or to invoke a multimodal LLM to get a description of the image.
Extract images from the PDF in text format (can also be html or markdown) with Tesseract:
%pip install -qU pytesseract
Note: you may need to restart the kernel to use updated packages.
from langchain_community.document_loaders.parsers.pdf import (
convert_images_to_text_with_tesseract,
)
loader = PyMuPDFLoader(
"./example_data/layout-parser-paper.pdf",
mode="page",
extract_images=True,
images_to_text=convert_images_to_text_with_tesseract(format="text"),
)
docs = loader.load()
print(docs[5].page_content)
6
Z. Shen et al.
Fig. 2: The relationship between the three types of layout data structures.
Coordinate supports three kinds of variation; TextBlock consists of the co-
ordinate information and extra features like block text, types, and reading orders;
a Layout object is a list of all possible layout elements, including other Layout
objects. They all support the same set of transformation and operation APIs for
maximum flexibility.
Shown in Table 1, LayoutParser currently hosts 9 pre-trained models trained
on 5 different datasets. Description of the training dataset is provided alongside
with the trained models such that users can quickly identify the most suitable
models for their tasks. Additionally, when such a model is not readily available,
LayoutParser also supports training customized layout models and community
sharing of the models (detailed in Section 3.5).
3.2
Layout Data Structures
A critical feature of LayoutParser is the implementation of a series of data
structures and operations that can be used to efficiently process and manipulate
the layout elements. In document image analysis pipelines, various post-processing
on the layout analysis model outputs is usually required to obtain the final
outputs. Traditionally, this requires exporting DL model outputs and then loading
the results into other pipelines. All model outputs from LayoutParser will be
stored in carefully engineered data types optimized for further processing, which
makes it possible to build an end-to-end document digitization pipeline within
LayoutParser. There are three key components in the data structure, namely
the Coordinate system, the TextBlock, and the Layout. They provide different
levels of abstraction for the layout data, and a set of APIs are supported for
transformations or operations on these classes.
Coordinate
textblock
x-interval
JeAsaqul-A
Coordinate
+
Extra features
Rectangle
Quadrilateral
Block
Text
Block
Type
Reading
Order
layout
[ coordinatel textblock1 |
'
“y textblock2 , layout1 ]
A list of the layout elements
The same transformation and operation APIs
Extract images from the PDF in markdown format (can also be html or text) with a vision model:
%pip install -qU langchain_openai
Note: you may need to restart the kernel to use updated packages.
import os
from dotenv import load_dotenv
# Load environment variables from the .env file
load_dotenv()
True
from getpass import getpass
if not os.environ.get("OPENAI_API_KEY"):
os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key =")
from langchain_community.document_loaders.parsers.pdf import (
convert_images_to_description,
)
from langchain_openai import ChatOpenAI
loader = PyMuPDFLoader(
"./example_data/layout-parser-paper.pdf",
mode="page",
extract_images=True,
images_to_text=convert_images_to_description(
model=ChatOpenAI(model="gpt-4o", max_tokens=1024), format="markdown"
),
)
docs = loader.load()
print(docs[5].page_content)
6
Z. Shen et al.
Fig. 2: The relationship between the three types of layout data structures.
Coordinate supports three kinds of variation; TextBlock consists of the co-
ordinate information and extra features like block text, types, and reading orders;
a Layout object is a list of all possible layout elements, including other Layout
objects. They all support the same set of transformation and operation APIs for
maximum flexibility.
Shown in Table 1, LayoutParser currently hosts 9 pre-trained models trained
on 5 different datasets. Description of the training dataset is provided alongside
with the trained models such that users can quickly identify the most suitable
models for their tasks. Additionally, when such a model is not readily available,
LayoutParser also supports training customized layout models and community
sharing of the models (detailed in Section 3.5).
3.2
Layout Data Structures
A critical feature of LayoutParser is the implementation of a series of data
structures and operations that can be used to efficiently process and manipulate
the layout elements. In document image analysis pipelines, various post-processing
on the layout analysis model outputs is usually required to obtain the final
outputs. Traditionally, this requires exporting DL model outputs and then loading
the results into other pipelines. All model outputs from LayoutParser will be
stored in carefully engineered data types optimized for further processing, which
makes it possible to build an end-to-end document digitization pipeline within
LayoutParser. There are three key components in the data structure, namely
the Coordinate system, the TextBlock, and the Layout. They provide different
levels of abstraction for the layout data, and a set of APIs are supported for
transformations or operations on these classes.
![Summary: The image illustrates a conceptual model for handling layout elements in a system, featuring coordinates and text blocks. It shows diagrams of a rectangle and quadrilateral with labeled points, intervals, and additional features like block text, type, and reading order. The layout consists of coordinates and text blocks, emphasizing transformation and operation APIs.
Extracted Text:
- Coordinate
- x-interval
- y-interval
- start
- end
- (x1, y1)
- (x2, y2)
- (x3, y3)
- (x4, y4)
- Rectangle
- Quadrilateral
- textblock
- Coordinate
- Extra features
- Block Text
- Block Type
- Reading Order
- layout
- [ coordinate1, textblock1, ..., textblock2, layout1 \\]
- A list of the layout elements
- The same transformation and operation APIs](.)
Extract tables from the PDF in markdown format (can also be html or csv):
loader = PyMuPDFLoader(
"./example_data/layout-parser-paper.pdf",
mode="page",
extract_tables="markdown",
)
docs = loader.load()
print(docs[4].page_content)
LayoutParser: A Unified Toolkit for DL-Based DIA
5
Table 1: Current layout detection models in the LayoutParser model zoo
Dataset
Base Model1 Large Model
Notes
PubLayNet [38]
F / M
M
Layouts of modern scientific documents
PRImA [3]
M
-
Layouts of scanned modern magazines and scientific reports
Newspaper [17]
F
-
Layouts of scanned US newspapers from the 20th century
TableBank [18]
F
F
Table region on modern scientific and business document
HJDataset [31]
F / M
-
Layouts of history Japanese documents
1 For each dataset, we train several models of different sizes for different needs (the trade-offbetween accuracy
vs. computational cost). For “base model” and “large model”, we refer to using the ResNet 50 or ResNet 101
backbones [13], respectively. One can train models of different architectures, like Faster R-CNN [28] (F) and Mask
R-CNN [12] (M). For example, an F in the Large Model column indicates it has a Faster R-CNN model trained
using the ResNet 101 backbone. The platform is maintained and a number of additions will be made to the model
zoo in coming months.
layout data structures, which are optimized for efficiency and versatility. 3) When
necessary, users can employ existing or customized OCR models via the unified
API provided in the OCR module. 4) LayoutParser comes with a set of utility
functions for the visualization and storage of the layout data. 5) LayoutParser
is also highly customizable, via its integration with functions for layout data
annotation and model training. We now provide detailed descriptions for each
component.
3.1
Layout Detection Models
In LayoutParser, a layout model takes a document image as an input and
generates a list of rectangular boxes for the target content regions. Different
from traditional methods, it relies on deep convolutional neural networks rather
than manually curated rules to identify content regions. It is formulated as an
object detection problem and state-of-the-art models like Faster R-CNN [28] and
Mask R-CNN [12] are used. This yields prediction results of high accuracy and
makes it possible to build a concise, generalized interface for layout detection.
LayoutParser, built upon Detectron2 [35], provides a minimal API that can
perform layout detection with only four lines of code in Python:
1 import
layoutparser as lp
2 image = cv2.imread("image_file") # load
images
3 model = lp. Detectron2LayoutModel (
4
"lp:// PubLayNet/ faster_rcnn_R_50_FPN_3x /config")
5 layout = model.detect(image)
LayoutParser provides a wealth of pre-trained model weights using various
datasets covering different languages, time periods, and document types. Due to
domain shift [7], the prediction performance can notably drop when models are ap-
plied to target samples that are significantly different from the training dataset. As
document structures and layouts vary greatly in different domains, it is important
to select models trained on a dataset similar to the test samples. A semantic syntax
is used for initializing the model weights in LayoutParser, using both the dataset
name and model name lp://<dataset-name>/<model-architecture-name>.
|Dataset|Base Model1|Large Model|Notes|
|---|---|---|---|
|PubLayNet [38] PRImA [3] Newspaper [17] TableBank [18] HJDataset [31]|F / M M F F F / M|M &#45; &#45; F &#45;|Layouts of modern scientific documents Layouts of scanned modern magazines and scientific reports Layouts of scanned US newspapers from the 20th century Table region on modern scientific and business document Layouts of history Japanese documents|
API reference
For detailed documentation of all PyMuPDFLoader features and configurations, head to the API reference: https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyMuPDFLoader.html
Related
- Document loader conceptual guide
- Document loader how-to guides