How to extract text with OCR from a PDF on Linux?
Solution 1
I have had success with the BSD-licensed Linux port of the Cuneiform OCR system.
No binary packages seem to be available, so you need to build it from source. Be sure to have the ImageMagick C++ libraries installed to have support for essentially any input image format (otherwise it will only accept BMP).
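For reference, building cuneiform-linux is a standard CMake out-of-source build; a minimal sketch (the archive name and version here are assumptions, so check the project page for the current release):

```shell
# Sketch of a typical CMake out-of-source build for cuneiform-linux.
# The archive name/version is an assumption; use the actual release.
tar xjf cuneiform-linux-1.1.0.tar.bz2
cd cuneiform-linux-1.1.0
mkdir builddir
cd builddir
cmake ..           # check the output to confirm ImageMagick support was found
make
sudo make install  # installs under /usr/local by default
```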
While it appears to be essentially undocumented apart from a brief README file, I've found the OCR results quite good. The nice thing about it is that it can output position information for the OCR text in hOCR format, so that it becomes possible to put the text back in the correct position in a hidden layer of a PDF file. This way you can create "searchable" PDFs from which you can copy text.
I have used hocr2pdf to recreate PDFs out of the original image-only PDFs and OCR results. Sadly, the program does not appear to support creating multi-page PDFs, so you might have to create a script to handle them:
#!/bin/bash
# Run OCR on a multi-page PDF file and create a new pdf with the
# extracted text in hidden layer. Requires cuneiform, hocr2pdf, gs.
# Usage: ./dwim.sh input.pdf output.pdf
set -e
input="$1"
output="$2"
tmpdir="$(mktemp -d)"
# extract images of the pages (note: resolution hard-coded)
gs -SDEVICE=tiffg4 -r300x300 -sOutputFile="$tmpdir/page-%04d.tiff" -dNOPAUSE -dBATCH -- "$input"
# OCR each page individually and convert into PDF
for page in "$tmpdir"/page-*.tiff
do
    base="${page%.tiff}"
    cuneiform -f hocr -o "$base.html" "$page"
    hocr2pdf -i "$page" -o "$base.pdf" < "$base.html"
done
# combine the pages into one PDF
gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile="$output" "$tmpdir"/page-*.pdf
rm -rf -- "$tmpdir"
Please note that the above script is very rudimentary. For example, it does not retain any PDF metadata.
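If the metadata matters, one way to carry it over afterwards is with pdftk (assuming pdftk is installed; the filenames here are illustrative):

```shell
# Copy PDF metadata (Title, Author, etc.) from the original file to the
# OCR'd output using pdftk. Filenames are illustrative.
pdftk input.pdf dump_data output metadata.txt
pdftk output.pdf update_info metadata.txt output output-with-metadata.pdf
```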
Solution 2
Google Docs will now use OCR to convert your uploaded image/PDF documents to text. I have had good success with it.
They are using the OCR system that is used for the gigantic Google Books project.
However, note that only PDFs up to 2 MB in size will be accepted for processing.
Update
1. To try it out, upload a <2 MB PDF to Google Docs from a web browser.
2. Right-click on the uploaded document and click "Open with Google Docs".
Google Docs will convert it to text and output it to a new file with the same name, but of Google Docs type, in the same folder.
Solution 3
The best and easiest way out there is to use pypdfocr; it doesn't change the PDF:
pypdfocr your_document.pdf
At the end you will have another your_document_ocr.pdf the way you want it, with searchable text. The app doesn't change the quality of the image. It increases the size of the file a bit by adding the overlay text.
Update, 3 November 2018:
pypdfocr has not been supported since 2016, and I noticed some problems due to it not being maintained. ocrmypdf does a similar job and can be used like this:
ocrmypdf in.pdf out.pdf
To install:
pip install ocrmypdf
or
apt install ocrmypdf
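ocrmypdf also takes a number of useful options; a few examples (flags as of recent versions, so verify against ocrmypdf --help):

```shell
# A few commonly useful ocrmypdf invocations (verify flags with --help):
ocrmypdf -l deu in.pdf out.pdf       # OCR in German instead of English
ocrmypdf --deskew in.pdf out.pdf     # straighten skewed scans before OCR
ocrmypdf --skip-text in.pdf out.pdf  # skip pages that already contain text
```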
Solution 4
Geza Kovacs has made an Ubuntu package that is basically a script using hocr2pdf
as Jukka suggested, but it makes things a bit faster to set up.
From Geza's Ubuntu forum post with details on the package...
Adding the repository and installing in Ubuntu
sudo add-apt-repository ppa:gezakovacs/pdfocr
sudo apt-get update
sudo apt-get install pdfocr
Running ocr on a file
pdfocr -i input.pdf -o output.pdf
GitHub repository for the code https://github.com/gkovacs/pdfocr/
Solution 5
PDFBeads works well for me. This thread “Convert Scanned Images to a Single PDF File” got me up and running. For a b&w book scan, you need to:
- Create an image for every page of the PDF; either of the gs examples above should work.
- Generate hOCR output for each page; I used tesseract (but note that Cuneiform seems to work better).
- Move the images and the hOCR files to a new folder; the filenames must correspond, so file001.tif needs file001.html, file002.tif needs file002.html, etc.
- In the new folder, run
pdfbeads * > ../Output.pdf
This will put the collated, OCR'd PDF in the parent directory.
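The steps above can be sketched as one script (a sketch only: the tesseract hOCR output extension varies by version, .html on older releases and .hocr on newer ones, so adjust accordingly):

```shell
#!/bin/bash
# Sketch of the pdfbeads workflow above; assumes gs, tesseract, and
# pdfbeads are installed. Older tesseract versions write hOCR as .html,
# newer ones as .hocr -- rename if pdfbeads doesn't pair them up.
set -e
input="$1"
out="$(pwd)/Output.pdf"
workdir="$(mktemp -d)"
# 1. create an image for every page (resolution hard-coded, as above)
gs -sDEVICE=tiffgray -r300x300 -sOutputFile="$workdir/file%03d.tif" \
   -dNOPAUSE -dBATCH -- "$input"
# 2. generate hOCR output for each page, named to match its image
for page in "$workdir"/file*.tif; do
    tesseract "$page" "${page%.tif}" hocr
done
# 3. run pdfbeads in the folder holding the matching image/hOCR pairs
( cd "$workdir" && pdfbeads * > "$out" )
rm -rf -- "$workdir"
```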
Updated on September 17, 2022

Comments
-
agentofuser over 1 year
How do I extract text from a PDF that wasn't built with an index? It's all text, but I can't search or select anything. I'm running Kubuntu, and Okular doesn't have this feature.
-
Gökhan Sever almost 13 years
Any idea how to improve this script to add a spell-checking stage to correct errors in the recognition step?
-
Jukka Matilainen almost 13 years
@Gökhan Sever, do you mean adding interactive spell-checking where the user is prompted for a replacement for misspelled/unknown words? I think you could do that by adding something like
aspell check --mode=html "$base.html"
in the script right after running cuneiform.
-
Pitto about 12 years
The answer is not really Ubuntu-specific, but I want to really thank you: BRILLIANT solution! :)
-
Keks Dose over 11 years
Small correction: the line for tesseract, at least for languages other than English (here e.g. German = deu), is: tesseract "$page" "$base" -l deu hocr
-
Admin over 11 years
As I had problems with not-so-accurate PDFs, I changed the device in gs from "tiffg4" to "tiffgray", and the result was very good:
gs -SDEVICE=tiffgray -r300x300 -sOutputFile="$tmpdir/page-%04d.tiff" -dNOPAUSE -dBATCH -- "$input"
-
fixer1234 over 9 years
I found ABBYY OCR to be pretty pitiful, one of the least capable programs I've tried. It might be adequate with a really clean image of standard-font text at typical body-text size, with no mixed fonts, mixed sizes, complex layout, graphics, lines, etc.
-
fixer1234 over 9 years
If what you need isn't covered in other answers here, the best thing to do is ask your own question. That will get it exposure to a lot of eyes.
-
Wikunia over 9 years
@GökhanSever I get this error:
Tesseract Open Source OCR Engine v3.03 with Leptonica OSD: Weak margin (0.00) for 571 blob text block, but using orientation anyway: 0 /usr/bin/pdf2text: line 23: /tmp/tmp.XksXutALLp/page-0001.html: No such file or directory
when I use your version. Any idea what I'm doing wrong?
-
fixer1234 about 9 years
This post states that the product can do it, which is a helpful hint that should be posted as a comment. It doesn't explain how to actually solve the problem, which is what answers should do. Can you expand your answer so that someone can see how to do the solution?
-
Scanner.js Receipt Invoice OCR about 9 years
Thanks @fixer1234, I've edited it to include the command.
-
Gaurav about 6 years
This was really helpful :) I uploaded a 50 MB file yesterday and it worked. Looks like they've increased the size limit.
-
David Milovich almost 6 years
@Wikunia change $base.html to $base.hocr