Tesseract.js is a JavaScript library that gets words in almost any language out of images. (Demo)
Tesseract.js works with script tags, webpack/Browserify, and Node.js. After you install it, using it is as simple as
Tesseract.recognize(myImage)
.progress(function (p) { console.log('progress', p) })
.then(function (result) { console.log('result', result) })
Check out the docs for a full treatment of the API.
Tesseract.js wraps an Emscripten port of the Tesseract OCR Engine.
Tesseract.js works with a <script> tag via local copy or CDN, with webpack and Browserify via npm, and on Node.js via npm. Check out the docs for a full treatment of the API.
You can simply include Tesseract.js from a CDN like this:
<script src='https://cdn.jsdelivr.net/gh/naptha/tesseract.js@v1.0.14/dist/tesseract.min.js'></script>
After including your scripts, the Tesseract variable will be defined globally!
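For example, once the script above has loaded you can call the global directly. This is only a sketch; 'sample.png' is a hypothetical placeholder for any image-like value described below:
// 'sample.png' stands in for any image-like value (see the ImageLike section)
Tesseract.recognize('sample.png')
.progress(function (p) { console.log('progress', p) })
.then(function (result) { console.log(result.text) })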
First, install the package:
> yarn add tesseract.js
or
> npm install tesseract.js --save
Note: Tesseract.js currently requires Node.js v6.8.0 or higher.
Then require or import it:
var Tesseract = require('tesseract.js')
or
import Tesseract from 'tesseract.js'
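A minimal Node.js sketch, assuming a local image exists at the (hypothetical) path below:
var Tesseract = require('tesseract.js')

// './images/receipt.png' is a hypothetical local image path
Tesseract.recognize('./images/receipt.png', { lang: 'eng' })
.progress(function (p) { console.log('progress', p) })
.then(function (result) { console.log(result.text) })
.catch(function (err) { console.error(err) })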
Tesseract.recognize(image: ImageLike[, options]) -> TesseractJob
Figures out what words are in image, where the words are in image, etc.
Note: image should be sufficiently high resolution. Often, the same image will get much better results if you upscale it before calling recognize.
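One way to upscale in the browser, sketched below, is to draw the image onto a larger canvas before recognition (the element id and the 2x factor are illustrative assumptions):
var img = document.getElementById('small-photo') // hypothetical <img> element
var canvas = document.createElement('canvas')
canvas.width = img.naturalWidth * 2
canvas.height = img.naturalHeight * 2
canvas.getContext('2d').drawImage(img, 0, 0, canvas.width, canvas.height)

Tesseract.recognize(canvas) // a canvas element is a valid image-like value
.then(function (result) { console.log(result.text) })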
image is any ImageLike object. options is either absent (in which case it is interpreted as 'eng'), a string specifying a language short code from the language list, or a flat JSON object that may:
- include properties that override some subset of the default tesseract parameters
- include a lang property with a value from the list of lang parameters
Returns a TesseractJob whose then, progress, catch and finally methods can be used to act on the result.
Tesseract.recognize(myImage)
.then(function(result){
console.log(result)
})
// if we know our image is of Spanish words without the letter 'e':
Tesseract.recognize(myImage, {
lang: 'spa',
tessedit_char_blacklist: 'e'
})
.then(function(result){
console.log(result)
})
Tesseract.detect(image: ImageLike) -> TesseractJob
Figures out what script (e.g. 'Latin', 'Chinese') the words in image are written in.
image is any ImageLike object.
Returns a TesseractJob whose then, progress, catch and finally methods can be used to act on the result of the script detection.
Tesseract.detect(myImage)
.then(function(result){
console.log(result)
})
The main Tesseract.js functions take an image parameter, which should be something that is like an image. What's considered "image-like" differs depending on whether it is being run from the browser or through Node.js.
In the browser, an image can be:
- an img, video, or canvas element
- a CanvasRenderingContext2D (returned by canvas.getContext('2d'))
- a File object (from a file <input> or drag-drop event)
- a Blob object
- an ImageData instance (an object containing width, height and data properties)
- a path or URL to an accessible image (the image must either be hosted locally or accessible by CORS)
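For example, a sketch of recognizing a File picked through a file input (the element id is a hypothetical assumption):
document.getElementById('photo-input').addEventListener('change', function (e) {
  var file = e.target.files[0] // a File object, which is image-like
  Tesseract.recognize(file)
  .progress(function (p) { console.log('progress', p) })
  .then(function (result) { console.log(result.text) })
})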
In Node.js, an image can be:
- a path to a local image
- a Buffer instance containing a PNG or JPEG image
- an ImageData instance (an object containing width, height and data properties)
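A sketch of the Buffer case, assuming a PNG exists at the (hypothetical) path below:
var fs = require('fs')
var Tesseract = require('tesseract.js')

var buffer = fs.readFileSync('./images/scan.png') // a Buffer containing a PNG
Tesseract.recognize(buffer)
.then(function (result) { console.log(result.text) })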
A TesseractJob is an object returned by a call to recognize or detect. It's inspired by the ES6 Promise interface and provides then and catch methods. It also provides a finally method, which will be fired regardless of the job's fate. One important difference is that these methods return the job itself (to enable chaining) rather than a new Promise.
Typical use is:
Tesseract.recognize(myImage)
.progress(message => console.log(message))
.catch(err => console.error(err))
.then(result => console.log(result))
.finally(resultOrError => console.log(resultOrError))
Which is equivalent to:
var job1 = Tesseract.recognize(myImage);
job1.progress(message => console.log(message));
job1.catch(err => console.error(err));
job1.then(result => console.log(result));
job1.finally(resultOrError => console.log(resultOrError));
TesseractJob.progress(callback: function) -> TesseractJob
Sets callback as the function that will be called every time the job progresses.
callback is a function with the signature callback(progress) where progress is a JSON object.
For example:
Tesseract.recognize(myImage)
.progress(function(message){console.log('progress is: ', message)})
The console will show something like:
progress is: {loaded_lang_model: "eng", from_cache: true}
progress is: {initialized_with_lang: "eng"}
progress is: {set_variable: Object}
progress is: {set_variable: Object}
progress is: {recognized: 0}
progress is: {recognized: 0.3}
progress is: {recognized: 0.6}
progress is: {recognized: 0.9}
progress is: {recognized: 1}
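For instance, a sketch that turns the recognized fraction shown above into a progress bar width (the element id is a hypothetical assumption):
var bar = document.getElementById('ocr-progress')
Tesseract.recognize(myImage)
.progress(function (message) {
  // 'recognized' goes from 0 to 1, as in the log above
  if (typeof message.recognized === 'number') {
    bar.style.width = Math.round(message.recognized * 100) + '%'
  }
})
.then(function (result) { console.log(result.text) })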
TesseractJob.then(callback: function) -> TesseractJob
Sets callback as the function that will be called if and when the job successfully completes.
callback is a function with the signature callback(result) where result is a JSON object.
For example:
Tesseract.recognize(myImage)
.then(function(result){console.log('result is: ', result)})
The console will show something like:
result is: {
blocks: Array[1]
confidence: 87
html: "<div class='ocr_page' id='page_1' ..."
lines: Array[3]
oem: "DEFAULT"
paragraphs: Array[1]
psm: "SINGLE_BLOCK"
symbols: Array[33]
text: "Hello World↵from beyond↵the Cosmic Void↵↵"
version: "3.04.00"
words: Array[7]
}
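A small sketch using only the fields shown above (text, confidence, and html):
Tesseract.recognize(myImage)
.then(function (result) {
  console.log('confidence:', result.confidence)
  console.log('text:', result.text)
  document.body.innerHTML += result.html // the ocr_page markup from the sample result
})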
TesseractJob.catch(callback: function) -> TesseractJob
Sets callback as the function that will be called if the job fails.
callback is a function with the signature callback(error) where error is a JSON object.
TesseractJob.finally(callback: function) -> TesseractJob
Sets callback as the function that will be called regardless of whether the job fails or succeeds.
callback is a function with the signature callback(resultOrError) where resultOrError is a JSON object.
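For example, a sketch that hides a loading indicator either way (the element id is a hypothetical assumption):
var spinner = document.getElementById('spinner')
Tesseract.recognize(myImage)
.then(function (result) { console.log(result.text) })
.catch(function (err) { console.error(err) })
.finally(function () { spinner.style.display = 'none' })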
In the browser, tesseract.js simply provides the API layer. Internally, it opens a WebWorker to handle requests. That worker itself loads code from the Emscripten-built tesseract.js-core, which is hosted on a CDN. Then it dynamically loads language files hosted on another CDN.
Because of this we recommend loading tesseract.js from a CDN. But if you really need to have all your files local, you can use the Tesseract.create function, which allows you to specify custom paths for workers, languages, and core.
window.Tesseract = Tesseract.create({
workerPath: '/path/to/worker.js',
langPath: 'https://tessdata.projectnaptha.com/3.02/',
corePath: 'https://cdn.jsdelivr.net/gh/naptha/tesseract.js-core@0.1.0/index.js',
})
corePath: A string specifying the location of the tesseract.js-core library, with default value 'https://cdn.jsdelivr.net/gh/naptha/tesseract.js-core@0.1.0/index.js'. Set this string before calling Tesseract.recognize and Tesseract.detect if you want Tesseract.js to use a different file.
workerPath: A string specifying the location of the worker.js file. Set this string before calling Tesseract.recognize and Tesseract.detect if you want Tesseract.js to use a different file.
langPath: A string specifying the location of the tesseract language files, with default value 'https://cdn.jsdelivr.net/gh/naptha/tessdata@gh-pages/3.02/'. Language file URLs are calculated according to the formula langPath + langCode + '.traineddata.gz'. Set this string before calling Tesseract.recognize and Tesseract.detect if you want Tesseract.js to use different language files.
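Putting the three together, a sketch of a fully local setup, assuming you have copied the worker, core, and language files to the (hypothetical) paths below:
window.Tesseract = Tesseract.create({
  workerPath: '/js/worker.js',
  langPath: '/tessdata/', // resolves to e.g. /tessdata/eng.traineddata.gz
  corePath: '/js/tesseract.js-core/index.js'
})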
To run a development copy of tesseract.js, first clone this repo.
> git clone https://github.com/naptha/tesseract.js.git
Then, cd tesseract.js && npm install && npm start
> cd tesseract.js
> npm install && npm start
... a bunch of npm stuff ...
Starting up http-server, serving ./
Available on:
http://127.0.0.1:7355
http://[your ip]:7355
Then open http://localhost:7355/examples/file-input/demo.html in your favorite browser. The devServer automatically rebuilds tesseract.js and tesseract.worker.js when you change files in the src folder.
After you've cloned the repo and run npm install as described in the Development section, you can build static library files in the dist folder with
> npm run build
Thanks :)