Experimenting with Transkribus

Over the last few days, I decided to look into Transkribus after our brief discussion of the software in class. The idea of training an artificial intelligence to read a person’s handwriting sounds straight out of science fiction. In theory, it would drastically shorten the time historians spend combing through documents and leave more time to analyze the material. Additionally, in the summer of 2018 I was paid to transcribe various documents from Nova Scotia’s Mi’kmaq Holdings Resource Guide, so I was curious to see whether this software could have made part of my summer job obsolete.

The first page of an 1847 Report to Indian Affairs that I transcribed last year. Source: Commissioner of Indian Affairs Nova Scotia Archives MG 15 Vol. 4 No. 2

After downloading the software and its corresponding step-by-step guide, I began to experiment with its capabilities. Even without its Handwritten Text Recognition (HTR) features, Transkribus works well as a transcription tool. Once your file is uploaded, you can automatically segment the document into lines for traditional manual transcription, and when you are finished, you can export the document alongside its transcription.

Transkribus open with the 1847 Indian Affairs Report

The HTR feature is where things get more complicated. By default, Transkribus comes with five general-purpose HTR models (Dutch, Church Slavonic, German, Russian Church Slavonic, and English) that can be applied to any document. These models are fairly basic and are designed to handle a range of different handwriting styles. If you wish to train an HTR model on a specific individual’s handwriting, you need at least 15,000 manually transcribed words (printed text requires only 5,000).

Therefore, while it is clear that Transkribus could work wonders for a historian studying the diary or notes of a single individual, it would not have worked well for my summer job. Most of the documents I transcribed were one- to two-page petitions from a host of different people, each with a unique handwriting style. The document I tested Transkribus with, my longest at 18 pages, came nowhere near the 15,000-word requirement for training a dedicated HTR model. I tried the base English HTR model on the first few pages, and the results were less than stellar.

My manual transcription compared to the output of the base English HTR model. The red-underlined sections are what the system got wrong, the green-underlined sections are my corrections, and the white sections are what the software got right.

Despite its shortcomings with my particular workload, I see many advantages in using this software for future transcription projects. The overall experience was much easier than transcribing in Microsoft Word, as Transkribus solves most formatting issues with its automatic line-segmenting feature. Moreover, in some instances the base English HTR model correctly transcribed words that I could not decipher last year. A user could therefore save time by running the base English HTR model on a document first and then searching through the results to fix the errors. Such a strategy could work well in any crowdsourced transcription project, as it would lower the time commitment and difficulty while still offering the public a chance to engage with the material. The Confederation Debates, a crowdsourced project I volunteered with two years ago, used this strategy, and as a newcomer to the practice, I found it very easy to get involved.

It will be fascinating to watch this software grow over the next five to ten years. At some point, I am sure a user will only have to transcribe a few lines or pages before an AI can accurately finish the job. Will this development affect the nature of crowdsourced projects? Will museums, libraries, and other institutions that currently run crowdsourced transcription projects trade the public engagement those projects offer for the speed of an artificial intelligence? It is hard to say for certain.

2 thoughts on “Experimenting with Transkribus”


  1. Hey Tom,

    Thanks for doing this! I really wanted to try out Transkribus this week, but I focused on Omeka and didn’t have the time, so it’s great to read your post and get an idea of how the software works and what it is capable of. It makes sense that training an HTR model for someone’s handwriting would take a significant number of words, but 15,000 seems impractical for many types of documents. Having said that, even though it is imperfect for smaller works, if it can help in even the smallest way by identifying difficult words, it seems worth using. Maybe, in the not-so-distant future, I could see a compromise where museums, archives, libraries, and the like begin using AI to transcribe and use crowdsourcing to identify and correct errors.


    1. Hey Erik,
      I agree with your assessment. Even with entirely accurate AI transcriptions, museums and other institutions would likely still want to foster an online community around their material, and crowdsourced work seems to be one of the best ways to do that.

