wiki:HabitSystemFinal

D1.3 The final HaBiT system: Tested and evaluated system demonstrator

The HaBiT System is accessible at http://corpora.fi.muni.cz/habit/. The following corpus building tools were used to build eight corpora in Amharic, Oromo, Somali, Tigrinya, Norwegian and Czech:

Corpus building tools

Software

The following software was developed or adapted for the HaBiT System:

  • WebBootCaT for HaBiT, a search engine querying and document downloading tool. It queries a search engine with user-specified words or phrases, obtains the URLs of relevant documents found by the search engine and downloads the documents. This tool is an implementation of Baroni, Kilgarriff, Pomikálek, Rychlý. "WebBootCaT: instant domain-specific corpora to support human translators." In Proceedings of EAMT, pp. 247-252. 2006.
  • SpiderLing, a web spider for linguistics, is software for obtaining text from the web for building text corpora. Many documents on the web contain only material not suitable for text corpora, such as site navigation, lists of links, lists of products, and other kinds of text not comprised of full sentences; in fact, such pages represent the vast majority of the web. An unrestricted web crawl therefore downloads a lot of data that is filtered out during post-processing, which makes corpus collection inefficient. SpiderLing avoids this by estimating the yield rate of each web domain (the ratio of clean text obtained to data downloaded) and focusing the crawl on domains that yield the most text.
  • Chared is a tool for detecting the character encoding of a text in a known language. The language of the text has to be specified as an input parameter so that the corresponding language model can be used. The package contains models for a wide range of languages. In general, it should be more accurate than character encoding detection algorithms with no language constraints.
  • Justext is a tool for removing boilerplate content, such as navigation links, headers, and footers, from HTML pages. It is designed to preserve mainly text containing full sentences and is therefore well suited for creating linguistic resources such as web corpora. An HTML page is split into paragraphs and a context-sensitive heuristic algorithm is employed to separate content from boilerplate. A minimal usage sketch follows this list.
  • Onion is a tool for removing duplicate parts from large collections of texts. One instance of each duplicate part is kept; the others are marked or removed. The tool allows removing both identical and similar documents, on any level (document, paragraph, or sentence, if present in the data). The de-duplication algorithm is based on comparing hashes of n-grams of words; a toy sketch of this idea follows the list.
  • Language identification using character trigrams (licensed under the PSF licence). The general approach is sketched after this list.
  • Unitok is a universal text tokeniser with specific settings for many languages. It can turn plain text into a sequence of newline-separated tokens (vertical format), while preserving XML-like tags containing metadata.
  • geez2sera is a tool for converting (transliterating) the Geez script into SERA, the System for Ethiopic Representation in ASCII. It is based on the L3 Python library (GPL licence). A toy transliteration example follows this list.
  • AmTag is a tagger for Amharic. It is TreeTagger with an Amharic model.
  • Part of speech taggers for Oromo, Somali and Tigrinya with respective models applying the Universal POS tagset.
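
The following is a minimal usage sketch of the justext Python package as published on PyPI. The URL is illustrative, and the bundled English stoplist stands in for the project wordlists mentioned below; it is not the HaBiT configuration itself.

{{{#!python
import requests
import justext

# Fetch a page and keep only paragraphs classified as good content;
# everything judged to be boilerplate (navigation, headers, footers)
# is skipped.
response = requests.get("http://example.com/article.html")  # illustrative URL
paragraphs = justext.justext(response.content, justext.get_stoplist("English"))
for paragraph in paragraphs:
    if not paragraph.is_boilerplate:
        print(paragraph.text)
}}}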
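
Onion itself is implemented in C++; the sketch below only illustrates the underlying idea of de-duplication by hashed word n-grams. The 5-gram size and the 50 % threshold are illustrative, not Onion's actual defaults.

{{{#!python
import hashlib

def ngram_hashes(text, n=5):
    """Hash every n-gram of words; matching hashes indicate shared passages."""
    words = text.split()
    return {
        hashlib.md5(" ".join(words[i:i + n]).encode("utf-8")).hexdigest()
        for i in range(max(len(words) - n + 1, 1))
    }

def duplicate_ratio(doc, seen, n=5):
    """Fraction of the document's n-grams already seen in the collection."""
    hashes = ngram_hashes(doc, n)
    ratio = len(hashes & seen) / len(hashes)
    seen |= hashes  # remember this document's n-grams for later comparisons
    return ratio

seen = set()
for doc in [
    "the quick brown fox jumps over the lazy dog twice today",
    "the quick brown fox jumps over the lazy dog twice yesterday",
]:
    label = "near-duplicate" if duplicate_ratio(doc, seen) > 0.5 else "kept"
    print(label, "-", doc)
}}}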
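
The sketch below shows the general character-trigram approach to language identification (the classic out-of-place rank measure); it does not reproduce the interface of the PSF-licensed tool itself, and the short training snippets stand in for the per-language samples described in the next section.

{{{#!python
from collections import Counter

def trigram_profile(text, top=300):
    """Rank the most frequent character trigrams of a text sample."""
    counts = Counter(text[i:i + 3] for i in range(len(text) - 2))
    return {t: rank for rank, (t, _) in enumerate(counts.most_common(top))}

def out_of_place(unknown, model, top=300):
    """Sum of rank differences; trigrams absent from the model get the maximum penalty."""
    return sum(abs(rank - model.get(t, top)) for t, rank in unknown.items())

# In practice the profiles are built from much larger language samples.
models = {
    "english": trigram_profile("the quick brown fox jumps over the lazy dog"),
    "czech": trigram_profile("příliš žluťoučký kůň úpěl ďábelské ódy"),
}
unknown = trigram_profile("a lazy dog sleeps over there")
print(min(models, key=lambda lang: out_of_place(unknown, models[lang])))  # english
}}}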
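
As a toy illustration of the Geez-to-SERA transliteration, the mapping below covers just three Ethiopic syllables, enough to transliterate one word; the real tool maps the whole Ethiopic block and handles all vowel orders systematically.

{{{#!python
# Each Ethiopic syllable maps to a Latin-ASCII SERA sequence.
SERA = {
    "\u1230": "se",  # ሰ, first order
    "\u120B": "la",  # ላ, fourth order
    "\u121D": "m",   # ም, sixth order: bare consonant
}

def to_sera(text):
    return "".join(SERA.get(ch, ch) for ch in text)

print(to_sera("ሰላም"))  # -> selam ("hello/peace")
}}}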

Language dependent models and configuration files

The following language dependent models and configuration files were produced for each language within the project:

  • Language sample for language identification using a character trigram model.
  • Wordlist for language identification and boilerplate removal using Justext.
  • Byte trigram model for character encoding detection using Chared.
  • Tokenisation rules for Unitok (a simplified sketch of such rules follows this list).
  • SpiderLing crawler configuration.
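
Unitok's per-language settings are essentially ordered token rules with language-specific exceptions such as abbreviation lists. The sketch below is a simplified stand-alone illustration of producing the vertical format, not the project's actual Unitok rules; the regular expressions and the abbreviation list are illustrative.

{{{#!python
import re

ABBREVIATIONS = {"Mr.", "Dr.", "etc."}  # a real list is language-specific

TOKEN_RE = re.compile(r"""
    <[^>]+>             # XML-like metadata tag, preserved whole
  | \w+\.               # word followed by a period (abbreviation candidate)
  | \d+(?:[.,]\d+)*     # number with optional decimal/thousands separators
  | \w+                 # plain word
  | [^\w\s]             # any other single non-space character
""", re.UNICODE | re.VERBOSE)

def to_vertical(text):
    """Emit one token per line (the 'vertical' format), keeping tags intact."""
    tokens = []
    for match in TOKEN_RE.finditer(text):
        tok = match.group(0)
        # Split the period off unless the token is a known abbreviation.
        if tok.endswith(".") and len(tok) > 1 and tok not in ABBREVIATIONS:
            tokens.extend([tok[:-1], "."])
        else:
            tokens.append(tok)
    return "\n".join(tokens)

print(to_vertical('<doc id="1">Mr. Smith paid 1,250.50 dollars.</doc>'))
}}}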

Notes:

  • Separate Bokmål and Nynorsk models were created to distinguish these written standards of Norwegian.
  • Arabic, English, Slovak and Danish models were created to remove texts not in the target languages from corpus source documents.
  • See D1.2.3 The HaBiT system v3 for files with all mentioned language dependent models and configuration files.

Corpora

All corpora in the HaBiT System can be accessed at the public address http://corpora.fi.muni.cz/habit/. Examples of usage of the HaBiT corpora follow.

Amharic Web Corpus

Browse the Amharic Web Corpus

  • Corpus size: 20 million tokens.
  • Crawled by SpiderLing in August 2013, October 2015 and January 2016. Boilerplate-cleaned, de-duplicated.
  • Tagged by TreeTagger trained on Amharic WiC.
  • Corpus deliverable/technical report

TODO Examples of concordance, sketches, wordlist

Oromo Web Corpus

Browse the Oromo Web Corpus

  • Corpus size: 5.1 million tokens.
  • Crawled by SpiderLing in January 2016. Boilerplate-cleaned, de-duplicated.
  • Tagged with the Universal POS tagset.
  • Corpus deliverable/technical report

TODO Examples of concordance, sketches, wordlist

Somali Web Corpus

Browse the Somali Web Corpus

  • Corpus size: 80 million tokens.
  • Crawled by SpiderLing in January 2016. Boilerplate-cleaned, de-duplicated.
  • Tagged with the Universal POS tagset.
  • Corpus deliverable/technical report

TODO Examples of concordance, sketches, wordlist

Tigrinya Web Corpus

Browse the Tigrinya Web Corpus

  • Corpus size: 2.5 million tokens.
  • Crawled by SpiderLing in January 2016. Boilerplate-cleaned, de-duplicated.
  • Tagged with the Universal POS tagset.
  • Corpus deliverable/technical report

TODO Examples of concordance, sketches, wordlist

Norwegian Bokmål Web Corpus

Browse the Norwegian Bokmål Web Corpus

  • Corpus size: 1.4 billion tokens.
  • Crawled by SpiderLing in February 2015. Boilerplate-cleaned, de-duplicated. Separated from Norwegian Nynorsk.
  • Tagged by the Oslo-Bergen Tagger.

TODO Examples of concordance, sketches, wordlist

Norwegian Nynorsk Web Corpus

Browse the Norwegian Nynorsk Web Corpus

  • Corpus size: 1.4 billion tokens.
  • Crawled by SpiderLing in February 2015. Boilerplate-cleaned, de-duplicated. Separated from Norwegian Bokmål.
  • Tagged by the Oslo-Bergen Tagger.

TODO Examples of concordance, sketches, wordlist

Czech Web Corpus

Browse the Czech Web Corpus

  • Corpus size: 9.3 billion tokens.
  • Crawled by SpiderLing in November and December 2015 and from October to December 2016. Boilerplate-cleaned, de-duplicated.
  • Tagged by the Czech POS tagger Majka.

TODO Examples of concordance, sketches, wordlist

Czech-Norwegian/Norwegian-Czech Parallel Corpus

Browse the Czech-Norwegian Parallel Corpus, or the Norwegian-Czech Parallel Corpus

TODO Examples of concordance, sketches, wordlist