
D1.2.3 The HaBiT system v3: Third and pre-final integrated system prototype

Corpus building tools

The HaBiT system prototype is accessible at http://corpora.fi.muni.cz/habit/.

Software

  • WebBootCaT for HaBiT, a search-engine querying and document downloading tool. It queries a search engine with user-specified words or phrases, obtains the URLs of relevant documents found by the search engine, and downloads those documents (a sketch of this pipeline follows the list). This tool is an implementation of Baroni, Kilgarriff, Pomikálek and Rychlý, "WebBootCaT: instant domain-specific corpora to support human translators", in Proceedings of EAMT, pp. 247-252, 2006.
  • SpiderLing, a web spider for linguistics, is software for obtaining text from the web that is useful for building text corpora. Many documents on the web contain only material unsuitable for text corpora, such as site navigation, lists of links, lists of products, and other kinds of text not composed of full sentences; in fact, such pages represent the vast majority of the web. Unrestricted web crawls therefore download a lot of data that is later filtered out during post-processing, which makes web corpus collection inefficient. SpiderLing avoids this by steering the crawl towards sites that yield text-rich documents.
  • Chared is a tool for detecting the character encoding of a text in a known language. The language of the text has to be specified as an input parameter so that the corresponding language model can be used (a usage sketch follows the list). The package contains models for a wide range of languages. In general, it should be more accurate than character-encoding detection algorithms with no language constraints.
  • jusText is a tool for removing boilerplate content, such as navigation links, headers, and footers, from HTML pages. It is designed to preserve mainly text containing full sentences, and it is therefore well suited to creating linguistic resources such as web corpora. An HTML page is split into paragraphs, and a context-sensitive heuristic algorithm is employed to separate content from boilerplate (a usage sketch follows the list).
  • Onion is a tool for removing duplicate parts from large collections of texts. One instance of each duplicate part is kept; the others are marked or removed. The tool can remove both identical and similar documents, at any level present in the data (document, paragraph, or sentence). The de-duplication algorithm is based on comparing hashes of word n-grams (the idea is sketched after the list).
  • Unitok is a universal text tokeniser with specific settings for many languages. It turns plain text into a sequence of newline-separated tokens (the vertical format, illustrated after the list) while preserving XML-like tags containing metadata.
  • geez2sera is a tool for converting (transliterating) the Geez (Ethiopic) script into SERA, the System for Ethiopic Representation in ASCII, which uses Latin ASCII characters (a toy example follows the list). It is based on the L3 Python library (GPL licence).
  • AmTag is a part-of-speech tagger for Amharic; it consists of TreeTagger with an Amharic model.
  • A language identification tool based on trigrams (licensed under the PSF licence); the technique is sketched after this list.
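
The following minimal Python sketches illustrate how several of the tools above work or are used; they are illustrations under stated assumptions, not the project's actual implementations. First, the WebBootCaT-style pipeline: query a search engine with seed terms, collect the returned URLs, and download the documents. The search_engine_query() helper below is hypothetical; a real deployment would plug in a concrete search engine API there.

    import os
    import requests

    def search_engine_query(seed_terms):
        """Hypothetical helper: return URLs of documents matching the
        seed terms. A real implementation calls a search engine API."""
        raise NotImplementedError("plug in a search engine API")

    def bootcat(seed_terms, out_dir="corpus"):
        """Download every document the search engine returns for the seeds."""
        os.makedirs(out_dir, exist_ok=True)
        for i, url in enumerate(search_engine_query(seed_terms)):
            try:
                response = requests.get(url, timeout=10)
                response.raise_for_status()
            except requests.RequestException:
                continue  # skip unreachable or failing documents
            with open(os.path.join(out_dir, "doc%05d.html" % i), "wb") as fh:
                fh.write(response.content)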
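
Chared ships with a small Python API. The usage sketch below follows the package documentation; the URL and model choice are placeholders, and chared is a Python 2 package (hence urllib2).

    import urllib2
    import chared.detector

    page = urllib2.urlopen('http://example.com/').read()
    # load the model for the language the text is known to be in
    model_path = chared.detector.get_model_path('czech')
    detector = chared.detector.EncodingDetector.load(model_path)
    encodings = detector.classify(page)  # candidate encodings, best first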
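
jusText is likewise usable as a Python library. A usage sketch, with a placeholder URL and English as the example language:

    import requests
    import justext

    html = requests.get('http://example.com/article.html').content
    paragraphs = justext.justext(html, justext.get_stoplist('English'))
    for paragraph in paragraphs:
        if not paragraph.is_boilerplate:
            print(paragraph.text)  # keep only the full-sentence content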
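
Onion itself is a command-line tool; the sketch below only illustrates the idea behind its de-duplication, assuming paragraph-level granularity and an illustrative 50% overlap threshold: a paragraph counts as a duplicate when too many of its word n-gram hashes have been seen before.

    def ngram_hashes(words, n=5):
        """Hashes of all n-grams of consecutive words."""
        return {hash(tuple(words[i:i + n])) for i in range(len(words) - n + 1)}

    seen = set()

    def is_duplicate(paragraph, n=5, threshold=0.5):
        grams = ngram_hashes(paragraph.split(), n)
        if not grams:
            return False  # too short to judge
        overlap = len(grams & seen) / float(len(grams))
        seen.update(grams)  # remember this paragraph either way
        return overlap > threshold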
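
To illustrate the vertical format Unitok produces: an input such as <doc id="1">He said "hello".</doc> becomes one token per line, with the XML-like metadata tags kept intact (the exact tokenisation depends on the language settings):

    <doc id="1">
    He
    said
    "
    hello
    "
    .
    </doc>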
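
A toy illustration of what geez2sera does; only four syllables of the Ethiopic syllabary are mapped here, whereas the real tool covers the full script:

    # Tiny excerpt of a Geez-to-SERA mapping; illustrative only.
    GEEZ_TO_SERA = {
        u'\u1200': 'he',  # HA
        u'\u1208': 'le',  # LA
        u'\u1218': 'me',  # MA
        u'\u1230': 'se',  # SA
    }

    def transliterate(text):
        return ''.join(GEEZ_TO_SERA.get(ch, ch) for ch in text)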
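
Finally, a minimal sketch of trigram-based language identification, assuming character trigrams and the classic out-of-place rank distance; the project's actual script may differ in details.

    from collections import Counter

    def trigram_profile(text, top=300):
        """The most frequent character trigrams, most frequent first."""
        text = ' ' + ' '.join(text.lower().split()) + ' '
        grams = Counter(text[i:i + 3] for i in range(len(text) - 2))
        return [gram for gram, _ in grams.most_common(top)]

    def out_of_place(profile, reference):
        """Sum of rank differences; unseen trigrams get the maximum penalty."""
        rank = {gram: i for i, gram in enumerate(reference)}
        return sum(abs(i - rank.get(gram, len(reference)))
                   for i, gram in enumerate(profile))

    def identify(text, references):
        """references maps a language name to its trigram profile."""
        profile = trigram_profile(text)
        return min(references,
                   key=lambda lang: out_of_place(profile, references[lang]))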

Language resources

Wordlists, top 1000 items

Language-dependent models and configuration files

Notes:

  • Arabic models were used for distinguishing Ethiopian languages from Arabic.
  • Danish models were used for distinguishing Norwegian from Danish.
  • English models were used for distinguishing all languages within the project from English.
  • Slovak models were used for distinguishing Czech from Slovak.
