The purpose of tidylex is to provide a collaborative, open-source, cross-platform tool for tidying dictionary data stored as Toolbox-style backslash-coded text (a broad convention for serializing lexicographic data in a human-readable and -editable manner). This format is commonly used in the description of under-documented languages, many of which are also highly endangered.
The example below shows a toy French-to-English dictionary with three entries, rouge, bonjour, and parler, along with various pieces of lexicographic information about them (lx: lexeme; ps: part of speech; de: definition; xv: example in the vernacular/source language; xe: example translation in English). Tidylex makes it easy to make assertions about how these entries should be structured, test whether or not they are well-structured (examples provided below), and, most importantly, communicate the results of these tests to relevant parties.
\lx rouge
\ps adjective
\de red
\xv La chaise est rouge
\xe The chair is red

\lx bonjour
\de hello
\ps exclamation

\lx parler
\ps verb
\de speak
\xv Parlez-vous français?
Because such dictionary data have typically been hand-edited over many years, often by multiple contributors, these plain-text files tend to contain a great deal of structural inconsistency. Given this structural variation, the knowledge about these languages is effectively ‘locked up’ in terms of machine-processability. Tidylex provides a set of functions to iteratively work towards a well-structured, or ‘tidy’, lexicon, and to maintain the tidiness of the lexicon when used within a Continuous Testing setting (e.g. with Travis CI, or GitLab pipelines).
As coding conventions vary from project to project (instead of \lx, some projects use \me for ‘main entry’, or a full stop in place of the backslash, e.g. .i bonjour), the read_lexicon function provides a quick way to specify a regular expression that parses each line of the dictionary into its various components.
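To get a feel for what this regular expression captures, here is a quick base-R illustration (independent of tidylex, purely for exposition) applying it to a single line of the dictionary:
# Match the first dictionary line against the same regular expression
line  <- "\\lx rouge"
match <- regexec("\\\\?([a-z]*)\\s?(.*)", line)
# regmatches() returns the full match followed by the two captured groups,
# i.e. "lx" (which becomes the 'code' column) and "rouge" (the 'value' column)
regmatches(line, match)[[1]]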
library(tidylex)
library(dplyr) # for %>% and mutate(), used in the examples below

# The path to the 'rouge, bonjour, parler' dictionary shown in the example above
lexicon_file <- system.file("extdata", "error-french.txt", package = "tidylex")

lexicon_df <- read_lexicon(
  file  = lexicon_file,
  regex = "\\\\?([a-z]*)\\s?(.*)", # Note two capture groups, in parentheses
  into  = c("code", "value")       # Captured data placed, respectively, in 'code' and 'value' columns
)
lexicon_df
#> # A tibble: 14 x 4
#> line data code value
#> <int> <chr> <chr> <chr>
#> 1 1 "\\lx rouge" lx rouge
#> 2 2 "\\ps adjective" ps adjective
#> 3 3 "\\de red" de red
#> 4 4 "\\xv La chaise est rouge" xv La chaise est rouge
#> 5 5 "\\xe The chair is red" xe The chair is red
#> 6 6 "" "" ""
#> 7 7 "\\lx bonjour" lx bonjour
#> 8 8 "\\de hello" de hello
#> 9 9 "\\ps exclamation" ps exclamation
#> 10 10 "" "" ""
#> 11 11 "\\lx parler" lx parler
#> 12 12 "\\ps verb" ps verb
#> 13 13 "\\de speak" de speak
#> 14 14 "\\xv Parlez-vous français?" xv Parlez-vous français?
A common pre-processing step is to group the lines into subgroups, e.g. the lines belonging to a given entry, or to a given sense within an entry. We can do this easily with the add_group_col function.
grouped_lxdf <-
  lexicon_df %>%
  add_group_col(
    name  = lx_group,                 # Name of the new grouping column
    where = code == "lx",             # When to fill with a value, i.e. when *not* to inherit value
    value = paste0(line, ": ", value) # What the value should be when above condition is true
  )
grouped_lxdf
#> # A tibble: 14 x 5
#> # Groups: lx_group [3]
#> line data code value lx_group
#> <int> <chr> <chr> <chr> <chr>
#> 1 1 "\\lx rouge" lx rouge 1: rouge
#> 2 2 "\\ps adjective" ps adjective 1: rouge
#> 3 3 "\\de red" de red 1: rouge
#> 4 4 "\\xv La chaise est rouge" xv La chaise est rouge 1: rouge
#> 5 5 "\\xe The chair is red" xe The chair is red 1: rouge
#> 6 6 "" "" "" 1: rouge
#> 7 7 "\\lx bonjour" lx bonjour 7: bonjo…
#> 8 8 "\\de hello" de hello 7: bonjo…
#> 9 9 "\\ps exclamation" ps exclamation 7: bonjo…
#> 10 10 "" "" "" 7: bonjo…
#> 11 11 "\\lx parler" lx parler 11: parl…
#> 12 12 "\\ps verb" ps verb 11: parl…
#> 13 13 "\\de speak" de speak 11: parl…
#> 14 14 "\\xv Parlez-vous français?" xv Parlez-vous français? 11: parl…
Tidylex lets you define and use basic Nearley grammars within R to test the well-formedness of sequences of backslash codes.
For sequences such as those above, we can define a context-free grammar (equivalent to phrase structure rules) in the Nearley notation below (:? and :+ are quantifiers indicating, respectively, ‘zero or one’ and ‘one or more’ of the preceding entity). We use the compile_grammar function to generate code that can be used to test whether a series of values (e.g. those within the code column) conforms to the sequence expected by the grammar.
entry_parser <- compile_grammar('
headword -> "lx" "ps" "de" examples:?
examples -> ("xv" "xe"):+
')
grouped_lxdf %>%
  mutate(code_ok = entry_parser$parse_str(code, return_labels = TRUE))
#> # A tibble: 14 x 6
#> # Groups: lx_group [3]
#> line data code value lx_group code_ok
#> <int> <chr> <chr> <chr> <chr> <lgl>
#> 1 1 "\\lx rouge" lx rouge 1: rouge TRUE
#> 2 2 "\\ps adjective" ps adjective 1: rouge TRUE
#> 3 3 "\\de red" de red 1: rouge TRUE
#> 4 4 "\\xv La chaise est ro… xv La chaise est rou… 1: rouge TRUE
#> 5 5 "\\xe The chair is red" xe The chair is red 1: rouge TRUE
#> 6 6 "" "" "" 1: rouge TRUE
#> 7 7 "\\lx bonjour" lx bonjour 7: bonjo… TRUE
#> 8 8 "\\de hello" de hello 7: bonjo… FALSE
#> 9 9 "\\ps exclamation" ps exclamation 7: bonjo… NA
#> 10 10 "" "" "" 7: bonjo… NA
#> 11 11 "\\lx parler" lx parler 11: parl… TRUE
#> 12 12 "\\ps verb" ps verb 11: parl… TRUE
#> 13 13 "\\de speak" de speak 11: parl… TRUE
#> 14 14 "\\xv Parlez-vous fran… xv Parlez-vous franç… 11: parl… NA
We can see from the data frame above that the sequence of codes for entry group 1: rouge (lx ps de xv xe) conforms to the grammar, while group 7: bonjour does not: code_ok is FALSE for the de line (line 8), which is where the sequence first deviates from the grammar, since a ps code is expected immediately after lx but bonjour has de there instead.
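Beyond inspecting the data frame manually, these results can be turned into an automated check, for example in the Continuous Testing setting mentioned earlier. The sketch below is one possible approach rather than anything prescribed by tidylex itself; it assumes the dplyr and testthat packages:
library(dplyr)
library(testthat)

checked_lxdf <- grouped_lxdf %>%
  mutate(code_ok = entry_parser$parse_str(code, return_labels = TRUE))

# Keep only the rows of groups containing at least one failing code.
# checked_lxdf is already grouped by lx_group, so filter() operates per group.
untidy_rows <- checked_lxdf %>%
  filter(any(code_ok == FALSE, na.rm = TRUE))

# Run as part of a test suite (e.g. on Travis CI or in a GitLab pipeline),
# this expectation fails the build while '7: bonjour' remains mis-ordered.
test_that("all entries conform to the entry grammar", {
  expect_equal(nrow(untidy_rows), 0)
})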