Some tips on writing software for research

Posted on Wed 17 July 2019 in misc • Tagged with programming, nlp

These are my notes for a presentation in our group at Saarland University. The presentation was mainly about software written as part of experiments in NLP, but most of the tips do not focus on NLP but rather on writing code for reproducible experiments that involve processing data sets. This kind of software is often only used by a small group of people (down to groups of one). I neither claim that this is the ultimate guide nor that I actually follow all my own advice (but I try!).

Why you should care

Our success is not measured by how nice our software looks; the important aspects are scientific insights and publications. So why should you spend additional time on your code, which is just a means to an end? There are several good reasons:

Others might want to build upon your results. This often means running your software. If that is not easy, they might not do it. I once spent a week trying to get a previously published system to run and failed. That wasted my time and reduced the impact of that publication: research that cannot be run is hard to extend and cite.

You might want or need to extend your previous research. Maybe you did some experiments at the start of your PhD but need to change some parameters to make it compatible with other experiments. Do not assume that future-you will still know all intricacies you currently know!

Good software increases your confidence in results. You will publish a paper with your name on it. The more hacky the process of obtaining results, the less sure you can be that the results are actually correct. Manual steps are a source of error; minimize them!

Less work in the long run. Made an error somewhere? Just start over! If your experiments run with minimal manual work, restarting is easy and CPU time is cheaper than your time. Once you run your experiment pipeline several times, the additional work invested in automation pays off.

You have an ethical obligation to document your experiments. A paper is usually not enough to understand all details! The code used to run the experiments can be seen as the definition of your experiments. Automation is documentation!

Good software enables teamwork. Automated experiments mean that all your collaborators will also be able to do the experiments. You don't want to be the one person knowing how to run the experiments (you will have to do all the work) and don't want to be the one not knowing it (unable to do the work). Worst case: you need knowledge from several people to run the experiment, and no one can reproduce the results on their own.

The motto of this blog post
Do it twice or more
Write a script that works for you
Enjoy the summer

Some basics

Hopefully, I have convinced you by now. So let's start with some basics. No need to be perfect, small steps help.

Keep source data, in-progress data, and results separate. Having different directories enables you to just rm -rf in_progress results and start afresh. If you save intermediate results for long-running computations, make sure you can always distinguish between finished and in-progress results. E.g., you might have a power outage and otherwise not know which computations are complete and which have to be started again.
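
A minimal sketch of one way to do that (file names are made up): write each result to a temporary name first and rename it only when the computation is finished, so a result file never exists in a half-written state.

import json
import os

def save_result(result, path):
    tmp_path = path + ".inprogress"
    with open(tmp_path, "w") as f:
        json.dump(result, f)
    # os.replace is an atomic rename on the same file system: after a crash
    # or power outage, either the finished file exists or only the
    # .inprogress one does.
    os.replace(tmp_path, path)

save_result({"accuracy": 0.87}, "experiment-42.json")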

Add a shebang to your scripts and make them executable. Bash: #! /bin/bash, Python 3: #! /usr/bin/env python3. This shows everyone which files are intended to be run and which are not. Also, the shebang makes sure that the right interpreter is used. You do not want a bash script run by /bin/sh and get weird results.

Add a comment header with author, license, documentation. Others know who to contact, know how they are allowed to use and distribute your code and what it does. An example from my code:

#!/usr/bin/env python3
# author: Arne Köhn <arne@chark.eu>
# License: Apache 2.0

# reads an swc file given from the command line and prints the tokens
# one on each line, and with empty lines for sentence boundaries.

This gives a potential user at least a first idea of what to expect. Add a README to your project to give a high-level overview.

Use logging frameworks. Do not simply print to stdout or stderr; use a logging framework to distinguish between debug and serious log messages. Do not debug with print statements, use a logger for that! That way, you can simply keep the logging code but disable the specific sub-logger. Java: use log4j2, Python: the built-in logging module. Log to meaningfully named log files (e.g. Y-M-D-experimentname.log) so you can find the logs if you notice problems with an experiment later on.
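
A minimal sketch of what that looks like with Python's logging module (the experiment name and the logger name are made up):

import logging

logging.basicConfig(
    filename="2019-07-17-myexperiment.log",
    level=logging.INFO,
    format="%(asctime)s %(name)s %(levelname)s: %(message)s",
)
log = logging.getLogger("preprocessing")

log.debug("tokenizer details, only visible when the level is DEBUG")
log.info("processed 10000 sentences")
log.warning("3 sentences were skipped")

# later: silence a noisy sub-logger without deleting any logging code
logging.getLogger("preprocessing").setLevel(logging.WARNING)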

Version control

Use a version control system! Commit early, commit often. Git commits: first line 50 chars, second line empty, detailed log message afterwards. Example:

Add script that handles installer building

That script will automatically fetch the correct versions of files
from the build.gradle file and run install4j if it is in the path.

Also don't use master-SNAPSHOT but a fixed version of each
dependency.  Otherwise we end up with non-reproducible installer
builds.

This way, tools will be able to display your commit message correctly.

Rewrite history. Committing often means you might want to commit incomplete code, or you find errors later on and fix those. Merge those commits into one and reword the commit history to make it more understandable to others. This also lets you commit with nonsense commit messages locally without embarrassment. Simply perform a git rebase -i to merge the commits later on. Note: don't rewrite history you already pushed and others might rely on.

Use rebase when pulling. git pull --rebase will move your local commits on top of the ones already in the upstream repository. A sequential history without merge commits is much easier to read.

Tag important commits. git tag -a [tagname] creates a new tag to find this specific revision. Ran experiments with a specific version? git tag -a acl2019 and people (including you!) will be able to find it.

Use GitHub issues and wiki. You can subscribe to a repository and will get mails for all issues, essentially using it as a mailing list with automatic archival. The wiki can be cloned with git and used as centralized offline documentation. This also works with other hosting systems such as GitLab.

Java

Use a build system. Eclipse projects do not count! Gradle is easy to understand and use. It might get more complicated if your project gets complicated, but it will definitely save you time.

A gradle file for a standard project looks like this, not much work for the time you save:

plugins { id 'pl.allegro.tech.build.axion-release' version '1.10.1' }
repositories { maven { url 'https://jitpack.io' } }
dependencies { api 'com.github.coli-saar:basics:2909d4b20d9a9cb47ef' }

group = 'com.github.coli-saar'
version = scmVersion.version
description = 'alto'

The plugin even takes the correct version number from your git tag when creating artifacts.

Make libraries available as a maven dependency. This is trivial with jitpack: It will package libraries for you to use on demand from git repositories. This is already shown in the example above: once jitpack is added as a repository, you can add dependencies from any git repository by simply depending on com.github.USER:REPO:VERSION with VERSION being a git revision or a git tag. jitpack will take care of the rest.

With the plugin described above, a release can be done by simply tagging a version. Only with this ease of use will you actually create releases more than once in a blue moon. In old projects, it sometimes took me half a day to do a proper release because I couldn’t remember all the steps involved. If it is hard to do a release, it will just be dropped because it is not a priority for your research.

Python

Don't write top-level code. Use functions to subdivide your code and run your main function using this at the end:

if __name__ == "__main__":
    main()

This way, you can load your code into an interactive python shell (e.g. using ipython) and play with it. It also becomes possible to load the code as a library into another program.

Parse your arguments with argparse. This makes your program more robust and provides automatic documentation for people wanting to run your code. Alternative: docopt
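
A minimal sketch of an argparse-based command line interface (the arguments are made up for illustration); the --help output comes for free:

import argparse

def main():
    parser = argparse.ArgumentParser(
        description="Read a corpus and print one token per line.")
    parser.add_argument("corpus", help="path to the input corpus")
    parser.add_argument("-o", "--output", default="tokens.txt",
                        help="output file (default: %(default)s)")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="enable debug logging")
    args = parser.parse_args()
    print(args.corpus, args.output, args.verbose)

if __name__ == "__main__":
    main()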

Declare the packages your program requires. Write them in requirements.txt so people can run pip3 install -r requirements.txt to obtain all libraries.

Use proper indentation and comments. Indent with 2 or 4 spaces, NO TABS! (This is not just my opinion, it’s in the official guidelines.) Use docstrings for comments:

def frobnicate():
    """Computes the inner square sum of an infinite derived integral."""
    return 0

Your co-authors and all tools will thank you.

Instruct others how to use your libraries. Similar to jitpack for Java, you can use git URLs with pip: pip3 install git+https://example.com/usr/foo.git@TAG Write this in your README so others can use your code (and use it to include foreign code).

bash

Perform proper error checking. Exit on error and force all variables to be initialized. Add these two lines to the top of your bash scripts:

# exit on error
set -e
# exit on use of an uninitialized variable
set -u

Otherwise, the script will continue even when a command fails and interpret uninitialized variables as simply being empty. Not good if you have a typo in a variable name!

Avoid absolute directories. Want to run a program from your bash script? Either change the working directory to the directory your bash script is in or construct the path to the other program by hand:

# obtain the directory the bash script is stored in
DIR=$(cd "$(dirname "$0")"; pwd)

# Now either run the command by constructing the path:
$DIR/frobnicate.py
# or change the directory if that is more convenient:
cd $DIR
./frobnicate.py

Create output directories if they don't exist. Running mkdir foo will fail if foo already exists, creating the directories by hand is burdensome. Use mkdir -p foo to create the directory, it will not fail if foo exists. Additional bonus: You can create several directories at once: mkdir -p foo/bar/baz will create all nested directories at once.

Use [[ instead of [ for if statements. [[ is an internal bash mechanism, [ is an external program, which can behave weirdly: if [ $foo == "" ] can never become true because an empty variable $foo would lead to a syntax error(!) – simply use [[.

data formats

Use a common data format. This saves you from writing your own parser for your data format. CSV is supported everywhere. XML can be read and generated by every programming language. json is well defined. “Excel” is not a good data format.
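
As a sketch of how little code a common format needs: reading and writing CSV with Python's standard csv module (the file and column names are made up):

import csv

with open("results.csv", newline="") as f:
    reader = csv.DictReader(f)
    rows = list(reader)              # each row becomes a dict

for row in rows:
    print(row["sentence_id"], row["accuracy"])

# write a filtered copy, reusing the original column names
with open("filtered.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=reader.fieldnames)
    writer.writeheader()
    writer.writerows(r for r in rows if float(r["accuracy"]) > 0.9)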

Keep data as rich as (reasonably) possible. If you throw away information you don’t need at the moment, you (or someone else) might regret that because this additional information would have enabled interesting research. For example, the Spoken Wikipedia Corpora contain information about sub-word alignments, normalization, section structure (headings, nesting) etc. None of the research based on the SWC needs all of it, and it was a lot of work for us, but it enabled research we did not think of [1] when creating the resource. What does this have to do with data formats? Your data format has to handle all your annotations. Plain text with timestamps just was not powerful enough for what we wanted to annotate, even though it is standard for speech recognition, for example.

XML is actually pretty good. Young people these days [2] like to dislike XML due to its verbosity. Yes, it is verbose. But it has redeeming qualities: it keeps character data and annotation clearly separated, it can represent richly typed trees succinctly, and it can be validated against a schema – see the post on the SWC annotations below for details.

Continuous integration

Continuous integration automatically builds and tests your software whenever you push new commits or create a pull request. It makes sure that the software actually still works in a clean environment and not just on your laptop. Integration with GitHub or GitLab is pretty easy; I will show an example for GitHub with TravisCI, the de-facto default for CI with GitHub.

All you have to do is to add a .travis.yml file into the root of your project which contains the necessary information and enable CI by logging into travis using your GitHub credentials and selecting the repositories you want to enable CI for.

The configuration is quite simple: If you use a standard build system (see above), you only need to state the language. For example, language: java is enough configuration to get going with a gradle project.

However, you can do more: Travis can create releases based on tags for you:

language: java
deploy:
  provider: releases
  api_key:
    secure: [omitted]
  skip_cleanup: true
  file_glob: true
  file: build/libs/alto-*-all.jar
  on:
    tags: true

will create a new release for each tag (last lines) by building the project as always, performing no cleanup (important!) and then pushing a specific jar into a new GitHub release (provider: releases means GitHub release). We build alto that way and all its releases are automatically created.

The only slightly non-trivial part is the api key. This is created by the travis command line tool.

Automate experiment evaluation

Automate table creation. Use python, sed, awk, bash (or whatever else) to automatically gather your results. Every manual step is a possibility for copy-paste errors.
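
A sketch of such a gathering script in Python; the directory layout and the JSON keys are assumptions:

import json
from pathlib import Path

for result_file in sorted(Path("results").glob("*.json")):
    with open(result_file) as f:
        result = json.load(f)
    # one LaTeX table row per run, never copy-pasted by hand
    print(f"{result_file.stem} & {result['precision']:.2f} & "
          f"{result['recall']:.2f} \\\\")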

Make your output compatible with a plotting toolkit. Same as above: if your output can be read by gnuplot (or matplotlib or ggplot), you can automate your pipeline. Export to png and tikz so you can have a quick look at your data (png) but also have high-quality vector graphics in your paper (tikz). tikz is also great because it enables manual post-processing if you e.g. want to do something not achievable with your plotting toolkit, such as forcing the numeral font for the y-axis.
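
A minimal matplotlib sketch along those lines (file and column names are made up; a tikz export would need an additional package such as tikzplotlib):

import csv
import matplotlib
matplotlib.use("Agg")                 # no display needed, e.g. on a server
import matplotlib.pyplot as plt

with open("results.csv", newline="") as f:
    rows = list(csv.DictReader(f))
x = [int(r["training_size"]) for r in rows]
y = [float(r["accuracy"]) for r in rows]

plt.plot(x, y, marker="o")
plt.xlabel("training sentences")
plt.ylabel("accuracy")
plt.savefig("learning-curve.png", dpi=150)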

When you are done

Clean up your pipeline. Often, an experiment grows and changes over time. Once it is done, make it automatically download the needed data and automate leftover manual steps. Remember: automation is documentation – people can read the code to see what you did.
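
A sketch of the download step (the URL and the paths are placeholders):

import urllib.request
from pathlib import Path

def fetch_corpus(url="https://example.com/corpus.tar.gz",
                 target=Path("source_data/corpus.tar.gz")):
    """Download the corpus unless it is already present."""
    if not target.exists():
        target.parent.mkdir(parents=True, exist_ok=True)
        urllib.request.urlretrieve(url, str(target))
    return target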

Make your data and code available. Remember: Bad code is better than no code – do not wait because you want to clean up your code some time in the future.

That’s it

Thanks for reading! I hope some points were of interest to you; if there is something not described clearly enough (or plain wrong) or I missed something important: Do let me know! I will update this post if new points come up.

Updates

  • 2019-07-18: improvements based on feedback from my best critic, Christine.

  [1] Unfortunately, also an example of papers not citing the resources they use.
  [2] We have students who were born after Without me was released!

Why we chose XML for the SWC annotations

Posted on Wed 29 November 2017 in misc • Tagged with corpora

I was asked why we use XML instead of json for the Spoken Wikipedia Corpora:

We actually started with json: the first version of the SWC was annotated using json, and I converted that to XML.

The original json more or less looked like this:

{ "sentences_starts": [0,10,46,72],
  "words": [
      {"token" : "hello", "start": 50, "end": 370},
      ["more tokens here"]
    ]
}

To obtain the second sentence, you needed to get sentence_starts[1] and sentence_starts[2], then obtain the sub-list of words defined by those bounds. You can notice the downside of data normalization.
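
In code, getting the second sentence out of that layout looks roughly like this (assuming the json is stored in a file called aligned.json):

import json

with open("aligned.json") as f:
    doc = json.load(f)

# 0-based indices: sentence_starts[1] and [2] bound the second sentence
start = doc["sentence_starts"][1]
end = doc["sentence_starts"][2]
second_sentence = doc["words"][start:end]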

The XML looked like this:

<sentence>
  <token start="50" end="370">hello</token>
  [more tokens here]
</sentence>

You can see that it is much more succinct. To obtain the second sentence, just do an xpath query: sentence[2] – note that xpath indices start at 1 (more about using xpath at the bottom of this post).

But now we have much more structure, as you can see in our RelaxNG definition (have a look, it's easy to read!). We have

  • sections which can be nested
  • parts which were ignored during the alignment
  • sentences containing tokens, which contain normalizations, which contain phonemes

All in all, the annotation is a fairly elaborate typed tree. json is actually less succinct if you want to represent such data because there are no types. Try to represent <s><t>foo</t> <t>bar</t></s> in json:

{ "type": "s"
  "elems": [{"type": "t", "elems": ["foo"]},
           " ",
           {"type": "t", "elems": ["bar"]}
           ]
}

The distinction between data and annotation is not clear in json: in XML, everything that is not an XML tag is character data. To get the original text, just strip all XML tags. In json you would somehow have to externally define what the original data is and what the annotation is. This is important because we keep character-by-character correspondence to the original data. This is a very cool feature because you can cross-reference our annotations with the html markup to e.g. look at the impact of <b> tags on pronunciation.
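
A small sketch of that with lxml (reusing the aligned.swc file name from the examples further down):

import lxml.etree as ET

root = ET.parse("aligned.swc").getroot()
# everything that is not markup is character data, so dropping the tags
# gives back the original text
original_text = "".join(root.itertext())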

validating your annotations

Last but not least, XML is much easier to validate (and given the complexity of our annotation, that was necessary!). The RelaxNG definition is human readable (so people can learn about the schema) and used for validation at the same time. Having a schema definition helped quite a bit because it was a central document where we could collaborate on the annotation schema. The automatic validation helped to catch malformed output – which happened more than once and was usually caused by some edge case. Without the validation, we wouldn’t have caught (and corrected) them. To my knowledge, there are no good json validators that check the structure and not just whether it is valid json.

Update:

I had a look at json-schema and will give you a short comparison. In our annotation, a section has a title and content. The title is a list of tokens, some of which might be ignored. The content of a section can contain sentences, paragraphs, subsections or ignored elements.

This is what the RelaxNG definition for that part looks like:

## A section contains a title and content. Sections are nested,
## e.g. h3 sections are stored in the content of the parent h2
## section.
Section = element section {
    attribute level {xsd:positiveInteger},
    element sectiontitle { MAUSINFO?, (T | element ignored {(T)*})* },
    element sectioncontent { (S|P|Section|Ignored)* }
}

I think it is fairly easy to read if you are acquainted with standard EBNF notation – | is an or, * denotes repetition and so on.

Compare my attempt at using json-schema:

{ "section": 
  { "type": "object",
    "required": ["elname", "elems"]
    "properties": 
      { "elname": {"type": "string",
                   "pattern": "^section$"}
          "elems": {"type" : "array"
                     ["and all the interesting parts are still missing"]
                   }
      }
  }
}

That part only defines that I want to have a dictionary with elname=section and it needs to have an array for the subelements. I just gave up after a few minutes :-)

Working with XML annotations

Say you want to work with an XML annotated corpus. The easiest way to do that is XPath.

You don't care about our fancy structure annotation and just want to work on the sentences in SWC? Use this XPath selector: //s. // means descendant-or-self and s is just the element type you are interested in, i.e. you select all sentence structures that are somewhere under the root node. To give you an example in python:

import lxml.etree as ET
root = ET.parse("aligned.swc")
sentences = root.xpath("//s")

You can attach predicates in square brackets. count(t)>10 only selects sentences that have more than ten tokens:

sentences_longer_ten = root.xpath("//s[count(t)>10]")

You are only interested in long sections? Let's get the sections with more than 1k tokens! Note the .//, the leading dot means “start descending from the current node”, with just an //, you would count from the root node and not from each section.

long_sections = len(root.xpath("//section[count(.//t)>1000]"))

You want to get the number of words (i.e. tokens that have a normalization) which were not aligned? It’s easy: select all tokens with an n element as child but without an n element that has a start attribute:

number_unaligned_words = root.xpath('count(//t[n][not(n[@start])])')

Note that we used count() to get a number instead of a list of elements. The aligned words have n subnodes but no n without a start attribute (there is no universal quantifier in xpath, so you have to use the equivalent not-exists construction):

aligned_words = root.xpath('//t[n][not(n[not(@start)])]')

You want to know the difference between start times for phoneme-based and word-based alignments? Here you are!

phon_diffs = [n.xpath("sum(./ph[1]/@start)")
              - int(n.attrib["start"])
              for n in root.xpath("//n[ph and @start]")]

We first obtain the normalizations that have word- and phoneme-based alignments (//n[ph and @start]) and then use a list comprehension to compute the differences between the word-based alignments (n.attrib["start"]) and the start of the first phoneme (n.xpath("sum(./ph[1]/@start)")) – the sum() is just a hack to obtain a number instead of a string…

And that’s it! In my opinion, it’s easier than working with deeply nested json data structures. Questions, comments? send me a mail.


GamersGlobal Comment Corpus released

Posted on Sat 18 November 2017 in nlp • Tagged with corpus

Today I'm releasing the GamersGlobal comment corpus. GamersGlobal is a German computer gaming site (and my favorite one!) with a fairly active comment section below each article. This corpus contains all comments by the 20 most active users up to November 2016.

I use this corpus for teaching, mainly author attribution using Bayes classifiers and language modeling. It's just more fun to use interesting comments than some news text from years ago. This is also the reason for the lack of additional meta data such as threading information: it was easier to obtain this way and I'm not doing research on it.

GamersGlobal has all user-generated content licensed under a Creative Commons share-alike license, making it ideal for corpus creation.

The corpus archive contains:

  • the original csv table with timestamps and author information
  • comments sorted by author (untokenized)
  • comments sorted by author (tokenized)
  • a script to create a train / test set with the author names in the test set hidden (this is what I hand out to my students)

You can download it here: ggcc-1.0.tar.xz (40mb, md5sum: b4adb108bc5385ee9a2caefdf8db018e).

Some statistics:

  • 202,561 comments
  • 10,376,599 characters
  • more statistics are left as an exercise to the reader :-)

If you are interested in corpora, be sure to also check out the Hamburg Dependency Treebank and the Spoken Wikipedia Corpora!


abgaben.el: assignment correction with emacs

Posted on Mon 13 November 2017 in software • Tagged with emacs, teaching

Part of my job at the university is teaching and that entails correcting assignments. In the old days, I would receive the assignments by email, print them, write comments in the margins, give points for the assignments and hand them back. This approach has two downsides:

  • assignments are done by groups of 2-3 students but only one would have my commented version
  • I wouldn't have my own comments afterwards.

Therefore I switched to digital comments on the pdf. I would then send the annotated pdf to the students. Because it took a lot of time (~30 min every week) to find the correct email, send the emails, etc., I wrote a small package to help with that: abgaben.el

I assume that you use mu4e for your emails. I usually have several classes every semester – this semester I have one on Monday (“montag”) and one on Wednesday (“mittwoch”).

My workflow is as follows:

When I get an email, I save the assignment using the attachment action provided by abgaben.el. It asks for the group (montag/mittwoch in my case) and the week (01 in this example). Both questions remember your answer and will use it as a default for the next invocation. It then saves the attachment to the correct directory (abgaben-root-folder/montag/01/) and will create a new entry in your org mode file (abgaben-org-file, which needs to have a generic heading as well as your group headings in place), linking the assignment and the email.

You get the attachment action by adding something like this:

(add-to-list 'mu4e-view-attachment-actions
    '("gsave assignment" . abgaben-capture-submission) t)

The first character of the string is the shortcut. In this case, you need to press A for mu4e attachment actions and then g to invoke abgaben-capture-submission.

Then you can annotate the assignment with pdf-tools or whatever program you like. You could also sync the files to your tablet and annotate them there. Afterwards, call abgaben-export-pdf-annot-to-org to export your annotations into the org file. That command will also check for points and create a new subheading listing all points as well as a sum. (Because I batch process the assignments, I usually only have to press M-x M-p <RET>…)

You can then send the annotated pdf to your students by calling abgaben-prepare-reply. The function will store a reply with the exported annotations, the points overview and the annotated pdf as attachment in your kill ring and open the original email by your students. Press R to reply, C-y to insert your reply, modify if needed, and send the email. You are done!

(For some reason, I re-exported the annotations in this video, but it is a really cool feature worth seeing twice!)

Now you have an org file with all your annotations exported (and ready to reuse if several groups make the same mistake…), the points neatly summarized and all relevant data linked.

You can customize the relevant aspects of abgaben.el by M-x customize-group <RET> abgaben <RET>.

The package used to require a manual download and installation via package-install-file, but it is now available via melpa.

If you end up using this package or parts of it, drop me an email!

Update Dec 2017: The package now also supports archives, e.g. for source code submission. These are extracted into a new folder and that folder is linked in the org file.


ESSLLI Course on Incremental NLP

Posted on Mon 03 October 2016 in nlp

Timo and I held a course on incremental processing at ESSLLI 2016. If you have a look at (most of) our publications, you will see that Timo works on incremental speech processing and I on incremental text processing. The course was about incremental NLP in general and I hope we were successful in generating interest in incremental processing.

The slides are online (scroll to the bottom), but may not be sufficient to understand everything we actually said (this is as slides should be, in my opinion).

I stayed in Terlan which is fifteen minutes by train from Bozen and is quite lovely. Much quieter than Bolzano and one can go on hikes directly. Terlan has lots of great wine. Nearly all Terlan wine is produced by a co-operative founded in 1893.


GPS track visualization for videos

Posted on Mon 03 October 2016 in misc

We recently went for a ride at the very nice Alsterquellgebiet just north of Hamburg. We had a camera mounted and from time to time, I shot a short video.

Back home I wanted to visualize where we were for each video to make a short clip using kdenlive. The result is a small python program which will create images like these:

Track visualization on OSM

Given a gpx file and a set of other files, it downloads an OSM map for the region, draws the track, and for every file determines where it was shot (based on the time stamp as my files sadly have no usable meta-data). It then produces an image as above for each file.
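
The matching step boils down to picking, for each video file, the track point closest in time to the file's modification time; a rough sketch (parsing the gpx file into (time, lat, lon) tuples is omitted here):

import os
from datetime import datetime, timezone

def closest_point(video_file, track_points):
    """track_points: list of (datetime, lat, lon) with timezone-aware times."""
    shot_at = datetime.fromtimestamp(os.path.getmtime(video_file),
                                     tz=timezone.utc)
    # pick the track point whose timestamp is closest to the shooting time
    return min(track_points,
               key=lambda p: abs((p[0] - shot_at).total_seconds()))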

You can download the script here: trackviz.py

Make sure to properly attribute OpenStreetMap if you distribute these images! Since they are downloaded directly from osm.org, they are licensed under a Creative Commons Attribution-ShareAlike 2.0 license.


Evaluating Embeddings using Syntax-based Classification Tasks as a Proxy for Parser Performance

Posted on Sun 19 June 2016 in Publications

My paper about the correlation between syneval and parsing performance has been accepted at RepEval 2016. You can find code, data etc. here. Looking forward to Berlin (which is a 1:30h train ride from Hamburg).


Mining the Spoken Wikipedia for Speech Data and Beyond

Posted on Mon 30 May 2016 in Publications • Tagged with corpus

Our paper Mining the Spoken Wikipedia for Speech Data and Beyond has been accepted at LREC. Timo presented it and the reception seemed to be rather good. You can find our paper about hours and hours of time-aligned speech data generated from the Spoken Wikipedia at the Spoken Wikipedia Corpora website. There are about 200 hours of aligned data for German alone!

Of course, all data is available under CC BY-SA-3.0. :-)


What’s in an Embedding? Analyzing Word Embeddings through Multilingual Evaluation

Posted on Tue 15 September 2015 in Publications

I'm presenting my paper What’s in an Embedding? Analyzing Word Embeddings through Multilingual Evaluation at EMNLP. You can have a look at the data, code, and examples.

Hopefully, the EMNLP video recordings will be online at some point. As of now (2016-04), they are not.


My Bachelor thesis

Posted on Thu 31 December 2009 in Publications

My bachelor thesis is in German, you can find it at the open access repository of our department: Inkrementelle Part-of-Speech Tagger

I also made an overview in English with the relevant results.
