I spent the majority of today setting up my Knuth work environment, now that I have the space I need on the server. I got SRILM and Stanford CoreNLP set up, and then began sentence parsing of another corpus: Kauchak’s Wikipedia 2.0 (document-aligned) corpus. Unfortunately, Stanford CoreNLP’s parser isn’t the fastest: I measured an average of ~0.6 seconds per sentence. At that rate, the corpus’s ~3 million sentences work out to roughly 500 hours, or about three weeks, to parse. Granted, that estimate assumes Knuth is using only one core. Though I’m sure I’d see a significant speed-up if I parallelized the parsing, I figured it wasn’t worth it for now. If I end up having to parse more corpora, then I’ll put in the time it’d take to parallelize (which I do not know how to do in Java). But for now, I’ll leave it running.
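For future reference, here is a minimal sketch of what parallelizing the parsing could look like in Java, using a fixed thread pool from `java.util.concurrent`. The `parseSentence` method is a placeholder standing in for the actual CoreNLP call; the class and method names are my own invention, not anything from my current setup.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelParse {

    // Placeholder for the real parser call. In practice this would
    // invoke the CoreNLP pipeline on the sentence instead.
    static String parseSentence(String sentence) {
        return "(ROOT " + sentence + ")";
    }

    // Submit each sentence to a pool of nThreads workers and collect
    // the parses in the original sentence order.
    static List<String> parseAll(List<String> sentences, int nThreads)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String s : sentences) {
                futures.add(pool.submit(() -> parseSentence(s)));
            }
            List<String> parses = new ArrayList<>();
            for (Future<String> f : futures) {
                parses.add(f.get()); // blocks until that parse is done
            }
            return parses;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<String> parses =
            parseAll(List.of("the cat sat", "dogs bark"), 4);
        parses.forEach(System.out::println);
    }
}
```

Since each sentence parse is independent, this kind of embarrassingly parallel split should scale close to linearly with the number of cores, which would cut the three-week estimate down considerably.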