July 14 (Week 4) – Maury

I spent the majority of today setting up my Knuth work environment, now that I have the space I need on the server. I got SRILM and Stanford CoreNLP set up, and then I began sentence parsing of another corpus: Kauchak’s Wikipedia 2.0 (document-aligned) corpus. Unfortunately, Stanford CoreNLP’s parser isn’t the fastest: I measured that parsing one sentence takes ~0.6 seconds on average. At ~3 million sentences, that works out to roughly 1.8 million seconds, so the full corpus will take around 3 weeks to parse. Granted, that estimate is with Knuth using only one core. Though I’m sure I’d achieve a significant speed-up by parallelizing the parsing, I figured it wasn’t worth it for now. If I end up having to continue parsing corpora, then I’ll put in the time it would take to parallelize (which I don’t yet know how to do in Java). But for now, I’ll leave it running.
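For future reference, here is a minimal sketch of how the parsing could be parallelized in Java with a standard `ExecutorService` thread pool. The `parse` method below is a hypothetical placeholder for the CoreNLP call (in practice each worker thread would need access to a pipeline, and CoreNLP also has its own multithreading options); this just shows the fan-out/collect pattern:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelParse {
    // Hypothetical stand-in for the real CoreNLP parse call,
    // which would return a parse tree for the sentence.
    static String parse(String sentence) {
        return "(ROOT " + sentence + ")";
    }

    // Submit one parse task per sentence to a fixed-size thread pool
    // and collect the results in the original sentence order.
    static List<String> parseAll(List<String> sentences, int nThreads)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String s : sentences) {
                futures.add(pool.submit(() -> parse(s)));
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get()); // blocks until that task finishes
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```

Since the parsing is CPU-bound, a pool sized to the number of available cores would be the natural choice, and the speed-up should be close to linear in the core count.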
