Woohoo for data day. It started out well, and then halfway through the afternoon everything devolved a little.

I started by building all the data structures and cleaning up my functions for finding master words at specific indices, which went smoothly. I took a break to find the standard deviation of the iPad's accelerometer so we could estimate the average uncertainty; I made a small app to test that and ran a few trials.

After lunch I came back and kept working, but in an attempt to get data on all the little words in a text, not just the entering, exiting, and longest word per frame, I remade the infrastructure to consider every partial word in frame. Then I got diverted: it turns out the regex module for Python is terrible to work with, and its split function isn't helpful at all, so I just wrote my own splitter function and spent a while bug testing it. I then moved on to mapping word dictionaries, but realized that my master word finder has a bug. It's something to do with the indices it slices at, but I didn't have time to test; I just left myself detailed notes for tomorrow on what's going on.

Tomorrow I'll keep working and bug fixing until the data comes out right, then go back through and add an index counter to the word -> cps mapping so that words aren't considered the same when read at different locations in the text, and then work on mapping between different users. We should start user studies on Friday.
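For my own reference: the accelerometer-uncertainty step is really just a sample standard deviation over repeated readings. A minimal sketch (the function name and the trial numbers here are made up for illustration, not the actual app's values):

```python
import statistics

def accel_uncertainty(readings):
    """Estimate measurement uncertainty as the sample standard
    deviation of repeated accelerometer readings."""
    return statistics.stdev(readings)

# Hypothetical trial readings (units arbitrary):
trials = [0.012, 0.015, 0.011, 0.014, 0.013]
print(accel_uncertainty(trials))
```

Using `statistics.stdev` (sample, n-1 denominator) rather than `pstdev` makes sense here, since a handful of trials is a sample of the sensor's behavior, not the whole population.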
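The hand-rolled splitter is conceptually simple: walk the text and emit each word along with its starting index, which regex split throws away. A rough sketch of the idea (this is my shorthand reconstruction, not the actual project code):

```python
def split_words(text):
    """Split text into (word, start_index) pairs, treating any run of
    non-word characters as a separator. Keeps apostrophes inside words."""
    words = []
    start = None  # start index of the word currently being scanned
    for i, ch in enumerate(text):
        if ch.isalnum() or ch == "'":
            if start is None:
                start = i  # a new word begins here
        elif start is not None:
            words.append((text[start:i], start))
            start = None
    if start is not None:  # flush a word that runs to end of text
        words.append((text[start:], start))
    return words

# split_words("the quick, brown fox")
# -> [('the', 0), ('quick', 4), ('brown', 11), ('fox', 17)]
```

Keeping the start index around is what makes the later per-location bookkeeping possible at all.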
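The index-counter idea for the word -> cps mapping could look something like this: key the map by (word, occurrence number) so the same word read at two different spots in the text stays distinct. The names and cps values below are invented for illustration:

```python
from collections import defaultdict

def build_cps_map(word_cps_pairs):
    """Map (word, occurrence_index) -> cps so repeated words at
    different locations in the text don't collide."""
    seen = defaultdict(int)  # occurrences of each word so far
    cps_map = {}
    for word, cps in word_cps_pairs:
        cps_map[(word, seen[word])] = cps
        seen[word] += 1
    return cps_map

# readings = [("the", 12.0), ("cat", 9.5), ("the", 14.2)]
# build_cps_map(readings)
# -> {('the', 0): 12.0, ('cat', 0): 9.5, ('the', 1): 14.2}
```

A tuple key is cheap and hashable, and it generalizes later if the key needs a user id too for the cross-user mapping.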