Today was quite productive, though thankfully not as much of a data slog as yesterday. I started by spending an hour and a half meticulously commenting my code, since I hadn't done any of that yesterday. I then moved on to a multiDataParse that takes multiple IDs as input and outputs a master word mapping. It was far less troublesome than yesterday's work, I suppose because it rested on the back of it.

Once I had that working, I switched to a new data approach: finding the total time a word is on screen. After I got this working, I thought more about the problem and realized that time on screen may actually be a far better indicator than the speed at a given frame, because I think it is easier and more accurate to normalize the time on screen by the word's length, so that smaller words do not get "caught" with bigger words' hardness.

I spent a while formulating a model and equation for norming the time, which worked, but it requires the assumption of constant speed. That is of course not accurate: in reality there will be differing speeds, and so the norming should be different when words move more quickly compared to each other. I struggled to figure out a norming equation that accounts for this, and in the end left it for next week when I have time to think about it.

It took up a while of my time, so my next order of business is to find the z-score of the time on screen and add that to the master data dictionary, and from there I will work on adding graphing capabilities. We should start user testing early next week, which means I need this parser up and running soon, so I'll have to hop to it.
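For the record, the merge step at the heart of multiDataParse might be sketched like this. The single-ID parser from yesterday isn't shown here, so `parse_one` is a stand-in, and the `{word: [values]}` shape of each per-ID mapping is an assumption on my part:

```python
from collections import defaultdict

def multi_data_parse(ids, parse_one):
    """Merge per-ID word mappings into one master word mapping.

    `parse_one` stands in for the single-ID parser; it is assumed
    to return a dict of {word: [values, ...]} for one session ID.
    """
    master = defaultdict(list)
    for session_id in ids:
        # Fold each session's per-word values into the master mapping.
        for word, values in parse_one(session_id).items():
            master[word].extend(values)
    return dict(master)
```

The nice part is that nothing here cares how many IDs come in, which is what made it easy once the single-ID parser existed.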
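To make the constant-speed assumption concrete, here is one possible way the norm could be formulated (this is a sketch of the idea, not necessarily the exact equation I wrote down): a word of pixel width w scrolling across a screen of width W at constant speed v is visible for t = (W + w) / v, so solving for v gives a length-independent quantity per word.

```python
def implied_speed(time_on_screen, word_width, screen_width):
    # Constant-speed model: the word travels screen_width + word_width
    # pixels while any part of it is visible, so
    #   t = (W + w) / v   =>   v = (W + w) / t.
    # Dividing out the word's width is what keeps short words from
    # being "caught" with long words' hardness.
    return (screen_width + word_width) / time_on_screen
```

This is exactly where the model breaks down if the speed varies while the word is on screen, which is the part I've left for next week.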
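The z-score step I have queued up could look something like this, assuming each entry in the master data dictionary is a per-word dict carrying a time-on-screen field (the key names here are hypothetical):

```python
from statistics import mean, stdev

def add_time_zscores(master, time_key="time_on_screen", z_key="time_z"):
    """Compute the z-score of each word's time on screen and store it
    back into the master data dictionary under `z_key`."""
    times = [entry[time_key] for entry in master.values()]
    mu, sigma = mean(times), stdev(times)
    for entry in master.values():
        entry[z_key] = (entry[time_key] - mu) / sigma
    return master
```

Once the z-scores sit alongside the raw times in the master dictionary, graphing should just be a matter of reading fields out of it.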