Today started with more user studies and ended with more data analysis, just like all the other days. We got 5 more users today, bringing our total up to 22.

I then spent a good amount of time building the comparison tools so we could compare excerpts of the texts at specific indices. After working on this for a while, making the output print nicely and such, I realized there was a bug in my SingleDataAnalysis. It was a simple fix, though not a precise one: when I was incrementing tempBegIndex in the partial word finder, I was only incrementing by the length of the partial word, without accounting for any non-word characters between successive words. To fix it, I just added 1 to the increment, which should work most of the time; it only fails when more than one non-word character is interspersed between words.

After I got back to the data, I noticed that qualitatively our results didn't look great, because the standard deviations of the z-scores were very high. Prof. Medero offered a good piece of advice: only use the data from participants whose variation in CPS is above a certain threshold, so tomorrow I will work on that.
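To make the off-by-separator issue concrete, here is a minimal sketch (not my actual SingleDataAnalysis code; the helper name and sample text are made up for illustration). It shows the original advance, the +1 workaround, and a regex-based advance that would also handle runs of multiple non-word characters:

```python
import re

def next_word_start(text, index):
    """Hypothetical robust advance: skip ALL non-word characters
    starting at `index`, landing on the start of the next word."""
    match = re.compile(r"\W+").match(text, index)
    return match.end() if match else index

text = "one,  two"   # two non-word characters between the words
word = "one"
temp_beg_index = 0

# original bug: advance by the partial word's length only
buggy = temp_beg_index + len(word)           # lands on the comma
# the +1 workaround: fine for a single separator, not for ",  "
workaround = buggy + 1                        # lands on a space
# regex version: skips the whole non-word run
robust = next_word_start(text, temp_beg_index + len(word))

print(buggy, workaround, robust)  # 3 4 6
```

With a single-space separator all three agree, which is why the +1 workaround covers most cases in practice.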
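The filtering step for tomorrow could look something like the following sketch. The data layout, participant IDs, CPS values, and the threshold are all placeholders; the real cutoff still needs to be chosen:

```python
import statistics

# Hypothetical per-participant CPS readings; the real data layout differs.
cps_by_participant = {
    "p01": [3.1, 3.0, 3.2, 3.1],   # nearly flat CPS -> would be excluded
    "p02": [2.0, 4.5, 1.5, 5.0],   # varies enough -> kept
    "p03": [3.9, 1.2, 4.8, 2.1],
}

CPS_STD_THRESHOLD = 0.5  # placeholder cutoff, to be tuned against the data

# Keep only participants whose CPS standard deviation clears the threshold.
kept = {
    pid: readings
    for pid, readings in cps_by_participant.items()
    if statistics.stdev(readings) > CPS_STD_THRESHOLD
}

print(sorted(kept))  # IDs of participants with enough CPS variation
```

Dropping near-constant participants this way should shrink the z-score standard deviations, since a participant with almost no CPS variation contributes little signal but a lot of noise to the normalized scores.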