I’m going to be perfectly honest: there won’t be a lot to say about today, because it was literally six and a half hours of data work. I came in with my data model from yesterday, which was nearly complete, and at 4:48 I finished rooting out the last of the bugs, so the singleDataAnalyzer is now done, barring ideological changes to how we should handle it. Strings are the devil, of this I am convinced. A quick list of things I fixed today:

- early moments of rapid backscrolling triggered by the app, not by human input, being recorded as human input
- z-scores being calculated from the overall average instead of an average of averages, without taking weightings into account
- given a character index, finding the index of the word containing that character within the larger master string
- parsing truncated partial words by comparing their length to the length of the full master word
- general bug fixing

Writing it out like that, it doesn’t look as draining as it felt to fix, to be a little overly dramatic, but man, twas a slog of a day. No matter: it is done. The biggest thing is that everything can look perfect, and yet a single failing test case, caused by a minor edge case in some string indexing somewhere, means all those little errors have to be ferreted out. Moreover, in the spirit of good coding, I never explicitly special-cased an edge case; whenever I found one, I redid the architecture so that it would be caught naturally in the future. But it’s done, and it works.
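To make the weighting bug concrete, here’s a minimal sketch; none of these names come from the actual analyzer, and the data is invented. The point is just that, with unequal group sizes, an unweighted mean of per-group means is not the same number as the overall mean, so z-scores shift depending on which one you standardize against.

```python
from statistics import pstdev

def overall_mean(groups):
    """Mean over every sample, weighting each sample equally."""
    flat = [x for g in groups for x in g]
    return sum(flat) / len(flat)

def unweighted_mean_of_means(groups):
    """Mean of the per-group means, weighting each group equally."""
    return sum(sum(g) / len(g) for g in groups) / len(groups)

def z_scores(groups, mean_fn=overall_mean):
    """Standardize every sample against the chosen mean."""
    flat = [x for g in groups for x in g]
    mu = mean_fn(groups)
    sigma = pstdev(flat)
    return [(x - mu) / sigma for x in flat]

groups = [[1.0, 1.0, 1.0, 1.0], [5.0]]
# overall_mean(groups)             -> (4*1 + 5) / 5 = 1.8
# unweighted_mean_of_means(groups) -> (1 + 5) / 2   = 3.0
```

The two means only agree when every group has the same size, which is exactly why the discrepancy can hide for a long time before one lopsided group exposes it.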
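The character-index-to-word-index mapping is the kind of thing that sounds trivial and then eats an afternoon. A sketch of the idea (hypothetical function, not the analyzer’s real code): walk the master string, bump a word counter at each whitespace-to-word transition, and report the counter when you reach the target index.

```python
def word_index_at(master, char_index):
    """Return the index (as in master.split()) of the word containing
    the character at char_index, or None if it lands on whitespace."""
    if not (0 <= char_index < len(master)):
        raise IndexError("char_index out of range")
    if master[char_index].isspace():
        return None  # the index falls between words
    word = -1
    in_word = False
    for i, ch in enumerate(master):
        if ch.isspace():
            in_word = False
        elif not in_word:
            in_word = True
            word += 1  # crossed a boundary into a new word
        if i == char_index:
            return word
```

Because the counter increments on word boundaries rather than on spaces, runs of multiple spaces don’t throw the count off, and the result lines up with `master.split()` indexing.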
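And for the truncated partial words, one way to picture the length comparison (purely a guess at the shape of the check; the function name and the 0.5 threshold are invented for illustration): accept a fragment as a truncation of a master word only if it is a prefix of that word and covers enough of the word’s full length.

```python
def is_truncation_of(fragment, master_word, min_ratio=0.5):
    """Hypothetical check: treat fragment as a truncated copy of
    master_word if it is a prefix covering at least min_ratio of
    the full word's length."""
    return (master_word.startswith(fragment)
            and len(fragment) / len(master_word) >= min_ratio)
```

The length ratio is what keeps a two-letter scrap from matching half the vocabulary; tuning that knob is presumably where the pain lives.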
An aside: I started out the day spending 20 minutes trying to figure out why the analyzer, in its current state, would work for 98% of things but not for some seemingly random text frames. Turns out I had forgotten to tell it that ‘p’ was a letter. Yay.