7/23/15 – Adam Shaw

This morning I did nothing but write comments for the iPad app and the last of the data analysis functions. Everything is done, though, so I can theoretically add it to GitHub now. However, just after finishing I realized we still need data for our results section, so I tried to figure out a good way to display it. I think comparing the easy vs. hard pairs for the three text types would be best, as it gives us 3 distinct checks. I worked on the data analysis section of the paper for a little bit, and I'll edit it tomorrow. With only a day left, I made these goals for myself for tomorrow: 1. put everything on GitHub, 2. finish my parts of the paper, 3. look over the poster and edit, 4. get relevant metrics for the poster/paper, 5. make a better icon for the app, as that is very, very important. Tomorrow should be pretty stressful, but it's the last one!

7/22/15 – Adam Shaw

I started the morning by finishing up the graphing features of the multi data analyzer so that it had graphing-by-range functionality. I then went back through and adjusted how we took the averages and combined data, so that it would be easy to take medians instead. I looked up what the median equivalent of standard deviation was and found that it is the median absolute deviation (or MAD). I implemented this metric, and then made some graphs of medians with MAD error bars instead of averages with standard deviation error bars. It still wasn't perfect, but it looked better, and one plot in particular looked excellent. I then started commenting through EVERYTHING so that any future researcher will know how the data tools work. Luckily, because I had already completely switched to an OO approach, the commenting was not that bad, as I didn't have to explain any magic numbers. Vidushi then asked me to work on making the graphs more readable for the poster, so I did that for 30 minutes and gave her our two best graphs, which looked pretty good and bold. The poster is looking excellent, as a side note. With just two days left, my remaining jobs are: 1. Comment the Swift code, which I never commented, 2. Publish my Python code and Swift code to our repository for posterity's sake, 3. Write the data analysis sections of the poster and paper.
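For reference, the median/MAD computation can be sketched roughly like this (a minimal illustration with numpy; the function name and sample data are mine, not the project's actual code):

```python
import numpy as np

def mad(values):
    """Median absolute deviation: the median of each value's
    absolute distance from the overall median. Plays the same
    role for the median that standard deviation plays for the mean,
    but is far less sensitive to outliers."""
    med = np.median(values)
    return np.median(np.abs(values - med))

# One slow outlier barely moves the median or the MAD,
# whereas it would inflate the mean and standard deviation.
cps = np.array([1.0, 1.2, 1.1, 0.9, 5.0])
center, spread = np.median(cps), mad(cps)
```

Plotting `center` with `spread` as the error bar gives the median-with-MAD version of the average-with-standard-deviation plots.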

7/21/15 – Adam Shaw

We spent the morning working on the presentation a little more, making sure it was solid and rehearsing it for time. I think the presentation went quite well, and we got a lot of questions after, so I think people were genuinely interested in our work. We took an extended lunch to chat with Prof. Medero, and upon getting back I returned to my new data analysis tools. Left over from yesterday: the new OO approach, which is far more readable and understandable, was taking up to 15x as long to complete as the massive dictionary approach. I tried a couple of approaches to fix this; the most successful initially was a combination of dictionary and OO. When iterating through the words and needing to assign a value, I just created a temporary dictionary, checked if the word was in its keys, and went from there. By optimizing I was able to reduce the time per ID to about 1.2 seconds, according to Sublime Text, which is about 0.4 seconds worse than the original approach. When implementing the multi data analyzer, though, these times became pretty unbearable, initially taking 37 seconds to go through 22 IDs and do additional analysis on them. However, I eventually figured out how to massively optimize this. Basically, instead of doing an if-else check for whether a word is in the dictionary, I just do a try-except clause, where the try assumes the key is present and the except is "except KeyError". This means that in 90% of cases Python doesn't go down that second route, and it takes far less time. In fact, the new time is 6.1 seconds to analyze all 22 texts, about 2.5x better than even the dictionary approach. Tomorrow, I plan to go through everything and comment, and then go back and comment some remaining functions in the app.
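The try-except trick described above looks roughly like this (the word-counting body is a simplified stand-in for the real analysis logic):

```python
def count_words(words):
    """Count occurrences of each word using the EAFP pattern:
    assume the key already exists and only handle the miss."""
    counts = {}
    for w in words:
        # Old approach: `if w in counts: ... else: ...` pays for a
        # membership test on every single word. The try path below is
        # essentially free when the key is present, which is the
        # common case once most words have been seen once.
        try:
            counts[w] += 1
        except KeyError:
            counts[w] = 1
    return counts

count_words(["the", "cat", "the", "the"])  # {'the': 3, 'cat': 1}
```

The speedup only holds when misses are rare; raising and catching an exception is slower than an `if` when most lookups fail.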

7/20/15 – Adam Shaw

And the last week begins! Vidushi and I came in a little late this morning because we were helping with the Mudd Discovery Day, but after arriving we first started working on our presentation for tomorrow. Vidushi had already worked on it previously and done a great job, so there wasn't really a whole lot to change; we just needed to update some data analysis slides and split up the speaking parts. We went into the conference room and read it out loud, and it was 12 minutes without the results section, which we have now added. That seems to be perfect timing, so I think everything should go smoothly tomorrow.

I then got to work on simplifying the data analysis process, as I am worried that it is too structure dependent instead of OO dependent. I made classes for words and textSources and converted everything from massive dictionaries to arrays of custom classes with custom variables. I translated the whole singleDataAnalyzer over to this new infrastructure, but upon testing a few minutes ago it gets massively slowed down in one particular method, where I convert partial words to my new word class, so I'm going to try to figure out why that is and fix it tomorrow, and then convert the multiDataAnalyzer to the new system. Then I will try out some new graphs, as well as graphing medians instead of just means.
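The shape of that refactor might look something like this (class and attribute names here are my guesses for illustration, not the project's actual code):

```python
class Word:
    """One word of a text plus its typing-speed measurement,
    replacing an entry in a nested dictionary."""
    def __init__(self, text, cps):
        self.text = text
        self.cps = cps  # characters per second while typing this word

class TextSource:
    """A source text that holds its words as a list of Word
    objects instead of keys in a massive dictionary."""
    def __init__(self, name, words):
        self.name = name
        self.words = words  # list of Word objects

    def mean_cps(self):
        # Aggregate stats become short, readable methods instead of
        # nested-dictionary traversals with magic keys.
        return sum(w.cps for w in self.words) / len(self.words)

src = TextSource("easy_A", [Word("the", 5.0), Word("cat", 4.0)])
src.mean_cps()  # 4.5
```

The readability win comes with an object-creation cost on every word, which is consistent with the slowdown noted above in the method that converts partial words to the new class.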

7/17/15 – Adam Shaw

I kept working on data analysis today. Vidushi needed a set of processed data so she could run normalization tests on it, so I gave that to her. I then started working on displaying the 12 graphs, 4 for each text type, with 3 subplots per graph. It took a little while, because matplotlib is a very weird mix of object-oriented and just functional, but once I figured it out it made sense. The results were not gratifying, though, because they showed what was essentially a random data set with regard to whether the A or B version was harder. I tried to limit the number of IDs used in the data by only allowing those with an average CPS greater than the mean variation across all texts. This limited our data pool to 10 and didn't really reveal much more interesting data. Vidushi said that all the data sets were theoretically normally distributed, which doesn't help a whole lot. The last thing I did was tweak my function to graph a word's CPS along with the CPSs of the preceding 5 words and the succeeding 5 words, to get a sense for trends. On Monday we will decide which graphs to use and prepare the presentation more.
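A graph with three subplots, done through matplotlib's object-oriented interface, could be laid out roughly like this (the data, titles, and filename are placeholders, not the project's real figures):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# One figure per text, three stacked subplots sharing an x-axis.
fig, axes = plt.subplots(3, 1, figsize=(6, 9), sharex=True)
for ax, title in zip(axes, ["Version A", "Version B", "Difference"]):
    ax.plot(range(10), [i % 3 for i in range(10)])  # placeholder data
    ax.set_title(title)
    ax.set_ylabel("CPS")
axes[-1].set_xlabel("word index")
fig.tight_layout()
fig.savefig("text_comparison.png")
```

Sticking to the explicit `fig`/`ax` objects (rather than the stateful `plt.plot(...)` calls) is what resolves the OO-vs-functional confusion: every subplot is addressed directly.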

7/16/15 – Adam Shaw

Today started with more user studies and ended with more data analysis, just like all the other days. We got 5 more users today, bringing our total up to 22. I then spent a good amount of time making the comparison tools so we could compare excerpts of the texts at specific indices. After working on this for a while, making it print nicely and such, I realized there was a bug in my SingleDataAnalysis, but it was a simple fix, if not a precise one. Basically, when I was incrementing tempBegIndex in the partial-word finder, I was only incrementing by the length of the partial word, not accounting for any non-word characters between successive words. To fix it, I just added 1 to the increment, which should work 99% of the time, failing only when more than 1 non-word character is interspersed between words. Once I got back to the data, I noticed that qualitatively our results didn't look super good, because the standard deviations of the z-scores were very high. Prof. Medero offered a good piece of advice: only use the data from participants with a variation in CPS above a certain threshold, so tomorrow I will work on that.
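The off-by-one fix amounts to something like the sketch below (the function is a simplified stand-in for the real partial-word finder; only the increment matches the entry):

```python
def word_start_indices(words):
    """Compute each word's start index in the reconstructed text,
    assuming exactly one separator character between words."""
    indices = []
    temp_beg_index = 0
    for word in words:
        indices.append(temp_beg_index)
        # Buggy version advanced by len(word) only, so every
        # subsequent index drifted one position further left.
        # The +1 skips the single separator; two or more non-word
        # characters in a row would still throw the index off.
        temp_beg_index += len(word) + 1
    return indices

word_start_indices(["the", "cat", "sat"])  # [0, 4, 8]
```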

7/15/15 – Adam Shaw

We started the day with more user testing, but unfortunately two of our participants dropped out. We only got 7 more people today, so we have far fewer people than we did last study (17 vs. 27). We are going to try to get more participants over the next few days, but we don't know how well that will go. We got a few people from the open house, but overall it was all CS people, so we will just have to try to recruit harder. After the user studies I worked on the data analysis tools, which now let us compare at an index of a specific text, meaning we can do direct word-to-word comparisons. This required me to spend an hour and a half sorting a dictionary (which contained a nested dictionary), but it eventually all worked, and early appraisals look positive (a qualitatively significant difference between trouble words in the A and B versions of texts). Tomorrow I will work on developing a quantitative version of this difference. I will first work on making a new graphing function while Vidushi looks up what statistical test to use, and once she finds one that she likes I will implement it, and hopefully we can see some real significance.
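Sorting a dictionary by a value buried in a nested dictionary can be done in a couple of lines with a `key` function (the data shape here is hypothetical, just to illustrate lining words up by index for word-to-word comparison):

```python
# Hypothetical shape: word -> {"index": position in text, "cps": speed}
word_stats = {
    "cat": {"index": 2, "cps": 4.1},
    "the": {"index": 0, "cps": 5.0},
    "sat": {"index": 4, "cps": 3.2},
}

# Sort the (word, stats) pairs by the nested "index" value so the
# A and B versions of a text can be walked position by position.
by_position = sorted(word_stats.items(), key=lambda kv: kv[1]["index"])
[w for w, _ in by_position]  # ['the', 'cat', 'sat']
```

Since dictionaries themselves are unordered mappings (in Python 2, at least), the sorted output is a list of pairs rather than a dictionary.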