As it was the first day of work, we mostly became acquainted with the relevant literature on our topic and then moved into some basic tutorials for the programming language we will be coding in. For the first few hours of the day we read through Prof. Medero's doctoral thesis, paying particular attention to the sections on characterizing word difficulty when audio of the reader can be recorded. Since our work will not involve subjects reading aloud, we will need to adapt her methods to a different modality, either via an implementation of a rudimentary eye tracker or via the gyroscopic capabilities of the iPad, though both seem like quite daunting tasks at the moment. I did, however, find an API for eye tracking on iOS, so maybe it will be of use to us, though it does cost money (http://www.visagetechnologies.com/new-release-of-visagesdk-includes-eye-tracking/).
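As a first sketch of the gyroscope idea: on the iPad the rotation-rate samples would come from CoreMotion's CMMotionManager (its gyro-update callbacks deliver rotation rates in radians per second), but the core of the approach is just integrating those samples over time into an angle estimate. The helper below is my own hypothetical scaffolding, not anything from Prof. Medero's work, and it assumes a fixed sampling interval:

```swift
// Integrate gyroscope rotation-rate samples (radians/second), taken at a
// fixed interval dt (seconds), into a rough cumulative rotation angle.
// On-device, the samples array would be fed by CoreMotion callbacks.
func integrateRotation(_ samples: [Double], dt: Double) -> Double {
    return samples.reduce(0) { $0 + $1 * dt }
}
```

For example, two consecutive samples of 1 rad/s spaced half a second apart integrate to roughly 1 radian of accumulated rotation. A real app would need drift correction on top of this, which is part of why the task looks daunting.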
I also read through another of Prof. Medero's papers on simplifying text by moving through a passage sentence by sentence, generating candidate changes via omission, splitting, or expansion, and then selecting the candidate with the lowest cosine distance from the original. I briefly explored a paper that had used a far more sophisticated eye-tracking setup than we have at our disposal. I also tried to intersperse my review of the relevant literature with occasional breaks to learn the Swift programming language, as it will be quite necessary in our work. So far I have coded a rudimentary, function-based tip calculator and have begun working through tutorials on developing OS X apps. My plan over the next week is to learn how to adapt that same tip calculator into a fully realized app, with buttons and text-input controls. I think if I were able to learn to do this fairly quickly, it would bode well for our ability to build a gyroscope-based app within a few short weeks.
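To make sure I understood the cosine-distance step in that simplification pipeline, I sketched it in plain Swift. The bag-of-words count vectors here are my own simplification for illustration; the paper's actual featurization may well be richer:

```swift
import Foundation

// Build a simple bag-of-words count vector over a shared vocabulary.
func counts(_ tokens: [String], vocab: [String]) -> [Double] {
    return vocab.map { word in Double(tokens.filter { $0 == word }.count) }
}

// Cosine distance = 1 - (a · b) / (|a| |b|); 0 means identical direction,
// 1 means orthogonal (no shared vocabulary, in this representation).
func cosineDistance(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).map(*).reduce(0, +)
    let normA = sqrt(a.map { $0 * $0 }.reduce(0, +))
    let normB = sqrt(b.map { $0 * $0 }.reduce(0, +))
    guard normA > 0, normB > 0 else { return 1.0 }
    return 1.0 - dot / (normA * normB)
}
```

Under this scheme, a candidate simplification that shares most of its words with the original sentence scores near 0, so "lowest cosine distance" picks the rewrite that stays closest to the original's content.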