We spent the morning polishing the presentation and rehearsing it for time. I think it went quite well, and we got a lot of questions afterward, so people seemed genuinely interested in our work. We took an extended lunch to chat with Prof. Medero, and after getting back I returned to my new data analysis tools.

Left over from yesterday: the new OO approach, which is far more readable and understandable, was taking up to 15x as long to complete as the massive-dictionary approach. I tried a couple of fixes; the most successful initially was a hybrid of the dictionary and OO approaches. When iterating through the words and needing to assign a value, I created a temporary dictionary, checked whether the word was among its keys, and went from there. By optimizing I was able to reduce the time per ID to about 1.2 seconds, according to Sublime Text, which is still about 0.4 seconds worse than the original approach. When implementing the multi-data analyzer, though, these times became pretty unbearable, initially taking 37 seconds to go through 22 IDs and run the additional analysis on them. Eventually, however, I figured out how to massively optimize this. Instead of an if-else check to see whether a word is in the dictionary, I use a try-except clause, where the try assumes the key is present and the handler is "except KeyError". Since in about 90% of cases the key is already there, Python never goes down that second route, and the loop takes far less time. The new time is 6.1 seconds to analyze all 22 texts, about 2.5x better than even the dictionary approach. Tomorrow, I plan to go through everything and comment it, then go back and comment the remaining functions in the app.
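The try/except pattern described above is Python's EAFP ("easier to ask forgiveness than permission") idiom. A minimal sketch of the two styles side by side, using a hypothetical word-counting loop as a stand-in for the actual analysis code:

```python
def count_words_eafp(words):
    """Tally word frequencies using the EAFP (try/except) idiom.

    When most words are already in the dict, the try branch succeeds
    and Python never pays for the exception machinery.
    """
    counts = {}
    for word in words:
        try:
            counts[word] += 1   # fast path: key already present
        except KeyError:
            counts[word] = 1    # rare path: first occurrence of the word
    return counts


def count_words_lbyl(words):
    """Equivalent logic with an explicit membership check (LBYL,
    "look before you leap") -- the style the try/except version replaces.
    """
    counts = {}
    for word in words:
        if word in counts:      # extra lookup on every single iteration
            counts[word] += 1
        else:
            counts[word] = 1
    return counts
```

Both functions return identical results; the EAFP version tends to win exactly when the except branch is rare, because a successful try costs almost nothing while every LBYL iteration pays for the `in` test.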