Today I presented everything I did over the summer. I felt that all of the presentations went great, and I am very grateful to RIT and to my mentors for this amazing experience. Here is my presentation.
I started today by debugging the code for my neural network. After a healthy amount of googling and some perseverance, I was able to roughly train and evaluate my model. The performance was subpar, but hopefully I will be able to improve it next week. After that, I went to the free pizza lunch provided for all of the REU summer research students and then attended a lecture on visual perception. Although it was not very related to my project, I still found it a valuable use of my time. It was mind-blowing to realize that our eyes can only resolve fine detail in a very small part of our field of vision; our peripheral vision is far worse than I previously thought. One of the coolest visual demos was the pair of tables shown below. Believe it or not, the tabletops are exactly the same size and shape. Human visual perception is fascinating, and I plan to continue attending the lectures throughout the summer. Source: http://www.optical-illusionist.com/illusions/table-size-ill...
While I was waiting for my experiments to run on the server, I spent the day presenting in front of various members of kLab. I got some very good advice from Dr. Kanan in the morning, which I used to update my presentation. Later in the day, with the help of Tyler, Dr. Kemker, and Kushal, I arrived at a presentation I was happy with. I plan to update it with the results from the experiments I ran.
Today I watched a Stanford lecture on loss functions and cost optimization in machine learning. Loss functions are a crucial part of training neural networks. I first learned the basics of feedforward neural networks from a helpful web series: ( https://www.youtube.com/watch?v=ZzWaow1Rvho&list=PLxt59R_fWVzT9bDxA76AHm3ig0Gg9S3So ) . After watching the YouTube series, I switched to the Stanford lectures to learn more about loss functions. A loss function measures how wrong your model's predictions are: by comparing the predicted result to the actual result, you can tell where your model went wrong and then update its weights to improve on the next iteration. I learned about hinge loss and squared loss, which are two different ways to calculate that error. With this background I began to explore the softmax function. I don't fully understand the math behind it yet, and I will continue to look into it tomorrow. The image below i...
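To make the two losses from the lecture concrete, here is a minimal NumPy sketch of multiclass hinge loss and softmax cross-entropy for a single example. The class scores and the choice of correct class are made up purely for illustration; this is just my rough understanding so far, not the lecture's exact code.

```python
import numpy as np

def hinge_loss(scores, correct_class, margin=1.0):
    """Multiclass SVM (hinge) loss for one example.

    scores: 1-D array of raw class scores from the classifier.
    correct_class: index of the true label.
    """
    # Each wrong class is penalized if it scores within `margin` of the true class.
    margins = np.maximum(0.0, scores - scores[correct_class] + margin)
    margins[correct_class] = 0.0  # the true class contributes no loss
    return margins.sum()

def softmax_cross_entropy(scores, correct_class):
    """Softmax turns raw scores into probabilities; the loss is the
    negative log probability assigned to the true class."""
    shifted = scores - scores.max()  # subtract the max for numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum()
    return -np.log(probs[correct_class])

# Hypothetical scores for a 3-class example where class 0 is correct.
scores = np.array([3.2, 5.1, -1.7])
print(hinge_loss(scores, 0))             # 2.9: class 1 outscores the true class
print(softmax_cross_entropy(scores, 0))  # ~2.04: low probability on class 0
```

Both losses go to zero (or near it) as the true class pulls far ahead of the others, which is exactly the behavior the optimizer exploits when updating the weights.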