Today I presented everything I did over the summer. I felt that all of the presentations went well, and I am very grateful to RIT as well as my mentors for this amazing experience. Here is my presentation.
I started today debugging my code for my neural network. After a healthy amount of googling and some perseverance, I was able to roughly train and evaluate my model. The performance was subpar, but hopefully I will be able to improve that next week. When that was over, I went to the free pizza lunch provided for all of the REU summer research students, and I then attended a lecture on visual perception. Although this was not very related to my project, I still found it to be a valuable use of my time. It was mind-blowing to realize that our eyes can only resolve fine detail in a very small field of vision; our peripheral vision is a lot worse than I previously thought. One of the coolest visual demos was the two tables shown below. Believe it or not, the tabletops are exactly the same size and shape. Human visual perception is fascinating, and I plan to continue attending the lectures throughout the summer. Source: http://www.optical-illusionist.com/illusions/table-size-ill...
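The train-and-evaluate loop I was debugging can be sketched in miniature. This is not my actual project code, just a self-contained NumPy toy: a tiny one-hidden-layer network trained with gradient descent on synthetic two-class data, with all names and numbers chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: two well-separated Gaussian blobs in 2D.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# One hidden layer (ReLU) and a sigmoid output; sizes are arbitrary.
W1 = rng.normal(0, 0.1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.maximum(X @ W1 + b1, 0)           # ReLU hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))     # sigmoid class probability
    return h, p.ravel()

lr = 0.1
losses = []
for epoch in range(200):
    h, p = forward(X)
    # Binary cross-entropy loss (with a small epsilon for stability).
    losses.append(-np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)))
    # Backpropagation, then a plain gradient-descent step.
    dz2 = (p - y)[:, None] / len(X)
    dW2 = h.T @ dz2;  db2 = dz2.sum(0)
    dh = (dz2 @ W2.T) * (h > 0)
    dW1 = X.T @ dh;   db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# Evaluation: threshold the output probabilities at 0.5.
_, p = forward(X)
accuracy = np.mean((p > 0.5) == y)
print(f"final loss {losses[-1]:.3f}, accuracy {accuracy:.2f}")
```

On separable data like this the loss should drop steadily; most debugging time tends to go into getting the data shapes and the backward pass to line up, which matches my experience today.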
Most of today was spent working on my program. Despite some initial struggles with importing data and feeding it to the network, I was able to extract a feature vector by the end of the day. A feature vector is a list of values produced by the pre-trained neural network that can be used to classify images. Using these feature vectors, I was able to classify 10 images with 100% accuracy. Although that accuracy might have fallen with a larger data set, I am still extremely impressed by what a pre-trained neural network can do. Source: https://brilliant.org/wiki/feature-vector/
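The classification step above can be sketched as a nearest-neighbor lookup over feature vectors. The vectors below are synthetic stand-ins (512-dimensional, like the ones resnet-18 produces), and the class names are made up for illustration; only the nearest-neighbor idea is the point.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for feature vectors from a pre-trained network:
# two classes whose vectors sit in clearly separated regions.
dim, per_class = 512, 5
cats = rng.normal(0.0, 1.0, (per_class, dim)) + 3.0
dogs = rng.normal(0.0, 1.0, (per_class, dim)) - 3.0
train = np.vstack([cats, dogs])
labels = ["cat"] * per_class + ["dog"] * per_class

def classify(feature, train, labels):
    """Assign the label of the nearest training feature vector."""
    dists = np.linalg.norm(train - feature, axis=1)
    return labels[int(np.argmin(dists))]

# A new query vector drawn from the "cat" region.
query = rng.normal(0.0, 1.0, dim) + 3.0
pred = classify(query, train, labels)
print(pred)  # -> cat
```

With only 10 images and well-separated features, even this simple rule can reach 100% accuracy, which is why small test sets can be misleadingly easy.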
Today I began to experiment with using a pre-trained neural network in order to achieve higher accuracy in testing. I used resnet-18, a neural network that has been trained on a data set with 14 million images. This training allows resnet-18 to detect general features like edges or corners, and reusing those learned features, often called feature extraction, is a large advantage of a pre-trained network. Using resnet-18 also reduces training time, because the network already has a good idea of how to tell images apart. I plan to test my pre-trained neural net on the Caltech-101 data set later this week. Source: http://web.eecs.umich.edu/~honglak/cacm2011-researchHighlights-convDBN.pdf