Today I presented everything I did over the summer. I felt that all of the presentations went well, and I am very grateful to RIT and to my mentors for this amazing experience. Here is my presentation.
Most of today was spent working on my program. Despite some initial struggles with importing data and feeding it to the network, I was able to extract a feature vector by the end of the day. A feature vector is a list of values produced by the pre-trained neural network that can be used to classify images. Using these feature vectors I was able to classify 10 images with 100% accuracy. Although that accuracy might have fallen with a larger data set, I am still extremely impressed with what a pre-trained neural network can do. Source: https://brilliant.org/wiki/feature-vector/
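Once each image has a feature vector, classifying it can be as simple as comparing vectors. The sketch below is a hypothetical illustration in NumPy, not my actual program: the three-value vectors and labels are made up, and it uses nearest-neighbour matching as one simple way to classify from feature vectors.

```python
import numpy as np

# Made-up feature vectors; in practice these come from the pre-trained network.
train_features = np.array([
    [0.9, 0.1, 0.0],   # "cat" examples
    [0.8, 0.2, 0.1],
    [0.1, 0.9, 0.8],   # "dog" examples
    [0.0, 0.8, 0.9],
])
train_labels = ["cat", "cat", "dog", "dog"]

def classify(feature, features, labels):
    """Label a feature vector by its nearest neighbour (Euclidean distance)."""
    distances = np.linalg.norm(features - feature, axis=1)
    return labels[int(np.argmin(distances))]

print(classify(np.array([0.85, 0.15, 0.05]), train_features, train_labels))  # cat
```

Because similar images get similar feature vectors, even this naive comparison can classify a small set of images well.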
At the start of today I watched two of Stanford's lectures, one on feed-forward neural networks and the other on convolutional neural networks. In a feed-forward network there are no connections back to previous nodes; each layer's output simply provides input for the next layer, which is a great way to visualize the relationship between nodes. The other type of neural network is a recurrent neural network. What makes a recurrent network different is that a layer's outputs can be fed back as inputs to that layer or an earlier one at the next step. I have not gone into depth on recurrent networks, so my knowledge is limited. A convolutional network is not a separate category so much as a feed-forward network with convolutional layers, which transform the data by sliding filters over it. One operation that often accompanies convolution is pooling. Pooling takes a small window of values and replaces it with a single value, such as their average or their maximum, creating a smaller array. Pooling is useful because it reduces the size of your matrix while retaining similar information. This allows later calculations to be performed on far fewer values.
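Average pooling with a 2x2 window can be sketched in a few lines of NumPy; the 4x4 array here is made up just to show the shrinking effect:

```python
import numpy as np

def avg_pool2x2(x):
    """Average-pool a 2D array with a 2x2 window and stride 2."""
    h, w = x.shape
    # Group the array into 2x2 blocks, then average within each block.
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
print(avg_pool2x2(x))
# each 2x2 block becomes its average: [[2.5, 4.5], [10.5, 12.5]]
```

The 4x4 input shrinks to 2x2, a quarter of the values, while each output still summarizes its region of the input.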
Today I began to experiment with using a pre-trained neural network in order to get higher accuracy in testing. I used ResNet-18, a network that has been pre-trained on the ImageNet data set of roughly 14 million images. This training allows ResNet-18 to pick out low-level features like edges and corners; detecting these features is called feature extraction, and it is a large advantage of using a pre-trained neural network. Using ResNet-18 also reduces training time because the network already has a good idea of how to tell images apart. I plan to test my pre-trained neural net on the Caltech-101 data set later this week. Source: http://web.eecs.umich.edu/~honglak/cacm2011-researchHighlights-convDBN.pdf