
Showing posts from August 5, 2018

Day 25: Running my lifelong VQA tests

I spent this morning working on my presentation, trying to make it more polished and explain computer vision in a concise way. After that I began getting my lifelong learning programs ready to run on the server. When I tried to run the programs on the iMac I am using, it crashed, so I needed something with more memory. After working with Tyler to convert my NumPy code to PyTorch, it was ready to run on the server. PyTorch is a library that runs large array calculations much quicker than NumPy. I hope to extract data from my trials in order to evaluate their effectiveness.
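A minimal sketch of what that kind of NumPy-to-PyTorch conversion can look like; the arrays and sizes here are made up for illustration, not the actual project code. The core idea is that `torch.from_numpy` wraps an existing array and PyTorch ops like `torch.cdist` replace hand-rolled NumPy broadcasting.

```python
import numpy as np
import torch

# Hypothetical data: 100 stored feature vectors and 10 queries.
features = np.random.rand(100, 64).astype(np.float32)
queries = np.random.rand(10, 64).astype(np.float32)

# NumPy version: broadcast to get a (10, 100) Euclidean distance matrix.
np_dists = np.linalg.norm(queries[:, None, :] - features[None, :, :], axis=2)

# PyTorch version: torch.from_numpy shares memory with the NumPy array,
# and torch.cdist computes the same pairwise distances.
t_features = torch.from_numpy(features)
t_queries = torch.from_numpy(queries)
torch_dists = torch.cdist(t_queries, t_features)

# The two results agree up to floating-point error.
print(np.allclose(np_dists, torch_dists.numpy(), atol=1e-4))
```

On a machine with a GPU, the tensors can additionally be moved with `.to("cuda")`, which is where most of the speedup over NumPy comes from.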

Day 24: Working on my final presentation

I spent the majority of my day today working on my final presentation. I addressed some problems with my outline, mainly the ways I was overcomplicating things. Some of the topics of my presentation are difficult to explain but not as difficult to use. One example of this is neural networks: I was trying to go too in depth on how a neural network functions, when all I really had to describe was their general use. I also added lots of visuals to my presentation in order to make it more interesting. Finally, I made some minor changes to my code in an attempt to optimize it, but its overall function stayed the same.

Day 23: Testing the VQA dataset on the nearest neighbor classifier

By arranging the data from the VQA dataset in a way that distances could be computed, I was able to use it with my nearest neighbor classifier. In order to use the VQA data I had to convert the questions and answers from sentences into vectors. I did this by assigning a specific index to each unique word and then placing the sequence of indices into a vector. As a result I was able to calculate the distance between the vectors and run them through a nearest neighbor classifier. Tomorrow I plan to test the varying accuracy of the data on different lifelong learning algorithms.
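The word-to-index encoding described above can be sketched roughly like this; the vocabulary-building step, padding value, and fixed length are assumptions for illustration, not the actual VQA preprocessing.

```python
def build_vocab(sentences):
    """Assign a unique index to each unique word, in order of first appearance."""
    vocab = {}
    for sentence in sentences:
        for word in sentence.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab)
    return vocab

def encode(sentence, vocab, length):
    """Turn a sentence into a fixed-length vector of word indices,
    padded with -1 so that distances between vectors are well defined."""
    indices = [vocab[w] for w in sentence.lower().split() if w in vocab]
    return (indices + [-1] * length)[:length]

questions = ["what color is the cat", "is the cat sleeping"]
vocab = build_vocab(questions)
vec = encode("what is the cat", vocab, length=6)
print(vec)  # [0, 2, 3, 4, -1, -1]
```

Once every question is a fixed-length vector of indices, a plain Euclidean (or Hamming-style) distance between two vectors is computable, which is all the nearest neighbor classifier needs.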

Day 22: Implementing new lifelong learning models

Today I implemented a reservoir sampling algorithm as well as a queue nearest neighbor program. The reservoir algorithm functions similarly to the incremental nearest neighbor program I implemented on Friday; the only difference is that it holds on to only a certain number of data points per class, and when a new data point is added to a full buffer it replaces an existing point at random. This is an effective way to update your prediction function, and it uses less memory than the standard nearest neighbor approach. The other program I implemented was the queue nearest neighbor program, which is similar to the reservoir program. What the queue does differently is remove the oldest data point for a specific class when a new one is added. This is also an effective way to reduce memory usage. Tomorrow I plan to combine the VQA dataset and these classifiers in order to test the effectiveness of lifelong learning on VQA.