This week I competed in my fourth Hackathon: PennApps! PennApps is an annual hackathon held at the University of Pennsylvania. It was the first ever college hackathon, and one of the best! This year it was online, but it was still a lot of fun!
My partner and I developed an artificial intelligence system that detects from a picture whether people are wearing masks. The idea is that you hook the app up to a webcam outside an establishment, like a restaurant, and it detects whether the person outside is safe to let in. After 36 hours (since it was online, the organizers extended it past the standard 24), we had something that kind of worked.
My partner's job was to develop a Jupyter Notebook program that detected whether or not the user was wearing a mask. He used a VGG19 model and got it working to a very high degree of accuracy! I then downloaded that notebook from Google Colab as a Python file and ran it locally. It produced the model, along with a graph displaying its accuracy.
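If you're curious what a VGG19 transfer-learning setup looks like, here's a rough sketch in Keras. This is illustrative, not our exact notebook: the frozen base, head layer sizes, and two-class output are my assumptions.

```python
# Sketch of a VGG19 transfer-learning classifier in Keras.
# (Illustrative: the head sizes and two-class output are assumptions,
# not a copy of my partner's actual notebook.)
from tensorflow.keras.applications import VGG19
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

def build_mask_model(input_shape=(224, 224, 3), num_classes=2, weights="imagenet"):
    # Load VGG19 without its ImageNet classification head.
    base = VGG19(weights=weights, include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze the pretrained convolutional features

    # Add a small classification head for mask / no-mask.
    x = Flatten()(base.output)
    x = Dense(256, activation="relu")(x)
    out = Dense(num_classes, activation="softmax")(x)

    model = Model(inputs=base.input, outputs=out)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

From there you'd call `model.fit()` on labeled mask/no-mask images, which is the part my partner handled in Colab.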
I then created a Tkinter GUI and programmed it to take a picture through the webcam every 2 seconds. That was fairly easy. The hard part was figuring out how to use the model generated by the Python script I got from the Jupyter file! You see, there are many online resources about how to make a model, but very few about how to use one! Thankfully, there were amazing mentors at the event, and one of them, Alexander Markley, helped us out. Basically, we used the Keras library to transform the image into an array, ran it through the model, and converted the output into probabilities using softmax. The result was fed into the GUI, which would turn red if the person was not wearing a mask!
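To make the probability step concrete: once the model spits out raw scores, softmax turns them into probabilities that sum to 1, and you pick the biggest one. Here's a minimal sketch of that part in plain NumPy (the label names and example numbers are just illustrative):

```python
import numpy as np

def softmax(logits):
    # Subtract the max before exponentiating, for numerical stability.
    z = np.asarray(logits, dtype=float)
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def classify(logits, labels=("mask", "no_mask")):
    # Turn raw model scores into probabilities and pick the most likely label.
    probs = softmax(logits)
    i = int(np.argmax(probs))
    return labels[i], float(probs[i])

# Example: scores favoring the first class.
label, confidence = classify([2.0, 0.5])
```

In the app, the GUI then just checked which label came back and flipped its background color accordingly.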
Unfortunately, it didn't work very well. It worked sometimes, but not all that often. Something was off in either my code or my partner's, and I suspect it was on my end, since my partner's accuracy plot looked good. I'm planning on tracking down the issue 'at some point.' The code is here if anybody is interested.
One of the requirements of the hackathon was to make a video of yourself presenting your work, which makes sense: it's a virtual event, and videos are easy to judge. You also have to present live, which I suspect is to prevent straight-up lying through video editing, but much of the judging is based on the video. So from 3am to 4am, after getting only five hours of sleep the previous night, I made a video. It wasn't good in the first place, and it got even worse when I realized, after I had already made it, that it could only be five minutes long! So I sped up some of the clips, resulting in our voices coming out VERY high pitched. We sound kind of like hamsters. It's pretty hilarious, but hey, it was VERY late and I was quite tired.
It was four in the morning my time, seven in the morning on the East Coast, when I finished my work. So I went right to sleep and let my partner, who was on Korea time (where it was 8pm), present. Unfortunately, something went wrong. He said that the PennApps people weren't at the Hopin location (Hopin being the Zoom-like site the judges used), and they said that we weren't there. I'm not sure who was correct, but our work did not end up getting judged!
That was certainly a bit frustrating, because I really thought we had a chance at the top ten. It worked some of the time, and while that sounds really bad, that's pretty much the bar at most hackathons, even an awesome one like PennApps. Even though that didn't end up happening, I still got a lot out of the event! I learned a lot about computer vision, and I got to meet somebody who knows a professor at Penn who works in machine learning! He introduced me to the professor by email, and hopefully I'll hear back eventually.
Hey, could y'all do me a favor and re-subscribe? There were 14 subscribers when I was on WordPress, but none of you have re-subscribed to the new blog! It's very few people right now (literally just my parents).