After really loving the experience last time, some friends from TUM and I decided to take part in this year’s edition of Hackaburg (a student-organized hackathon in Regensburg) as well. This time, our team consisted of four friends from undergrad: Ute, Christoph, Robert and me. Ute won last year’s Hackaburg with kooci (a gesture-based voice assistant that helps even people as untalented as me cook), and Christoph and I won the BMW Innovation Award with our team aMUSEmeasure, working with brain waves. For Robert, it was actually his first hackathon – so we had the perfect mixture of tired old veterans and fresh wide-eyed minds 😉
This year, we once more arrived without a concrete idea in mind, so we started brainstorming over dinner. We decided pretty early on that it would be fun to do something with hardware again. After going through MLH’s list of hardware, we were intrigued by the Myo armband, which can recognize hand gestures. We borrowed two of them and started thinking about what we could do, settling on two candidate ideas: either using them to compose music, or porting everyone’s childhood favorite Rock–Paper–Scissors so you can play it remotely.
Our final product: using two Myo armbands, we recognize gestures from both hands and map them to music samples. You can compose your own tunes step by step: simply select the samples that you feel belong there. To make things even more interesting, you can also dynamically control the volume and adjust the speed of your creation by moving your arms up and down.
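To give a rough idea of how this maps onto code, here is a minimal sketch against the Myo C++ SDK (the pose names follow the SDK’s own examples; `playSample` and the pitch-to-volume mapping are placeholders rather than our actual implementation):

```cpp
// Sketch: poses select sample slots, arm elevation (pitch) controls volume.
#include <myo/myo.hpp>
#include <algorithm>
#include <cmath>
#include <iostream>

// Placeholder standing in for the real sound playback.
void playSample(int index, float volume) {
    std::cout << "sample " << index << " at volume " << volume << "\n";
}

class ComposerListener : public myo::DeviceListener {
public:
    // Each recognized pose triggers a different sample slot.
    void onPose(myo::Myo* /*myo*/, uint64_t /*timestamp*/, myo::Pose pose) override {
        if (pose == myo::Pose::fist)               playSample(0, volume_);
        else if (pose == myo::Pose::waveIn)        playSample(1, volume_);
        else if (pose == myo::Pose::waveOut)       playSample(2, volume_);
        else if (pose == myo::Pose::fingersSpread) playSample(3, volume_);
    }

    // Derive pitch from the orientation quaternion and map it to [0, 1].
    void onOrientationData(myo::Myo* /*myo*/, uint64_t /*timestamp*/,
                           const myo::Quaternion<float>& q) override {
        const float kPi = 3.14159265f;
        float pitch = std::asin(std::max(-1.0f, std::min(1.0f,
                          2.0f * (q.w() * q.y() - q.z() * q.x()))));
        volume_ = pitch / kPi + 0.5f;   // [-pi/2, pi/2] -> [0, 1]
    }

private:
    float volume_ = 0.5f;
};

int main() {
    myo::Hub hub("com.example.myo-composer");   // identifier is made up
    if (!hub.waitForMyo(10000)) {
        std::cerr << "Could not find a Myo armband." << std::endl;
        return 1;
    }
    ComposerListener listener;
    hub.addListener(&listener);
    while (true) hub.run(50);   // poll for events, 50 ms at a time
}
```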
Without having made a final decision between the two, we set out and started implementing the parts that would be shared by both ideas. We settled on building an application for macOS, mostly because we had two Macs and none of us had ever developed for macOS before. Looking back, this might have been the kind of decision that ends up costing you a whole lot of time and really testing your patience.
The armband’s API forced us to use C++, while we needed Objective-C to access the sound-playback APIs that macOS provides. In principle this sounds simple, since Xcode supports mixing the two. However, when we tried it, we were greeted by cryptic error messages telling us we could not assign a right-hand side to a variable of exactly the same type – supposedly there was no way to cast between the types. After a lot of trial and error, we found an article explaining how to correctly wrap our Objective-C code in C++ (using, among other things, the pointer-to-implementation (pImpl) idiom), and we finally got it to run.
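For anyone hitting the same wall, the pattern boils down to something like the following (a minimal sketch, not our actual code; NSSound stands in for whichever playback API you end up using, and memory-management details are glossed over): the C++ header only forward-declares an implementation struct, and only the Objective-C++ (.mm) file ever sees Cocoa types.

```cpp
// SoundPlayer.h — plain C++ header, safe to include from C++-only code.
#pragma once
#include <memory>
#include <string>

class SoundPlayer {
public:
    explicit SoundPlayer(const std::string& path);
    ~SoundPlayer();
    void play();

private:
    struct Impl;                  // defined only in the .mm file
    std::unique_ptr<Impl> impl_;  // pointer to implementation (pImpl)
};
```

```cpp
// SoundPlayer.mm — Objective-C++ translation unit; only this file sees Cocoa.
#import <AppKit/AppKit.h>
#include "SoundPlayer.h"

struct SoundPlayer::Impl {
    NSSound* sound;   // the Objective-C object lives behind the pImpl wall
};

SoundPlayer::SoundPlayer(const std::string& path) : impl_(new Impl) {
    NSString* nsPath = [NSString stringWithUTF8String:path.c_str()];
    impl_->sound = [[NSSound alloc] initWithContentsOfFile:nsPath byReference:YES];
}

// Destructor defined here, where Impl is a complete type.
SoundPlayer::~SoundPlayer() = default;

void SoundPlayer::play() {
    [impl_->sound play];
}
```

The point is that the header compiles as pure C++, so the Myo-facing code never has to know that Objective-C is involved.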
As we were rapidly approaching the last 16 hours of the contest, we had a working command-line application that even managed to make some good noises. Nevertheless, we felt our project would really benefit from a GUI showing users what the software is doing at any given moment. This was where our decision to target macOS came back to bite us (not just once but twice).
Our first issue: once we ported our code into a GUI project and launched it in a separate thread, it could no longer connect to the armbands. What we did not know at the time was that app sandboxing caused this problem – it applies to Cocoa applications, but not to command-line tools. After failing to make progress for a while, we decided to work around it with two separate executables: one to process gestures, generate sounds and emit text, and another to parse that text and display it in a nicer way. While building this, I tried mocking the gesture-and-sound application by simply reading continuous output from top. The error message we got was about not having sufficient privileges, so after wasting a mere six hours we were back to our initial setup with just one executable.
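For the curious, the mock looked conceptually like this (a hypothetical sketch of the abandoned two-process idea: the GUI side reads line-based text from a helper process, with top standing in for the gesture-and-sound executable):

```cpp
// Sketch of the abandoned two-process workaround: read text lines from a
// helper process. This is roughly what hit the sandbox's privileges error.
#include <cstdio>
#include <iostream>

int main() {
    // popen spawns the helper and exposes its stdout as a stream.
    // "-l 0" should put macOS's top into continuous logging mode.
    FILE* pipe = popen("top -l 0", "r");
    if (!pipe) {
        std::cerr << "Failed to start helper process." << std::endl;
        return 1;
    }
    char buffer[512];
    while (fgets(buffer, sizeof(buffer), pipe) != nullptr) {
        // In the real design, each line would have encoded a gesture or
        // sample event for the GUI to parse and display.
        std::cout << "received: " << buffer;
    }
    pclose(pipe);
}
```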
Now for the second problem: none of the people still awake at that point had any idea how Xcode’s drag-and-drop tools for building interfaces work (they are not as straightforward as they may sound). So everyone slowly lost faith and went to bed, accepting that there would be no GUI. Everyone except Robert, that is, who apparently had a lot of midnight (or 3am 😉 ) oil to burn. When I came back the following morning, he had pretty much solved every issue we had encountered and presented us with a functioning GUI that just needed to be hooked up to the rest of the application – which we ended up doing in less than an hour.
After that, we switched our samples one last time to make them harmonize better. Then we demoed our application to two judges. (That was one of the changes compared to last year: there were two rounds of judging, the first being more technical and giving us more time, and the second involving a pitch in front of everyone on the big stage.) While we were pretty happy with our results, the judges seemed considerably less so and did not even want to try the experience themselves. So we did not have high hopes of making it to the next stage – which made us all the more excited when we found out that we had. And thanks to Ute’s kickass pitch and Christoph’s musical talents while demoing the experience (and of course also thanks to Robert and me doing what we do best – looking competent), we managed to convince the big jury once more and ended up taking home not just a lack of sleep and a lot of good memories but also some medals 🙂
In case you want to find out more, check out our Devpost submission or head straight to GitHub for the code. If you speak German, be sure to give this article in the local paper a read too.