Sunday, November 26, 2017

The Happy winners #nlihack 2017

About the event 

The happy winners!
Amir Aharoni, Oren Bochman and Chaim Cohen
Last week (November 23-24, 2017) I had the pleasure of participating in the National Library of Israel's first hackathon. I've been to the NLI a few times with friends from the Wikimedia movement to instruct its staff and students on editing Wikipedia. But at the hackathon the NLI opened its doors to the best and brightest minds to help with tagging and disseminating content from its extensive image database.

The Team

You can't win a hackathon without a great team. My team consisted of seven developers who have been part of the core community of Wikimedia developers in Israel and have been meeting irregularly since the international Wikimedia hackathon organized by Wikimedia Israel in Israel last year. We had met about a week before the event at the local chapter's offices and discussed over pizza what we wanted to do and what the NLI had asked us to do. I realized that the most requested task required a longer-term commitment, and possibly discussion with NLI staff on a suitable upgrade of the Commons mass-upload solutions.

P.S. I'll be adding a note on the team members ASAP.

Oren Bochman presenting the deep learning model

The Ideas 


Another suggestion had been to learn categories from the text descriptions in the image metadata. I had a related yet more ambitious idea: to retrain some state-of-the-art models such as Inception and FaceNet (also based on Inception) to learn the contents of the images in the database. I thought this was viable since it was a deep-learning task using TensorFlow and its higher-level interface Keras, with which I, as well as other team members, had considerable previous experience. The only caveat was that training would require considerably more computing power than my laptop could provide.
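The retraining idea is the standard transfer-learning recipe: keep the pre-trained Inception feature extractor, drop its original 1000-class head, and train a fresh head on the library's own categories. A minimal sketch of that setup with the Keras API (the category count and function name are hypothetical, not from the project):

```python
import tensorflow as tf

NUM_CATEGORIES = 20  # hypothetical number of NLI image categories


def build_classifier(num_classes, weights="imagenet", trainable_base=False):
    """Put a fresh classification head on a pre-trained InceptionV3 base."""
    base = tf.keras.applications.InceptionV3(
        weights=weights,        # start from ImageNet features
        include_top=False,      # drop the original 1000-class head
        pooling="avg",
        input_shape=(299, 299, 3),
    )
    base.trainable = trainable_base  # freeze the base for the first training pass
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model


model = build_classifier(NUM_CATEGORIES)
```

Freezing the base means only the small new head is trained at first, which is exactly what makes this feasible on modest hardware once features are extracted.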

The second idea, due to the esteemed Amir Aharoni, was to split up manuscripts using a heuristic and then use a chat bot to crowd-source their transcription into text. Half the team took on this challenge, which complemented my idea, since any ML task benefits from crowd-sourcing a suitable data set!

The Story

The incredible Chaim Cohen presenting the overall concept
The actual hackathon started out as a bust: we had practically no internet access for a number of hours. After about six hours and a number of complaints we were relocated, but there were additional setbacks. The scripts I had located for downloading categories from Wikimedia Commons proved incompatible with the more recent version of the Pywikibot framework and needed to be rewritten from scratch. Interfacing with the NLI database's search and high-quality images also proved trickier than we had initially expected. Last but not least, we had no idea which categories of images to train on. It turned out that pre-trained Inception was no good with people, and FaceNet needs alignment using a face detector such as OpenCV's or its own model. Finally, chatting with the other team members showed they had stalled too and could not help with our challenges.

At this point I was ready to give up and go to sleep, but Chaim was inspired, and after talking to a librarian on the night shift we decided to give everything a second chance, both the data collection and the models. I rebuilt TensorFlow from source, retrained the models, and we started seeing much better outcomes as we got more images to work with.

The Presentation

You can have a perfect project, but the presentation is what wins the hackathon. I took a nap, and when I got up there were three hours left on the clock, so we got to work on the presentation, where again Mr. Cohen played a vital role. He told the story, created a logo, and infused the whole crowd with his enthusiasm for our idea. But to my surprise, Chaim insisted that two more team members stand on the stage and present, and we did!

Our presentation
