
An ex-EyeTap hacked!

Originally shared by Chris Saari

My DIY Glass (ex-EyeTap) is up and running on Android, now with voice command enabled (in addition to facial detection, Googling, etc.). I haven't rebuilt recording yet, so a demo video will have to wait (or be shot through the Microoptical SV-6 display). http://blog.chrissaari.com/2013/03/03/diy-glass-lives-on-android/ #glassexplorers

Comments

  1. Hey Chris- what do you mean by "facial detection"? Do you mean full-blown matching of images, as in biometrics? Where is the data stored? On board? And at this stage only for personal convenience? How long do you think it will take (best guess) to go mainstream? E.g. police using this technology in real time to check for suspects/criminals in crowds, like ANPR is used for vehicles today?

  2. I'm running always-on facial detection on the video stream on board the Android CPU. When it finds a face, it sends the images up to the server for that particular face ID (blob), if I have that function turned on. The server then attempts facial recognition based on the image(s) and geolocation. (A rough sketch of this flow is at the end of this comment.)

    The hard part isn't getting this working; the hard part is getting enough contextual information to keep the set of faces you're trying to match small enough that the accuracy remains high. It is very hard to identify random people on the street accurately unless you already know a small set of people you're looking for and you're fairly certain they'll be in the set of people seen. It's great for remembering the names of people you've already met, or people you would expect to meet in a specific context. Some interesting work has been done on identifying random people on the street in a college setting, with the facial training information scraped off Facebook; see the Privacy in the Age of Augmented Reality slides: http://www.heinz.cmu.edu/~acquisti/face-recognition-study-FAQ/

    For the police situation it may work if they have a small set of wanted people as the training set, using it to filter the video streams and flag specific ones for a human to then take a look at. The false positive rate is too high for it to be totally automated. But there are better, more robust ways of finding specific people, like cell phone tracking (cellular, Bluetooth and wifi), internet use tracking, or gait analysis.
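
    To make the pipeline described above concrete, here is a minimal sketch, not the author's actual Android code: it assumes an OpenCV Haar cascade for on-device detection, a hypothetical UPLOAD_URL endpoint, and precomputed face embeddings on the server. The small, context-restricted gallery and the distance threshold (with uncertain matches deferred to a human) are the parts that keep accuracy usable.

    ```python
    # Illustrative sketch of the on-device detection + server-side recognition
    # flow; UPLOAD_URL, the embeddings, and the threshold are all assumptions.
    import cv2
    import numpy as np
    import requests

    UPLOAD_URL = "https://example.com/face-upload"  # hypothetical endpoint

    # --- Client side: always-on detection on the video stream ---
    def detect_and_upload(geolocation, camera_index=0):
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        cap = cv2.VideoCapture(camera_index)
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
                crop = frame[y:y + h, x:x + w]
                _, jpeg = cv2.imencode(".jpg", crop)
                # Ship only the face crop plus context, not the whole frame.
                requests.post(UPLOAD_URL,
                              files={"face": jpeg.tobytes()},
                              data={"lat": geolocation[0], "lon": geolocation[1]})
        cap.release()

    # --- Server side: recognition restricted to a small candidate gallery ---
    def identify(face_embedding, gallery, threshold=0.6):
        """gallery maps name -> embedding for the few people expected in this
        context; keeping it small is what keeps the match accuracy high."""
        best_name, best_dist = None, float("inf")
        for name, emb in gallery.items():
            dist = np.linalg.norm(face_embedding - emb)
            if dist < best_dist:
                best_name, best_dist = name, dist
        if best_dist > threshold:
            return None, best_dist  # too uncertain: flag for human review
        return best_name, best_dist
    ```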

  3. Indeed. http://www.igi-global.com/book/innovative-automatic-identification-location-based/599 (ch8 on biometrics).

    Alessandro Acquisti would have been a bonus to have at ISTAS13. He has done a lot of great work in the social media/web-scraping space and in highlighting the related privacy issues: http://www.heinz.cmu.edu/~acquisti/face-recognition-study-FAQ/acquisti-faces-BLACKHAT-draft.pdf

    But I am thinking more of the implications of building "open" public "people" image repositories if we keep going down the route of big data/open data, just as we have for static things like streets and buildings. There are already companies investing heavily in this approach, even though they are not coming out and declaring it publicly, given privacy legislation and the like around data collection.

    Yes, I imagine that if you had the feature turned on by default on your glass it would suck up resources pretty quickly and bring other processes to a standstill. But I can also see how this kind of thing could be beneficial for personal convenience apps. The other measures you suggest can be easily duped- change IP address, stop transacting with a post-paid mobile or stop using a mobile phone altogether, even deliberately change gait, etc. Chilling effects will certainly happen if this becomes mainstream tech.

    Would love to read more about your work. Send us some links please.

  4. Personal data repositories like Spokeo.com have data that is potentially much more sensitive than my name tied to my face. Social media is a bigger test of the separation of private vs. public information, and I see that being a much more practical problem for most people than facial recognition DBs. It's being able to search the publicly available embarrassing photos that causes most people problems; facial tagging just makes it one step easier than trawling the public data manually, but the photo has to be posted publicly to begin with. As an example, John McAfee's location was given away by a photo's geolocation tag, and he let that photo be taken while actively trying to hide.

    If people locked down the data they're leaking in 1,000 other ways, then facial DBs might be more concerning, but right now they're a small concern compared to using Facebook or pretty much any web service, IMO. People need to think about behaving online the way they would about standing in the middle of a crowded street... one where everything is recorded 24/7 with indexed, searchable data streams.

    If you're interested in my work I'm happy to answer any questions; I haven't written much about it beyond what is on my blog at chrissaari.com

  5. Interesting argument. Someone should define what an embarrassing photo is: http://veillance.me/blog/2013/3/12/censoring-with-glass-yes-video-has-its-limitation-too
    Hmm... data repositories and sensitive data- a little birdie tells me to expect a paper on this at ISTAS13.
    If you were to come to ISTAS13- I am curious what you think the most important question(s) to discuss at the meeting would be?

  6. My point of view is that the genie has long been out of the bottle in terms of privacy eroding as a result of technology, so I'm most interested in shifting the discussion toward how we should adapt to that reality as a society. How will our culture need to adapt? For example, asymmetric laws around who can surveil (photos, video, voice recording) are ineffective; not even authoritarian states have been able to stop the spread of cell phones. We need a more realistic approach than saying "don't do that". Approaches like "don't ask don't tell" are simply not effective when you can "ask" simply by typing someone's name, or maybe even just by looking at them.

    I'm interested in the inverse: if laws are not going to be effective in stemming the problems, in protecting us, then how do we protect ourselves? A market for antivirus software and security professionals exists; will they subsume privacy, or will it be a new market? Is it an educational agenda item, teaching people about safe technology use? Should we build technology that actively does data collection and analysis with the express purpose of protecting ourselves proactively?

    I'm most interested in discussing what all the non-privacy-related effects of wearable tech are going to be. What are the unintended consequences of having something like TruthTeller running 24/7, analyzing everything you see and hear? The implications for politics, not to mention social interactions, are profound. I touch on this very briefly in my post http://blog.chrissaari.com/2013/02/02/proactive-computing/

  7. :) Glad we have Ann Cavoukian speaking on Privacy By Design!! I think you'd enjoy what she has to say. We can preserve privacy and advance technologies at the same time.

    I think you raise some very pertinent questions regarding who has the right to surveil whom. The police, for instance, are finding themselves in a difficult situation at the moment. They use body-worn video recorders, and yet there are laws that they must uphold in Australia, such as the Surveillance Devices Acts. How can the police be seen NOT to enforce a state law?

    On the matter of unintended consequences of wearables- I am with you. MG Michael and I have long spoken of "other" consequences too- health is a major one (e.g. obsessive-compulsive disorders or ergonomic issues), psychological (living in the past vs the present), liability/insurance, etc.

    Thanks so much for inspiring this discussion! Spot on!


