Once the heady stuff of films like Minority Report, facial recognition algorithms are now a part of so many technologies that it’s hard to keep track of them. iPhones, iPhoto, and Facebook all try to detect bodies and faces with biometric software. Just last week, Facebook announced that its proprietary algorithm correctly identified faces 97.25 percent of the time—meaning it works almost as well as humans do.
But when these mathematical engines hunt for a face in a video, just what do they look for? How does software see a face?
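One classic answer is the Viola-Jones approach that powered early face detectors: the software doesn't see eyes and noses, it sees rectangles of contrast, such as a dark eye band sitting above brighter cheeks. The sketch below is a minimal, illustrative version of that idea (all images and thresholds here are toy values, not any company's actual algorithm): an "integral image" makes any rectangle sum cost four lookups, and a two-rectangle Haar-like feature measures the contrast a detector would test.

```python
# Illustrative sketch of Haar-like features used by classic
# (Viola-Jones) face detectors. Not production code.

def integral_image(img):
    """Build cumulative sums so any rectangle sum costs four lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixel values in the rectangle with top-left corner (x, y)."""
    return (ii[y + h][x + w] - ii[y][x + w]
            - ii[y + h][x] + ii[y][x])

def two_rect_feature(ii, x, y, w, h):
    """Top half minus bottom half: responds to horizontal edges,
    like a dark eye region above brighter cheeks."""
    top = rect_sum(ii, x, y, w, h // 2)
    bottom = rect_sum(ii, x, y + h // 2, w, h - h // 2)
    return top - bottom

# Toy 4x4 "image": a dark band (low values) over a bright band.
img = [
    [10, 10, 10, 10],
    [10, 10, 10, 10],
    [200, 200, 200, 200],
    [200, 200, 200, 200],
]
ii = integral_image(img)
print(two_rect_feature(ii, 0, 0, 4, 4))  # strongly negative: edge detected
```

A real detector evaluates thousands of such features at every position and scale, and only calls a region a "face" when enough of them pass learned thresholds, which is why the geometry is all boxes and grids rather than anything face-shaped.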
The artist and writer Zach Blas has tried to show just what programs see with his project Face Cages. He has constructed literal face cages: metallic sculptures that fit, painfully, onto a wearer's face and represent the shapes and polygons that algorithms use to hunt for faces.
Circuit Scribe is a rollerball pen filled with conductive silver ink that lets you create fully functioning circuits as fast as you can draw, making it cheaper, faster, and easier to test out electronics and prototype concepts.
"Here’s how the rings work, in a nutshell. There are three detatchable rings that are worn on the the thumb and first two fingers of each hand, as well as a bracelet. As the user signs out whatever they want to say, the translation is then spoken through a digitized voice that comes from the bracelet. I’m not sure if it works real time or not, but that’s still some pretty amazing stuff. And that’s not all…
"The gesture-to-speak aspect works fine when the hearing-impaired person wants to talk to someone else, but what about vice versa? The bracelet carries the double duty of turning sound into text that runs across an LED display. It seems like the only thing these guys have left to do is actually make people hear again…"