In March 2012 Alex Teichman was a Ph.D. student in Stanford University’s computer science department, working on self-driving cars. His goal: to help a self-driving car understand the environment around it, in particular, to be aware of pedestrians, bicyclists, and other moving objects that might come into its path. His approach: instead of traditional image analysis, use depth information about objects gathered by laser rangefinders or sensors to define them, and then teach the computer to learn about the objects by “following” them as they move about the scene.
While the math he developed to implement this is complex, Teichman says the basic idea is simple. “Have you ever seen a child ride up a glass elevator that looks down on the street? At ground level, a car parked out front looks normal and uninteresting, but as she rides upwards, things gradually start looking very different. She’s probably seeing the world in a very different way and perhaps giggling about it. And her visual systems are learning: ‘Hey, that’s what a car looks like from above, I’ve never seen that before, cool!’”
In the self-driving car world, Teichman used this technology to make the car’s computers better able to recognize important things in the world around them with less manual training.
“Imagine there is a bicyclist riding around on Stanford’s campus,” says Teichman. “He is in the normal pose of a bicyclist, leaning forward, and the software recognizes him as a bicyclist. But say that he takes his hands off the handlebars and leans back. A computer vision system might not recognize this because it’s never seen it before. However, [my] algorithm knows it is still a bicyclist, because it saw it previously and was tracking it over time, so it now knows it’s the same thing. It can now learn from that. With this kind of semi-supervised machine learning, I would sit down and label 10 examples of things that are bicyclists and not bicyclists, and then give the system unannotated data gathered by just driving around with the car. It reduced the amount of manual annotation by an order of magnitude or two.”
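The idea Teichman describes can be illustrated with a toy sketch. This is my own illustration, not his actual algorithm: the features, labels, and nearest-centroid classifier here are all stand-ins. The point it shows is the feedback loop he names: a handful of hand-labeled seeds classify the easy frames of a track, and because the tracker knows every frame is the same object, even an unfamiliar pose inherits the label and becomes new training data.

```python
# Toy sketch of tracking-based semi-supervised learning. The 2-D
# "features" (hypothetical, e.g. height and speed) and the
# nearest-centroid classifier are illustrative stand-ins.

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hand-labeled seed set: the "10 examples" step, in miniature.
seeds = {
    "bicyclist":     [[1.7, 5.0], [1.6, 4.5]],
    "not_bicyclist": [[1.7, 1.2], [0.5, 0.3]],
}
centroids = {label: centroid(vs) for label, vs in seeds.items()}

def nearest(label_centroids, v):
    return min(label_centroids, key=lambda lbl: dist(label_centroids[lbl], v))

# One unlabeled track: the same object observed over time. Early frames
# resemble the seed bicyclists; the last frame (leaning back, hands off
# the handlebars) is the unfamiliar pose.
track = [[1.65, 4.8], [1.6, 4.9], [1.2, 4.7]]

# Classify each frame, take a majority vote over the whole track...
votes = [nearest(centroids, frame) for frame in track]
track_label = max(set(votes), key=votes.count)

# ...and every frame in the track, including the odd pose, inherits the
# label and can be fed back as fresh training data.
new_training = [(frame, track_label) for frame in track]
print(track_label)  # the whole track, odd pose included, gets one label
```

This is where the order-of-magnitude reduction in manual annotation comes from: the tracker, not a human, labels the bulk of the data.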
Teichman wasn’t thinking about other applications for this technology beyond autonomous vehicles—until a strange set of circumstances at home spooked him a bit. He lived in a small rental cottage in Palo Alto. One day, he found a notice on the door informing him that a contractor for a local utility needed to perform a pipe inspection. He scheduled the inspection and let in the person who arrived, but the visitor simply walked around the cottage and left; he didn’t seem all that focused on the pipes.
Within days of that visit, and after reading a local police report about an upswing in home burglaries, Teichman started getting a series of phone solicitations that seemed odd to him—and far more frequent than usual. There were security companies wanting to schedule appointments, debt collectors looking for people who didn’t live there, and people calling in sick to companies he’d never heard of. Had a burglar cased his home, and was he now trying to figure out his typical schedule?
He decided to put together his own home security system, using the computer vision technology he’d been working on. The system, running on two laptops connected to commercial range-sensing cameras similar to the Microsoft Kinect, would know the difference between a tree branch blowing in front of a window (the branch had already been there and had simply started to move when the wind came up) and a person climbing up and looking in that window (the person was new to the scene). He programmed the computer to send a short video clip off-site if it detected an intruder; that way, if his computer got stolen, he’d still have the data.
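The branch-versus-intruder distinction boils down to a simple rule that can be sketched in a few lines. Again, this is my own minimal illustration under assumed names (the track IDs and the `check_frame` helper are hypothetical), not the actual system: a tracker already assigns IDs to objects, so an object that merely starts moving keeps its old ID and is ignored, while an object with no prior track triggers an alert.

```python
# Minimal sketch of the "new to the scene" rule. Assumes an upstream
# tracker that reports (track_id, moved) pairs per frame; IDs here
# are made up for illustration.

known_tracks = {"branch_01"}  # objects present since the system started

def check_frame(detections, known):
    """Return the track IDs that should raise an alert."""
    alerts = []
    for track_id, moved in detections:
        if track_id not in known:
            alerts.append(track_id)  # brand-new object -> alert
            known.add(track_id)
        # A known object that merely starts moving (wind in the
        # branches) falls through with no alert.
    return alerts

# Windy night: the familiar branch sways, no alert.
print(check_frame([("branch_01", True)], known_tracks))  # []
# Someone climbs up to the window: a new track appears, alert.
print(check_frame([("person_07", True)], known_tracks))  # ['person_07']
```

The depth cameras matter here because range data makes segmenting and tracking discrete objects far easier than it is in a flat video frame.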
And then he left for a backpacking trip. At this point, Teichman says, he was actually hoping he’d get burglarized, so he could see how well his system worked. He didn’t. The phone calls—and his fears—eventually went away, and Teichman didn’t think much more about that project until 2014, when he and fellow Stanford student Hendrik Dahlkamp (co-inventor of the technology that became Google Street View) started talking about Teichman’s system at a party. “We realized together that there was huge potential here for a real product.”
That summer, the two started turning Teichman’s DIY project into that product. The idea was to build an inexpensive, easy-to-set-up home security device that understands the difference between your dog and an intruder, will alert you via an app to any suspected problems, and will call the police if you don’t respond. They incorporated as a company in October of last year, then went through the StartX program, a startup boot camp for Stanford students, alumni, and faculty.
They unveiled their technology [see video, above] at a StartX launch event in February. “We believe,” Teichman told a crowd of potential investors at the event, “that your security system should know your dog is not burglarizing you.” Pricing has not yet been announced.
Eventually, Teichman wants this technology to be the “eyes of the smart home,” giving information about what is happening in the house to lighting, heating, and other systems in a far more sophisticated way than today’s motion sensors can provide.
At this writing, the company was about to close its first funding round. It plans to start a hiring push in April, and will be looking for computer vision experts along with a variety of software engineers and app developers. (And it will likely at that point get a website up with more details; right now, it’s still in quasi-stealth mode.)
It just needs one more thing. A name. Teichman’s code name for the security system is “Snitch,” but he recognizes that Snitch, as a company name, would imply a limited type of application and could have negative connotations. Any ideas?