The long read: If we could understand how the newborn mind develops, it might help every child reach their full potential. But seeing them as learning machines is not the answer
Deb Roy and Rupal Patel pulled into their driveway on a fine July day in 2005 with the beaming smiles and sleep-deprived glow common to all first-time parents. Pausing in the hallway of their Boston home for Grandpa to snap a photo, they chattered happily over the precious newborn son swaddled between them.
This normal-looking suburban couple weren’t quite like other parents. Roy was an AI and robotics expert at MIT, Patel an eminent speech and language specialist at nearby Northeastern University. For years, they had been planning to amass the most extensive home-video collection ever made.
From the ceiling in the hallway blinked two discreet black dots, each the size of a coin. Further dots were positioned over the open-plan living area and the dining room. There were 25 in total throughout the house – 14 microphones and 11 fish-eye cameras, part of a system primed to launch on their return from hospital, intended to record the newborn’s every move.
It had begun a decade earlier in Canada – though in fact Roy had built his first robots when he was just six years old, back in Winnipeg in the 1970s, and he’d never really stopped. As his interest turned into a career, he wondered about robot brains. What would it take for the machines he made to think and talk? “I thought I could just read the literature on how children do it, and that would give me a blueprint for building my language and learning robots,” Roy told me.
Over dinner one night, he boasted to Patel, who was then completing her PhD in human speech pathology, that he had already built a robot that was learning the same way children learn. He was convinced that if it got the sort of input infants get, the robot could learn from it.
Toco was little more than a camera and microphone mounted on a Meccano frame, and given character with ping-pong-ball eyes, a red feather quiff and a crooked yellow bill. But it was smart. Using voice recognition and pattern-analysing algorithms, Roy had painstakingly taught Toco to distinguish words and ideas within the maelstrom of everyday speech. Where previous computers had learned speech digitally, understanding words only in relation to other words, Roy’s breakthrough was to create a machine that understood their relationship to objects. Asked to pick out the red ball among a range of physical items, Toco could do it.
Patel ran an infant lab in Toronto, and Roy flew up there to see what he could learn. Observing the mothers and babies at play, he realised he’d been teaching Toco poorly. “I hadn’t structured my learning algorithm correctly,” he explained to Wired magazine in 2007. “Every parent knows that when you’re talking to an 11-month-old, you stay on a very tight subject. If you’re talking about a beaker, you stick to the beaker and you interact with the beaker until the baby gets bored and then the cup goes away.”
His robot had been searching through every phoneme it had ever heard whenever it was learning a new object, but Roy tweaked its algorithm to give extra weight to its most recent experiences, and began to feed it audio from Patel’s baby lab recordings. Suddenly Toco began to build a basic vocabulary at a rate never seen before in AI research. His dream of “a robot that can learn by listening and find objects” felt closer than ever. But it needed recordings to feed on, and these were hard to find.
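Roy’s tweak – giving extra weight to recent experience rather than treating all past input equally – can be sketched as a toy word-object learner. The class name, decay factor and example data below are purely illustrative assumptions, not taken from Roy’s actual system:

```python
from collections import defaultdict

DECAY = 0.8  # illustrative: each new observation shrinks older evidence


class RecencyWeightedLearner:
    """Toy cross-situational learner: it pairs every word heard with
    every object in view, but decays old associations so recent,
    tightly focused interactions dominate."""

    def __init__(self):
        self.scores = defaultdict(float)  # (word, object) -> weight

    def observe(self, words, objects):
        # Fade all existing associations, then credit current pairings.
        for key in self.scores:
            self.scores[key] *= DECAY
        for w in words:
            for o in objects:
                self.scores[(w, o)] += 1.0

    def best_object(self, word):
        candidates = {o: s for (w, o), s in self.scores.items() if w == word}
        return max(candidates, key=candidates.get) if candidates else None


learner = RecencyWeightedLearner()
# A parent "stays on a very tight subject": repeated talk about the cup.
learner.observe(["look", "cup"], ["cup", "table"])
learner.observe(["the", "cup"], ["cup"])
learner.observe(["red", "ball"], ["ball"])
print(learner.best_object("cup"))  # -> cup
```

Because the second mention of “cup” co-occurs with the cup alone, the decayed table association falls behind and the learner settles on the right referent, mirroring the effect Roy described.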