Twice in the 1990s, I visited the MIT Media Lab — now run by my friend and all-around great guy Joi Ito — and had the opportunity to see some interesting technologies each time. On one visit, I watched a demonstration of eye-tracking technology. The researcher’s goal was to create an inexpensive device that could sit on top of your monitor and always know what you were looking at on screen. He was thinking of it as an input device, either hands-free on its own or as an adjunct to improve other input devices. Watching the demonstration, I found myself thinking that the computer programs of that era — notably word processors and spreadsheets — would, if anthropomorphized, lead schizophrenic existences: sitting idle, with no input coming in at all, until a user pressed a key or moved the mouse. I thought about how hard it would be to get to anything like intelligence in such an environment. My naïve thought was that something like eye tracking could help.
I now think I was off on that, though in a very vague sense I was on the right track. It’s hard for me to imagine us getting to anything like intelligence without some form of ubiquitous data input. By this I don’t mean that ubiquity alone gets us to intelligence; far from it. I disagree with the implications of this exchange from Star Trek: The Motion Picture (spoiler alert, though the movie is 33 years old at this point):
SPOCK: Voyager VI… disappeared into what they used to call a black hole.
KIRK: It must have emerged sometime on the far side of the galaxy and fell into the machine planet’s gravitational field.
SPOCK: The machine inhabitants found it to be one of their own kind: primitive yet kindred. They discovered its simple twentieth-century programming: collect all data possible.
DECKER: Learn all that is learnable. Return that information to its creator.
SPOCK: Precisely, Mister Decker; the machines interpreted it literally. They built this entire vessel so that Voyager could fulfill its programming.
KIRK: And on its journey back it amassed so much knowledge, it achieved consciousness itself. It became a living thing.
When I say that I disagree with the implications of the exchange quoted above, what I mean is that I don’t think a machine will “wake up” in any cognitive sense simply because we feed it enough data sources. That would imply an epiphenomenon in which a ceaseless flow of information eventually reaches some tipping point at which sentience results. Were that true, the Internet itself should already have “woken up”. (I look forward to comments claiming that this has in fact happened and that we just don’t realize it.)
That said, I do think that more information about the world would help our machines become more adaptive, responsive — and, yes, given many other developments, ultimately intelligent. The more our machines know about us — about where we are and what we’re doing at any given moment — the easier it will be for them to adapt to us, to predict what we might do next, and to act in anticipation of it.