(with apologies to Thomas Nagel)
How I'm Not Writing a Book About Bots
Over a year ago, I had been sufficiently inspired by my experiences at DARPA's Grand Challenge to contemplate a book-magnitude project on the topic of the coming growth of robotics. I had gotten to the stage of talking to trusted friends about approaches to the project and topic, and contemplating the necessary 'field work', when I was rudely interrupted by my accident and consigned to bed or home for four months. That didn't interfere with my net access, so I continued the pursuit virtually. But what I learned caused me to drop the project by the time I was back on my feet. I had consigned that effort to the "some things don't work out" category, until a recent dinner with one of the aforementioned friends persuaded me that the experience and my conclusions might be worth a blog essay, if not a book.
What I Learned From the Sensor Market
Sensing is a topic that periodically inspires futurists, venture capitalists and would-be inventors. The first order logic is impeccable: Everything is going digital. Computing infrastructure is getting cheaper, more networked and widespread. To become more useful and even more ubiquitous it needs to be aware of its environment beyond the sporadic input from humans. Ipso facto, sensing will become a growth market, worthy of venture capital and lots of trade shows and (virtual) ink.
It's at the second order that things become problematic. When you add the 'what' to sensing, the grand logic breaks down. Determining (for instance) temperature, geo-location, dissolved oxygen, or pathogenic presence are all sensing tasks, but the actual technology used to accomplish them has very little in common. There's little similarity beyond eventually producing a byte stream of data. Looked at more closely, sensing as a market begins to break down into a collection of vertical niches with total revenues in the tens of millions to small hundreds of millions, with only a fraction available to an upstart. To the extent that larger companies grow in the space, it's often due to the sales-channel synergy of peddling a portfolio of sensing devices to the makers of automation and control systems. Not really conducive to getting the VC juices flowing.
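The "little similarity beyond a byte stream" point can be made concrete in code. In this hypothetical sketch (all device classes and numbers are invented for illustration), the only abstraction the sensor types actually share is `read() -> bytes`; the physics, calibration, and markets behind each one diverge completely:

```python
# Hypothetical sketch: the only shared "platform" across sensor types
# is the byte stream each emits; everything upstream diverges.
from abc import ABC, abstractmethod


class Sensor(ABC):
    """The lowest common denominator: a stream of bytes."""

    @abstractmethod
    def read(self) -> bytes:
        ...


class Thermocouple(Sensor):
    # Temperature: voltage across a metal junction, alloy-specific curve.
    def read(self) -> bytes:
        microvolts = 1250  # stand-in for an ADC reading
        return microvolts.to_bytes(4, "big")


class GPSReceiver(Sensor):
    # Geolocation: satellite trilateration, emitting NMEA-style text.
    def read(self) -> bytes:
        return b"$GPGGA,123519,4807.038,N,01131.000,E"


class DissolvedOxygenProbe(Sensor):
    # Electrochemical cell: current proportional to O2 partial pressure.
    def read(self) -> bytes:
        nanoamps = 842  # stand-in for a galvanic-cell measurement
        return nanoamps.to_bytes(4, "big")


# The shared interface says nothing about calibration, packaging, or
# the vertical market each device serves - which is the point.
for s in (Thermocouple(), GPSReceiver(), DissolvedOxygenProbe()):
    print(type(s).__name__, s.read())
```

The common base class buys you almost nothing: it is a data-plumbing convention, not a platform.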
Periodically a potential technological or market shift comes along that, it can be argued, would convert sensing into the scale market everyone would like it to be. Bioterrorism or other WMD threats could blow a $10M category up by an order of magnitude. Carbon nanotubes might become a common technology for a wide variety of environmental sensing, finally producing a real platform play. Or sensors will become small, cheap autonomous 'motes', creating the need for a common operating and management system. None of these plays have hit - yet - but hope springs eternal. Meanwhile, sensing remains a collection of niche markets.
But What About Prof. Nagel?
The post title is a play on Thomas Nagel's essay What Is It Like to Be a Bat? (pdf). This is a well-known work in the philosophy of the mind-body problem. Nagel's work suggests that it is impossible to separate mind - specifically consciousness - from its physical embodiment. The reason explored is the inescapable binding of biological mind to the sensorium of the body. Nagel's title example is the echo-location capability of the bat, another sense beyond those available to humans. So while a human intelligence can attempt to reason abstractly about the mental life of a bat, the notion that we can in any sense share the consciousness of the bat is specious. The binding between embodied senses and embodied mind is not breakable.
This point of view is of course not a popular one with 'strong AI' advocates, whose goal is precisely the separation of some form of mind from a biological container. And indeed the copy of Nagel's essay linked above features a skeptical introduction by AI advocate Daniel Dennett.
Now let's leave the bat and presume a bot. A bot, of course, is "...a machine--made so.", as was said of an early fictional robot. Until and unless we get self-reproducing and evolving bots, they will be constructs. Which raises the question of who will be doing the constructing, and to what end. And brings us back from high philosophy to the reality of markets and technology.
Dull, Dirty and Dangerous
These are the traditional hallmarks of tasks where a bot might be a better solution than a human. Whether it's scouting a battlefield (dangerous), eldercare (dull), or cleaning up contaminated sites (all of the above), the 3Ds have been a touchstone for thinking about robot markets. But the framing has the same flaw as the 'sensors' market framing: When drilled down to specific tasks and applications, the requirements often diverge. And they differ in ways illuminated by both Nagel and sensing.
The physical form of the robot is the most obviously divergent. A simulacrum of human form may be useful when the task involves human interaction, such as entertainment or healthcare, simply due to social expectations. But more often the form is optimized to task performance - robot planes and fish to snoop and spy, small tracked battlefield bots, robot trucks to haul supplies, or plastic shelled geometric shapes skittering across carpets and through gutters. The next generation becomes even smaller and more form-adapted: snakes to crawl through rubble, midget ornithopters to perch and stare.
The second divergence is sensing. A task-optimized robot is going to omit the analogues of human senses that are not useful to the application: The stratospheric spy plane doesn't need smell or touch. They will get 'super' versions of other human senses: The robotruck has 360 degree vision, and the spy plane will have a kinesthetic feel for flight that only comes to expert pilots. And they will have senses impossible to the human. Echolocation, radar, lidar and chemical sensors already form part of the robotic sensorium. The healthcare bot will know its patient's state from biosensors embedded in the body, in a way that no chart-reading doctor can. What we must infer, the bot may experience directly.
No Scale, No Platform
Venture capitalists talk about 'scalable' markets and companies. They are looking for a solution that - once proven - can be quickly reapplied to generate the ferocious growth that makes a venture investment profitable. Technologists talk about platforms - a physical or virtual system that underlies multiple solutions, by forming a standard basis onto which needed features can be added. Both are a way of speaking about 'horizontal' services and components that support multiple markets.
I conclude that bots are inherently 'vertical' - a set of similarly named artifacts that are in fact intensely optimized solutions for specific problems. There is little commonality in their physical embodiment, and little market pressure to accept suboptimal common forms for the sake of scale economies. Likewise sensing elements are already as divergent as the original sensors market, and may become more so as we get better at sensory integration at the software level. The only common elements I can find across applications are in fact 'reasoning' capabilities such as Bayesian networks, and there too the skill is not in the basic technology, but in its application to the particular task environment.
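That reasoning layer is generic in exactly the way the hardware isn't. A minimal sketch of the idea, using a single discrete Bayesian update (the scenario and all probabilities below are invented for illustration):

```python
# Minimal sketch of the generic 'reasoning' layer: a discrete Bayesian
# update. The math is the same for every application; the hard part is
# the task-specific priors and likelihoods, which are invented here.

def bayes_update(prior, likelihood):
    """posterior(h) is proportional to prior(h) * likelihood(evidence | h)."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}


# The generic machinery, applied to one hypothetical task: did a
# battlefield bot's acoustic sensor just hear a vehicle?
prior = {"vehicle": 0.1, "no_vehicle": 0.9}       # base rate in this sector
likelihood = {"vehicle": 0.8, "no_vehicle": 0.2}  # P(loud reading | h)
posterior = bayes_update(prior, likelihood)
print(posterior)  # belief in "vehicle" rises from 0.10 to ~0.31
```

Swap in a medical or navigation model and `bayes_update` is untouched - which is why the value, and the defensible skill, lives in the model of the task environment rather than in the inference code.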
If I'm right, then robot technology and markets cannot be analyzed as platforms or 'scalable' businesses in the horizontal sense. Each application market must be looked at on its own merits, just as in sensing. The market and technology skills won in one application will be only loosely applicable to the next. Specific form dominates generic function.
And that's why I'm not writing a robotics book. I'm both a VC and a technologist, and it's the horizontal play that keeps me going. Instead I'll be reading domain specific books like this and seeing if my logic holds true.
The Mind of a Bot
Nagel was working forward from first causes, and his logic has been used as a critique of the project to abstract human intelligence that has dominated AI from its inception. (I've long suspected that the Turing test, as the archetypal expression of this trend, has been one of the biggest ratholes in the history of technology.)
The advent of bots will give us another angle on the intelligence problem, looking from the machine backwards. If the consciousness of a bat is inherently non-human, what can we say about the 'mind' that we might find in a construct that does not share our form and will have only a few senses in common? Whatever intelligence may emerge there, it's not going to be human.
Looking at the physical embodiment of the bot can also be deceptive. When the task allows, the mobile portion of the bot is often networked to remote sensors, and to command and control logic from elsewhere. We can recognize sessile bots, such as modern power distribution systems, that never move but work through networks of sensors and effectors. If we abstract the definition of the robot to a combination of environment sensing and effectors with intervening logic, and translate it to the virtual realm, then it's hard to argue that (for instance) Google is not a bot in-the-large, sensing the patterns of the Web and its readers, and seeking to modify them with search results and ad placements. Which might give new legs to another old science fiction plot.
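The abstracted definition above - environment sensing and effectors with intervening logic, none of which need share a body - can be sketched as a loop. Everything here (names, the thermostat example) is illustrative, not a real API:

```python
# A sketch of the abstracted robot definition: sensors and effectors
# joined by intervening logic, with no assumption that the three parts
# share a physical body. All names and the example are illustrative.
from typing import Callable, Sequence


def run_bot(
    sensors: Sequence[Callable[[], float]],      # environment sensing
    logic: Callable[[list[float]], list[str]],   # intervening logic
    effectors: Sequence[Callable[[str], None]],  # acting on the world
    steps: int = 3,
) -> None:
    for _ in range(steps):
        readings = [sense() for sense in sensors]
        commands = logic(readings)
        for act, command in zip(effectors, commands):
            act(command)


# The same loop fits a mobile bot, a sessile power grid, or a web-scale
# system whose 'effectors' are ranked results and ad placements.
run_bot(
    sensors=[lambda: 21.5],                       # e.g. a thermometer
    logic=lambda r: ["heat" if r[0] < 20.0 else "idle"],
    effectors=[lambda cmd: print("effector:", cmd)],
)  # prints "effector: idle" three times
```

Nothing in the loop says where the sensors, logic, and effectors physically live - which is what lets the definition stretch from a vacuum bot to Google-as-bot-in-the-large.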