I adore Jeff Jonas's work for IBM and his take on Big Data, so from time to time I check his blog. I stumbled upon his update on the G2 sensemaking engine a while ago. As I reread it today, a thought struck me: one of the limits to AI stems not from the algorithms deployed, or from their processing power, but from their access to input, to data. From their lack of senses, if you will.
A human infant is born with all five senses wide open and an infinite stream of information constantly available, or, more concisely, unmutable. Human senses seem custom-tailored to interface with reality. Much has been written about the ability of the unconscious to parallel-process megabits of information, versus the seven or so bits the conscious mind can access simultaneously.
Computers, on the other hand, have to rely on humans to feed them information. Now we have two problems at hand here:
1) Translational loss: as information is digitized, a lot of context gets lost or left out, which amounts to a substantial bandwidth reduction.
2) Selection bias: in deciding what to feed an algorithm, we choose what is important to us rather than what would be optimal for the AI's performance. That's a nontrivial issue as algorithms scale in complexity (see the toy sketch below).
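To make both points concrete, here is a toy Python sketch of my own (not anything from Jonas or G2): a "raw" observation carries far more context than the hand-picked feature vector we typically feed a model. The field names and the selection are purely illustrative assumptions.

```python
# Toy illustration: a rich observation vs. the narrow slice a model actually sees.
raw_observation = {
    "temperature_c": 21.4,
    "humidity_pct": 48,
    "ambient_noise_db": 62,
    "light_lux": 300,
    "time_of_day": "14:05",
    "nearby_objects": ["mug", "laptop", "window"],
    "operator_mood": "distracted",  # context a human senses but rarely logs
}

# 1) Translational loss: digitizing keeps only what the schema allows.
# 2) Selection bias: *we* decide which fields "matter" for the model.
SELECTED_FEATURES = ["temperature_c", "humidity_pct"]  # a human's choice

def to_model_input(obs: dict) -> list:
    """Reduce a rich observation to the feature vector the model is given."""
    return [float(obs[key]) for key in SELECTED_FEATURES]

print(to_model_input(raw_observation))  # [21.4, 48.0] -- everything else is gone
```

Everything outside `SELECTED_FEATURES` never reaches the algorithm at all, no matter how clever it is downstream.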
This in turn severely limits an AI's ability to truly learn and scale. Now, I don't claim to be an expert on AI, but this clearly merits some consideration. If you have any input or information on how this is being addressed, please share.