
Kosko explains how fuzzy logic differs from traditional binary computing: it reasons in shades of gray rather than in strict yes-or-no categories. He describes its practical applications in everyday devices from camcorders to car transmissions. The discussion turns to neural networks and how they learn to recognize patterns from examples rather than from rigid rules, drawing parallels to how the human brain processes information.
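The "examples rather than rules" idea can be sketched with the simplest trainable network, a perceptron. Nothing below is from the episode; it is a generic illustration in which the network is shown labeled samples of the logical OR function and adjusts its weights until its outputs match, with no if/then rules ever written by hand.

```python
# Minimal sketch: a perceptron learns OR from labeled examples,
# not from explicitly programmed rules. Illustrative only.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection weights, adjusted by experience
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # how wrong was the guess?
            w[0] += lr * err * x1         # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(samples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in samples])  # [0, 1, 1, 1]
```

The same training loop, scaled up to millions of weights, is the core of the pattern recognizers Kosko describes.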
Art presses Kosko on whether systems like Google are becoming genuinely intelligent and whether AI could eventually replace human experts in fields like medicine and law. Kosko draws on his own experience as a former LA County prosecutor to illustrate why emotional intelligence and real-time human judgment remain beyond the reach of current machines. He argues that without hormonal systems driving will, greed, and self-awareness, computers are unlikely to become the self-aware threats of science fiction.
Key Moments
Why self-awareness is a bad test for AI: Kosko rejects the classic 'machine wakes up' definition of AI, arguing it depends on the homunculus fallacy - imagining a tiny observer inside the brain - which simply pushes the question down a level.
Football fields of supercomputers to fake one brain column: Kosko explains that the IBM-backed Blue Brain project in Switzerland consumes massive computing power to simulate just one cortical column of a rat brain, and the full human cortex would require many football fields of those supercomputers.
Carbon nanotube antenna where noise improves the signal: Kosko describes a 2006 IEEE paper where a carbon nanotube one hundred-thousandth the width of a hair was used as a tiny antenna - and the inherent quantum noise made detection better, not worse.
Fuzzy logic - thinking in shades of gray: Kosko defines fuzzy logic as reasoning in continuous degrees rather than Aristotle's binary yes/no, using a pink rose that is 80% red and 20% not as the canonical example.
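Kosko's pink-rose example can be written in a few lines. This is a generic sketch using the standard fuzzy operators (min for AND, max for OR, 1 - x for NOT), not code from Kosko; the point is that membership is a degree in [0, 1], so "A and not-A" gets a nonzero degree instead of being forbidden outright.

```python
# Fuzzy membership as a degree, per the pink-rose example.
red = 0.8                        # the rose is 80% red...
not_red = round(1.0 - red, 2)    # ...and 20% not-red (fuzzy complement)

# Aristotle's binary logic forbids "A and not-A" (degree 0) and makes
# "A or not-A" certain (degree 1); fuzzy logic assigns in-between degrees.
a_and_not_a = min(red, not_red)  # 0.2, not 0
a_or_not_a = max(red, not_red)   # 0.8, not 1

print(red, not_red, a_and_not_a, a_or_not_a)
```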
Doctors fail Bayes - 19 of 20 misread cancer odds: Kosko cites an Economist study showing 19 of 20 physicians got cancer diagnosis probabilities wrong because humans rarely update their beliefs correctly when new evidence arrives - a basic Bayesian failure that even getting a second opinion can't fix.
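The update the physicians get wrong is a one-line application of Bayes' rule. The figures below are the standard textbook mammography numbers (1% prevalence, 80% sensitivity, 9.6% false-positive rate), used here only to illustrate the calculation; they are not taken from the study Kosko cites.

```python
# Hedged sketch of the Bayesian update behind the misread cancer odds.
# Numbers are illustrative textbook values, not from the cited study.
prior = 0.01          # P(disease): 1% of patients have it
sensitivity = 0.80    # P(positive test | disease)
false_pos = 0.096     # P(positive test | no disease)

# Bayes' rule: P(disease | positive test)
p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_positive

print(round(posterior, 3))  # ~0.078: a positive test means ~8%, not 80%
```

Most people anchor on the 80% sensitivity and ignore the low prior, which is exactly the failure to update on new evidence that Kosko describes.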
