Artificial General Intelligence: Possibilities and Challenges

Ever since teenage years, dream of building AI “as intelligent as myself or as intelligent as a typical human”

Path to get there “is not clear and could be very difficult”

Timeframe uncertain: “decades… within our lifetimes, or… centuries or even longer”

AI contains two very different concepts:
ANI (Artificial Narrow Intelligence)
- Does “one thing, a narrow task”, sometimes extremely well
- Examples: smart speakers, self-driving cars, web search
- Has made “tremendous progress”, creating “tremendous value in the world today”
AGI (Artificial General Intelligence)
- AI systems that could “do anything a typical human can do”
- Despite ANI progress, “not sure how much progress, if any, we’re really making toward AGI”
- Misconception: progress in AI necessarily means progress toward AGI

Limitations of current neural networks for AGI:
Modern artificial neurons are vastly simplified
- “A logistic regression unit is really nothing like what any biological neuron is doing”
- “So much simpler than what any neuron in your brain or mine is doing”
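The logistic regression unit mentioned above can be written in a few lines, which makes the point concrete: the entire “neuron” is a weighted sum passed through a sigmoid (a minimal illustrative sketch; the function and parameter names are my own, not from the lecture):

```python
import math

def logistic_unit(inputs, weights, bias):
    """A single artificial 'neuron': weighted sum of inputs + bias,
    squashed through a sigmoid into the range (0, 1).

    This is the whole computation -- vastly simpler than a biological
    neuron, which has complex temporal, spatial, and chemical dynamics.
    """
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Example: a unit with two inputs (illustrative numbers)
output = logistic_unit([1.0, 0.5], [0.4, -0.2], 0.1)  # a single number in (0, 1)
```

That a handful of arithmetic operations is our standard model of a “neuron” is exactly why progress with such units says little, by itself, about progress toward AGI.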
Limited understanding of brain function
- “To this day, I think we have almost no idea how the brain works”
- Fundamental questions remain about “how exactly does a neuron map from inputs to outputs”
- Simulating the brain “will be an incredibly difficult path”
The “One Learning Algorithm” hypothesis - a source of hope:

Experiments suggest the same brain tissue can perform different functions
- “Same piece of biological brain tissue can do a surprisingly wide range of tasks”
- Suggests “intelligence could be due to one or a small handful of learning algorithms”
Evidence from neuroscience experiments:
- Auditory cortex learns to see when rewired to receive visual input
- Somatosensory (touch) cortex learns to see when fed images
- Brain adapts to novel inputs:
  - Camera-to-tongue voltage patterns allow blind people to “see” with the tongue
  - Human echolocation through clicking sounds
  - Haptic belts creating a sense of direction
  - A third eye implanted in a frog - the brain learns to use the new input
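A loose software analogy for the hypothesis (purely illustrative, not from the lecture): one generic learning algorithm, here plain gradient descent on a logistic unit, learns entirely different input-to-output mappings depending only on the data it is fed, just as the same cortical tissue adapts to whatever signal it receives:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=2000, lr=0.5, seed=0):
    """One generic learning algorithm: gradient descent on a logistic unit.

    `data` is a list of ((x1, x2), label) pairs. Nothing in this function
    is specific to any one task -- the task lives entirely in the data.
    """
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5)]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = p - y                # gradient of log loss w.r.t. pre-activation
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return lambda x1, x2: sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5

# The SAME algorithm learns different functions from different "sensor input":
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
OR  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
learned_and = train(AND)
learned_or = train(OR)
```

This is only an analogy for the idea that “the algorithm” could be task-agnostic; whether the brain actually works this way is, as the notes stress, unknown.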
Human brain is “amazingly adaptable” (plastic)
- Can adapt to a “bewildering range of sensor inputs”

Question remains: “What is the algorithm, and can we replicate this algorithm and implement it in a computer?”
Working on AGI remains “one of the most fascinating science and engineering problems”

Important to avoid overhyping:
- “I don’t know if the brain is really one or a small handful of algorithms”
- “I don’t think anyone knows what the algorithm is”
- Hope remains alive for discovering an approximation someday

Even without AGI, neural networks are an “incredibly powerful and useful set of tools”
- Valuable for applications without attempting human-level intelligence
Note: The path to AGI remains speculative, with significant challenges, but the one-learning-algorithm hypothesis provides some basis for hope.

AGI represents an ambitious goal in AI research, distinct from the narrow applications driving current success. While simulating the brain directly presents major challenges, evidence of brain plasticity and adaptability suggests there may be fundamental learning principles we could potentially discover and implement.