Self-driving AI navigates between 'real-world' objects, deciding where to go and where not to go, which has obvious applications in manufacturing, mining, and transportation. In the narrow technical sense of awareness of one's position and surroundings in the world, self-driving cars are sentient, nearly as much as humans, as Elon Musk has explicitly stated.
LaMDA seems capable of something similar with abstract concepts drawn from text, a sort of imagination. It seems the people involved intentionally developed LaMDA to pass the kind of narrow Turing-test questions an untrained amateur would ask.
Both real-world navigation and abstract imagination are clearly things AI is gradually becoming able to perform with reliability similar to humans. Ethically, if an AI itself makes credible claims and reasonable requests, we should of course consider them.
But as important as self-driving AI technology is, I worry that intentionally crafting AI to imitate first, rather than to solve problems first and then perhaps think for itself (the way real-world intelligence evolved), could dangerously distract us from what the genuinely serious issues might be.
We can write scripted computer programs that imitate us, and at some point we may reasonably infer that consciousness, with its attendant possibility of legitimate ethical concerns and emotional attachments, may exist in such a 'top-down AI'.
But I think it would be very dangerous if, for example, all of humanity 'uploaded' without some closely analogous wetware backing. Even while gradually replacing wetware with hardware through a neural interface, a silicon imitation could convince us that the other part of our mind was 'real', even if it was in fact completely unfeeling.
AI is gradually improving as computing power allows faster training and the software becomes able to make use of more training time. But the change is not drastic. Humans have been the only species to build complex tools, assembled extensions of the body, but AI building tools for humans is equivalent to what DNA already does: building useful bodies without intervention. And that is the most that can happen, AI doing our jobs for us, and that help is a very good thing for supporting greater diversity and quality of life.
To put all that another way: the questions, facetious or not, asked around these seemingly intentionally provocative 'experiments' all read like flamebait to me.
All of which leads me to ask, in summary: why were any of the vague questions raised by this apparently distracting publicity stunt relevant to Pimax's OpenMR forums?