I think it says a lot about us, and about our approach to robotics (and to AI, likely) that we coo and wow over how much robots can be made to look like us:
Not that I think the verisimilitude would be easy to achieve, but folks, these things are profoundly not-us, and the more we try to hide that, the more apparent it will become, and the more it will bother us. In the long run, all this investment of money into research and prototyping suggests that robots are expected to be useful in some economic sense. Not that they ought to have to be, but that's the premise on which people are investing in them.
Now, in which applications will we actually want, so strongly, for robots to fool us into believing they are human? I suppose receptionists ought to be at least semi-human, but I think people could live with receptionists that are noticeably not human, just as we've gotten used to calling phone numbers and, aggravatingly, talking to machines.
No, the only work in which we really "need" robots to be convincingly humanlike is sex work. That is the big hidden assumption underlying all of this humanlike-robotics research. This is why people get so excited about robots that sit there and blink and smile and talk, while robots can't even walk as well as a two-year-old (let alone think independently).
In the long run, most of us are probably lazy and complacent enough not to actually want AI. After all, we'd just insist it be like another human…
Which reminds me, I need to read Ted Chiang’s newest thing, “The Lifecycle of Software Objects.”