Once iOS 13 gets rolled out later this year, I think the prospect of AutoCorrect for your face is going to prompt a great deal of debate:
Why should my phone decide where I should be looking?
Kevin Kelly, in a slightly different context, looks forward to a future where the system gets that crucial 10% better at automagically adjusting what it shows the person you’re communicating with, rather than faithfully presenting what the camera and microphone at your end of the connection are picking up. It’s all for your own good, really:
When a colleague is teleporting in from a remote place to appear virtually, it is relatively easy to translate what they are saying in real time because all that information is being captured anyway. For even greater verisimilitude, their mouth movement can be reconfigured to match what they are saying in translation so it really feels they are speaking your language. It might even be use[d] to overcome heavy accents in the same language. Going further, the same technology could simply translate your voice into one that was a different gender, or more musical, or improved in some way. It would be your “best” voice. Some relationships might prefer to meet this way all the time because the ease of communication was greater than in real life.
Ease of communication greater than in real life may not be worth having if the price of that ease is accuracy. Kelly (and, implicitly, Apple) assumes that technology can be trusted to act as our intermediary, but our experience of AutoCorrect operating on plain text tends to suggest otherwise.
[Rachel Coldicutt tweet via Interconnected]