Adapting to a world where we can no longer trust what we see

From a technology perspective, what the new Google Pixel 9 can do with images is remarkable. But when someone like John Gruber, deeply embedded in the world of technology, describes the results and implications as ‘disturbing’, it is more than a little alarming.

Gruber also writes:

Everyone alive today has grown up in a world where you can’t believe everything you read. Now we need to adapt to a world where that applies just as equally to photos and videos. Trusting the sources of what we believe is becoming more important than ever.

More than ever, it feels like we need to be training people in how to use the internet. People who live and breathe these things may be able to spot a deepfake readily, but the majority of us won’t. And we need to be able to.

Update: While everything above stands, I thought Benedict Evans’ comment on Threads was worth including:

AI image generation is not the same as Photoshop. But you can’t talk about it as though Photoshop doesn’t exist and as though we don’t have a history of ‘fake’ images going back over a century. You can’t talk about this as though it’s an entirely new problem and we’ve never talked about these questions before.

AI may change the scale of the problem, but Evans is right to point out that it isn’t an entirely new one.

Sam Radford @samradford