This is what I think every time I hear about AI watermarking. If anything, convincing people that AI watermarking is a real, reliable thing is just gonna cause more harm, because bad actors who want to pass something fake off as real would obviously use simple subversion tactics. Then you have a bunch of people seeing that it passes the watermark check and concluding it must be real.
I agree it's probably a losing battle, but maybe one worth fighting. If the metadata is also cryptographically signed, you can verify the time and place it was recorded. Of course, this requires closed/locked hardware, and it's still possible to spoof. Not ideal, but some assurances are better than a future where you can't trust anything.
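To make the idea concrete, here's a minimal sketch of signed capture metadata. All names here are hypothetical, and it uses a shared HMAC secret for brevity; a real scheme (e.g. C2PA-style content credentials) would use public-key signatures with the private key held in locked hardware, which is exactly the closed-hardware requirement mentioned above.

```python
import hashlib
import hmac
import json

# Assumption: this key lives only inside the camera's secure hardware.
# If it ever leaks, anyone can forge "authentic" metadata -- the spoofing
# risk noted above.
DEVICE_KEY = b"secret-held-in-locked-hardware"

def sign_metadata(metadata: dict) -> str:
    # Canonicalize so the same metadata always produces the same bytes.
    payload = json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()

def verify_metadata(metadata: dict, tag: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sign_metadata(metadata), tag)

meta = {"time": "2024-05-01T12:00:00Z", "gps": [40.7, -74.0]}
tag = sign_metadata(meta)

print(verify_metadata(meta, tag))                                   # genuine
print(verify_metadata({**meta, "time": "2025-01-01T00:00:00Z"}, tag))  # tampered
```

Any edit to the time or location invalidates the tag, so a verifier can at least tell "untouched since capture" from "modified" — though, as noted, it can't stop someone from photographing a screen or compromising the hardware itself.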