ChatGPT and language models like it threaten to undermine some of the technologies the US government uses to monitor and prevent terrorist activity. It should be possible to write a script that opens email accounts and uses VPNs to make those accounts appear to be located in different places. Then ChatGPT, or an algorithm like it, could overwhelm the government's keyword-monitoring systems by mimicking conversations between members of a terrorist organization.
I guarantee you that, with the right prompting, you could get it to very convincingly mimic a conversation among a network of terrorists planning an attack.
The Russians, the Iranians, or even the Chinese could attack us indirectly: not with any particular action, but by making the process of monitoring terrorists impossibly difficult. Then they could simply move a little money around and wait until one of the organizations they had provided with cash and cover took action.
Kind of a troublesome thought, actually. Not that I like the mass surveillance the government has going on. But it is probably the reason that no terrorist attack on the scale of 9/11 has occurred since.