Researchers found that Artificial Intelligence (AI)-generated fake reports can trick cybersecurity experts who are well versed in all kinds of cyberattacks and vulnerabilities.
Researchers at the University of Baltimore used AI models to generate false information and presented it to cybersecurity experts for testing. They found that the experts failed to spot misinformation generated by Google’s BERT and OpenAI’s GPT.
These text-generating language models are used for storytelling, for answering questions to help Google and other tech companies improve their search engines, and for helping people overcome writer’s block.
The researchers fine-tuned the GPT-2 transformer model on open online sources that discuss cybersecurity vulnerabilities. They then seeded the model with a single sentence from a real cyber threat intelligence sample and had it generate the rest of the description.
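The sketch below illustrates what that seed-and-complete step could look like using the Hugging Face transformers library; the checkpoint name, seed sentence, and sampling settings are illustrative assumptions, not the researchers’ actual setup, and the fine-tuning on cybersecurity sources is presumed to have been done separately.

```python
# Minimal sketch: seed a GPT-2 model with the opening sentence of a
# (hypothetical) threat report and let it generate the rest.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# In practice this would be a checkpoint fine-tuned on cyber threat intelligence text.
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical seed sentence taken from a threat description.
seed = "The attackers exploited a remote code execution vulnerability in the web server to "
inputs = tokenizer(seed, return_tensors="pt")

# Sample a continuation rather than decoding greedily, for more varied text.
outputs = model.generate(
    **inputs,
    max_length=120,
    do_sample=True,
    top_p=0.95,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Run against a model fine-tuned on vulnerability discussions, output like this reads as a plausible threat description even though the details are fabricated, which is what made the generated reports convincing to reviewers.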
The misinformation fooled cyber threat hunters, who read such threat descriptions to identify potential attacks and adjust their systems’ defenses.
The same technique was also applied to COVID-19-related papers, generating false information about the effects of COVID-19 vaccinations and the experiments conducted, and it likewise fooled the experts.
According to the researchers, if accepted as accurate, this kind of misinformation could put lives at risk by misleading scientists conducting research as well as the general public, who rely on the news for health information when making decisions.
