AI made it easier for hackers to write malware, FBI says

The Federal Bureau of Investigation (FBI) says hackers are using AI to write malware. According to the agency, AI tools have made it much easier for bad actors to write and spread malicious programs and phishing emails.

Every useful tool can become a dangerous weapon in the wrong hands, and AI is no exception. Artificial intelligence serves many fields and purposes, from healthcare to productivity. However, the FBI is now warning about free, customizable open-source AI models that are writing malicious code for hackers.

Hackers have traditionally needed a high level of programming knowledge to write strong, hard-to-detect malware. Now, however, currently available AI tools allow almost anyone to build a program (or malware) without any programming knowledge. The generative AI at the heart of these tools can write or debug code like an experienced programmer.

FBI says AI is writing malware for hackers

The agency’s warning comes as cybercrime continues to rise and hackers become increasingly sophisticated in their methods. Malware created with AI tends to be harder to detect and more difficult to remove once it has infected a system. AI also makes it easier for hackers to launch attacks at a larger scale, targeting multiple systems simultaneously.

Deepfakes are another tool that fraudsters use to their advantage. Deepfakes are videos or photos that have been artificially generated or manipulated to look authentic. They can be used to impersonate people online or spread misleading information such as propaganda or fake news. In this way, a victim might be persuaded to perform specific actions, like making a payment or handing over private information.

As AI grows rapidly, many governments are still struggling to design and implement protective measures against its potential harms. The White House recently secured a set of “voluntary commitments” to mitigate AI risks from major tech companies, including Google, Amazon, and Meta.