Hackers Can Use These 5 AI Technologies In Horrific New Ways

AI technologies are poised to disrupt many industries, and in most cases they will likely prove more helpful than harmful in the long run. However, these new tools also open up new opportunities for malicious actors.

Natural Language AI for Supercharged Phishing Attacks

The ability to understand and produce natural human language has been a primary focus of AI research since the beginning. Today we have synthetic speech production, sophisticated chatbots, natural language text generators, and many other related technologies powered by AI.

These technologies are perfect for phishing attacks, where hackers impersonate legitimate entities and their agents to extract sensitive information from victims. With these new tools, AI agents could imitate humans en masse via email, phone calls, instant messaging, or anywhere else people talk to each other through a computer system.

Unlike the phishing we know, this would look like supercharged spear phishing, which targets specific individuals using information about them to make the scam more convincing. For example, AI software could impersonate someone's boss and request that money be transferred into an account, a variation of phishing known as the CEO scam.
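One classic defence against this kind of impersonation is checking whether an email's sender domain is a lookalike of a trusted one, since spear phishers often register domains that differ from the real one by a character or two. Below is a minimal sketch of that heuristic; the trusted-domain list and addresses are hypothetical, and a real mail filter would combine many more signals.

```python
# Toy heuristic for flagging lookalike sender domains in suspected
# spear-phishing emails. The TRUSTED allow-list is a made-up example.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

TRUSTED = {"example.com", "payroll.example.com"}  # assumed allow-list

def is_suspicious(sender: str) -> bool:
    """Flag domains close to, but not exactly matching, a trusted one."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED:
        return False
    return any(edit_distance(domain, t) <= 2 for t in TRUSTED)

print(is_suspicious("boss@examp1e.com"))  # True: one character off
print(is_suspicious("boss@example.com"))  # False: exact trusted match
```

The edit-distance threshold of 2 is arbitrary; set it too high and unrelated domains get flagged, too low and multi-character swaps slip through.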

Deepfaked Social Engineering

Social engineering is a hacking practice that targets weaknesses in human psychology and behaviour to bypass tight technological security measures. For example, a hacker could call the secretary of an important person posing as a sanitation worker and ask where the trash is currently being dumped. The criminal then goes to that location to look for discarded documents or other clues that can be pieced together to create exploits.

Deep learning systems that can reproduce faces and voices (known as deepfakes) have evolved to the point where they can be used in real time. There are services like Podcastle's Revoice and Voicebot AI where you can submit samples of your voice and then have speech synthesized that sounds like you. In principle, such technology could be used to clone anyone's voice. An attacker would then only have to call or video-call someone while posing as whoever they cloned, with public figures being the easiest targets.

Smarter Code Cracking and Automated Vulnerability Discovery

It takes people hours and hours to scan lines of code for vulnerabilities to fix or exploit. We have now seen that machine learning models like ChatGPT can write code and detect vulnerabilities in submitted code, opening up the possibility of AI writing malware sooner rather than later.
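To make the idea of automated vulnerability discovery concrete, here is a deliberately simple, rule-based stand-in: it greps source code for a few well-known risky Python constructs. Real ML-driven tools learn such patterns from large code corpora rather than from a hand-written list, so treat this as an illustration of the concept, not of the technique itself.

```python
# Minimal rule-based vulnerability scanner: flags lines that contain
# a few well-known dangerous Python constructs.
import re

RISKY_PATTERNS = {
    r"\beval\(": "eval() executes arbitrary expressions",
    r"\bexec\(": "exec() executes arbitrary code",
    r"\bpickle\.loads\(": "unpickling untrusted data can run code",
    r"subprocess\..*shell=True": "shell=True enables command injection",
}

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, warning) pairs for risky-looking lines."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pattern, warning in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, warning))
    return findings

sample = "import pickle\ndata = pickle.loads(blob)\nresult = eval(user_input)\n"
for lineno, warning in scan(sample):
    print(f"line {lineno}: {warning}")
```

A pattern list like this catches only the exact constructs it names; the worry raised above is that learned models generalize far beyond fixed rules, on both the attacking and defending sides.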

Malware That Uses Machine Learning to Learn and Adapt

The main strength of machine learning is that it can extract valuable rules and information from large amounts of data. It is reasonable to expect future malware to use this general concept to adapt to countermeasures quickly.

This could quickly lead to malware and anti-malware systems effectively becoming adversarial machine learning systems, escalating to ever higher levels of complexity.

Generative AI to Create Fake Data

Artificial intelligence technologies can now seemingly create images, videos, text, and audio from scratch. These technologies have reached a point where even experts cannot tell they are fake (at least not at a glance). Therefore, a flood of counterfeit data on the Internet is probable.

For example, fake social media profiles used to be fairly easy to spot, so it wasn't hard for a savvy audience to avoid catfishing scams or simple bot campaigns spreading misinformation. However, these new AI technologies could create fake profiles indistinguishable from real ones.

Imagine "people" with unique generated faces, photos documenting their fake lives, unique and consistent profile information, and whole networks of friends and family made up of other fake people, all talking to each other like real people. With counterfeit networks of online agents like these, malicious actors could run all manner of scams and misinformation campaigns.

Is AI Both the Disease and Its Remedy?

Some people will inevitably try to use new technology for malicious ends. What sets this new generation of AI technology apart is how quickly it is surpassing humans' ability to recognize its output.

Ironically, our best defence against these AI-powered attack vectors will be other AI technologies, fighting fire with fire. That seems to leave us no choice but to watch how things play out and hope the "good guys" come out on top. Still, there are some things you can do to stay safe online, avoid ransomware, and spot scams on popular platforms like Facebook, Facebook Marketplace, PayPal, and LinkedIn.
