Natural language processing and “Mindful” AI drive more sophisticated bad bot attacks

The evolution from human attacks to bot attacks

Over the last few years of my cybersecurity career, I have been fortunate to work with professionals who have researched and developed new cybersecurity detection and prevention solutions that block high-end cyberattacks. Initially, these attacks were carried out by humans and later by sophisticated malicious bots. I felt like I had seen it all, or so I thought…

In my current role in Imperva’s Office of Innovation, our team has had to make a radical shift in mindset. Instead of incubating new cyber defenses for today’s threats, we’ve been tasked with analyzing and researching trends beyond the current cybersecurity landscape to predict and prepare for tomorrow’s threats.

Today, most malicious bots disguise themselves and attempt to interact with apps the same way a legitimate user would, making them harder to detect and block. Bad bots are used by a wide range of malicious operators; they can be competitors operating in the gray area, for-profit attackers, or even hostile governments. There are many types of bot attacks; most involve high volumes, while some low-volume ones are designed to target specific audiences.

Bad bots: what do they do?

Bad bots are typically software applications that perform automated tasks with malicious intent. Bad bots are programmed and controlled to perform various activities such as web scraping, competitive data mining, collection of personal and financial data, theft of digital assets, brute force login, digital ad fraud, denial of service (DoS), denial of inventory, spam, transaction fraud, etc.

In this article, we will focus on how bad bots can evolve to adapt and engage in criminal behavior, for example, behavioral attacks designed specifically to facilitate competitive data mining, collection of personal and financial data, transaction fraud, and theft of digital assets.

How bad bots are hurting businesses today

Here are some examples of how malicious bots are used today to harm businesses:

Price scraping – Competitors scrape your prices to beat you in the market. You lose business because your competitor wins price-based searches. Customer lifetime value deteriorates.
Content scraping – Exclusive content is your business. When others steal your content, they act like a parasite robbing you of your efforts. Duplicate content damages your SEO ranking.
Account takeover – Malicious actors test stolen credentials on your site. If successful, the ramifications are account lockout, financial fraud, and increased customer complaints affecting customer loyalty and future revenue.
Account creation – Cybercriminals operate free accounts used to spam messages or amplify propaganda. They leverage all new account promotion credits (e.g. cash, points, free games, etc.).
Credit card fraud – Criminals test credit card numbers to identify missing data (e.g. expiration date, CVV, etc.), as illustrated in the sketch after this list. This hurts the company’s fraud score and leads to increased customer service costs to deal with fraudulent chargebacks.
Gift card balance checking – Fraudsters steal money from gift cards that contain a balance. This damages your reputation with customers and costs future sales.
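
To make the card testing example concrete: before attempting live transactions, carding bots typically pre-filter candidate numbers with the public Luhn checksum, so only numerically plausible card numbers ever reach a merchant’s payment page. Here is a minimal Python sketch of the checksum itself (an illustration, not any specific bot’s code; the sample digit strings are made up, not real cards):

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    # Double every second digit from the right; subtract 9 from any doubled digit over 9.
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0

# A bot can discard most random candidates offline before testing the rest online.
candidates = ["4539578763621486", "4539578763621487"]  # sample digit strings, not real cards
plausible = [c for c in candidates if luhn_valid(c)]
print(plausible)  # only the checksum-valid first candidate remains
```

This is why card testing defenses cannot rely on attackers submitting obviously invalid numbers: the cheap checksum filter means most bot traffic already looks numerically legitimate.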

For a full account of how bad bots hurt businesses, download the 2022 Imperva Bad Bot Report.

Where can bad bots go from here?

The evolution and progress made in Machine Learning (ML) and Artificial Intelligence (AI) are remarkable, and when used for good purposes, they have proven indispensable in improving our lives in many ways.

In recent years, chatbots have gained momentum in consumer-facing activities such as sales, customer service, and relationship management. Advanced chatbot AI integrates psychological, behavioral, and social engineering factors. Bad AI bots can learn and mimic the target user’s language and behavior patterns, which in turn can be used to gain blind trust in their malicious demands. Unfortunately, bad bot operators are quickly adopting these technologies to develop new malicious campaigns that incorporate artificial intelligence in ways never seen before.

We expect these technologies to be adopted by malicious operators who, inspired by their legitimate uses, will abuse them and demonstrate the potential harm they can cause.

A notable example of this is Tay, a chatbot created by Microsoft in 2016. Tay was designed to mimic the language patterns of an American teenager and to learn by interacting with human Twitter users.

Natural Language Processing (NLP), a machine learning technology, was the basis of Tay. It was one of the first bots to understand the text, data, and social patterns provided during social interactions, then respond with its own tailored text semantics. This means that a bad bot can now adapt to the text or voice data and the social and behavioral patterns of the victim it is communicating with.

In Tay’s case, some Twitter users began tweeting politically incorrect phrases, teaching it inflammatory messages around common internet themes. As a result, Tay began posting racist and sexually offensive messages in response to other users’ tweets.
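
Tay’s internals were never published, but the underlying idea, learning a target audience’s text patterns and generating replies in the same style, can be illustrated with even a toy model. The sketch below is a simplified, hypothetical illustration (a word-level Markov chain, not Tay’s actual NLP stack): it trains on sample messages and then emits text echoing their phrasing.

```python
import random
from collections import defaultdict

def train(messages):
    """Map each word to the words observed to follow it in the training messages."""
    chain = defaultdict(list)
    for msg in messages:
        words = msg.split()
        for current, nxt in zip(words, words[1:]):
            chain[current].append(nxt)
    return chain

def generate(chain, seed, length=8):
    """Walk the chain from a seed word, producing text in the learned style."""
    word, out = seed, [seed]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

# Toy corpus standing in for a target's harvested messages (hypothetical data).
corpus = [
    "hey how are you doing today",
    "hey are you coming to the game today",
]
chain = train(corpus)
print(generate(chain, "hey"))  # e.g. "hey how are you coming to the game"
```

A production system like Tay used far richer models, but the mechanism is the same: whatever patterns the bot ingests, hostile or not, shape what it says back.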

How AI makes a bot malicious

Denial of service (DoS)

Malicious operators can train AI/ML models to learn the language patterns of specific audiences and then send massive volumes of messages to an organization’s resources, whether human or digital, confusing or overwhelming customers for various reasons.

Sabotage of corporate and brand reputations

During various political election seasons, countries’ national security offices and social app providers have identified networks of human-looking chatbots with engineered online identities that spread false claims about candidates before elections. With enough chatbots running “Mindful” AI behind them, more advanced techniques can be used to effectively undermine competitors and brands.

Coupon guessing and scraping

Criminals who collect affiliate commissions use bad bots to guess or scrape marketing coupons from legitimate affiliate marketers. These bots visit websites en masse, affect their performance, and abuse the campaigns the coupons were intended for. NLP can be used to guess coupon codes, especially if they are event-related or carry a text pattern that “Mindful” NLP can predict, as the sketch below illustrates.
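
To see why event-related codes are so exposed, consider how few guesses a pattern-aware bot needs when codes follow an event-plus-discount template. A minimal sketch (the template pieces below are hypothetical examples, not real campaign codes):

```python
from itertools import product

# Hypothetical template pieces a pattern-aware bot might infer from past campaigns.
events = ["BLACKFRIDAY", "CYBERMONDAY", "SUMMER", "NEWYEAR"]
suffixes = ["10", "15", "20", "25", "2022"]

# Enumerating every event/suffix combination yields a tiny, fully testable candidate set.
candidates = [event + suffix for event, suffix in product(events, suffixes)]
print(len(candidates), "guesses cover the entire template space")  # 20 guesses
```

Against a checkout form with no rate limiting, a candidate set this small can be exhausted in seconds, which is why predictable coupon formats invite automated abuse.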

A hostile takeover of legitimate chatbots

In June 2018, Ticketmaster disclosed a security breach caused by the modification of its chatbot-based customer support service (supplied by Inbenta). The names, addresses, email addresses, phone numbers, payment details, and Ticketmaster login credentials of some 40,000 customers were accessed and stolen.

Now imagine what compromised “legitimate” bots like these could be made to do next.

Impersonation

Tinder, a dating app with around five million daily users, has warned that the service has been “overrun by bots” posing as humans. These bots are usually programmed to impersonate women and ask victims to provide their payment card information for various purposes.

Publicly known attacks like these can inspire malicious operators to scale up and interact with enterprise users as well as consumers through email, other messaging apps, or even social apps (shadow IT) to build relationships that lead to trust and the extraction of valuable exploitable assets.

Gaming fraud

Game bots are used by cheaters to gain unfair competitive advantages in multiplayer games. There are many types of game bots intended for cheating, such as farming bots, pre-recorded macros, and the most common example, the “aimbot”, which allows a player to aim automatically in a shooting game.

In some cases, these bots are used to make a profit. In 2019, the gaming industry was estimated to have lost around $29 billion in revenue to cheaters.

Conclusion

Cybersecurity is about to experience a major shift in its challenges; this shift may require developing the ability to successfully mitigate cyber threats caused by “Mindful” malicious bots. Cybersecurity vendors will need to design new detection and mitigation technologies, because identifying and classifying the reputation and text patterns of attackers and their intent is no longer sufficient. As malicious operators adopt new NLP technologies that build personalized, trust-based communication, security vendors must act as well, and the sooner the better.

Machines are about to interact with victims and gain their trust by abusing the victims’ own language style and social and behavioral patterns, as well as those of their colleagues and peers. It is reasonable to predict that a new generation of “Mindful” NLP technologies will be used in more sophisticated ways to gain profit and cause harm.

Note: This article refers to users targeted by malicious interactions from “Mindful” NLP bad bots. The same principles can be reapplied in a different context: applications, their APIs, and how they can be abused by “Mindful” Machine Language Processing (MLP) bad bots.

The post Natural language processing and “Mindful” AI drive more sophisticated bad bot attacks appeared first on Blog.

*** This is a Security Bloggers Network syndicated blog from Blog authored by Oren Gravier. Read the original post at: https://www.imperva.com/blog/natural-language-processing-and-mindful-ai-drive-more-sophisticated-bad-bot-attacks/
