As organizations accelerate digital transformation, cybercriminals have kept pace.
Today, bot traffic makes up more than half of all internet traffic, and malicious bots account for most of it. Unsurprisingly, bot attacks and the automated online fraud they enable have become one of the biggest threats facing online enterprises.
The risks that businesses face are more frequent and more sophisticated, led by intelligent attacks that traditional cybersecurity models are ill-equipped to stop.
At the center of this shift is the rise of malicious bots. In the AI-enabled Internet, these bots are no longer just simple scrapers or credential stuffers.
They now mimic human behavior, adapt to countermeasures, and exploit gaps in legacy defenses. And they are deployed at scale by organized groups, not isolated actors.
The result is a new class of automated threat – faster and smarter than what businesses have faced before.
The problem with legacy detection
Bots have evolved dramatically in recent years, far beyond the simple scripts of the past. What was once easy to spot and block is now sophisticated and adapts to whatever defenses it encounters.
They are becoming almost indistinguishable from legitimate customers or users, randomizing their actions and behavior to bypass traditional client-side security measures.
Traditional detection, including web application firewalls (WAFs) and client-side JavaScript, relies on rules and signatures, acting reactively rather than proactively.
These systems look for known attack patterns or device fingerprints, but modern bots change quickly and rarely present the same signals twice.
By focusing on how traffic presents rather than on the intent behind it, rule-based systems leave businesses exposed to attacks that are more subtle and damaging.
This creates a false sense of security, where organizations believe they are protected, even as automated attacks silently erode data integrity and revenue.
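As a rough illustration of that blind spot, consider the toy signature filter below. The rule list and user-agent strings are invented for this example and are not taken from any real WAF; the point is structural. A filter like this blocks only what it has already seen, so a bot that rotates to an unremarkable browser fingerprint passes straight through.

```typescript
// Toy signature filter with invented rules. Real WAF rulesets are far larger,
// but they share the same limitation: they only block traffic that matches
// something already on the list.

const KNOWN_BAD_SIGNATURES = [
  "python-requests",  // common scripted client
  "curl/",            // command-line tool
  "HeadlessChrome",   // default headless-browser fingerprint
];

function isBlocked(userAgent: string): boolean {
  return KNOWN_BAD_SIGNATURES.some((sig) => userAgent.includes(sig));
}

// A naive bot is caught...
console.log(isBlocked("python-requests/2.31.0")); // true

// ...while a bot that rotates to a mainstream browser string passes untouched.
console.log(
  isBlocked(
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 " +
      "(KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36"
  )
); // false
```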
The risks of legacy client-side detection
Client-side defenses rely on JavaScript or similar code inserted into the user’s browser to detect and block malicious activity. This approach introduces significant risks by extending the attack surface into the customer environment.
Because the code runs on client devices, it is inherently exposed and can be tampered with, disabled, or reverse-engineered by sophisticated attackers.
This creates the possibility of bypassing protections entirely, leaving systems vulnerable. Moreover, client-side code can inadvertently introduce security weaknesses.
Malicious actors may exploit flaws in that code to access sensitive data or execute attacks that would not be possible if detection occurred server-side, opening a path for data leakage alongside the very security gaps the code was meant to close.
There is also the risk of impacting legitimate users. Excessive or poorly tuned client-side checks can degrade performance, interfere with user experience, or trigger false positives.
Attackers routinely reverse-engineer obfuscated scripts, strip them out entirely, or use them as a new entry point for injecting malicious functionality.
Hybrid methods, which combine client- and server-side detection, inherit the same weaknesses. In all cases, additional risk is introduced without delivering reliable protection.
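To see why code shipped to the client cannot be trusted, consider the simplified sketch below. The detector object and its signals are hypothetical stand-ins for a real obfuscated script; the structural point is that the logic runs in an environment the attacker controls and can be patched before it ever reports back.

```typescript
// Hypothetical, heavily simplified stand-in for a client-side detection script.
// Whatever this code does, it runs on a device the attacker controls, so the
// attacker can rewrite it before it reports anything.

interface Signals {
  webdriver: boolean;   // automation flag a real script might read from the browser
  pluginCount: number;  // another example signal, stubbed for illustration
}

const detector = {
  collectSignals(): Signals {
    // A real script would read these from browser APIs; stubbed here.
    return { webdriver: true, pluginCount: 0 };
  },
  verdict(): "bot" | "human" {
    const s = this.collectSignals();
    return s.webdriver || s.pluginCount === 0 ? "bot" : "human";
  },
};

console.log(detector.verdict()); // "bot" -- works against a client that runs it honestly

// An attacker simply replaces the signal collector in their own environment:
detector.collectSignals = () => ({ webdriver: false, pluginCount: 5 });
console.log(detector.verdict()); // "human" -- the protection has been switched off
```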
Scraping content in the age of AI
For journalism, academia and other data-rich enterprises, bot attacks via large-language-model (LLM) scraping are becoming a significant threat.
Unlike traditional crawlers, today’s intelligent agents mimic human behavior, bypass CAPTCHAs, impersonate trusted services, and probe deep site structures to extract valuable data.
These agents turn content into training material, producing repackaged versions that compete directly with the original. Generative AI has accelerated the problem by converting scraped content into polished outputs that sidestep the original source entirely.
This is both a technical and commercial problem. Scraping distorts analytics by creating false traffic patterns, which increases infrastructure costs and undermines content-driven revenue models. In sectors such as publishing or e-commerce, this translates into lost visibility and shrinking margins.
The repurposed material can dilute audience engagement and reduce the value of content that companies have invested significant time and resources to create.
Netacea’s research found that at least 18% of LLM scraping is undeclared by the LLM vendors, leading to content being repurposed invisibly without attribution or licensing.
As AI-enabled scraping becomes more sophisticated, the risks grow, making it an especially pressing concern for organizations that rely on digital assets.
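One concrete illustration of how impersonation can be caught on the server side is forward-confirmed reverse DNS: verifying that a request claiming to come from a well-known crawler actually originates from that operator’s network. The sketch below assumes a Node.js runtime; the operator domains and example IP address are illustrative, and each crawler operator’s own documentation remains the authority on which hostnames or IP ranges it publishes.

```typescript
// Sketch of forward-confirmed reverse DNS, assuming a Node.js runtime.
// The operator domain list is illustrative; consult each crawler operator's
// documentation for the hostnames (or IP ranges) they actually publish.
import { reverse, resolve4 } from "node:dns/promises";

const CLAIMED_OPERATOR_DOMAINS = [".googlebot.com", ".google.com"];

async function verifiesAsDeclaredCrawler(ip: string): Promise<boolean> {
  try {
    // Step 1: reverse-resolve the connecting IP to a hostname.
    const hostnames = await reverse(ip);
    const match = hostnames.find((h) =>
      CLAIMED_OPERATOR_DOMAINS.some((d) => h.endsWith(d))
    );
    if (!match) return false;

    // Step 2: forward-resolve that hostname and confirm it maps back to the IP.
    const forward = await resolve4(match);
    return forward.includes(ip);
  } catch {
    // No PTR record or resolution failure: treat the claim as unverified.
    return false;
  }
}

// Usage (example IP): a request whose User-Agent claims to be a trusted crawler
// but whose IP fails this check is impersonating that service.
verifiesAsDeclaredCrawler("66.249.66.1").then((ok) => console.log(ok));
```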
Addressing new threats on the AI-enabled Internet
The only effective strategy is server-side, agentless detection. By moving protection away from the client, businesses remove the risk of exposing code or creating new attack surfaces.
Server-side detection focuses on behavior and intent, which provides a clear view of how traffic interacts with systems rather than how it appears at the surface.
This becomes even more important in the new world of Agentic AI, where automated attacks adapt rapidly, take on synthetic and abstracted identities, and exploit legacy controls.
By continuously analyzing behavioral patterns and the intent behind them, organizations can detect bots even when they present as legitimate users, revealing up to 33x more threats.
This approach enables defenders to remain invisible to attackers and keep pace with threats that are dynamic, evasive, and increasingly shaped by intelligent and AI-based automation.
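In practice, analyzing behavior and intent can start with something as simple as aggregating server-side request logs per session and scoring patterns that no single request reveals. The sketch below is a deliberately small illustration of that idea, not a description of any vendor’s detection model; its features and thresholds are invented.

```typescript
// Toy server-side behavioral scoring over request logs. The features and
// thresholds are invented for illustration; production systems model far
// richer signals, but the principle is the same: judge sessions by how they
// behave over time, not by how any single request presents itself.

interface LogEntry {
  sessionId: string;
  path: string;
  timestampMs: number;
}

function flagSuspiciousSessions(log: LogEntry[]): string[] {
  // Group entries by session.
  const sessions = new Map<string, LogEntry[]>();
  for (const entry of log) {
    const bucket = sessions.get(entry.sessionId) ?? [];
    bucket.push(entry);
    sessions.set(entry.sessionId, bucket);
  }

  const flagged: string[] = [];
  for (const [sessionId, entries] of sessions) {
    entries.sort((a, b) => a.timestampMs - b.timestampMs);
    const seconds =
      (entries[entries.length - 1].timestampMs - entries[0].timestampMs) / 1000;
    const requestsPerSecond = entries.length / Math.max(seconds, 1);
    const uniquePaths = new Set(entries.map((e) => e.path)).size;

    // Example heuristics: sustained high request rates and exhaustive path
    // coverage look like enumeration or scraping rather than human browsing.
    if (requestsPerSecond > 5 || uniquePaths > 200) {
      flagged.push(sessionId);
    }
  }
  return flagged;
}

// Usage: feed in parsed access-log entries and review the flagged session IDs.
```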
Intelligent bots demand an intelligent defense
Bots are not going away. They are central to how cybercrime operates today, from credential stuffing and loyalty fraud to large-scale scraping and fake account creation.
The damage extends beyond immediate fraud losses: scraping erodes competitive advantage, fake accounts distort marketing data, and account takeovers strengthen the attacker’s position at the expense of the business.
As bots continue to evolve, any defense that relies on signatures, static rules, or exposed client-side code will inevitably fail.
Server-side, agentless bot management gives businesses the only sustainable option: a resilient, low-risk approach that adapts to attackers as quickly as they adapt to defenses.
When businesses understand the intention behind the traffic on their estate, they can make informed decisions about how their content is accessed and monetized.
By focusing on intent and behavior, organizations can restore control of their digital platforms, protect against attacker-driven disruption, and build long-term resilience against automated threats.