It comes as a surprise to many people that not all bots are bad news. One of the reasons for this is that bots are essentially facing a marketing problem. The name ‘bots’ just gives the wrong impression: a bot sounds like a cross between a robot and a bug, a creepy-crawly slithering all over your site and accessing anything and everything it can.

It may not simply be a question of what’s in a name, however, as good bots face one other hurdle when it comes to public perception: most bots are in fact bad, and the bad bots are really, really bad. Fortunately, there are four main strategies for dealing with them.

Enthusiastic internetters

If you thought your friends were relentless in their internet browsing, let it be known that humans do not currently hold the crown for internet usage. According to DDoS mitigation provider Incapsula’s 2016 Bot Traffic Report, bots both good and bad made up 51.8% of all internet traffic in 2016.

Good bot traffic did tick up, thanks to the likes of search engine crawlers (which help determine search engine rankings), feed fetchers (which deliver content to web and mobile applications) and monitoring bots (which check up on the health and availability of websites and services). Even so, bad bots still outnumber good bots overall: of that 51.8% of traffic, good bots account for 22.9% and bad bots for 28.9%.

A wide variety of terrible talents

Malicious bots have a number of specialties when it comes to causing trouble. Three of the main ones are scanning websites for vulnerabilities that could allow for hacking, scraping data or content from a website for reuse, and spamming comment sections and forums. However, of all the bots – good and bad – it is the impersonator bots that are busiest, accounting for 24.3% of bot traffic.

Impersonator bots present themselves to websites as something they’re not, typically a good bot such as a search engine crawler, in order to get around security measures. The most common impersonator bots are the ones behind distributed denial of service (DDoS) attacks.

These DDoS bots travel in botnets, huge networks made up of tens or even hundreds of thousands of bots. DDoS attackers aim to overwhelm a server with requests that seem legitimate, so massive numbers of impersonator bots that can pass themselves off as genuine visitors are essential to the success of an attack.

Detecting malice

If bad bots are free to roam your site, nothing good will come of it. Scraping is not only annoying but can tank your search engine rankings, spamming makes a website look sketchy and unprofessional, and a successful hack is a devastating event that can see user data or intellectual property stolen.

It is perhaps the DDoS attack, however, that stands out as the worst of the possibilities. These attacks cause downtime, downtime produces angry and frustrated users, and those users complain on social media, generating bad publicity; that frustration can fester and lead to a long-term loss of loyalty. Not only that, but DDoS attacks can be used as smokescreens for hackings or intrusions that lead to those devastating thefts.

Ultimately, then, bad bots need to be stopped, but since good bots need to be allowed to roam freely, it takes a careful bot strategy to deal with each category accordingly.

Four strategies to combat the bad bots

Static analysis. This involves comparing the header information of an HTTP request with what the bot claims to be. If the headers don’t match the claim, that bad bot is booted out. When static analysis works, it’s effective, but the problem is that many bots – and the people behind them – are simply too smart to send headers that give the game away. Good impersonator bots will slip past static analysis every time.
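To make that concrete, here’s a minimal sketch of how a static check might test a visitor claiming to be Googlebot: the real crawler can be verified with a reverse-and-forward DNS lookup, while an impersonator spoofing the User-Agent header cannot. The claims_check_out helper and header handling are illustrative assumptions, not any particular vendor’s implementation.

```python
# A minimal, illustrative static check: a visitor whose User-Agent claims to
# be Googlebot should reverse-resolve to a Google-owned hostname, and that
# hostname should resolve back to the same IP address.
import socket

def claims_check_out(client_ip: str, user_agent: str) -> bool:
    if "Googlebot" not in user_agent:
        return True  # not claiming to be Googlebot, so nothing to verify here
    try:
        hostname, _, _ = socket.gethostbyaddr(client_ip)          # reverse lookup
        if not hostname.endswith((".googlebot.com", ".google.com")):
            return False                                          # the claim doesn't hold up
        return client_ip in socket.gethostbyname_ex(hostname)[2]  # forward-confirm
    except (socket.herror, socket.gaierror):
        return False  # no DNS record backs the claim, so boot the bot
```

A scraper spoofing Googlebot’s User-Agent from a rented server would fail this check, while the real crawler passes it.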

That’s where behavioral analysis comes in. If the activity of a bot doesn’t align with how that type of bot should be behaving on a website, that could indicate that it is an impersonator bot, and the bot will immediately be flagged as suspicious or blocked from the website altogether.
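As a rough illustration, a behavioural check might watch how quickly a visitor fires requests and whether a self-proclaimed crawler actually behaves like one, for instance by fetching robots.txt. The thresholds in this sketch are assumptions for the example, not recommended values.

```python
# A rough behavioural-analysis sketch: flag visitors whose request pattern
# doesn't match what they claim to be. WINDOW_SECONDS and RATE_LIMIT are
# illustrative assumptions, not tuned production figures.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
RATE_LIMIT = 50                      # more requests than a human is likely to make
recent = defaultdict(deque)          # client_ip -> timestamps of recent requests

def is_suspicious(client_ip: str, claims_to_be_crawler: bool,
                  fetched_robots_txt: bool) -> bool:
    now = time.time()
    q = recent[client_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()                  # keep only the last few seconds of activity
    if claims_to_be_crawler and not fetched_robots_txt:
        return True                  # a 'crawler' that never reads robots.txt is a red flag
    return len(q) > RATE_LIMIT       # an inhuman request rate gets flagged or blocked
```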

Static and behavioral analysis are the two most well-known methods of bot detection, but when they’re not enough, a more progressive method is necessary: challenge-based detection. This approach equips a website with proactive components, such as JavaScript or cookie challenges, that test how visitors interact with different technologies in order to determine exactly what that traffic is: human, good bot or bad bot. It can catch even the most advanced bots.
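One common flavour of challenge is a small piece of JavaScript the browser has to execute before its traffic is trusted; a basic bot that never runs the script can’t produce the expected answer. The signed-token scheme below is a hypothetical sketch of that idea, not any specific product’s mechanism.

```python
# A hypothetical challenge-based check: the server embeds a signed token in a
# small piece of JavaScript; a real browser executes the script and sends the
# token back as a cookie, while a simple bot that never runs JS cannot.
import hashlib
import hmac
import os
import time

SECRET = os.urandom(32)              # server-side signing key, never exposed to clients

def challenge_token(client_ip: str) -> str:
    ts = str(int(time.time()))
    sig = hmac.new(SECRET, f"{client_ip}:{ts}".encode(), hashlib.sha256).hexdigest()
    return f"{ts}:{sig}"             # embedded in JS that sets it as a cookie

def passes_challenge(client_ip: str, token: str, max_age: int = 600) -> bool:
    try:
        ts, sig = token.split(":")
        age = time.time() - int(ts)
    except ValueError:
        return False                 # missing or malformed cookie: challenge failed
    expected = hmac.new(SECRET, f"{client_ip}:{ts}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and age < max_age
```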

The fourth strategy is, by default, the best: all of the above. A multilayered approach that applies the level of detection each individual visitor warrants is both the most efficient and the most effective, and bot detection that doesn’t involve static, behavioral and challenge-based approaches shouldn’t even be considered.
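Put together, the layers can be arranged so the cheap checks run first and the heavier challenge is reserved for visitors still in doubt. Here’s a hypothetical decision flow that reuses the illustrative helpers from the sketches above.

```python
# A hypothetical layered decision flow combining the three sketches above:
# cheap static checks run first, behavioural analysis next, and the heavier
# challenge is reserved for visitors that remain in doubt.

def handle_visitor(ip, user_agent, claims_crawler, fetched_robots_txt, challenge_cookie):
    if not claims_check_out(ip, user_agent):
        return "block"                       # static analysis caught the impersonation
    if is_suspicious(ip, claims_crawler, fetched_robots_txt):
        # behaviour looks off: escalate to the challenge rather than block outright
        if not (challenge_cookie and passes_challenge(ip, challenge_cookie)):
            return "challenge"               # serve the JS challenge and re-evaluate next time
    return "allow"                           # headers, behaviour and challenge all check out
```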

Not all bots have earned the bad reputation associated with them, but in many cases, a bot by any other name will still smell as not-sweet and cause DDoS attacks and other trouble that can lead to long-term losses. For these sometimes great and sometimes awful internetters, only the most thorough and intelligent detection will do.
