Data centres in the crosshairs

Every day, attackers conspire to take down applications and steal data, leaving your data centre infrastructure in the crosshairs. Because they store the most valuable and most visible assets in your organisation – your web, DNS, database, and email servers – data centres have become the number one target of cyber criminals, hacktivists and state-sponsored attackers. Below, Glen Ogden, Regional Sales Director, Middle East at A10 Networks, describes the top five most dangerous threats to your data centre:

1)    DDoS Attacks

Servers are a prime target for Distributed Denial of Service (DDoS) attacks and, increasingly, they are an attack weapon in the escalating war to disrupt and disable essential Internet services. While web servers have been at the receiving end of DDoS attacks for years, attackers are now exploiting web application vulnerabilities to turn web servers into “bots.” Once attackers have drafted unwitting web servers into their virtual army, they use these servers to attack other websites. By leveraging web, DNS and NTP servers, attackers can amplify the size and the strength of DDoS attacks.

While servers will never replace traditional PC-based botnets, their greater compute capacity and bandwidth enable them to carry out destructive attacks, where one server could equal the attack power of hundreds of PCs. With more and more DDoS attacks launched from servers, it’s not surprising that the size of DDoS attacks has grown sharply in the past few years. At the current trajectory, DDoS attacks could reach 37 Mpps in 2014 and 175 Mpps in 2015. Even if packet rates do not rise as sharply, DDoS attacks will be powerful enough to incapacitate most standard networking equipment. Every organisation should build up defences to fend off the next DDoS attack.
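The reflection technique described above can be reduced to a simple ratio. The sketch below is illustrative only (not from the article): the byte sizes for each vector are ballpark assumptions, but they show why a small spoofed request that triggers a large response lets an attacker multiply attack bandwidth.

```python
# Illustrative sketch: bandwidth amplification in reflection-based DDoS
# attacks. Request/response sizes below are rough assumptions, not
# measurements -- the point is the ratio, not the exact figures.

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification = response size / request size."""
    return response_bytes / request_bytes

# Hypothetical example figures for two common reflection vectors:
vectors = {
    "DNS (ANY query to an open resolver)": (64, 3000),
    "NTP (monlist command)": (234, 48000),
}

for name, (req, resp) in vectors.items():
    print(f"{name}: ~{amplification_factor(req, resp):.0f}x amplification")
```

With assumed numbers like these, each byte the attacker sends is reflected as tens or hundreds of bytes aimed at the victim, which is why web, DNS and NTP servers are such attractive reflectors.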

According to a study conducted by IDG Research Services, it takes an average of nearly ten hours before a company can even begin to resolve a DDoS attack. On average, a DDoS attack isn’t detected until 4.5 hours after it starts, and a further 4.9 hours pass before mitigation can commence. With outage costs averaging $100,000 per hour, a single attack can cost an Internet-reliant company roughly $1 million.

2)    Web Application Attacks

When cyber criminals and hacktivists aren’t busy taking down websites with DDoS attacks, they are launching web attacks like SQL injection, cross-site scripting (XSS) and cross-site request forgery (CSRF). They strive to break into applications and steal data for profit. And increasingly, attackers target vulnerable web servers and install malicious code in order to transform them into DDoS attack sources. Content management system (CMS) applications are a frequent target, but they aren’t the only applications at risk. In fact, 96% of all applications have had vulnerabilities at some point, and the median number of vulnerabilities per application was 14 in 2013.

Today’s most dangerous application threats, like SQL injection and cross-site scripting, aren’t new but they are still easy to perform and they are lethally effective. Attack tools like the Havij SQL injection tool enable hackers to automate their attack processes and quickly exploit vulnerabilities. The recent wave of web attacks on CMS applications has also revealed a gaping hole in the age-old strategy to lock down applications by writing secure code. Because CMS applications are usually developed by third parties and not internally, organisations can’t rely on secure coding processes to protect these applications. With 35% of all breaches caused by web attacks in 2013, organisations, now more than ever, need a proactive defence to block web attacks and “virtually patch” vulnerabilities.
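To see why SQL injection remains so effective, consider the minimal sketch below. It is not taken from any real application: sqlite3 stands in for any SQL database, and the table, column and function names are invented. The vulnerable version concatenates user input into the query; the safe version binds it as a parameter, which is the standard fix.

```python
import sqlite3

# Minimal sketch of a classic SQL injection and its standard fix.
# sqlite3 stands in for any SQL database; names here are invented.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name: str, password: str) -> bool:
    # DANGEROUS: attacker-controlled input is concatenated into the query.
    query = f"SELECT 1 FROM users WHERE name = '{name}' AND password = '{password}'"
    return db.execute(query).fetchone() is not None

def login_safe(name: str, password: str) -> bool:
    # Parameterized query: input is bound as data, never parsed as SQL.
    query = "SELECT 1 FROM users WHERE name = ? AND password = ?"
    return db.execute(query, (name, password)).fetchone() is not None

attack = "' OR '1'='1"  # classic tautology payload
print(login_vulnerable("alice", attack))  # True  -- authentication bypassed
print(login_safe("alice", attack))        # False -- payload treated as a literal
```

Tools like Havij simply automate the discovery and exploitation of the first pattern at scale, which is why a defence that sits in front of the application, rather than secure coding alone, matters for third-party code.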

3)    DNS Infrastructure: Attack target and collateral damage

DNS servers have gained the dubious distinction of becoming a top attack target for two reasons. First, taking DNS servers offline is an easy way for attackers to keep thousands or millions of Internet subscribers from accessing the Internet. If attackers incapacitate an ISP’s DNS servers, they can prevent the ISP’s subscribers from resolving domain names, visiting websites, sending email and using other vital Internet services. DNS attacks have brought down service providers’ DNS services for hours, even days, and in extreme cases have led to class-action lawsuits by subscribers.

Second, attackers can exploit DNS servers to amplify DDoS attacks. In the case of DNS reflection attacks, attackers spoof, or impersonate, the IP address of their real attack target. They send queries that instruct the DNS server to recursively query many DNS servers or to send large responses to the victim. As a result, powerful DNS servers drown the victim’s network with DNS traffic. Even when DNS servers are not the ultimate target of the attack, they can still suffer downtime and outages as the result of a DNS reflection attack. With DNS accounting for 8.95% of all DDoS attacks, organisations that host DNS servers must protect their DNS infrastructure.
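One widely deployed mitigation for this reflection problem is response rate limiting (RRL): capping how many responses a single claimed source IP can receive per time window, so a spoofed victim is not flooded. The toy sketch below is an illustration of that idea only, not a real DNS implementation; the class name, limit and window are invented values.

```python
from collections import defaultdict

# Toy sketch of DNS response rate limiting (RRL): cap how many responses a
# single (claimed) source IP receives per window, so a spoofed victim is
# not flooded with reflected traffic. Limits are invented illustration values.

class ResponseRateLimiter:
    def __init__(self, max_per_window: int = 5, window_seconds: float = 1.0):
        self.max_per_window = max_per_window
        self.window_seconds = window_seconds
        self._state = defaultdict(lambda: [0, 0.0])  # ip -> [count, window_start]

    def allow(self, source_ip: str, now: float) -> bool:
        count, start = self._state[source_ip]
        if now - start >= self.window_seconds:
            self._state[source_ip] = [1, now]  # new window
            return True
        if count < self.max_per_window:
            self._state[source_ip][0] += 1
            return True
        return False  # drop (or truncate) instead of reflecting the response

limiter = ResponseRateLimiter(max_per_window=5)
results = [limiter.allow("203.0.113.9", now=0.5) for _ in range(10)]
print(results.count(True))  # 5 -- the other 5 responses are suppressed
```

Real resolvers combine techniques like this with disabling open recursion and enforcing source-address validation upstream, but the core idea is the same: refuse to be an efficient amplifier.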

4)    SSL-induced security blind spots

To prevent the continuous stream of malware and intrusions in their networks, enterprises need to inspect incoming and outgoing traffic for threats. Unfortunately, attackers are increasingly turning to encryption to evade detection. With more and more applications supporting SSL – in fact, over 40% of applications can use SSL or change ports – SSL encryption represents not just a chink in enterprises’ proverbial armour, but an enormous crater that malicious actors can exploit. While many firewalls, intrusion prevention and threat prevention products can decrypt SSL traffic, they can’t keep pace with growing SSL encryption demands.

For end-to-end security, organisations need to inspect outbound SSL traffic originating from internal users, and inbound SSL traffic originating from external users to corporate-owned application servers, to eliminate the blind spot in corporate defences. In its report, SSL Performance Problems, NSS Labs found that eight leading next-generation firewall vendors experienced significant performance degradation when decrypting 2048-bit encrypted traffic. This led NSS Labs to assert it had “concerns for the viability of SSL inspection in enterprise networks without the use of dedicated SSL decryption devices.” If the gamut of security devices can’t keep up with growing SSL encryption demands, then organisations need a high-powered solution to intercept and decrypt SSL traffic, offloading intensive SSL processing from security devices and servers.
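The blind spot itself can be shown with a deliberately simple toy. Here a repeating-XOR “cipher” stands in for TLS (real TLS is vastly stronger; the toy only demonstrates that byte-pattern matching fails on ciphertext), and the IDS signature is an invented example.

```python
# Toy illustration of the SSL blind spot. Repeating-XOR stands in for TLS;
# the signature, payload and key are all invented for illustration.

SIGNATURE = b"cmd.exe"  # hypothetical IDS byte-pattern for a known exploit

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is symmetric: applying it twice with the same key decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def inspect(payload: bytes) -> bool:
    """Signature-based inspection: flag traffic containing the pattern."""
    return SIGNATURE in payload

malicious = b"GET /shell?exec=cmd.exe HTTP/1.1"
encrypted = xor_cipher(malicious, key=b"k3y")

print(inspect(malicious))   # True  -- caught in cleartext
print(inspect(encrypted))   # False -- same attack, invisible once encrypted
# An SSL intercept device decrypts first, inspects, then re-encrypts:
print(inspect(xor_cipher(encrypted, key=b"k3y")))  # True again after decryption
```

This is exactly the role of a dedicated decryption device: terminate the encrypted session, hand cleartext to the inspection stack, and re-encrypt on the way out, so firewalls and IPS products are not doing the expensive cryptography themselves.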

5)    Brute force and weak authentication

Applications often use authentication to verify the identity of users. With authentication, application owners can restrict access to authorised users and they can customise content based on user identity. Unfortunately, many application owners only enforce single-factor, password-based authentication. With weak single-factor authentication, application owners are exposed to a host of threats, from simple password guessing and stolen credentials to highly automated brute force attacks from password cracking tools.

Analysis from large-scale breaches of passwords, like the 38 million passwords exposed in the Adobe hack, reveal the limitations of simple, single-factor authentication. Researchers have discovered that many users select the same, common passwords, like “123456” and “password.” In fact, 50% of password records in the RockYou breach included names, dictionary words, or trivial passwords based on adjacent keyboard keys, and the 100 most common passwords account for 40% of all passwords chosen by users.
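A direct consequence of these statistics is that simply rejecting passwords found on a common-password list blocks a large share of real-world choices. The sketch below is a minimal illustration; the blocklist is a tiny sample, where a production system would check against millions of breached passwords.

```python
# Minimal sketch: reject passwords that appear on a common-password list.
# The blocklist here is a tiny illustrative sample; real deployments check
# against large breach corpora.

COMMON_PASSWORDS = {"123456", "password", "qwerty", "abc123", "letmein", "111111"}

def is_acceptable(password: str, min_length: int = 8) -> bool:
    if len(password) < min_length:
        return False                        # too short to resist brute force
    if password.lower() in COMMON_PASSWORDS:
        return False                        # trivially guessable
    return True

print(is_acceptable("password"))                       # False -- on the blocklist
print(is_acceptable("correct horse battery staple"))   # True
```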

Besides the risk of simple passwords, many users select the same password for multiple accounts. Unfortunately, when one of these accounts is compromised as part of a data breach, all other accounts sharing the same password are at risk. Within hours of a breach, hackers will crack stolen password lists – even password hashes – and use them to break into other online accounts.

Two-factor authentication can drastically reduce this risk. Combining passwords with out-of-band authentication, such as SMS messages to mobile devices, or with hardware or software tokens greatly decreases the risk of brute force and password cracking attacks. In addition, user context, such as a user’s browser and operating system or geographic location, can help identify fraudulent activity, and application owners can build advanced rules to identify high-risk users or password cracking tools and safeguard user accounts.

For many organisations, simply rolling out and managing authentication across many different web applications can be daunting. Setting up client authentication schemes for dozens of applications entails costly and time-consuming development work. As a result, organisations need an integrated solution that can centrally manage authentication services and can block users with repeated failed login attempts.
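The last point, blocking users after repeated failed logins, amounts to a failed-attempt counter with a lockout period. The sketch below illustrates the idea only; the threshold, lockout duration and function names are invented values, and a real system would persist this state and key it per account and per source IP.

```python
from typing import Dict

# Sketch of failed-login lockout: after a threshold of consecutive failures,
# the account is locked for a cooling-off period. Threshold and duration
# are invented illustration values; timestamps are passed in explicitly.

MAX_FAILURES = 5
LOCKOUT_SECONDS = 300  # 5 minutes

_failures: Dict[str, int] = {}
_locked_until: Dict[str, float] = {}

def record_attempt(user: str, success: bool, now: float) -> str:
    if _locked_until.get(user, 0.0) > now:
        return "locked"                       # still inside the lockout window
    if success:
        _failures.pop(user, None)             # success resets the counter
        return "ok"
    _failures[user] = _failures.get(user, 0) + 1
    if _failures[user] >= MAX_FAILURES:
        _locked_until[user] = now + LOCKOUT_SECONDS
        _failures.pop(user, None)
        return "locked"
    return "failed"

t = 1000.0  # arbitrary clock value for the example
for _ in range(5):
    status = record_attempt("alice", success=False, now=t)
print(status)                                              # locked
print(record_attempt("alice", success=True, now=t + 10))   # locked (cooling off)
print(record_attempt("alice", success=True, now=t + 600))  # ok
```

Even this crude throttle defeats naive online brute forcing, since an attacker gets only a handful of guesses per lockout window instead of thousands per second.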

Protecting your servers and applications from the top five data centre threats

To shield data centre infrastructure from attack, organisations need a solution that can mitigate a multitude of threat vectors and still deliver unmatched performance. Application Delivery Controllers (ADCs) can help organisations safeguard their data centre infrastructure. Deployed in the heart of the data centre, ADCs can block attacks, intercept and inspect encrypted traffic and prevent unauthorised access to applications. With malicious users increasingly setting their sights on data centre servers, ADCs can provide best-of-breed protection against data centre security threats. Organisations should carefully evaluate the security features of ADCs to make sure they effectively mitigate data centre risks.

Intelligent CIO Middle East