Tag Archives: development

E-toll site weathers denial of service (DDoS) attack

Sanral’s e-toll Web site suffered a denial of service (DoS) attack on Friday, according to the agency. “Some users complained of slow site performance, and our service provider traced the problem to a denial of service attack of international origin,” said Sanral spokesman Vusi Mona. No further details of the attack were available, but Alex van Niekerk, project manager for the Gauteng Freeway Improvement Project, said the site has come under repeated attack since going live, suffering only minor performance degradation.

DoS attacks, particularly distributed denial of service (DDoS) attacks, are a popular technique used to knock sites offline, overwhelming them with traffic until they are unable to serve their clients. Activist group Anonymous frequently uses DDoS to attack targets, drawing on its wide base of supporters to generate traffic, while botnets launch DDoS attacks from their installed base of zombie PCs. Last year, anti-spam service Spamhaus suffered one of the largest DDoS attacks in history, with incoming traffic peaking at 300Gbps, launched by a Dutch Web host known for harbouring spammers.

Sanral’s Web site has been the target of several attacks lately, including a hack which may have leaked personal information, a flaw which allowed motorists to be tracked in real time, and a session fixation attack which allowed login sessions to be hijacked. Source: http://www.itweb.co.za/index.php?option=com_content&view=article&id=70192:e-toll-site-weathers-denial-of-service-attack

See more here:
E-toll site weathers denial of service (DDoS) attack

DDoS attacks get more complex – are networks prepared?

The threat of cyber attacks from both external and internal sources is growing daily. A denial of service, or DoS, attack is one of the most common. DoS attacks have plagued defense, civilian and commercial networks over the years, but the way they are carried out is growing in complexity. If you thought your systems were engineered to defend against a DoS attack, you may want to take another look.

Denial of service attack evolution

A denial of service attack is a battle for computing resources between the legitimate requests that a network and application infrastructure were designed for and illegitimate requests coming in solely to hinder the service provided or shut it down altogether.

The first DoS attacks were primarily aimed at Layer 3 or Layer 4 of the OSI model and were designed to consume all available bandwidth, crash the system being attacked, or consume all of its available memory, connections or processing power. Some examples of these types of attacks are the Ping of Death, Teardrop, SYN flood and ICMP flood. As operating system developers, hardware vendors and network architects began to mitigate these attacks, attackers had to adapt and discover new methods, which has led to an increase in the complexity and diversity of the attacks in use.

Since DoS attacks require a high volume of traffic — typically more than a single machine can generate — attackers may use a botnet, a network of computers under the control of the attacker, usually subverted through malicious means. This type of DoS, called a distributed denial of service (DDoS), is harder to defend against because the traffic is likely to come from many directions.

While the goal of newer DoS attacks is the same as that of older ones, the newer attacks are much more likely to be application layer attacks launched against higher-level protocols such as HTTP or the Domain Name System. Application layer attacks are a natural progression for several reasons: 1) lower-level attacks were well known and system architects knew how to defend against them; 2) few mechanisms, if any, were available to defend against application layer attacks; and 3) data at a higher layer is much more expensive to process, consuming more computing resources.

As attacks move up the OSI stack and deeper into the application, they generally become harder to detect and more expensive, in terms of computing resources, to defend against. The more expensive an attack is to defend against, the more likely it is to cause a denial of service. More recently, attackers have been combining several DDoS attack types; an L3/L4 attack in combination with an application layer attack, for instance, is referred to as diverse distributed denial of service, or 3DoS.

Internet and bandwidth growth impact DoS

Back in the mid- to late 1990s, fewer computers existed on the Internet, connections to the Internet and other networks were smaller, and not much existed in the way of security awareness. Attackers generally had less bandwidth to the Internet, but so did organizations. Fast forward to the present and it’s not uncommon for a home connection to have 100 megabits per second of available bandwidth to the Internet. These faster connections give attackers the ability to send more data during an attack from a single device.
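To make the application-layer point above more concrete: every HTTP request has to be parsed and accounted for, so even simple per-source counting does real work on each hit. The sketch below is not from the article; the window and threshold values, and the use of per-IP counting at all, are illustrative assumptions about how a defender might flag an HTTP-flood source.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10   # look at the last 10 seconds of traffic (assumed value)
MAX_REQUESTS = 200    # flag any source that exceeds this many requests per window (assumed)

_recent = defaultdict(deque)   # source IP -> timestamps of its recent requests

def record_request(src_ip, now=None):
    """Record one request and return True if this source now looks like a flood."""
    now = time.time() if now is None else now
    window = _recent[src_ip]
    window.append(now)
    # Drop timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_REQUESTS

if __name__ == "__main__":
    flagged = False
    for i in range(250):                                   # 250 requests in about 2.5 seconds
        flagged = record_request("203.0.113.7", now=i * 0.01)
    print("noisy client flagged:", flagged)                # True
    print("quiet client flagged:", record_request("198.51.100.9"))  # False

Commercial mitigation gear layers far more on top of this (fingerprinting, challenge pages, global rate shaping), but even this minimal counting illustrates why defending the application layer consumes more resources than dropping packets at L3/L4.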
The Internet has also become more sensitive to privacy and security, which has led to encryption technologies such as Secure Sockets Layer/Transport Layer Security being used to protect data transmitted across a network. While the data can be transported with confidence, the trade-off is that encrypted traffic requires extra processing power. A device encrypting traffic will typically be under greater load and able to process fewer requests, leaving it more susceptible to a DoS attack.

Protection against DoS attacks

As mentioned previously, DoS attacks are not simply a network issue; they are an issue for the entire enterprise. When building or upgrading an infrastructure, architects should consider current traffic and future growth, and should have resources in place that anticipate a DoS attack being launched against the infrastructure, thereby creating a more resilient environment.

A more resilient infrastructure does not always mean buying bigger iron. Resiliency and higher availability can be achieved by spreading the load across multiple devices using dedicated hardware Application Delivery Controllers (ADCs). Hardware ADCs distribute the load evenly across all types of devices, providing a more resilient infrastructure, and they also offer offloading capabilities for technologies such as SSL and compression.

When choosing a device, architects should consider whether it offloads some processing to dedicated hardware. A typical server has a general-purpose processor to handle all computing tasks, whereas more specialized hardware such as firewalls and ADCs offers dedicated hardware for protection against SYN floods and for SSL offload. This typically allows such devices to handle far more traffic, which in turn makes them better able to thwart an attack.

Since attacks are spread across multiple levels of the OSI model, tiered protection is needed all the way from the network up to the application design. In practice, this means L3/L4 firewalls sitting close to the edge to protect against the more traditional DoS attacks, with more specialized defense mechanisms, such as Web Application Firewalls (WAFs), handling application layer traffic and protecting Web applications. WAFs can be a vital ally in protecting a Web infrastructure by defending against various types of malicious attacks, including DoS, and they fill an important void in Web application intelligence left behind by L3/L4 firewalls.

As demonstrated, many types of DoS attacks are possible and can be generated from many different angles. DoS attacks will continue to evolve at the same — often uncomfortably fast — rate as our use of technology. Understanding how these two evolutions are tied together will help network and application architects stay vigilant and better weigh the options at their disposal to protect their infrastructure. Source: http://defensesystems.com/Articles/2013/12/19/DOS-attacks-complexity.aspx?admgarea=DS&Page=3
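As a rough illustration of the load-spreading idea behind ADCs described above, here is a minimal "least connections" selection sketch. The backend names and the choice of strategy are assumptions for the example, not details from the article, and a real ADC adds health checks, SSL offload and much more.

class LeastConnectionsBalancer:
    """Toy load balancer: send each new request to the backend with the fewest active connections."""

    def __init__(self, backends):
        self.active = {name: 0 for name in backends}   # active connection count per backend

    def pick(self):
        return min(self.active, key=self.active.get)

    def start_request(self):
        backend = self.pick()
        self.active[backend] += 1
        return backend

    def finish_request(self, backend):
        self.active[backend] -= 1

if __name__ == "__main__":
    lb = LeastConnectionsBalancer(["app-1", "app-2", "app-3"])
    picks = [lb.start_request() for _ in range(6)]
    print(picks)   # requests spread evenly across the three backends

Least-connections is only one of several strategies; the point is simply that spreading requests over several backends raises the volume an attacker must generate before the service is denied.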

Continue reading here:
DDoS attacks get more complex – are networks prepared?

Mobile devices increasingly used to launch sophisticated DDoS attacks

DDoS attacks still plague businesses worldwide, and cyber criminals are increasingly using mobile devices to launch them. The threat of distributed denial of service (DDoS) attacks against enterprise users from mobile applications is increasing as more users go mobile, according to DDoS security company Prolexic. Cyber criminals are finding that mobile devices make for a powerful attack tool, and a surprisingly easy one to use.

“Mobile devices add another layer of complexity,” said Stuart Scholly, Prolexic President, in a press statement. “Because mobile networks use super proxies, you cannot simply use a hardware appliance to block source IP addresses as it will also block legitimate traffic. Effective DDoS mitigation requires an additional level of fingerprinting and human expertise so specific blocking signatures can be developed on-the-fly and applied in real-time.”

DDoS attacks can lead to website and server downtime, interrupt day-to-day business operations, and result in lost revenue and wasted manpower. Prolexic saw a 26 percent increase in DDoS attacks from Q4 2012 to Q4 2013, with a significant number of advanced DDoS attack weapons in use. Source: http://www.tweaktown.com/news/34862/mobile-devices-increasingly-used-to-launch-sophisticated-ddos-attacks/index.html
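To illustrate the fingerprinting idea Scholly describes, here is a hypothetical sketch: rather than blocking a shared carrier "super proxy" IP, a defender derives a signature from traits of the request itself and blocks on that. The specific fields hashed here (method, path, User-Agent, header names) are illustrative assumptions, not Prolexic's actual technique.

import hashlib

def request_fingerprint(method, path, user_agent, header_names):
    """Derive a stable fingerprint from traits of the request, not its source IP."""
    material = "|".join([method, path, user_agent, ",".join(header_names)])
    return hashlib.sha256(material.encode()).hexdigest()[:16]

BLOCKED_SIGNATURES = set()

def learn_attack_signature(method, path, user_agent, header_names):
    """Called once analysts identify attack traffic: block its fingerprint on the fly."""
    BLOCKED_SIGNATURES.add(request_fingerprint(method, path, user_agent, header_names))

def should_block(method, path, user_agent, header_names):
    return request_fingerprint(method, path, user_agent, header_names) in BLOCKED_SIGNATURES

if __name__ == "__main__":
    attack = ("GET", "/login", "BadBot/1.0", ["Host", "User-Agent"])
    normal = ("GET", "/login", "Mozilla/5.0 (Android)", ["Host", "User-Agent", "Accept"])
    learn_attack_signature(*attack)
    print(should_block(*attack))   # True: blocked even when it arrives from a shared proxy IP
    print(should_block(*normal))   # False: other traffic from the same IP still passes

The usage at the bottom shows the trade-off the quote describes: the attack pattern is stopped while legitimate traffic arriving from the same proxy address is left alone.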

Read more here:
Mobile devices increasingly used to launch sophisticated DDoS attacks

US-CERT warns of NTP Amplification attacks

US-CERT has issued an advisory warning enterprises about distributed denial of service attacks that flood networks with massive amounts of UDP traffic using publicly available network time protocol (NTP) servers. Known as NTP amplification attacks, they exploit the monlist feature in NTP servers, also known as MON_GETLIST, which returns the IP addresses of the last 600 machines that interacted with an NTP server. NTP itself is a classic set-and-forget protocol, used to sync clocks between servers and computers, and it is vulnerable to hackers making forged REQ_MON_GETLIST requests that enable traffic amplification. “This response is much bigger than the request sent making it ideal for an amplification attack,” said John Graham-Cumming of Cloudflare.

According to US-CERT, the MON_GETLIST command allows admins to query NTP servers for traffic counts. Attackers are sending this command to vulnerable NTP servers with the source address spoofed as the victim. “Due to the spoofed source address, when the NTP server sends the response it is sent instead to the victim. Because the size of the response is typically considerably larger than the request, the attacker is able to amplify the volume of traffic directed at the victim,” the US-CERT advisory says. “Additionally, because the responses are legitimate data coming from valid servers, it is especially difficult to block these types of attacks.” To mitigate these attacks, US-CERT advises disabling monlist or upgrading to NTP version 4.2.7, which disables monlist.

NTP amplification attacks have been blamed for recent DDoS attacks against popular online games such as League of Legends, Battle.net and others. Ars Technica today reported that the gaming servers were hit with up to 100 Gbps of UDP traffic. Similar traffic volumes were used to take down American banks and financial institutions last year in allegedly politically motivated attacks. “Unfortunately, the simple UDP-based NTP protocol is prone to amplification attacks because it will reply to a packet with a spoofed source IP address and because at least one of its built-in commands will send a long reply to a short request,” Graham-Cumming said. “That makes it ideal as a DDoS tool.” Graham-Cumming added that an attacker can retrieve a list of open NTP servers using readily available Metasploit or Nmap modules that find NTP servers supporting monlist.

Graham-Cumming demonstrated the kind of amplification possible in such an attack. Using the MON_GETLIST command on an NTP server, he sent a request packet 234 bytes long; the response was split across 10 packets and totalled 4,460 bytes. “That’s an amplification factor of 19x and because the response is sent in many packets an attack using this would consume a large amount of bandwidth and have a high packet rate,” Graham-Cumming said. “This particular NTP server only had 55 addresses to tell me about. Each response packet contains 6 addresses (with one short packet at the end), so a busy server that responded with the maximum 600 addresses would send 100 packets for a total of over 48k in response to just 234 bytes. That’s an amplification factor of 206x!” Source: http://threatpost.com/us-cert-warns-of-ntp-amplification-attacks/103573
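A quick check of the amplification arithmetic quoted above, using only the figures from the article (a 234-byte request, a 4,460-byte reply, and a worst case of "over 48k" of response data):

def amplification_factor(request_bytes, response_bytes):
    """How many bytes of attack traffic each byte of spoofed request buys."""
    return response_bytes / request_bytes

# Observed case: a 234-byte monlist request drew a 4,460-byte reply across 10 packets.
print(round(amplification_factor(234, 4460)))      # ~19x, as quoted

# Worst case described: a busy server returning the maximum 600 addresses sends
# roughly 100 packets and "over 48k" of data in response to the same 234 bytes.
print(round(amplification_factor(234, 48_000)))    # ~205x, in line with the ~206x quoted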

View the original here:
US-CERT warns of NTP Amplification attacks

Dropbox hit by DDoS attack, but user data safe; The 1775 Sec claims responsibility

The Dropbox website went offline last night, with a hacking collective calling itself The 1775 Sec claiming responsibility for the attack on the cloud storage company’s site. The 1775 Sec took to Twitter just moments before Dropbox went down on Friday night to claim it was responsible. “BREAKING NEWS: We have just compromised the @Dropbox Website http://www.dropbox.com #hacked #compromised” tweeted The 1775 Sec. This tweet was followed by another in which the group claimed it was giving Dropbox time to fix its vulnerabilities, and that if it failed to do so it should expect a database leak. The group claimed the hack was in honour of Aaron Swartz. Dropbox’s status page at the time acknowledged that there was downtime and that the service was ‘experiencing issues’.

The hackers then revealed that their claims of a database leak were a hoax. “Laughing our asses off: We DDoS attacked #DropBox. The site was down how exactly were we suppose to get the Database? Lulz” tweeted The 1775 Sec. The group said it had only launched a DDoS attack, had not breached Dropbox security and had no access to Dropbox user data. Dropbox, for its part, said its website was down because of issues during “routine maintenance” rather than a malicious attack. In a statement Dropbox said: “We have identified the cause, which was the result of an issue that arose during routine internal maintenance, and are working to fix this as soon as possible… We apologize for any inconvenience.” Just over an hour ago, Dropbox said that its site was back up. “Dropbox site is back up! Claims of leaked user info are a hoax. The outage was caused during internal maintenance. Thanks for your patience!” read the tweet from Dropbox. Source: http://www.techienews.co.uk/974664/dropbox-hits-ddos-user-data-safe-1775-sec-claims-responsibility/

Read More:
Dropbox hit by DDoS attack, but user data safe; The 1775 Sec claims responsibility

Could Cross-site scripting (XSS) be the chink in your website’s armour?

Sean Power, security operations manager for DOSarrest Internet Security, gives his advice on how businesses that rely heavily on their web presence can avoid (inadvertently) making their users susceptible to malicious attackers.

Cross-site scripting, commonly known as XSS, is a popular attack vector and gets its fair share of the limelight in the press, but why is it such a problem and how is it caused? Essentially, XSS is a code vulnerability in a website that allows an attacker to inject malicious client-side scripts into a web page viewed by a visitor. When you visit a site that has been compromised by an XSS attack, you will inadvertently execute the attacker’s program in addition to viewing the website. This code could be downloading malware, copying your personal information, or using your computer to perpetuate further attacks. Of course, most people don’t look at the scripting details on the website, but with popular wikis and web 2.0 content that is constantly updated and changed, it’s important to understand the ramifications from a security standpoint.

For modern websites to be interactive, they require a high degree of input from the user. These input points give attackers a place to inject content that will download malware to a visitor or enslave their computer, and it is hard for site owners to monitor such ‘open’ areas and continually update and review their websites. XSS code can appear on the web page, in banner ads, even as part of the URL; and if it’s a site that is visited regularly, users will as good as submit themselves to the attacker. In addition, as XSS is code that runs on the client side, it has access to anything that JavaScript has access to in the browser, such as cookies that store information about browsing history.

One of the real concerns about XSS is that by downloading script to a client-side computer, that endpoint can become enslaved into a botnet, a group of computers infected with malware that allows a third party to control them, and used to participate in denial of service attacks. Users might not even be aware that they are part of an attack. In a recent case, we identified how a popular denial of service engine called ‘JSLOIC’ was used as script in a popular website, making any visitor an unwitting participant in a denial of service attack against a third party for as long as that browser window remained open.

The range of what can be accomplished is huge: malware can be inserted into a legitimate website, turning it into a watering hole that can infect a visitor’s computer, and this can impact anyone. Once the XSS payload is in a website, the user becomes a victim and the attacker has access to all of the information the browser has.

In terms of prevention: firstly, the hole in the website that has been exploited has to be closed. The main tactic to prevent XSS code running on your website is to make sure you are ‘locking all the doors’ and reviewing your website code regularly to remove bugs and any vulnerabilities. Done properly, this is a continual process. If a website has malware on it because the owner is not reviewing it regularly, attackers will be able to alter the malicious code to dominate the page and infect more visitors. You can limit the chances of getting malicious code on your website by routinely auditing the website for unintended JavaScript inclusions.
But with XSS, especially non-persistent XSS, the best defence is to validate all data coming in, strip out anything you do not explicitly support, and make sure what is coming in is sanitised, or checked for malicious code. This is especially true for parts of your website that get regular updates, like comment sections. It is not enough to assume that because content was clean before, new updates will also be clean. Even if you follow secure coding practices and go through code reviews, websites are sometimes up for six months with no changes made; that is why ongoing vulnerability testing is important as new bugs come up. Remember, HTTP and HTML are full of potential vulnerabilities; they were never designed with today’s web in mind. So when writing website code, if you do not consider SQL injection or XSS, you will write a website full of holes.

Top three tips:

– Review your website and sanitise your code regularly to ensure there is no malicious code or holes where code can be inserted.

– Consider not allowing comments to host external links, or approve those links before they are published, to prevent code from being inserted easily.

– Monitor your web traffic in and out of your website for signs of unusual behaviour.

Source: http://www.information-age.com/technology/security/123457575/could-xss-be-the-chink-in-your-website-s-armour-
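As a small illustration of the "sanitise everything that comes in" advice for comment sections, the sketch below escapes user-supplied text before it is rendered. html.escape is Python's standard HTML-escaping helper; in practice a site would normally rely on its template engine's auto-escaping or a dedicated sanitiser rather than hand-rolling this, and the attacker URL shown is made up for the example.

import html

def render_comment(author, body):
    """Escape user-supplied text so injected markup is displayed as text, not executed."""
    return "<p><b>{}</b>: {}</p>".format(html.escape(author), html.escape(body))

if __name__ == "__main__":
    hostile = '<script src="https://evil.example/jsloic.js"></script>'
    print(render_comment("visitor", hostile))
    # Output contains &lt;script&gt;… so the payload is shown as harmless text in the browser.

Escaping on output like this addresses injected markup in rendered pages; it does not replace the regular code reviews and vulnerability testing recommended above.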

See original article:
Could Cross-site scripting (XSS) be the chink in your website’s armour?

DDoS Attacks: What They Are, and How to Defend Against Them

You may have heard of a DDoS (distributed denial-of-service) attack in the news as a method used by malicious hackers to attack a website. It’s possible you’ve even experienced the effects of a DDoS attack yourself. If you host a website or other online service, being aware of the dangers of a DDoS attack can help you prevent one, or mitigate the damage it can cause. Here’s a brief explanation of what a DDoS attack is, what it accomplishes and how to avoid one.

How does a DDoS attack work?

Denial of service through server flooding can be thought of as simply filling up a pipe with enough material to prevent anything else from getting through. Denial of service may occur unintentionally if a server receives more traffic than it was designed to handle. This happens frequently, such as when a low-trafficked website suddenly becomes popular. In this case, the server is still functioning and is not damaged, but it is unreachable from the Internet. It has effectively been knocked offline, and will remain so until the flood of traffic either stops or is outgunned by more servers being brought online.

Malicious denial of service involves deliberately flooding a server with traffic. The easiest way to do so is to distribute the attack among hundreds, even thousands, of computers, which simultaneously bombard the target server with (often useless) requests for information. Think of multiple pipes from various locations eventually connecting into one large pipe, with massive volumes of material colliding from the origin points into the main pipe. While the electronic connections that make up the Internet are not technically “pipes,” there is a limit to the amount of data that can be transferred through any given network. Put enough in there, and a server’s pipes will be clogged. Cybercriminals use large systems of “zombie” computers, or computers infected with malware that allows a central controller to use them, in DDoS attacks. Hacktivist groups like Anonymous, on the other hand, recruit volunteers who install software on their own machines to take part in DDoS attacks. Anonymous has used DDoS attacks against the websites of credit-card companies, dictatorial foreign governments and even the CIA, FBI and U.S. Department of Justice.

What does a DDoS attack accomplish?

Unlike other forms of malicious computer activity, there is usually no immediate or direct gain for the attacker. The primary goal of a DDoS attack is simply to disrupt a service. A DDoS attack will not in itself allow hackers to access any secure information; there is no network penetration or database breach involved. A DDoS attack can, however, result in a loss of income for a company that does business online. Most of the large online retailers and social networks have hardened their servers to resist DDoS attacks. DDoS attacks by Anonymous and other hacktivist groups are often intended as a form of protest. In January 2012, attacks on several government agencies and recording labels were staged by hacktivist groups as a protest against the Stop Online Piracy Act (SOPA) and the seizure of the file-sharing site MegaUpload by the FBI. Over the past decade, hundreds of DDoS attacks have been performed by independent activists, political groups and even government agencies.

How can you avoid or mitigate a DDoS attack?

Unfortunately, there is little that can be done to avoid becoming the victim of a DDoS attack.
Unlike other attacks, it is a brute-force strike that uses a public utility — the Internet itself — to overwhelm a system. Anti-virus software and filtering tools such as firewalls will not stop the attack. The primary method of dealing with these attacks, from the perspective of a host, is to increase the capacity of the system. Load-balancing tools can distribute requests among many servers scattered across a wide geographical area, and as the system grows to handle more requests, the attackers will need to mount a stronger attack to overwhelm it. Methods to limit the amount of traffic allowed to and from the server can be enabled in some routers and switches, and some responsive systems can disconnect a network from the Internet before the attack brings the entire system down. The latter method still leaves the network inaccessible from the Internet, but generally results in a faster return to service. Source: http://www.tomsguide.com/us/ddos-attack-definition,news-18079.html
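As an illustration of the traffic-limiting idea mentioned above, here is a minimal token-bucket sketch; the rate and burst values are arbitrary assumptions, and routers and switches implement the same concept in hardware or firmware rather than in application code.

import time

class TokenBucket:
    """Allow traffic at a steady rate with a bounded burst; excess is refused."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec        # tokens added per second
        self.capacity = burst           # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        """Return True if one more request or packet may pass right now."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

if __name__ == "__main__":
    bucket = TokenBucket(rate_per_sec=5, burst=10)
    allowed = sum(bucket.allow() for _ in range(100))
    print(allowed, "of 100 back-to-back requests allowed")   # roughly the burst size

Note that, as the article says, this only limits what reaches the server; a large enough flood can still saturate the connection in front of it.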

Read More:
DDoS Attacks: What They Are, and How to Defend Against Them

Steam, Blizzard and EA hit by DDoS attacks

There’s something about the new year that gets hackers all excited, as the DDoS attacks continue. The last major attack was on 31 December, with DERP unleashing their DDoS on World of Tanks, EA, Blizzard, League of Legends and DOTA 2. It looks like the hangovers have worn off, as once again they hit EA and Battlefield 4 servers. EA hopped on the case with a response. In what may have been a reaction to that (we have no idea what’s behind their thinking with all this), another group decided Steam should be the target. We are still seeing reports that Steam is having issues despite the attack apparently having stopped. And then it was on to BattleNet… All this is being done for shits and giggles but really achieves nothing other than annoying gamers and causing some temporary headaches for server admins. The novelty will probably wear off in a few days, but as the individuals involved are being encouraged by Twitter followers, expect more outages. Source: http://www.incgamers.com/2014/01/steam-blizzard-ea-hit-ddos-attacks

Continue Reading:
Steam, Blizzard and EA hit by DDoS attacks

Attackers Wage Network Time Protocol-Based DDoS Attacks

Attackers have begun exploiting an oft-forgotten network protocol in a new spin on distributed denial-of-service (DDoS) attacks, as researchers spotted a spike in so-called NTP reflection attacks this month. The Network Time Protocol, or NTP, syncs time between machines on a network and runs over UDP port 123. It’s typically configured once by network administrators and often is not updated, according to Symantec, which discovered a major jump in attacks via the protocol over the past few weeks. “NTP is one of those set-it-and-forget-it protocols that is configured once and most network administrators don’t worry about it after that. Unfortunately, that means it is also not a service that is upgraded often, leaving it vulnerable to these reflection attacks,” said Allan Liska, a Symantec researcher, in a blog post last week.

Attackers appear to be employing NTP for DDoSing in much the same way DNS is abused in such attacks: they transmit small spoofed packets requesting that a large amount of data be sent to the DDoS target’s IP address. According to Symantec, it’s all about abusing the so-called “monlist” command in an older version of NTP. Monlist returns a list of the last 600 hosts that have connected to the server. “For attackers the monlist query is a great reconnaissance tool. For a localized NTP server it can help to build a network profile. However, as a DDoS tool, it is even better because a small query can redirect megabytes worth of traffic,” Liska explains in the post. Monlist modules can be found in Nmap as well as in Metasploit, which includes a monlist DDoS exploit module.

The spike in NTP reflection attacks occurred mainly in mid-December, with close to 15,000 IPs affected, and dropped off significantly after December 23, according to Symantec’s data. Symantec recommends that organizations update their NTP implementations to version 4.2.7, which does not use the monlist command. Another option is to disable access to monlist in older versions of NTP. “By disabling monlist, or upgrading so the command is no longer there, not only are you protecting your network from unwanted reconnaissance, but you are also protecting your network from inadvertently being used in a DDoS attack,” Liska says. Source: http://www.darkreading.com/attacks-breaches/attackers-wage-network-time-protocol-bas/240165063
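As a rough sketch of how an operator might notice one of their own NTP servers being used as a reflector, the example below scans simplified flow records for large volumes of UDP traffic leaving port 123 toward a single destination. The record format, field names and threshold are assumptions made up for the illustration, not part of the article or of any specific monitoring product.

from collections import defaultdict

SUSPECT_BYTES = 1_000_000   # flag more than ~1 MB of UDP/123 responses to one destination (assumed)

def find_suspected_reflection(flows):
    """flows: iterable of dicts with 'src', 'dst', 'src_port', 'proto' and 'bytes' keys (assumed schema)."""
    outbound = defaultdict(int)   # (our NTP server, remote destination) -> bytes sent
    for f in flows:
        if f["proto"] == "udp" and f["src_port"] == 123:
            outbound[(f["src"], f["dst"])] += f["bytes"]
    return [(pair, total) for pair, total in outbound.items() if total > SUSPECT_BYTES]

if __name__ == "__main__":
    # A burst of monlist-sized replies from one internal server to a single "victim" address.
    sample = [
        {"src": "10.0.0.5", "dst": "198.51.100.20", "src_port": 123, "proto": "udp", "bytes": 440},
    ] * 3000
    for (server, victim), total in find_suspected_reflection(sample):
        print(f"{server} sent {total} bytes of UDP/123 responses to {victim}")

This is only a monitoring heuristic; the actual fixes remain the ones Symantec recommends above, namely upgrading NTP or disabling monlist.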

Read the article:
Attackers Wage Network Time Protocol-Based DDoS Attacks

NatWest hit by Distributed Denial of Service (DDoS) Attack

NatWest has been hit by a ‘cyber attack’, leaving customers unable to access their online accounts. The bank’s online banking service was disrupted after it was deliberately bombarded with internet traffic. Twitter users tweeted to say they could not access their bank accounts to pay bills or transfer money. @TomGilchrist wrote: “Do other banks computer systems/services go down as much as NatWest? I assume not. Time to move banks I think.” @AleexReid tweeted: “Just joined Santander. Fed up with NatWest. Another computer failure tonight. #welldone.”

A NatWest spokesperson said: “Due to a surge in internet traffic deliberately directed at the NatWest website, some of our customers experienced difficulties accessing our customer web sites this evening. This deliberate surge of traffic is commonly known as a distributed denial of service (DDoS) attack. We have taken the appropriate action to restore the affected web sites. At no time was there any risk to customers. We apologise for the inconvenience caused.”

At the beginning of December, all of RBS and NatWest’s systems went down for three hours on one of the busiest shopping days of the year. The group chief executive, Ross McEwan, described that glitch as “unacceptable” and added: “For decades, RBS failed to invest properly in its systems. We need to put our customers’ needs at the centre of all we do. It will take time, but we are investing heavily in building IT systems our customers can rely on.” RBS and NatWest also came under fire in March after a “hardware fault” left customers unable to use their online accounts or withdraw cash for several hours. A major computer issue in June last year saw payments go awry, wages appear to go missing, and home purchases and holidays interrupted for several weeks, costing the group £175m in compensation. This latest problem is the fourth time in 18 months that RBS and NatWest customers have reported problems with the banks’ services. Source: http://news.sky.com/story/1187653/natwest-hit-by-fourth-online-banking-glitch

Continue Reading:
NatWest hit by Distributed Denial of Service (DDoS) Attack