Tag Archives: copyright

DDoS attack takes Deezer offline

Streaming music service Deezer experienced several hours of downtime this past weekend, apparently thanks to one of those distributed denial of service attacks that were so fashionable a few years back. The source of the DDoS isn’t clear, but the streaming service says its servers were first targeted on Friday, with no real impact, before a high-level attack on Saturday afternoon took the service offline on all platforms. DDoS attacks swamp a server with so much traffic that it crashes under the weight.

Deezer bosses say that while the DDoS was enough to force their service offline, no data was accessed by the attackers. The company’s IT experts identified the cause of the problem and put measures in place to limit the impact of the DDoS, so that even though the attack continued through Sunday, the service has been back online since just after midnight on Saturday night.

Deezer founder Daniel Marhely said yesterday in a message to users: “As soon as we became aware of the issue we launched an investigation. We assigned ten staff members to the incident and worked to get the service back up, fuelled by a winning mix of adrenalin and pizza. The method of attack was quickly identified and actions were taken to minimise the impact on the service. We regularly adapted solutions to the changing methods of attack. New protective measures (filters to distinguish between normal incoming traffic and flooding traffic from the attack) were set up by our team, and the attacks finally stopped around 00.22 GMT”.

Stressing that no user data had leaked during the attack, the Deezer man went on: “We apologise for any inconvenience. We’re continuing to investigate and are working hard on measures to counter this type of attack in the future. We have taken steps to strengthen our servers and security systems and will continue to do so. Thanks for your patience. We really appreciated your kind messages and encouraging tweets throughout the weekend”.

Source: http://www.completemusicupdate.com/article/ddos-attack-takes-deezer-offline/
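
The filters Marhely mentions, which separate normal incoming traffic from flooding traffic, are the interesting technical detail here, and Deezer has not published how its filters actually work. Purely as an illustration of the general idea, a minimal per-source rate filter might look like the Python sketch below; the window length and request budget are invented numbers, not anything Deezer has confirmed.

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 10   # length of the sliding window
    MAX_REQUESTS = 100    # requests allowed per source IP within the window

    _recent = defaultdict(deque)  # source IP -> timestamps of its recent requests

    def allow_request(src_ip, now=None):
        """Return True if the request should be served, False if it looks like flood traffic.

        A toy sliding-window counter: real DDoS filtering happens much earlier in the
        stack (edge routers, scrubbing providers), but the basic idea is the same --
        drop sources whose request rate is implausible for an ordinary listener.
        """
        now = time.monotonic() if now is None else now
        window = _recent[src_ip]
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()          # forget requests that have left the window
        if len(window) >= MAX_REQUESTS:
            return False              # over budget: treat as flood traffic
        window.append(now)
        return True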


Facing a criminal DDoS attack

Distributed denial of service (DDoS) attacks attempt to flood a server with so many requests that they render a website useless. The effects are many, from lost customer conversions and revenue to a punished SEO ranking and blacklisting. The reality is that DDoS attack methods, and the criminals behind them, are evolving. Understanding this evolution is key to making sure companies that place any sort of importance on their websites stay protected.

The type and style of attack is changing: there are headless browsers and application layer attacks, and DDoS attacks used as cover for more sinister cyber attacks. Every reseller with security in the portfolio needs to understand that DDoS is not a static problem that can be dealt with and then ignored. It changes, and the tactics for defending against this type of attack need to advance even faster.

Better general awareness of DDoS attacks has forced attackers to develop new ways to get around the basic defences. Media attention on high-profile DDoS attacks attracts activists with a message, and groups try to outdo one another in a bid for attention. A growing variety of coding practices, web platforms and web design features has multiplied the number of variables that can result in application exploits, rendering a website useless. And with more access to high-CPU devices through the cloud and dedicated hosting, DDoS attackers can now use those CPUs to run more sophisticated attacks. For these reasons we are seeing more sophistication in attack style: less volume, with attackers doing their homework and targeting very specific vulnerabilities at a website’s weakest points.

One of the stealthiest methods is headless browsers. These can be a clever way for cybercriminals to get around standard DDoS protection and masquerade as legitimate web traffic. The kit itself is used by programmers to test their websites, so to all intents and purposes it is a legitimate browser web kit, just modified to run a series of queries and target basic web user interfaces. Detection is difficult, and stopping a headless browser DDoS attack can take a trained professional to spot and remediate it. Importantly, because a headless browser was designed for testing, it can process JavaScript and Captcha and so jump through the hoops, as it were, of the website. This will be a big problem for more traditional DDoS protection, such as box solutions. What will be most effective here is real-time support, where a human is involved who can develop rule sets to determine what is going on and implement the mitigation modules within seconds.

Application layer attacks are also becoming more prevalent, although you might not even notice them if you don’t know what you are looking for. Attackers are getting better at reconnaissance and research, facilitating smarter attacks that keep the volume low and under the radar while killing the site in the background and fooling IT into spending time on the wrong part of the site when it is down. It is these application attacks and headless browser attacks that we see as the biggest concern for the future. I can only surmise that media hype is fuelling the focus on volumetric DDoS attacks, which is where the industry seems to be concentrating to meet customer expectations.
In fact there is a rise in application attacks, and we should be educating companies about these threats, as they carry serious consequences for businesses that place any sort of importance on their websites.

Jag Bains is chief technology officer of DOSarrest.

Source: http://www.channelweb.co.uk/crn-uk/opinion/2348218/facing-a-criminal-denial-of-service
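
DOSarrest’s rule sets are not published, so the following Python sketch is only an illustration of the kind of behavioural rule described above: it flags clients that hammer dynamic pages at a high rate while almost never fetching the static assets a real browser would load. The thresholds and the log format are assumptions made up for the example, and a production rule set would combine many such signals.

    from collections import Counter

    STATIC_SUFFIXES = (".css", ".js", ".png", ".jpg", ".gif", ".woff2")

    def flag_suspicious_clients(requests, min_hits=200, max_static_ratio=0.02):
        """requests: iterable of (client_ip, path) pairs taken from an access log.

        Returns the client IPs that made at least `min_hits` requests while static
        assets account for no more than `max_static_ratio` of them -- one crude
        heuristic for scripted or headless-browser traffic.
        """
        total, static = Counter(), Counter()
        for ip, path in requests:
            total[ip] += 1
            if path.lower().endswith(STATIC_SUFFIXES):
                static[ip] += 1
        return [ip for ip, hits in total.items()
                if hits >= min_hits and static[ip] / hits <= max_static_ratio]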


Get Safe Online suffers ‘DDoS’ attack

“We’re looking at what we can do to make sure this won’t happen again. We’re sorry. I’ve had no sleep for two days” – Tony Neate, GSO chief executive

During the first hour after the National Crime Agency (NCA) advised Internet users to check out the Get Safe Online web site in the wake of the Gameover Zeus/CryptoLocker botnet takedown, the site suffered what some have described as an unintended DDoS attack. The reality for most users who heeded the 2pm Monday call was that the site either froze as they were trying to access it, or simply became inaccessible as too many people overloaded the site server’s access facility.

Get Safe Online (GSO) has blamed the effective outage simply on the fact that too many people were trying to access the site at the same time. As a result, the servers could not complete the IP requests, resulting in an outage lasting two days, until late yesterday. This was despite the site operators moving swiftly to quadruple site capacity.

Tony Neate, GSO’s chief executive – the man who set up the company back in 2006 after a 30-year career in the police – told the BBC newswire that it is important for people to realise that this has been a learning curve for him and his team. “We’re looking at what we can do to make sure this won’t happen again. We’re sorry. I’ve had no sleep for two days,” he said.

GSO is a jointly funded operation supported by the UK government and a variety of commercial sponsors, including Barclays, NatWest, Kaspersky Lab and PayPal. The idea behind the site is that it is a one-stop shop for cybersecurity safety for individuals and small businesses.

Sean Power, security operations manager with DOSarrest, the DDoS remediation specialist, said that the overload of GSO is a great example of the ‘Slashdot effect’ or the ‘Reddit hug of death’. This, he explained, is where a site’s sudden popularity – usually initiated by a reference on a popular community site – is more than the infrastructure can handle. “This is akin to a small cart vendor opening a free money stall in Times Square,” he said, adding that the net effect is a sudden denial of service that is both unintentional and unexpected.

It is, says Power, vital that a denial-of-service incident response team is able to tell the difference between a malicious attack and a sudden dramatic increase in popularity, because you will want to treat the two situations very differently. “For this reason many firms elect to employ a seasoned denial-of-service mitigation company who have the expertise to make this distinction – and act accordingly to ensure that the site is up and available to all legitimate visitors,” he said. “One of the added advantages of having a good distributed-denial-of-service protection provider is their ability to handle extremely large legitimate requests, whereby the customer gets to leverage their caching and distributed architecture,” he added.

Source: http://www.scmagazineuk.com/get-safe-online-suffers-ddos-attack/article/351148/
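
Power’s distinction between a malicious flood and a sudden surge of genuine popularity can be made concrete with a toy heuristic: a flash crowd is typically many distinct sources each making a modest number of requests, while a botnet flood concentrates an abnormal share of the traffic in sources that individually exceed any plausible human request rate. The Python sketch below only illustrates that contrast; the threshold is invented, and real mitigation providers rely on far richer signals (source diversity, referrers, protocol behaviour, history).

    from collections import Counter

    def classify_spike(requests, per_ip_budget=30):
        """requests: list of (client_ip, path) pairs observed during a traffic spike.

        Labels the spike 'flash-crowd-like' when the load is spread thinly across
        many sources, and 'attack-like' when more than half of the requests come
        from sources that blow through a per-IP request budget.
        """
        per_ip = Counter(ip for ip, _ in requests)
        heavy_traffic = sum(n for n in per_ip.values() if n > per_ip_budget)
        heavy_share = heavy_traffic / max(len(requests), 1)
        return "attack-like" if heavy_share > 0.5 else "flash-crowd-like"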


Anonymous takes aim at World Cup sponsors

Hacktivist group Anonymous has announced plans to launch a DDoS attack on the sponsors of the football World Cup, which opens in Brazil later this month. Reuters – interviewing Che Commodore, a masked member of Anonymous – says that preparations for the distributed denial of service attack are now under way. “We have a plan of attack. We have already conducted late-night tests to see which of the sites are more vulnerable – this time we are targeting the sponsors of the World Cup,” he said. The main sponsors of the World Cup include Adidas, Budweiser, Coca Cola and Emirates Airlines.

Reuters quotes Che Commodore as claiming that a test attack earlier this week allowed Anonymous to break into the Brazilian Foreign Ministry’s server and access dozens of confidential documents, as well as steal several email accounts. The newswire adds that, in response to the claims, a Foreign Ministry official told Reuters that 55 email accounts were accessed and that the only documents obtained were attached to emails or came from the ministry’s internal document archive.

Can Anonymous carry out its threat? Tim Keanini, CTO with Lancope, says that, regardless of threat profile, an event of this magnitude must have a heightened level of readiness for any physical or cyber security related event. “By the time a group like this makes a public announcement, much of the infiltration phase has already been done. These threat actors are smart and they don’t start to show their cards until they are well into the operational phase of their campaign,” he explained. Keanini said that events like the World Cup require hundreds of interconnected businesses, and every one of those businesses needs to be prepared. “If your business is connected to the Internet you should be prepared for cyber security events because it is likely to have already happened, you just don’t have the tools and technique to detect it,” he noted.

Sean Power, security operations manager with DOSarrest, meanwhile, said that Anonymous is a face that any hacktivist can masquerade behind. “The composition of a team from one OP to the next will vary greatly – with a predictable effect on the sophistication of the attack. That being said, under normal operation any event as much in the public eye should be wary of DoS attacks, if threats have already been levied, that concern should be increased, not dismissed out of hand,” he explained.

Ryan Dewhurst, a senior engineer and web security specialist with RandomStorm, told SCMagazineUK.com that Anonymous has already stated that it used targeted phishing emails to install malware on victims’ machines and gain access to government documents. “I believe they will use a mixture of both sophisticated and non-sophisticated attacks. However, they have also stated that they will be carrying out Distributed Denial of Service (DDoS) attacks against the World Cup sponsors,” he said. “Anonymous’ DDoS attacks, in the past, have worked by getting many Anonymous members to run software, most likely their infamous Low Orbit Ion Cannon (LOIC) tool, which attempts to flood their target with an overwhelming amount of traffic. The LOIC tool is most likely being run by the majority of the group members who have less technical skill, whereas the more sophisticated attacks are most likely carried out by the most skilled members of the group which would be fewer in number,” he added.

Dewhurst says that Anonymous – if indeed it is this group and not another group of hacktivists using its name – are always going to go for the easiest targets, as these are also the least risky for them to attack while still achieving their goals. “If their less risky methods are unsuccessful they will begin to increase the sophistication of the attack, however this also increases the risk of them eventually being caught,” he explained.

David Howorth, Alert Logic’s vice president, says there are lessons that can be learned from Anonymous’ latest campaign, which means that companies should review their security practices on the assumption that an attack could take place. IT security professionals, he advises, must be vigilant and ensure that all employees are aware of the company’s internal security policy and best practices, that they practise good password security, and that all systems and applications are up to date and patched. “Make sure you have expertise that can monitor, correlate and analyse the security threats to your network and applications across your on-premise and cloud infrastructure 24×7 for continuous protection – this should be done now, as the hackers are already testing the vulnerabilities in the infrastructure in preparation for their attacks,” he went on to say.

Source: http://www.scmagazineuk.com/anonymous-takes-aim-at-world-cup-sponsors/article/349934/


WildStar early access period derailed by DDoS attacks

WildStar was set to launch for early buyers an hour ago, giving those folks a chance to jump into the game’s world days before everyone else. Unfortunately for those players (including our own Giant Robots In Disguise guild), WildStar is experiencing server issues and the developers are pointing the finger at a DDoS attack. WildStar executive producer Jeremy Gaffney posted on Reddit, “I’ve heard from a few folks it’s a confirmed DDOS attack (real time updates, may change, fog of war, etc.). Partially handled. Servers taking in some players now, player counts rising. Ninjitsu continues.” The best suggestion for now is to keep hammering away. The early bird period lasts all the way up to WildStar’s official release on June 3. Source: http://www.shacknews.com/article/84738/wildstar-early-access-period-derailed-by-ddos-attacks


Repeat attacks hit two thirds of DDoS victims

Empirical research just published suggests that, whilst overall DDoS attack volumes are increasing steadily, new attack vectors are also constantly being adopted by cybercriminals. The analysis, entitled ‘NSFOCUS DDoS Threat Report 2013’, is based on more than 244,000 real-life distributed denial of service attacks observed at Tier 1 and Tier 2 ISPs by the research firm during the year.

Researchers found that 79.8 percent of all attacks were 50 Mbps or less. In addition, although large attacks get the most media attention, only 0.63 percent of all attack incidents were logged at 4 Gbps or more. Perhaps most interesting of all, more than 90 percent of the observed attacks lasted 30 minutes or less, and 63.6 percent of all targeted victims were attacked more than once. This figure is in line with earlier figures from Neustar, whose second annual report, entitled ‘DDoS Attacks & Impact Report – 2014: The Danger Deepens’, suggested that once attacked, there is an estimated 69 percent chance of a repeat attack.

Delving into the report reveals that HTTP_FLOOD, TCP_FLOOD and DNS_FLOOD are the top three attack types, contributing more than 87 percent of all attacks. DNS_FLOOD attacks, however, increased significantly, from 13.1 percent during the first half of 2013 to 50.1 percent in the second half.

So why the short-duration attacks? The report suggests that, after analysing almost a quarter of a million DDoS incidents, a clear trend emerges: the majority of DDoS attacks seen were short in duration, small in total attack size, and frequently repeated against the same target. “These short and frequently repeating attacks often serve two purposes: First, to scout their victims’ defence capabilities before more tailored assaults are launched, and second, to act as smokescreens or decoys for other exploitation,” says the report. The analysis adds that many companies are using a combination of traditional counter-measures like scripts, tools and access control lists (ACLs) to handle network layer attacks, as well as on-premise DDoS mitigation systems for more prompt and effective mitigation against hybrid attacks (defined as a combination of network-layer and application-layer attacks).

The most interesting takeaway from the report, SCMagazineUK.com notes, is that the ‘old guard’ attack vectors – including the use of SNMP – remain an evolving constant. According to Sean Power, security operations manager with DOSarrest, amplification attacks such as SNMP are not really that new. “Legitimate SNMP traffic has no need to leave your network and should be prevented from doing so. This attack exists because many organisations fail to prevent this,” he explained. Power went on to say that the effectiveness of the attack stems from the fact that any web site can be targeted with very little effort to produce excessive traffic, since the attack relies on third-party unsecured networks to do most of the heavy lifting. “Blocking these attacks is best done via your edge devices as far removed from the targets as possible,” he said, adding that if the attack is large enough to overwhelm your edge devices, then you need to look at cloud-based technology for cleaning the traffic.

Also commenting on the report, Tom Cross, director of security research for Lancope, said that many people who launch attacks on the Internet do so using toolkits that make the process of launching attacks as easy as installing a software application and running it.
“DDoS attacks have become increasingly popular, there are many ways to launch them and lots of different tools circulating that launch attacks in different ways. As a consequence, anyone providing service on the Internet should be prepared for volumetric traffic floods involving any kind of Internet traffic,” he explained. Cross says that it is also important that people do not allow their networks to serve as reflectors that attackers can use to amplify their denial of service attacks. “To that end, DNS, SNMP, NTP, and Voice over IP services in particular should be checked to make sure that they cannot be used by an anonymous third party as a reflector. Locking down these services is part of being a good citizen of the Internet,” he said.

Source: http://www.scmagazineuk.com/repeat-attacks-hit-two-thirds-of-ddos-victims/article/348960/
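
Cross’s point about reflectors can be partially self-tested. The Python sketch below, which uses only the standard library, sends a single recursion-desired DNS query to a server and reports whether it answers with the recursion-available bit set and a populated answer section, which is the behaviour DNS amplification attacks rely on. It is a rough check rather than a full audit, the host and query name are placeholders, and it should only be pointed at infrastructure you are responsible for.

    import socket
    import struct

    def is_open_resolver(host, port=53, qname="example.com", timeout=3.0):
        """Return True if `host` answers a recursive A query from the outside."""
        # DNS header: ID, flags with RD (recursion desired) set, QDCOUNT=1.
        header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        # Question section: length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1).
        question = b"".join(
            bytes([len(label)]) + label.encode("ascii")
            for label in qname.split(".")
        ) + b"\x00" + struct.pack(">HH", 1, 1)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            try:
                sock.sendto(header + question, (host, port))
                data, _ = sock.recvfrom(512)
            except OSError:
                return False  # no usable answer: not acting as an open resolver for us
        if len(data) < 12:
            return False
        _, flags, _, ancount, _, _ = struct.unpack(">HHHHHH", data[:12])
        recursion_available = bool(flags & 0x0080)
        return recursion_available and ancount > 0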


Detecting Constant Low-Frequency Application Layer DDoS Attacks Using Collaborative Algorithms

Abstract – A DDoS (Distributed Denial of Service) attack is a large-scale, distributed attempt by malicious attackers to fill a user’s network with a massive number of packets. It exhausts resources such as bandwidth and computing power, so the victim can no longer provide services to its clients and network performance collapses. Methods such as hop-count filtering, rate limiting and statistical filtering are used for recovery. In this paper we explore two new information metrics, a generalized entropy metric and an information distance metric. They can detect low-rate DDoS attacks by measuring the difference between legitimate traffic and attack traffic. The generalized entropy metric can detect attacks several hops earlier than the traditional Shannon metric, and the proposed information distance metric outperforms the popular Kullback–Leibler divergence approach because it can enlarge the adjudication distance and obtain the optimal detection sensitivity. Furthermore, the IP traceback algorithm can find the attackers, as well as their attacks, through local area networks (LANs) and discard the attack traffic.

Index Terms – Attack detection, information metrics, IP traceback, low-rate distributed denial of service (DDoS) attack.

I. INTRODUCTION

In today’s networks, information has to be secured both while it is accessed and while it is transmitted. Plenty of hacking tools are available for capturing the information transmitted over the network, so a standard security mechanism is needed; the information on the network has to stay out of the reach of intruders. A DDoS attack impacts the bandwidth, processing capacity or memory of a network, and it can occupy both wired and wireless networks. A low-rate DDoS attack is an intelligent attack: the attacker can send large numbers of attack packets to the victim at a rate kept low enough to elude detection. Combinations of large-scale DDoS attacks and multiple low-rate attacks make such attacks difficult to detect and to defend against.

Several DDoS attack detection metrics are in use today; they are mainly separated into two categories: i) signature-based metrics and ii) anomaly-based metrics. A signature-based metric relies on a predefined set of attack signatures, such as patterns or strings, that are matched against incoming packets. An anomaly-based detection metric typically models normal network (traffic) behaviour and compares incoming network behaviour against it. Anomaly-based detection has several limitations: i) attackers can train detection systems to gradually accept anomalous network behaviour as normal; ii) the false-positive rate of anomaly-based detection metrics is generally higher than that of signature-based detection metrics, and it is difficult to set a threshold that balances the rate of false positives against the rate of false negatives; and iii) precisely extracting the features of normal and anomalous network behaviour is very difficult.
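
The abstract contrasts the traditional Shannon entropy and Kullback–Leibler divergence with a generalized (Rényi-style) entropy and information distance. As a purely illustrative Python sketch of what these quantities compute over a sampled traffic distribution, say the share of packets per source IP in one sampling interval, using the standard textbook definitions rather than the paper’s exact parameter choices and with made-up numbers:

    import math

    def shannon_entropy(p):
        """H(P) = -sum p_i * log2(p_i)."""
        return -sum(x * math.log2(x) for x in p if x > 0)

    def renyi_entropy(p, alpha):
        """Generalized (Renyi) entropy of order alpha (alpha >= 0, alpha != 1);
        it converges to the Shannon entropy as alpha approaches 1."""
        return math.log2(sum(x ** alpha for x in p if x > 0)) / (1.0 - alpha)

    def renyi_divergence(p, q, alpha):
        """D_alpha(P||Q) = log2(sum p_i^alpha * q_i^(1-alpha)) / (alpha - 1);
        it converges to the Kullback-Leibler divergence as alpha approaches 1."""
        s = sum((x ** alpha) * (y ** (1.0 - alpha))
                for x, y in zip(p, q) if x > 0 and y > 0)
        return math.log2(s) / (alpha - 1.0)

    def symmetric_distance(p, q, alpha):
        """A symmetrised information distance: D_alpha(P, Q) = D_alpha(P||Q) + D_alpha(Q||P)."""
        return renyi_divergence(p, q, alpha) + renyi_divergence(q, p, alpha)

    # Toy distributions: legitimate traffic spread evenly over four sources vs.
    # attack traffic concentrated on one source (values invented for illustration).
    legit = [0.25, 0.25, 0.25, 0.25]
    attack = [0.70, 0.10, 0.10, 0.10]
    print(shannon_entropy(legit), shannon_entropy(attack))      # entropy drops under attack
    print(renyi_entropy(legit, 2), renyi_entropy(attack, 2))    # the drop is sharper for alpha > 1
    print(symmetric_distance(legit, attack, 2))                 # distance between the two distributions
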
An anomaly-based detection metric uses a predefined, specific threshold (for example, an abnormal deviation of parameters related to some statistical characteristics of normal network traffic) to identify abnormal traffic among the normal traffic. It is therefore important to be decisive when choosing the statistical methods and tools. It is generally accepted that the fractional Gaussian noise function can be used to simulate real aggregate network traffic and that the Poisson distribution can be used to simulate aggregate DDoS attack traffic.

Many information-theory-based metrics have been proposed to overcome the limitations above. In information theory, information entropy is a measure of the uncertainty associated with a random variable, and information distance (or divergence) is a measure of the difference between probability distributions. Shannon’s entropy and the Kullback–Leibler divergence have both been regarded as effective methods, based on IP address distribution statistics, for detecting abnormal traffic. Detection time and detection accuracy are the two most important criteria for rating a DDoS defence system. In this paper we present two new and effective anomaly-based detection metrics that not only identify attacks quickly, but also reduce the rate of false positives compared with the traditional Shannon entropy and Kullback–Leibler divergence methods.

Contributions. The main contributions of this paper are as follows: 1) it analyses the advantages of the generalized entropy and information distance compared with the Shannon entropy and Kullback–Leibler distance, respectively; 2) it tunes the generalized entropy and information distance metrics to outperform the traditional Shannon entropy and Kullback–Leibler distance metrics at low-rate DDoS attack detection in terms of quick detection, a low rate of false positives and stability; 3) it proposes an effective IP traceback scheme, based on an information distance metric, that can trace all attacks launched from local area networks (LANs) and drive them back in a short time.

ALGORITHMS FOR DETECTION AND IP TRACEBACK ANALYSIS

In this section we propose and analyse two effective detection algorithms and an IP traceback scheme. We make the following assumptions: 1) we have full control of all the routers; 2) we have extracted an effective feature of the network traffic from which to sample its probability distribution; 3) we have obtained and stored, in advance, the average normal traffic as well as the local thresholds on the routers themselves; 4) on all routers, the attack traffic obeys a Poisson distribution and the normal traffic obeys a Gaussian noise distribution. The algorithm can detect DDoS attacks not only at a single router via single-point detection, but also through collaborative detection across routers. Fig. 2 shows the processing flowchart of the collaborative detection algorithm. Compared with single-point detection, collaborative detection can find attacks even earlier, provided the traffic can be analysed before it converges on the victim. The divergence and the distance increase simultaneously.
By increasing the divergence between legitimate traffic and attack traffic we can distinguish DDoS attacks more easily and earlier. Therefore, in DDoS attack detection we can take full advantage of the additive and increasing properties of the information divergence and the information distance to enlarge the distance, or gap, between legitimate traffic and attack traffic. This means we can find and raise alarms for DDoS attacks quickly and accurately, with a lower rate of false positives, at upstream routers instead of at the victim’s router. In information theory, both information divergence and information distance are nonnegative, and the sum of the divergences or distances is always at least as large as each individual term.

C. IP Traceback Analysis

IP traceback is the ability to find the source of an IP packet without relying on the source IP field in the packet, which is often spoofed. We combine our DDoS attack detection metric with an IP traceback algorithm and filtering technology to form an effective collaborative defence mechanism against network security threats on the Internet. In hop-by-hop IP tracing, the more hops there are, the more tracing processes are needed and the longer the trace takes.

Listing 1. A collaborative DDoS attack detection algorithm

1. Set the sampling frequency as f, the sampling period as T, and the collaborative detection threshold as θ.
2. In routers R1 and R2 of Fig. 1, sample in parallel the network traffic coming from the upstream routers R3, R4, R5, R6 and from LAN1, LAN2.
3. Calculate in parallel the numbers of packets that have various recognizable characteristics (e.g., the source IP address or the packet size) in each sampling time interval τ (τ = 1/f) within T.
4. Calculate in parallel the probability distributions of the network traffic coming from R3, R4, LAN1 and from R5, R6, LAN2.
5. Calculate their distances on routers R1 and R2, respectively, using the formula D_α(P, Q) = D_α(P||Q) + D_α(Q||P).
6. Sum the distances.
7. If the summed distance is more than the collaborative detection threshold θ, the system detects a DDoS attack, raises an alarm and discards the attack packets; otherwise the routers forward the packets to the downstream routers.
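
Listing 1 above is reproduced (lightly cleaned up) from the paper. Under one plausible reading of steps 4 to 7, in which each router compares the current interval’s traffic distribution against a stored normal baseline (assumption 3) using the symmetric distance from step 5, a stripped-down Python sketch of the collaborative check could look like this; the sampling machinery, the baselines and the threshold are all stubbed out as assumptions:

    import math

    def renyi_divergence(p, q, alpha=2.0):
        """Generalized divergence D_alpha(P||Q) used by the detection metric."""
        s = sum((x ** alpha) * (y ** (1.0 - alpha))
                for x, y in zip(p, q) if x > 0 and y > 0)
        return math.log2(s) / (alpha - 1.0)

    def symmetric_distance(p, q, alpha=2.0):
        """D_alpha(P, Q) = D_alpha(P||Q) + D_alpha(Q||P), as in step 5 of Listing 1."""
        return renyi_divergence(p, q, alpha) + renyi_divergence(q, p, alpha)

    def collaborative_detect(observed_r1, baseline_r1, observed_r2, baseline_r2, threshold):
        """observed_*: normalised distribution of a traffic feature (e.g. packets per
        source IP) measured at routers R1 and R2 in the current sampling interval.
        baseline_*: the stored distribution of known-normal traffic at the same routers.

        Returns True (raise an alarm and discard the attack packets) when the summed
        distance over both routers exceeds the collaborative detection threshold."""
        total = (symmetric_distance(observed_r1, baseline_r1)
                 + symmetric_distance(observed_r2, baseline_r2))
        return total > threshold
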
For convenience in analysing the IP traceback algorithm, we classify the traffic in Figs. 1 and 3 into two types: local traffic and forward traffic. The local traffic of a router is the traffic generated from its own LAN; its forward traffic is the sum of its local traffic and the traffic forwarded from its immediate upstream routers. We propose an IP traceback algorithm that can trace the source (zombies) of the attack up to its local administrative network; Listing 2 illustrates this algorithm.

Listing 2. An IP traceback algorithm in DDoS attack detection

    IP_Traceback_Algorithm()
        while (true)
            call Check_ForwardTraffic(0)    // check for attacks on router R0 (or the victim)

    Check_ForwardTraffic(i)
        calculate the information distance of router Ri's forward traffic
        if this distance > Ri's forward-traffic detection threshold
            call Check_LocalTraffic(i)
            for j = 1 to n
                k = the ID of the j-th immediate upstream router of router Ri
                call Check_ForwardTraffic(k)
            end for
        end if

    Check_LocalTraffic(i)
        calculate the information distance of router Ri's local traffic
        if this distance > Ri's local-traffic detection threshold
            stop forwarding the attack traffic to the downstream routers (or the destination), and label the zombie
        end if

The proposed IP traceback algorithm is described here for a sample scenario of low-rate DDoS attacks on a victim. When the detection system detects an attack on the victim, the traceback algorithm is launched immediately. On the victim’s router (whose LAN, in this paper, is taken to include the victim), the traceback algorithm calculates information distances based on variations of its local traffic and of the forward traffic from its immediate upstream routers. If the information distance based on its local traffic exceeds the detection threshold, the detection system has found an attack inside its own LAN; this means the detected attack is an internal attack. If the information distances based on the forward traffic from its immediate upstream routers exceed their respective detection thresholds, the detection system has found attacks passing through those routers, and the traceback algorithm is run on each of them in turn, calculating information distances based on variations of their local traffic and of the forward traffic from their own immediate upstream routers. If no attack is found in their LANs, the algorithm continues in the same way on the next tier of upstream routers until it finds the LAN that contains the attack source (zombie), at which point that router stops forwarding the traffic from the zombie immediately.
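
Listing 2, as reconstructed above, is essentially a depth-first walk up the router tree that stops at any LAN whose local traffic exceeds its threshold. A compact Python restatement of that control flow is sketched below; the topology, the distance measurements and the thresholds are placeholders standing in for what the routers would actually measure, so this shows the recursion only, not the metric itself.

    def trace_attack(router, upstream, forward_distance, local_distance,
                     forward_threshold, local_threshold, block):
        """Recursive traceback in the spirit of Listing 2.

        router                 -- router being checked (start at the victim's router, R0)
        upstream[r]            -- list of r's immediate upstream routers
        forward_distance(r)    -- information distance of r's forward traffic
        local_distance(r)      -- information distance of r's local (own-LAN) traffic
        forward_threshold[r], local_threshold[r] -- per-router detection thresholds
        block(r)               -- stop forwarding traffic from r's LAN and label the zombie
        """
        if forward_distance(router) <= forward_threshold[router]:
            return                      # nothing anomalous is flowing through this router
        if local_distance(router) > local_threshold[router]:
            block(router)               # the zombie sits in this router's own LAN
        for up in upstream.get(router, []):
            trace_attack(up, upstream, forward_distance, local_distance,
                         forward_threshold, local_threshold, block)
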
RELATED WORK

The metrics of anomaly-based detection have been the focus of intense study for years in an attempt to detect intrusions and attacks on the Internet, and information theory has recently become one of the statistical tools increasingly used for anomaly detection. Feinstein et al. present methods to identify DDoS attacks by computing the entropy and frequency-sorted distributions of selected packet attributes. DDoS attacks show up as anomalies in the characteristics of the selected packet attributes, and detection accuracy and performance can be analysed using live traffic traces from a variety of network environments. However, because the proposed detector and responder lack coordination with each other, the impact of the responses on legitimate traffic and the cost of computational analysis may increase. Yu and Zhou applied an information-theoretic parameter to discriminate DDoS attacks from surges of legitimate access. Their technique is based on regularities shared by different DDoS attack traffic, which differentiate it from genuine surges of access over a short period of time. However, the proposed detection algorithm is only helpful for predicting a single direction, or a limited number of directions, of attack; the real problem arises when attackers adopt multiple attack-packet generation functions in one attack. Lee and Xiang used various information-theoretic measures, such as entropy, conditional entropy, relative conditional entropy, information gain and information cost, for anomaly detection. To some extent such measures can be used to evaluate the quality of anomaly detection methods and to build appropriate anomaly detection models, but it is hard to build an adaptive model that can dynamically adjust itself to different sequence lengths or time windows based on run-time information.

A low-rate DDoS attack is substantially different from a high-rate DDoS attack, which is considered the traditional type of DDoS attack, and a number of researchers have proposed detection schemes against it. Sun et al. proposed a distributed detection mechanism that uses a dynamic time warping method to identify the presence of low-rate attacks, after which a fair resource allocation mechanism is used to minimise the number of affected flows. However, this method can lose some legitimate traffic. Shevtekar et al. gave a lightweight data structure to store the necessary flow history at edge routers in order to detect low-rate TCP DoS attacks. Although this method can detect any periodic pattern in the flows, it may not be scalable and can be deceived by IP address spoofing. Chen et al. present a collaborative detection of DDoS attacks; while it focuses on detection rate, it is difficult for this scheme to differentiate normal flash crowds from real attacks, and because it relies heavily on the normal operation of participating routers, the false positives will increase if the routers are compromised. Zhang et al. propose using self-similarity to detect low-rate DDoS attacks; while the approach is claimed to be effective, the paper does not use real scenario data to evaluate it.

The Kullback–Leibler divergence, as a well-known information divergence, has been used by researchers to detect abnormal traffic such as DDoS attacks. The difference between previous work and our research is that we are the first to propose using the generalized information divergence for DDoS attack detection. Information divergence, as the generalized divergence, can yield many concrete divergence forms according to different values of its order; for example, as the order approaches 1 it reduces to the Kullback–Leibler divergence. It is important that we can obtain the optimal value of divergence between the attack traffic and the legitimate traffic in a DDoS detection system by adjusting the order of the information divergence. In addition, we study the properties of the Kullback–Leibler divergence and the information divergence in theory and overcome their asymmetry when they are used in real measurements. We successfully convert the information divergence into an effective metric for DDoS attack detection, covering both low-rate and high-rate attacks.

V. CONCLUSION

In this paper we described different techniques for the prevention of denial of service attacks. A new methodology was proposed alongside the existing packet marking technique, in which the marking information carries the lifetime of the packet, making the traceback process an accurate one. The proposed metrics increase the information distance between attack traffic and legitimate traffic, which allows low-rate DDoS attacks to be detected quickly and the false positive rate to be reduced. The proposed information distance metric overcomes the asymmetry of both the Kullback–Leibler and information divergences.
The IP traceback scheme based on information metrics can effectively trace all attacks launched from LANs (zombies). Our proposed information metrics improve the performance of low-rate DDoS attack detection and IP traceback over the traditional approaches.

Source: http://www.scribd.com/doc/226717154/Detecting-Constant-Low-Frequency-Appilication-Layer-Ddos-Attacks-Using-Collaborative-Algorithms


Tens of thousands of pirate gamers enslaved by Bitcoin botnet

‘Watch Dogs’ players targeted for access to their juicy GPUs

Tens of thousands of pirate gamers have been enslaved in a Bitcoin botnet after downloading a leaked copy of popular game Watch Dogs.…


