Tag Archives: defend against ddos

Community college targeted in ongoing DDoS attack

Walla Walla Community College is under cyberattack this week by what are believed to be foreign computers that have jammed the college's Internet systems. Bill Storms, technology director, described it as akin to having too many cars on a freeway, causing delays and disruption for those wanting to connect to the college's website. The type of attack is a distributed denial of service, or DDoS. Such attacks are often the result of hundreds or even thousands of computers outside the U.S. that are programmed with viruses to continually connect to and overload targeted servers.

Storms said bandwidth monitors noticed the first spike of attacks on Sunday. To stop the attacks, college officials have had to periodically shut down the Web connection while providing alternative working Internet links to students and staff. The fix, so far, has only been temporary, as the problem often returns the next day. "We think we have it under control in the afternoon. And we have a quiet period," Storms said. "And then around 9 a.m. it all comes in again."

Walla Walla Community College may not be the only victim. Storms said he was informed that as many as 39 other state agencies have been the target of similar DDoS attacks. As for the reason for the attack, none was given to college officials. Storms noted campus operators did receive a number of unusual phone calls in which the callers said they were in control of the Internet, but no demands were made. "Some bizarre phone calls came in, and I don't know whether to take them serious or not," Storms said. State officials have been contacted and are aiding the college with the problem. Storms said they have no idea how long the DDoS attack will last.

Source: http://union-bulletin.com/news/2015/apr/30/community-college-targeted-ongoing-cyberattack/


FBI investigating Rutgers University DDoS attacks

The FBI is working with Rutgers University to identify the source of a series of distributed denial-of-service (DDoS) attacks that have plagued the school this week. The assault began Monday morning and took down internet service across the campus, according to NJ.com. Some professors had to cancel classes, and students were unable to enroll, submit assignments or take finals, since Wi-Fi service and email have been affected, as has an online resource called Sakai. This is the second DDoS attack on the university this month and the third since November. Authorities and the Rutgers Office of Information Technology (OIT) haven't released any details thus far about the possible source of the attacks. Currently, only certain parts of the university have internet service. The school will post frequent updates to the Rutgers website about its progress in restoring service. Source: http://www.scmagazine.com/the-fbi-is-helpign-rutger-inveigate-a-series-of-ddos-attack/article/412149/


One fifth of DDoS attacks last over a day

Some 20 per cent of DDoS attacks take a site down for 24 hours or more, causing lasting damage, according to research by Kaspersky. In fact, almost a tenth of the companies surveyed said their systems were down for several weeks or longer, while less than a third said they had disruption lasting less than an hour. The investigation revealed that the majority of attacks (65 per cent) caused severe delays or complete disruption, while only a third caused no disruption at all.

Evgeny Vigovsky, head of Kaspersky DDoS Protection, said: "For companies, losing a service completely for a short time, or suffering constant delays in accessing it over several days, can be equally serious problems. Both situations can impact customer satisfaction and their willingness to use the same service in the future. Using reliable security solutions to protect against DDoS attacks enables companies to give their customers uninterrupted access to online services, regardless of whether they are facing a powerful short-term assault or a weaker but persistent long-running campaign."

The company highlighted an attack on Github at the end of March, when Chinese hackers brought the site down. That attack lasted 118 hours and demonstrated that even large communities are at risk. Last month, another study by Kaspersky revealed that only 37 per cent of companies were prepared for a DDoS attack, despite 26 per cent of them being concerned that the problems caused by such attacks were long-term, meaning they could lose current or prospective clients as a result.

Source: http://www.itpro.co.uk/security/24514/one-fifth-of-ddos-attacks-last-over-a-day


Featured article: How to use a CDN properly and make your website faster

It's one of the biggest mysteries I have seen in my 15+ years of Internet hosting and cloud-based services: why do people use a Content Delivery Network for their website yet never fully optimize their site to take advantage of the speed and volume capabilities of the CDN? Just because you use a CDN doesn't mean your site is automatically faster, or even able to take advantage of its ability to dish out mass amounts of content in the blink of an eye. At DOSarrest I have seen the same mystery continue, which is why I have put together this piece on using a CDN, in the hope of helping those who wish to take full advantage of one. Most of this information is general and can be applied to any CDN, but I'll also throw in some specifics that relate to DOSarrest.

Some common misconceptions about using a CDN:

1. As soon as I'm configured to use a CDN, my site will be faster and able to handle a large number of web visitors on demand.
2. Website developers create websites that are already optimized, and a CDN won't really change much.
3. There's really nothing I can do to make my website run faster once it's on a CDN.
4. All CDNs are pretty much the same.

Here's what I have to say about the misconceptions noted above:

1. In most cases the answer to this is... NO! If the CDN is not caching your content, your site won't be faster; in fact it will probably be a little slower, as every request will have to go from the visitor to the CDN, which will in turn fetch it from your server and then turn around and send the response back to the visitor.
2. In my opinion and experience, website developers in general do not optimize websites to use a CDN. In fact most websites don't even take full advantage of a browser's caching capability. As the Internet has become ubiquitously faster, this fine art has been left by the wayside in most cases. Another reason, I think, is that websites are huge and complex, a lot of content is dynamically generated, and servers are very fast with large amounts of memory. Why spend time optimizing caching when a fast server will overcome the overhead?
3. Oh yes you can, and that's why I have written this piece; see below.
4. No, they aren't. Many CDNs don't want you to know how things are really working on every node that is broadcasting your content. If you have to subscribe to a third-party monitoring service to find out, do it; it can be fairly expensive but is well worth it. How else will you know how your site is performing from other geographic regions?

A good CDN should let you know the following in real time, but many don't:

a. Number of connections/requests between the CDN and visitors.
b. Number of connections/requests between the CDN and your server (origin). You want the number of requests to your server to be less than the number of requests from the CDN to your visitors. Tip: use HTTP 1.1 on both "a" and "b" above, and try to extend the keep-alive time on the origin-to-CDN side.
c. Bandwidth between the CDN and Internet visitors.
d. Bandwidth between the CDN and your server (origin). Tip: if the bandwidth of "c" and "d" is about the same, news flash... you can make things better.
e. Cache status of your content (how many requests are being served by the CDN). Tip: this is the best metric for really knowing whether you are using your CDN properly.
f. Performance metrics from outside the CDN but in the same geographic region. Tip: once you have performance metrics from several different geographic regions you can compare the differences once you are on a CDN; your site should load faster the further away the region is from your origin server, if you're caching properly.

For the record, DOSarrest provides all of the above in real time, and it's these tools I'll use to explain how to take full advantage of any CDN. Without metrics there is no scientific way to know you're on the right track to making your site super fast.

There are five main groups of cache-control headers that will affect how and what is cached:

Expires: When attempting to retrieve a resource, a browser will usually check to see if it already has a copy available for reuse. If the Expires date has passed, the browser will download the resource again.

Cache-Control: Introduced in HTTP 1.1, this expands on the functionality offered by Expires. There are several options available for the Cache-Control header:
- public: This resource is cacheable. In the absence of any contradicting directive this is assumed.
- private: This resource is cacheable by the end user only. All intermediate caching devices will treat this resource as no-cache.
- no-cache: Do not cache this resource.
- no-store: Do not cache, do not store the request, I was never here, we never spoke. Capiche?
- must-revalidate: Do not use stale copies of this resource.
- proxy-revalidate: The end user may use stale copies, but intermediate caches must revalidate.
- max-age: The length of time (in seconds) before a resource is considered stale.
A response may include any combination of these directives, for example: private, max-age=3600, must-revalidate.

X-Accel-Expires: This functions just like the Expires header, but is only intended for proxy services. This header is intended to be ignored by browsers, and when the response traverses a proxy this header should be stripped out.

Set-Cookie: While not explicitly specifying a cache directive, cookies are generally designed to hold user- and/or session-specific information. Caching such resources would have a negative impact on the desired site functionality.

Vary: Lists the headers that should determine distinct copies of the resource. A cache will need to keep a separate copy of this resource for each distinct set of values in the headers indicated by Vary. A Vary response of "*" indicates that each request is unique.

Given that most websites, in my opinion, are not fully taking advantage of caching by the browser or by a CDN (if you're using one), there is still a way around this without reviewing and adjusting every cache-control header on your website: any CDN worth its cost, as well as any cloud-based DDoS protection service, should be able to override most website cache-control headers.
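If you do control your origin and want to set these headers yourself rather than rely on an override at the edge, the sketch below shows one way to do it. It is a minimal, hypothetical example using Python's standard library, not anything from DOSarrest's service; the port, the file extensions treated as static, and the max-age values are all assumptions you would tune for your own site and CDN.

```python
# Minimal sketch: an origin that sets Cache-Control headers so a CDN
# (or browser) can cache static assets. Values are illustrative only.
from http.server import HTTPServer, SimpleHTTPRequestHandler


class CachingHandler(SimpleHTTPRequestHandler):
    # File extensions we consider safe to cache aggressively (an assumption).
    STATIC_EXTENSIONS = (".css", ".js", ".png", ".jpg", ".gif", ".woff2")

    def end_headers(self):
        if self.path.lower().endswith(self.STATIC_EXTENSIONS):
            # Static assets: any cache (CDN or browser) may keep them for 1 hour.
            self.send_header("Cache-Control", "public, max-age=3600")
        else:
            # HTML and everything else: cache briefly, then revalidate.
            self.send_header("Cache-Control", "public, max-age=60, must-revalidate")
        super().end_headers()


if __name__ == "__main__":
    # Serve the current directory on port 8000 for local testing.
    HTTPServer(("0.0.0.0", 8000), CachingHandler).serve_forever()
```

With headers like these in place, a CDN can answer repeat requests for static assets without going back to the origin, which is the same effect the "Forced Caching" demonstration below achieves by overriding headers at the edge.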
For demonstration purposes we used our own live website, DOSarrest.com, and ran a traffic generator to stress the server a little along with our regular visitor traffic. This demonstration shows what's going on when passing through a CDN, with respect to activity between the CDN and the Internet visitor and between the CDN and the customer's server on the back end. At approximately 16:30 we enabled a feature of DOSarrest's service we call "Forced Caching". What this does is override, in other words ignore, some of the origin server's cache-control headers.

These are the results: notice that bandwidth between the CDN and the origin (second graph) has fallen by over 90%, which saves resources on the origin server and makes things faster for the visitor. This is the best graphic illustration to let you know that you're on the right track. Cache hits go way up, not-cached responses go down, and expired requests and misses are negligible. The graph below shows that requests to the origin have dropped by 90%, telling you the CDN is doing the heavy lifting. Last but not least, this is the fruit of your labor as seen by 8 sensors in 4 geographic regions from our customer "DEMS" portal: the site is running 10 times faster in every location, even under load!
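If you don't have a real-time portal like the one described above, you can still spot-check caching behaviour from the client side. The following sketch is a hypothetical example, not part of the original article: the URL is a placeholder, and the Age and X-Cache headers it looks for are common but not universal, since each CDN names its cache-status headers differently. It fetches the same resource twice and prints the cache-related headers; a growing Age value or a HIT marker on the second request suggests the CDN, not your origin, served it.

```python
# Sketch: spot-check whether a CDN is serving a resource from cache.
# Header names differ between CDNs; Age and X-Cache are common but not universal.
import requests

URL = "https://www.example.com/static/app.js"  # placeholder URL

def show_cache_headers(label: str) -> None:
    response = requests.get(URL, timeout=10)
    interesting = ("Cache-Control", "Expires", "Age", "X-Cache", "Vary")
    print(f"--- {label} ---")
    for name in interesting:
        if name in response.headers:
            print(f"{name}: {response.headers[name]}")

if __name__ == "__main__":
    show_cache_headers("first request")   # may be a cache MISS that warms the edge
    show_cache_headers("second request")  # a non-zero Age or a HIT suggests it was cached
```

Running the same check from machines in different regions gives a rough, do-it-yourself version of the multi-region sensors mentioned above.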


Thirty Meter Telescope website falls over in hacktivist DDoS attack

Hacktivists have launched a distributed denial-of-service attack against the website of TMT (Thirty Meter Telescope), which is planned to be the Northern Hemisphere's largest, most advanced optical telescope. For at least two hours yesterday, the TMT website at www.tmt.org was inaccessible to internet users. Sandra Dawson, a spokesperson for the TMT project, confirmed to the Associated Press that the site had come under attack: "TMT today was the victim of an unscrupulous denial of service attack, apparently launched by Anonymous. The incident is being investigated."

You might think that a website about a telescope is a strange target for hackers wielding the blunt weapon of a DDoS attack, who might typically be more interested in attacking government websites for political reasons or taking down an unpopular multinational corporation. Why would hackers want to launch such a disruptive attack against a telescope website? Surely the only people who don't like telescopes are the aliens in outer space who might be having their laundry peeped at from Earth?

It turns out there's a simple reason why the Thirty Meter Telescope is stirring emotions so strongly: it hasn't been built yet. The proposed TMT is controversial because it is planned to be constructed on Mauna Kea, a dormant 13,796-foot-high volcano in Hawaii. This has incurred the wrath of environmentalists and native Hawaiians who consider the land to be sacred, and there has been considerable opposition to the building of the telescope on Mauna Kea, as a news report from last year makes clear. Now it appears the protest about TMT has spilt over onto the internet in the form of a denial-of-service attack.

Operation Green Rights, an Anonymous-affiliated group which also campaigns against controversial corporations such as Monsanto, claimed on its Twitter account and website that it was responsible for the DDoS attack. The hacktivists additionally claimed credit for taking down the Aloha State's official website.

It is clear that denial-of-service attacks are being deployed more and more, as perpetrators attempt to use the anonymity of the internet to hide their identity and stage the digital version of a "sit-down protest" or blockade to disrupt organisations. Tempting as it may be to participate in a DDoS attack, it's important that everyone remembers that if the authorities determine you were involved, you can end up going to jail as a result. Peaceful, law-abiding protests are always preferable.

Source: http://www.welivesecurity.com/2015/04/27/tmt-website-ddos/


DDoS attack brings down TRAI’s website

Indian telecom regulator TRAI's official website was brought down on Monday by a hacker group called Anonymous India, following the public release of the email IDs from which the government body had received responses regarding net neutrality. The group also warned that TRAI would be hacked soon. "TRAI down! Fuck you http://trai.gov.in for releasing email IDs publicly and helping spammers. You will be hacked soon," AnonOpsIndia tweeted. The group claimed to have launched a DDoS (distributed denial-of-service) attack on the website to make it inaccessible.

Slamming the government portal, the group posted: "#TRAI is so incompetent lol They have any clue how to tackle a DDoS?" "But just an alarm for whole #India. You trust incompetent #TRAI who don't know how to deal with DDoS? Seriously sorry guys. Goodluck!," it added. Taking a dig at the personnel at TRAI, it tweeted: "Somebody call 'brilliant minds' at TRAI and tell them to stop eating samosas and get back to work coz DDoS attack has stopped from here."

In a response to a Twitter user about the attack, Anonymous India said it was "just preventing spammers from accessing those Email IDs posted by Trai publicly." It said that TRAI is incompetent in dealing with the internet. "So those who still think that #TRAi can "handle" the Internet, we just proved you wrong.They just got trolled by bunch of kids.#Incompetence," the hacker group tweeted.

Following tweets suggesting the hacker group stop its actions, Anonymous India did so. However, the group complained that no action had been taken on the email IDs that were revealed. "Guys http://trai.gov.in is back online and they still haven't done anything about those Email IDs. You guys told us to stop. We did," it tweeted. "So if you guys still think you can have a chat with incompetent #TRAi, go ahead. But WE ARE WATCHING!," the group posted.

Source: http://indiablooms.com/ibns_new/news-details/N/10099/hacker-group-brings-down-trai-s-website.html


A Javascript-based DDoS Attack as seen by Safe Browsing

To protect users from malicious content, Safe Browsing's infrastructure analyzes web pages with web browsers running in virtual machines. This allows us to determine if a page contains malicious content, such as Javascript meant to exploit user machines. While machine learning algorithms select which web pages to inspect, we analyze millions of web pages every day and achieve good coverage of the web in general.

In the middle of March, several sources reported a large Distributed Denial-of-Service attack against the censorship monitoring organization GreatFire. Researchers have extensively analyzed this DoS attack and found it novel because it was conducted by a network operator that intercepted benign web content to inject malicious Javascript. In this particular case, Javascript and HTML resources hosted on baidu.com were replaced with Javascript that would repeatedly request resources from the attacked domains. While Safe Browsing does not observe traffic at the network level, it affords good visibility at the HTTP protocol level, so our infrastructure picked up this attack, too. Using Safe Browsing data, we can provide a more complete timeline of the attack and shed light on which injections occurred when. For this blog post, we analyzed data from March 1st to April 15th, 2015.

Safe Browsing first noticed injected content against baidu.com domains on March 3rd, 2015. The last time we observed injections during our measurement period was on April 7th, 2015. This is visible in the graph below, which plots the number of injections over time as a percentage of all injections observed.

We noticed that the attack was carried out in multiple phases. The first phase appeared to be a testing stage and was conducted from March 3rd to March 6th. The initial test target was 114.113.156.119:56789 and the number of requests was artificially limited. From March 4th to March 6th, the request limitations were removed.

The next phase was conducted between March 10th and 13th and at first targeted the IP address 203.90.242.126. Passive DNS places hosts under the sinajs.cn domain at this IP address. On March 13th, the attack was extended to include d1gztyvw1gvkdq.cloudfront.net. At first, requests were made over HTTP and were then upgraded to use HTTPS. On March 14th, the attack started for real and targeted d3rkfw22xppori.cloudfront.net, both via HTTP as well as HTTPS. Attacks against this specific host were carried out until March 17th.

On March 18th, the number of hosts under attack was increased to include the following: d117ucqx7my6vj.cloudfront.net, d14qqseh1jha6e.cloudfront.net, d18yee9du95yb4.cloudfront.net, d19r410x06nzy6.cloudfront.net and d1blw6ybvy6vm2.cloudfront.net. This is also the first time we found truncated injections, in which the Javascript is cut off and non-functional. At some point during this phase of the attack, the cloudfront hosts started serving 302 redirects to greatfire.org as well as other domains.

Substitution of Javascript ceased completely on March 20th, but injections into HTML pages continued. Whereas Javascript replacement breaks the functionality of the original content, injection into HTML does not. Here the HTML is modified to include both a reference to the original content and the attack Javascript, as shown below:

[… regular attack Javascript …]

In this technique, the web browser fetches the same HTML page twice, but due to the presence of the query parameter t, no injection happens on the second request.
The attacked domains also changed and now consisted of dyzem5oho3umy.cloudfront.net, d25wg9b8djob8m.cloudfront.net and d28d0hakfq6b4n.cloudfront.net. About 10 hours after this new phase started, we saw 302 redirects to a different domain served from the targeted servers.

The attack against the cloudfront hosts stopped on March 25th. Instead, resources hosted on github.com were now under attack. The first new target was github.com/greatfire/wiki/wiki/nyt/, quickly followed by github.com/greatfire/ as well as github.com/greatfire/wiki/wiki/dw/. On March 26th, a packed and obfuscated attack Javascript replaced the plain version and started targeting the following resources: github.com/greatfire/ and github.com/cn-nytimes/. Here we also observed some truncated injections. The attack against github seems to have stopped on April 7th, 2015, which marks the last time we saw injections during our measurement period.

From the beginning of March until the attacks stopped in April, we saw 19 unique Javascript replacement payloads, as represented by their MD5 sums in the pie chart below. For the HTML injections, the payloads were unique due to the injected URL, so we are not showing their respective MD5 sums; however, the injected Javascript was very similar to the payloads referenced above.

Our systems saw injected content on the following eight baidu.com domains and corresponding IP addresses:

- cbjs.baidu.com (123.125.65.120)
- eclick.baidu.com (123.125.115.164)
- hm.baidu.com (61.135.185.140)
- pos.baidu.com (115.239.210.141)
- cpro.baidu.com (115.239.211.17)
- bdimg.share.baidu.com (211.90.25.48)
- pan.baidu.com (180.149.132.99)
- wapbaike.baidu.com (123.125.114.15)

The sizes of the injected Javascript payloads ranged from 995 to 1325 bytes.

We hope this report helps to round out the overall facts known about this attack. It also demonstrates that collectively there is a lot of visibility into what happens on the web. At the HTTP level seen by Safe Browsing, we cannot confidently attribute this attack to anyone. However, it makes it clear that hiding such attacks from detailed analysis after the fact is difficult. Had the entire web already moved to encrypted traffic via TLS, such an injection attack would not have been possible. This provides further motivation for transitioning the web to encrypted and integrity-protected communication.

Unfortunately, defending against such an attack is not easy for website operators. In this case, the attack Javascript requests web resources sequentially, so slowing down responses might have helped reduce the overall attack traffic. Another hope is that the external visibility of this attack will serve as a deterrent in the future.

Source: http://googleonlinesecurity.blogspot.ca/2015/04/a-javascript-based-ddos-attack-as-seen.html
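One practical takeaway from the report above is its closing observation that TLS would have prevented this kind of on-path injection. That suggests a simple spot check a site operator or researcher could run; the sketch below is a hypothetical illustration rather than anything from the Safe Browsing team: the resource path is a placeholder, and it assumes the file is reachable over both plain HTTP and HTTPS. It fetches the same static resource over both schemes and compares content hashes, since a mismatch on a file that should be identical hints that something between you and the server is rewriting cleartext responses.

```python
# Sketch: compare the same static resource fetched over HTTP and HTTPS.
# A hash mismatch on a file that should be identical hints at on-path injection.
import hashlib
import requests

RESOURCE = "example.com/static/lib.js"  # placeholder path to a static file

def body_hash(url: str) -> str:
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return hashlib.md5(response.content).hexdigest()

if __name__ == "__main__":
    plain = body_hash(f"http://{RESOURCE}")
    secure = body_hash(f"https://{RESOURCE}")
    print(f"http : {plain}")
    print(f"https: {secure}")
    if plain != secure:
        print("Mismatch: the cleartext copy may have been modified in transit.")
    else:
        print("Hashes match: no injection detected for this fetch.")
```

This is only a heuristic: as the phased, intermittent behaviour described above shows, an injector does not have to tamper with every fetch.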


Banks Lose Up to $100K/Hour to Shorter, More Intense DDoS Attacks

Distributed denial of service attacks have morphed from a nuisance into something more sinister. In a DDoS attack, heavy volumes of traffic are hurled at a website to halt normal activity or inflict damage, typically freezing up the site for several hours. Such exploits achieved notoriety in the fall of 2012 when large banks were hit by a cyberterrorist group. But the Operation Ababil attacks were simply meant to stop banks' websites from functioning. They caused a great deal of consternation among bank customers and the press, but little serious harm.

Since then, the attacks have become more nuanced and targeted, several recent reports show. "DDoS is a growing problem, the types of attack are getting more sophisticated, and the market is attracting new entrants," said Rik Turner, a senior analyst at Ovum, a research and consulting firm. For example, "we're seeing lots of small attacks with intervals that allow the attackers to determine how efficient the victims' mitigation infrastructure is and how quickly it is kicking in," he said. This goes for banks as much as for nonbanking entities.

Verisign's report on DDoS attacks carried out in the fourth quarter of 2014 found that the number of attacks against the financial industry doubled, to account for 15% of all offensives. DDoS activity historically increases during the holiday season each year. "Cybercriminals typically target financial institutions during the fourth quarter because it's a peak revenue and customer interaction season," said Ramakant Pandrangi, vice president of technology at Verisign. "As hackers have become more aware of this, we anticipate the financial industry will continue to see an increase in the number of DDoS activity during the holiday season year over year."

In a related trend, bank victims are getting hit repeatedly. "If you have an organization that's getting hit multiple times, often that's an indicator of a very targeted attack," said Margee Abrams, director of security services at Neustar, an information services company. According to a report Neustar commissioned and released this week, 43% of bank targets in the financial services industry were hit more than six times during 2014. Neustar worked with a survey sampling company that gathered responses from 510 IT directors in the financial services, retail and IT services sectors, with strong representation in financial services. (The respondents are not Neustar customers.)

The average bandwidth consumed by a DDoS attack increased to 7.39 gigabits per second, according to Verisign's analysis of DDoS attacks in the fourth quarter of 2014. This is a 245% increase from the last quarter of 2013, and it's larger than the incoming bandwidth most small and medium-sized businesses, such as community banks, can provision. At the same time, DDoS attacks are shorter, as banks have gotten relatively adept at handling them. Most banks (88%) detect attacks in less than two hours (versus 77% for companies in general), according to Neustar's research, and 72% of banks respond to attacks in that timeframe.

Some recent DDoS attacks on banks have been politically motivated. Last year, a hacker group called the European Cyber Army claimed responsibility for DDoS attacks against websites run by Bank of America, JPMorgan Chase and Fidelity Bank. Little is known about the group, but it has aligned itself with Anonymous on some attacks and seems interested in undermining U.S. institutions, including the court system as well as large banks.
But while attacks from nation-states and hacktivists tend to grab headlines, it's the stealthy, unannounced DDoS attacks, such as those against Web applications, that are more likely to gum up the works for bank websites for short periods, and they are in fact more numerous, Turner noted. They're meant to test the strength of defenses or to distract the target from another type of attack. For example, a DDoS attack may be used as a smokescreen for online banking fraud or some other type of financially motivated fraud. In Neustar's study, 30% of U.S. financial services industry respondents said they suffered malware or virus installation and theft as a result of a DDoS attack.

"What I hear from our clients is that DDoS is sometimes used as a method to divert security staff so that financial fraud can get through," said Avivah Litan, vice president at Gartner. "But these occurrences seem to be infrequent." Her colleague Lawrence Orans, a research vice president for network security at Gartner, sounded skeptical about the frequency of DDoS-as-decoy schemes. "I think there is some fear-mongering associated with linking DDoS attacks with bank fraud," he said. However, "the FBI has issued warnings about this in the past, so there is some validity to the issue of attackers using DDoS attacks as a smokescreen to distract a bank's security team while the attacker executes fraudulent transactions."

According to Verisign's iDefense team, DDoS cybercriminals are also stepping up their attacks on point-of-sale systems and ATMs. "We believe this trend will continue throughout 2015 for financial institutions," Pandrangi said. "Additionally, using an outdated operating system invites malware developers and other cyber-criminals to exploit an organization's networks. What's worse is that thousands of ATMs owned by the financial sector in the U.S. are running on the outdated Windows XP operating system, making it vulnerable to becoming compromised."

Six-Figure Price Tag

DDoS attacks are unwelcome at any cost. Neustar's study puts a price tag on the harm banks suffer during such attacks: $100,000 an hour for most banks that were able to quantify it. More than a third of the financial services firms surveyed reported costs of more than that. "Those losses represent what companies stand to lose during peak hours of transactions on their websites," said Abrams. "That doesn't even begin to cover the losses in terms of expenses going out. For example, many attacks require six to ten professionals to mitigate the attack once it's under way. That's a lot of salaries going out that also represent losses for the company." Survey respondents also complained about the damage to their brand and customer trust during and after DDoS attacks. "That gets more difficult to quantify in terms of losses to an overall brand, but it's a significant concern," Abrams said.

To some, the $100,000 figure seems high. "Banks have other channels for their customers — mainly branch, ATM and phone — so I don't see that much revenue being lost," said Litan. Other recent studies have also attempted to quantify the cost of a DDoS attack. A study commissioned by Incapsula surveyed IT managers from 270 North American organizations and found that the average cost of an attack was $40,000 an hour; 15% of respondents put the cost at under $5,000 an hour, and 15% said it was more than $100,000. There's no question banks have had to spend millions in aggregate to mitigate DDoS risks.
“They created more headroom by buying more bandwidth and by scaling the capacity of their web infrastructure — for example, by buying more powerful web servers,” said Orans. “And they continue to spend millions on DDoS mitigation services. That’s where the real pain has been — the attackers forced the banks to spend a lot of money on DDoS mitigation.” Source: http://www.americanbanker.com/news/bank-technology/banks-lose-up-to-100khour-to-shorter-more-intense-ddos-attacks-1073966-1.html?zkPrintable=1&nopagination=1


Mexican news site suffers DDoS attack after publishing article on state massacre

After publishing the article — titled “It Was The Feds” — news portal Aristegui Noticias reported suffering distributed denial of service (DDoS) attacks, which brought the site down for more than seven hours. Press freedom group Article 19 immediately called on authorities to guarantee the free flow of information. Additionally, the group called on the Mexican government to act in defense of journalists, “especially when they are providing vital information to the public as is in the case of Laura Castellanos.” Castellanos, the investigative reporter behind the article, has been the victim of intimidation, break-ins, and security threats over her decades-long career. In 2010, Article 19 included Castellanos in their journalist protection program. Mexico’s human rights commission called on the government to conduct a thorough investigation to “get to the truth” of the Apatzingán incident. “We want to let society know what happened that day,” human rights commission ombudsman Luis Raúl González Pérez said Tuesday. Source: https://news.vice.com/article/mexicos-government-is-brushing-off-report-of-another-state-massacre-of-unarmed-civilians    


The rise and rise of bad bots – little DDoS

Many will be familiar with the term bot, short for web-robot. Bots are essential for the effective operation of the web: web crawlers are a type of bot, automatically trawling sites looking for updates and making sure search engines know about new content. To this end, website owners need to allow access to bots, but they can (and should) lay down rules. The standard here is to have a file associated with any web server called robots.txt that the owners of good bots should read and adhere to. However, not all bots are good; bad bots can simply ignore the rules!

Most will also have heard of botnets: arrays of compromised user devices and/or servers that have illicit background tasks running to send spam or generate high volumes of traffic that can bring web servers to their knees through DDoS (distributed denial of service) attacks. A Quocirca research report, Online Domain Maturity, published in 2014 and sponsored by Neustar (a provider of DDoS mitigation and website protection/performance services), shows that the majority of organisations say they have either permanent or emergency DDoS protection in place, especially if they rely on websites to interact with consumers. However, Neustar's own March 2015 EMEA DDoS Attacks and Protection Report shows that in many cases organisations are still relying on intrusion prevention systems (IPS) or firewalls rather than custom DDoS protection. The report, which is based on interviews with 250 IT managers, shows that 7-10% of organisations believe they are being attacked at least once a week.

Other research suggests the situation may actually be much worse than this, but IT managers are simply not aware of it. Corero (another DDoS protection vendor) shows in its Q4 2014 DDoS Trends and Analysis report, which uses actual data on observed attacks, that 73% last less than 5 minutes. Corero says these are specifically designed to be short-lived and go unnoticed. This is a fine-tuning of the so-called distraction attack. Arbor (yet another DDoS protection vendor) finds distraction to be the motivation for about 19-20% of attacks in its 2014 Worldwide Infrastructure Security Report. However, as with Neustar, this is based on what IT managers know, not what they do not know.

The low-level, sub-saturation DDoS attacks reported by Corero are designed to go unnoticed but disrupt IPS and firewalls for just long enough to perpetrate a more insidious targeted attack before anything has been noticed. Typically it takes an IT security team many minutes to observe and respond to a DDoS attack, especially if they are relying on an IPS. That might sound fast, but in network time it is eons; attackers can easily insert their actual attack during the short minutes of the distraction. So there is plenty of reason to put DDoS protection in place (other vendors include Akamai/Prolexic, Radware and DOSarrest). However, that is not the end of the bot story. Cyber-criminals are increasingly using bots to perpetrate another whole series of attacks. This story starts with another, sometimes legitimate and positive, activity of bots, web scraping, which is the subject of a follow-on blog: The rise and rise of bad bots – part 2 – beyond web scraping.

Source: http://www.computerweekly.com/blogs/quocirca-insights/2015/04/the-rise-and-rise-of-bad-bots.html
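To make the robots.txt point in the article above concrete: a well-behaved crawler reads that file and honours its rules before fetching anything, and Python's standard library ships a parser for exactly this. The snippet below is a small hypothetical sketch; the site URL, user-agent string, and paths are placeholders. A bad bot, as the article notes, simply skips this check.

```python
# Sketch: how a "good bot" consults robots.txt before crawling a page.
# A bad bot simply skips this check and ignores the site's rules.
from urllib import robotparser

SITE = "https://www.example.com"      # placeholder site
USER_AGENT = "ExampleCrawler/1.0"     # placeholder bot user-agent

parser = robotparser.RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()                         # fetch and parse the site's rules

for path in ("/", "/private/reports", "/blog/latest-post"):
    url = SITE + path
    if parser.can_fetch(USER_AGENT, url):
        print(f"allowed   : {url}")   # a good bot may crawl this URL
    else:
        print(f"disallowed: {url}")   # a good bot must stay away
```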
