Tag Archives: latest

E-toll site weathers denial of service (DDoS) attack

Sanral’s e-toll Web site suffered a denial of service (DoS) attack on Friday, according to the agency. “Some users complained of slow site performance, and our service provider traced the problem to a denial of service attack of international origin,” said Sanral spokesman Vusi Mona. No further details of the attack were available, but Alex van Niekerk, project manager for the Gauteng Freeway Improvement Project, said the site has come under repeated attack since going live, but suffered only minor performance degradation.

DoS attacks, particularly distributed denial of service (DDoS) attacks, are a popular technique used to knock sites offline by overwhelming them with traffic until they are unable to service their clients. Activist group Anonymous frequently uses DDoS to attack targets, using its wide base of supporters to generate traffic, and botnets often launch DDoS attacks from their installed base of zombie PCs. Last year, anti-spam service Spamhaus suffered one of the largest DDoS attacks in history, with incoming traffic peaking at 300Gbps, launched by a Dutch Web host known for harbouring spammers.

Sanral’s Web site has been the target of several attacks lately, including a hack which may have leaked personal information, a flaw which allowed motorists to be tracked in real time, and a session fixation attack which allowed login sessions to be hijacked. Source: http://www.itweb.co.za/index.php?option=com_content&view=article&id=70192:e-toll-site-weathers-denial-of-service-attack

See more here:
E-toll site weathers denial of service (DDoS) attack

DDoS attacks get more complex – are networks prepared?

The threat of cyber attacks from both external and internal sources is growing daily. A denial of service, or DoS, attack is one of the most common. DoS attacks have plagued defense, civilian and commercial networks over the years, but the way they are carried out is growing in complexity. If you thought your systems were engineered to defend against a DoS attack, you may want to take another look.

Denial of service attack evolution

A denial of service attack is a battle for computing resources between the legitimate requests that a network and application infrastructure were designed for and illegitimate requests coming in solely to hinder the service provided, or to shut down the service altogether.

The first DoS attacks were primarily aimed at Layer 3 or Layer 4 of the OSI model and were designed to consume all available bandwidth, crash the system being attacked, or consume all of the available memory, connections or processing power. Some examples of these types of attacks are the Ping of Death, Teardrop, SYN flood and ICMP flood. As operating system developers, hardware vendors and network architects began to mitigate these attacks, attackers had to adapt and discover new methods, which has led to an increase in the complexity and diversity of the attacks used.

Since DoS attacks require a high volume of traffic, typically more than a single machine can generate, attackers may use a botnet: a network of computers under the control of the attacker, usually subverted through malicious means. This type of DoS, called a distributed denial of service (DDoS), is harder to defend against because the traffic is likely to come from many directions.

While the goal of newer DoS attacks is the same as that of older attacks, the newer attacks are much more likely to be application layer attacks launched against higher-level protocols such as HTTP or the Domain Name System.
Application layer attacks are a natural progression for several reasons: 1) lower-level attacks were well known and system architects knew how to defend against them; 2) few mechanisms, if any, were available to defend against application layer attacks; and 3) data at a higher layer is much more expensive to process, consuming more computing resources.

As attacks go up the OSI stack and deeper into the application, they generally become harder to detect, which makes them more expensive, in terms of computing resources, to defend against. The more expensive an attack is to defend against, the more likely it is to cause a denial of service. More recently, attackers have been combining several DDoS attack types; an L3/L4 attack in combination with an application layer attack, for instance, is referred to as a diverse distributed denial of service, or 3DoS.

Internet and bandwidth growth impact DoS

Back in the mid- to late 1990s, fewer computers existed on the Internet. Connections to the Internet and other networks were smaller, and not much existed in the way of security awareness. Attackers generally had less bandwidth to the Internet, but so did organisations.

Fast forward to the present, and it is not uncommon for a home connection to have 100 megabits per second of available bandwidth to the Internet. These faster connections give attackers the ability to send more data during an attack from a single device. The Internet has also become more sensitive to privacy and security, which has led to encryption technologies such as Secure Sockets Layer/Transport Layer Security being used to encrypt data transmitted across a network. While the data can be transported with confidence, the trade-off is that encrypted traffic requires extra processing power. A device encrypting traffic will typically be under a greater load and therefore unable to process as many requests, leaving it more susceptible to a DoS attack.
Protection against DoS attacks

As mentioned previously, DoS attacks are not simply a network issue; they are an issue for the entire enterprise. When building or upgrading an infrastructure, architects should consider current traffic and future growth. They should also have resources in place that anticipate a DoS attack being launched against their infrastructure, thereby creating a more resilient infrastructure.

A more resilient infrastructure does not always mean buying bigger iron. Resiliency and higher availability can be achieved by spreading the load across multiple devices using dedicated hardware Application Delivery Controllers (ADCs). Hardware ADCs evenly distribute the load across all types of devices, providing a more resilient infrastructure, and also offer offloading capabilities for technologies such as SSL and compression.

When choosing a device, architects should consider whether it offloads some processing to dedicated hardware. A typical server has a general-purpose processor that handles all computing tasks. More specialized devices, such as firewalls and ADCs, offer dedicated hardware for protection against SYN floods and for SSL offload, which typically allows them to handle exponentially more traffic and makes them more capable of thwarting an attack. Since attacks are spread across multiple levels of the OSI model, tiered protection is needed all the way from the network up to the application design. This typically equates to L3/L4 firewalls sitting close to the edge, protecting against some of the more traditional DoS attacks, with more specialized defense mechanisms for application layer traffic, such as Web Application Firewalls (WAFs), protecting Web applications. WAFs can be a vital ally in protecting a Web infrastructure by defending against various types of malicious attacks, including DoS.
As such, WAFs fill an important void in Web application intelligence left behind by L3/L4 firewalls.

As demonstrated, many types of DoS attacks are possible, and they can be generated from many different angles. DoS attacks will continue to evolve at the same, often uncomfortably fast, rate as our use of technology. Understanding how these two evolutions are tied together will help network and application architects stay vigilant and better weigh the options at their disposal to protect their infrastructure. Source: http://defensesystems.com/Articles/2013/12/19/DOS-attacks-complexity.aspx?admgarea=DS&Page=3
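The load-spreading role that the article assigns to hardware ADCs boils down to distributing incoming requests across a pool of backends. A minimal round-robin sketch (server names are hypothetical; real ADCs add health checks, SSL offload and connection reuse):

```python
import itertools

class RoundRobinBalancer:
    """Toy illustration of the request distribution an ADC performs."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        # Hand each new request to the next backend in the rotation.
        return next(self._cycle)

lb = RoundRobinBalancer(["srv-a", "srv-b", "srv-c"])
assigned = [lb.pick() for _ in range(6)]
print(assigned)  # each backend receives an equal share of requests
```

Production load balancing usually weights backends by capacity or tracks active connections, but the resiliency argument is the same: no single device absorbs the whole load.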

Continue reading here:
DDoS attacks get more complex – are networks prepared?

US-CERT warns of NTP Amplification attacks

US-CERT has issued an advisory warning enterprises about distributed denial of service attacks that flood networks with massive amounts of UDP traffic using publicly available network time protocol (NTP) servers. Known as NTP amplification attacks, they exploit the monlist feature in NTP servers, also known as MON_GETLIST, which returns the IP addresses of the last 600 machines that interacted with an NTP server. NTP is a classic set-and-forget protocol, generally used to sync clocks between servers and computers, and it is vulnerable to hackers making forged REQ_MON_GETLIST requests that enable traffic amplification. “This response is much bigger than the request sent making it ideal for an amplification attack,” said John Graham-Cumming of Cloudflare.

According to US-CERT, the MON_GETLIST command allows admins to query NTP servers for traffic counts. Attackers are sending this command to vulnerable NTP servers with the source address spoofed as the victim. “Due to the spoofed source address, when the NTP server sends the response it is sent instead to the victim. Because the size of the response is typically considerably larger than the request, the attacker is able to amplify the volume of traffic directed at the victim,” the US-CERT advisory says. “Additionally, because the responses are legitimate data coming from valid servers, it is especially difficult to block these types of attacks.” To mitigate these attacks, US-CERT advises disabling monlist or upgrading to NTP version 4.2.7, which disables monlist.

NTP amplification attacks have been blamed for recent DDoS attacks against popular online games such as League of Legends, Battle.net and others. Ars Technica today reported that the gaming servers were hit with up to 100Gbps of UDP traffic. Similar traffic volumes were used to take down American banks and financial institutions last year in allegedly politically motivated attacks.
“Unfortunately, the simple UDP-based NTP protocol is prone to amplification attacks because it will reply to a packet with a spoofed source IP address and because at least one of its built-in commands will send a long reply to a short request,” Graham-Cumming said. “That makes it ideal as a DDoS tool.” He added that an attacker can retrieve a list of open NTP servers online, using available Metasploit or Nmap modules that find NTP servers supporting monlist. Graham-Cumming demonstrated the kind of amplification possible in such an attack: he sent a MON_GETLIST command to an NTP server in a request packet 234 bytes long, and the response, split across 10 packets, was 4,460 bytes long. “That’s an amplification factor of 19x and because the response is sent in many packets an attack using this would consume a large amount of bandwidth and have a high packet rate,” Graham-Cumming said. “This particular NTP server only had 55 addresses to tell me about. Each response packet contains 6 addresses (with one short packet at the end), so a busy server that responded with the maximum 600 addresses would send 100 packets for a total of over 48k in response to just 234 bytes. That’s an amplification factor of 206x!” Source: http://threatpost.com/us-cert-warns-of-ntp-amplification-attacks/103573
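Graham-Cumming's amplification figures are easy to sanity-check. The sketch below reproduces the arithmetic from the sizes quoted above; the extrapolation to a 600-address response is a linear estimate scaled from the 55-address measurement, not a quoted value:

```python
def amplification_factor(request_bytes, response_bytes):
    """Ratio of reflected response size to the spoofed request size."""
    return response_bytes / request_bytes

# Measured monlist exchange quoted in the article:
# a 234-byte request drew a 4,460-byte response across 10 packets.
small = amplification_factor(234, 4460)
print(f"55-address server: {small:.0f}x")   # roughly 19x

# Estimated full response from a busy server returning the maximum
# 600 addresses, scaled linearly from the 55-address measurement.
full_response_bytes = 4460 / 55 * 600       # a little over 48 KB
large = amplification_factor(234, full_response_bytes)
print(f"600-address server: {large:.0f}x")  # in line with the ~206x quoted
```

The asymmetry is the whole attack: the victim absorbs two orders of magnitude more traffic than the attacker has to send.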

View the original here:
US-CERT warns of NTP Amplification attacks

Dropbox hit by DDoS attack, but user data safe; The 1775 Sec claims responsibility

The Dropbox website went offline last night, with a hacking collective calling itself The 1775 Sec claiming responsibility for the attack on the cloud storage company’s website. The 1775 Sec took to Twitter just a few moments before Dropbox went down on Friday night, claiming that they were responsible. “BREAKING NEWS: We have just compromised the @Dropbox Website http://www.dropbox.com #hacked #compromised” tweeted The 1775 Sec. This tweet was followed by another one wherein the group claimed that it was giving Dropbox time to fix its vulnerabilities and that, if it failed to do so, it should expect a database leak. The group claimed that the hack was in honour of Aaron Swartz. Dropbox’s status page at the time acknowledged that there was downtime and that the company was ‘experiencing issues’.

The hackers then revealed that their claim of a database leak was a hoax. “Laughing our asses off: We DDoS attacked #DropBox. The site was down how exactly were we suppose to get the Database? Lulz” tweeted The 1775 Sec. The group said that they had only launched a DDoS attack, had not breached Dropbox security and did not have access to Dropbox user data.

Dropbox claimed that its website was down because of issues during “routine maintenance” rather than a malicious attack. In a statement Dropbox said: “We have identified the cause, which was the result of an issue that arose during routine internal maintenance, and are working to fix this as soon as possible… We apologize for any inconvenience.” Just over an hour later, Dropbox said that its site was back up. “Dropbox site is back up! Claims of leaked user info are a hoax. The outage was caused during internal maintenance. Thanks for your patience!” read the tweet from Dropbox. Source: http://www.techienews.co.uk/974664/dropbox-hits-ddos-user-data-safe-1775-sec-claims-responsibility/

Read More:
Dropbox hit by DDoS attack, but user data safe; The 1775 Sec claims responsibility

The 2014 cyber security roadmap

The burgeoning Internet of Things and smart devices

2014 is likely to be the year that many industries start to cash in on the much-hyped benefits of smart connected devices. But as more devices become IP-enabled, they contribute to the pool of things that can be recruited into botnets or other platforms used for distributed attacks – something which most companies are currently not prepared for, warns Mike Foreman, general manager of security software firm AVG Technologies. ‘Distributing attacks via unmanned smart devices helps make it more difficult to trace the source and easier to overwhelm the target,’ says Foreman. In order to meet the challenge of securely managing hundreds of millions of connected devices and securing the data transmitted between them, Jason Hart, VP of cloud solutions at security specialist SafeNet, says that organisations will need public key infrastructure solutions that combine comprehensive security with scalability and reliability. ‘PKIs, simply put, use digital certificates that can be embedded within devices, giving them the authorisation needed to access different networks,’ explains Hart.

BYOD convenience vs. security

Companies will need to adopt new tactics to tackle the increasing drawbacks of a BYOD environment, changing their focus from the devices themselves. When it comes to effective device management, says Chris Wysopal, co-founder and chief information security officer of application security expert Veracode, apps, not devices, are the real problem. ‘Companies need to look for MDMs that understand what apps are installed across corporate and BYOD devices, and the risk associated with those applications,’ he advises. Jonathan Foulkes of systems management software firm Kaseya thinks businesses will need to shift the focus away from devices and onto securing and managing data.
‘By “containerising” corporate data and only providing access through secure applications, IT is given full control over policies and the ability to decide which users – rather than devices – are allowed to connect to what data and with what application.’

The true security risks of cloud computing beginning to emerge

The horse has left the barn for IT teams dealing with the cloud. Business units are demanding it and building apps there if their IT departments will not – and this is a trend that is set to continue in 2014 as adoption of core applications in the cloud grows. ‘This opens up application change errors that can be totally missed by the security team,’ warns Reuven Harrison, CTO of security policy orchestration company Tufin. ‘It also increases silos and puts the business network at great risk by bypassing traditional IT structures.’ Veracode’s Chris Wysopal stresses that cloud apps will need to follow the same application security practices that the organisation requires for internally built apps, while moving towards end-to-end automation of network changes should free up time to concentrate on monitoring all areas of the network.

Controlling the privileged user

Without a doubt, one of the biggest mistakes that organisations make is having insufficient control and oversight of the actions of ‘privileged users’, says Paul Ayers, VP EMEA of security firm Vormetric. ‘In 2014, after the Snowden leaks and other high-profile insider threats and data breaches, I expect organisations to increasingly put in place the security procedures and tools that allow them to audit and control the actions of these users,’ he comments.

The effects of cyber war and cyber espionage

Cyber is the new battlefield, and the fifth element of warfare, with states already pouring a huge range of resources into both defensive and offensive capabilities.
‘Within the next couple of years, we will experience an increasing number of cyber attacks resulting in militaristic and economic damage,’ says Jarno Limnell, director of cyber security at McAfee Group security vendor Stonesoft. Rik Ferguson, VP of security research at security vendor Trend Micro, notes that the PRISM revelations will increasingly lead cyber criminals to turn to ‘darknets’ – a class of networks, such as The Onion Router (TOR), that guarantee anonymous and untraceable online access. ‘Law enforcement agencies may not have enough knowledge or experience to deal with cyber crime and will have a hard time tracking criminals in the Deep Web, prompting more investment in the fight against cyber crime,’ he says.

Strengthened government agenda on cyber security and new compliance standards

Over 2013-14, the UK cabinet office will have invested £180 million in cyber security, increasing this amount to £210 million in 2014-15. The government has announced its intention to back a new kite-mark standard for cyber security, with further details promised early this year. Around the same time, the European Commission will unveil a new directive on data privacy. ‘But while these measures are to be welcomed, organisations will have their work cut out preparing themselves to achieve compliance,’ says Alan Calder, founder of cyber security services provider IT Governance. ‘Add to these changes the multiple compliance challenges arising from recent updates of standards, such as ISO 27001 and PCI DSS, and you quickly have a considerable governance task in terms of planning, resourcing and training.’

The security skills gap

The world faces an acute shortage of cyber security professionals who are adequately skilled for today’s threat landscape.
According to Alan Calder of IT Governance, in 2014 we will feel the effects of this shortfall more than ever, resulting in yet more spectacular data breaches, as it will be several uncomfortable years before supply meets demand. ‘Large accountancy and professional services firms are, at the moment, heavily investing in IT security talent, which means that SMEs will struggle to compete for the best talent, putting the future of their businesses at risk,’ says Christian Toon, risk and security expert at data protection company Iron Mountain. Toon urges that when recruiting IT security professionals, companies should remember that it’s important to get someone who understands not just the technicalities of the job, but also the psychology of both the individuals they are helping to protect and of the cyber criminals who are attempting to steal information from the business.

The ever-increasing sophistication of DDoS attacks

The transparency shown by RBS in admitting that it failed to invest properly in its IT systems after DDoS attacks in 2013 is a common refrain amongst many enterprises, large and small. But, says Jag Bains, CTO of DDoS attack prevention firm DOSarrest Internet Security, ‘While each organisation may have multiple reasons for failing to invest, they all share the same notion that they won’t be a target – until they get attacked.’ With DDoS tools becoming more advanced and pervasive, Bains warns that all IT operations should work under the premise that they will be attacked, and so plan accordingly.
‘Every stack and layer within their purview should be reviewed, and they should identify cost-effective cloud solutions for their DDoS, which provide much better performance and mitigation than expensive hardware.’ Catherine Pearce, security consultant at mobile security firm Neohapsis, predicts that DDoS attackers will accelerate a move from simple volumetric attacks to those that take advantage of a site’s specific performance, with the spread of tools that profile specific targets and attack based upon certain weaknesses in configuration or implementation.

Smarter analytics to combat cyber threats

Martin Borrett, director at the IBM Institute for Advanced Security, believes that analytics will become a vital element in countering new threats, aided by advancements in machine learning algorithms that will further improve data and analytics technologies. ‘Security systems will greatly benefit from real-time correlation across massive structured data, such as security device alerts, operating system logs, DNS transactions and network flows, as well as unstructured data, such as emails, social media content, packet info and business transactions,’ says Borrett. ‘Organisations can begin along this path by surveying the additional new data sources available and considering which could be used to improve their security analysis outcomes.’ However, each data source may bring its own challenges, such as the volume, velocity, variety and veracity of data, so it will be important for a business to also consider which skills and tools they have available to manage these issues.
Challenges regarding authentication technologies such as 2-factor and biometric

‘With companies slowly adopting BYOD programmes, on-premise software still reigning supreme in many IT environments and big hacking attacks occurring every few weeks, there’s no question that the business world still lags in adopting people-centric technologies across workforces,’ says Phil Turner, VP EMEA at identity management company Okta. ‘As a result, in 2014, as businesses have added more applications and the proliferation of devices in the workplace continues to increase, we are likely to see significant growth in cloud-based identity and access management (IAM) services that can deliver single sign-on across all applications.’

However, looking forward to the end of 2014, we can expect this to start to change. Multi-factor authentication (MFA) – which requires two or more factors to verify the legitimacy of the user – has taken off and evolved substantially in the past decade. And authentication methodologies are becoming as personalised and specific to the individual as the experiences that they’re trying to access. ‘Customers’ expectations for seamless trusted authentication and the continued dominance of smartphones and smart devices will accelerate the move from legacy hardware one-time password tokens to mobile-friendly, embedded security and contextual access controls,’ says SafeNet’s Jason Hart. ‘We can already see early examples of biometric authentication, such as Apple’s Touch ID, and investments by vendors such as Samsung to bake enterprise-grade security controls into their KNOX platform.’

Cyber resilience, not cyber security

In 2014, we will see savvier organisations relinquish futile hopes of ‘cyber security’ for a more pragmatic drive for ‘cyber resilience’. ‘We are living permanently with an irreducible level of cyber threat,’ says IT Governance’s Alan Calder.
‘As this realisation sinks in, organisations must adapt their strategies to avoid unhelpful restrictions on staff mobility and internet access, while ensuring their ability to recover swiftly when attacks take place.’ Jason Hart of SafeNet reiterates that in the coming year we can expect to see companies move away from the traditional strategy of focusing on breach prevention, and towards a ‘secure breach’ approach. ‘This means accepting that breaches happen and using best practice data protection to guarantee that data is effectively useless when it falls into unauthorised hands,’ he says. ‘So, we can expect to see an increase in the use of encryption that renders any data useless to an unauthorised party.’ Source: http://www.information-age.com/technology/security/123457584/the-2014-cyber-security-roadmap
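The one-time password tokens Hart mentions, both the legacy hardware kind and their mobile-app replacements, are typically built on the TOTP algorithm (RFC 6238). A minimal sketch using only the Python standard library, checked against the RFC's published test vector:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, digits=6, period=30):
    """RFC 6238 time-based one-time password, the algorithm behind
    most authenticator apps and OTP tokens (illustrative sketch)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59, digits=8))  # 94287082
```

The "something you have" factor is simply the shared secret held on the device; the server runs the same computation and compares codes, so no password ever crosses the wire.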

See original article:
The 2014 cyber security roadmap

How EA, League of Legends & Battle.net Were Brought Down By DDoS Attacks

Last week, a group calling themselves DERP launched DDoS attacks on the servers of a number of the world’s biggest games (and games companies). It seemed like an awfully big list of victims for such a simple and ancient form of attack but, as Ars Technica explains, there was a bit more to it than that. Unlike a standard DDoS attack, which big services like Battle.net and League of Legends would have been able to defeat, the attackers used a new – and obviously incredibly effective – method. “Rather than directly flooding the targeted services with torrents of data”, Ars explains, “an attack group calling itself DERP Trolling sent much smaller sized data requests to time-synchronization servers running the Network Time Protocol (NTP). By manipulating the requests to make them appear as if they originated from one of the gaming sites, the attackers were able to vastly amplify the firepower at their disposal. A spoofed request containing eight bytes will typically result in a 468-byte response to a victim, a more than 58-fold increase.” According to “DoS-mitigation service” Black Lotus, while this sounds bad, it’s easy to protect against. Though, they would say that, wouldn’t they. Source: http://kotaku.com/how-ea-league-of-legends-battle-net-were-brought-dow-1498272633

Original post:
How EA, League of Legends & Battle.net Were Brought Down By DDoS Attacks

DDoS attacks costly for online companies

Distributed denial of service, or DDoS, attacks can be hugely damaging to companies that rely on their online presence for sales and new business, says DDoS mitigation provider Prolexic. “All businesses are potentially vulnerable as there are no advance warnings of DDoS attacks, and no way to know if and when your business could be targeted,” says sales and innovation director at DRS, Jayson O’Reilly. “However, if your business is dependent on its Web site to service customers, you should have protocols in place to defend against an attack, should it happen.”

O’Reilly states that some businesses are more vulnerable, or more likely to be a target, than others, which is largely industry dependent; retail, financial services and gaming sites are popular targets. “Businesses should establish the likelihood of attack, or if they have already been a target, what sort of volume of attacks they have experienced. If they have experienced attacks, were these prolonged, or particularly strong? These questions can help a business select a suitable level of DDoS protection,” he says.

He adds that businesses that find themselves regular targets, and which have a high dependency on their Web sites for business, should consider a level of protection that comes with high service level agreements. “They should select a DDoS mitigation provider that can have a site back up almost instantaneously, and guarantee uptime. However, this is not a cheap exercise.” There are other, less expensive options too, according to O’Reilly, which come with a choice of protection levels, guaranteeing protection up to a certain level. “This sort of protection is suitable for businesses that experience low level, less lengthy attacks. However, should an attack happen that is above the protection level the company has paid for, they would be on their own,” O’Reilly says.
He says smaller businesses which haven’t yet been hit by a DDoS attack can also follow several steps to better prepare themselves in the event of an attack. Companies which use dedicated servers have the option of setting up a backup connection, called an out-of-band (OOB) connection, which is essentially a backup path in case of network communication failure. “In the event of the usual network becoming inaccessible, the business can use the OOB connection to access the server instead. A hosting provider can add an OOB connection, and at a price that won’t break the bank.”

O’Reilly says network monitoring can also be a big help. “A network monitoring system that can pick up anomalous behaviour, such as sudden spikes, can act as an early warning system for a DDoS attack.” Additionally, he advises companies to be aware of where they are most vulnerable, in order to keep an eye on those points and strengthen them wherever possible. “Add alerts for your weak points, and put plans in place to upgrade the security on these points,” he concludes. Source: http://www.itweb.co.za/index.php?option=com_content&view=article&id=69922:DDoS-attacks-costly-for-online-companies&catid=69
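The early-warning monitoring O’Reilly describes can be as simple as comparing current traffic against a recent baseline. A toy sketch of such a spike detector (the window size and threshold are arbitrary illustrative choices, not recommendations):

```python
from collections import deque

def make_spike_detector(window=10, threshold=3.0):
    """Return a checker that flags samples exceeding `threshold` times
    the rolling average of the last `window` samples."""
    history = deque(maxlen=window)

    def check(requests_per_sec):
        baseline = sum(history) / len(history) if history else None
        history.append(requests_per_sec)
        return baseline is not None and requests_per_sec > threshold * baseline

    return check

check = make_spike_detector()
normal = [check(r) for r in [100, 110, 95, 105, 102]]  # steady traffic
alert = check(2000)                                     # sudden surge
print(normal, alert)
```

Real monitoring systems add seasonality (weekday versus weekend baselines) and per-endpoint breakdowns, but the alerting principle is the same: flag traffic that departs sharply from recent history.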

View article:
DDoS attacks costly for online companies

Steam, Blizzard and EA hit by DDoS attacks

There’s something about the new year that gets hackers all excited, and the DDoS attacks continue. The last major attack was on 31 December, with DERP unleashing their DDoS on World of Tanks, EA, Blizzard, League of Legends and DOTA 2. It looks like the hangovers have worn off, as once again they hit EA and Battlefield 4 servers, and EA hopped on the case with a response. In what may have been a response to that (we have no idea what’s behind their thinking with all this), another group decided Steam should be the target. We are still seeing reports that Steam is having issues despite the attack apparently having stopped. And then it was on to BattleNet… All this is being done for shits and giggles but really achieves nothing other than annoying gamers and causing some temporary headaches for server admins. The novelty will probably wear off in a few days but, as the individuals involved are being encouraged by Twitter followers, expect more outages. Source: http://www.incgamers.com/2014/01/steam-blizzard-ea-hit-ddos-attacks

Continue Reading:
Steam, Blizzard and EA hit by DDoS attacks

Attackers Wage Network Time Protocol-Based DDoS Attacks

Attackers have begun exploiting an oft-forgotten network protocol in a new spin on distributed denial-of-service (DDoS) attacks, as researchers spotted a spike in so-called NTP reflection attacks this month. The Network Time Protocol, or NTP, syncs time between machines on the network and runs over port 123 UDP. It is typically configured once by network administrators and often is not updated, according to Symantec, which discovered a major jump in attacks via the protocol over the past few weeks. “NTP is one of those set-it-and-forget-it protocols that is configured once and most network administrators don’t worry about it after that. Unfortunately, that means it is also not a service that is upgraded often, leaving it vulnerable to these reflection attacks,” said Allan Liska, a Symantec researcher, in a blog post last week.

Attackers appear to be employing NTP for DDoSing in a way similar to how DNS is abused in such attacks: they transmit small spoofed packets requesting a large amount of data be sent to the DDoS target’s IP address. According to Symantec, it’s all about abusing the so-called “monlist” command in an older version of NTP. Monlist returns a list of the last 600 hosts that have connected to the server. “For attackers the monlist query is a great reconnaissance tool. For a localized NTP server it can help to build a network profile. However, as a DDoS tool, it is even better because a small query can redirect megabytes worth of traffic,” Liska explains in the post. Monlist modules can be found in Nmap as well as in Metasploit, which includes a monlist DDoS exploit module.

The spike in NTP reflection attacks occurred mainly in mid-December, with close to 15,000 IPs affected, and dropped off significantly after December 23, according to Symantec’s data. Symantec recommends that organizations update their NTP implementations to version 4.2.7, which does not use the monlist command.
Another option is to disable access to monlist in older versions of NTP. “By disabling monlist, or upgrading so the command is no longer there, not only are you protecting your network from unwanted reconnaissance, but you are also protecting your network from inadvertently being used in a DDoS attack,” Liska says. Source: http://www.darkreading.com/attacks-breaches/attackers-wage-network-time-protocol-bas/240165063
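For administrators who cannot upgrade immediately, the mitigation Liska describes maps onto standard ntpd configuration directives. A minimal ntp.conf sketch (directive names are standard ntpd options; adapt the restrict lines to your own network):

```conf
# Disable the monitoring facility that backs the monlist command
disable monitor

# Refuse mode 6/7 queries (including monlist) from everyone by default
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery

# Allow unrestricted queries only from the local host
restrict 127.0.0.1
restrict -6 ::1
```

With `noquery` applied by default, the server still serves time to clients but no longer answers the reconnaissance and amplification queries that reflection attacks depend on.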


Lessons From 5 Advanced Attacks Of 2013

Distributed denial-of-service attacks targeted application and business-logic weaknesses to take down systems; fraudsters used encryption to scramble victims’ data until they paid a ransom; and attackers increasingly targeted providers as a weak link in the chain of digital security protecting businesses. In 2013 there were no major revolutions in the way that attackers compromised, cut off, or just plain inconvenienced their victims’ systems, but their techniques and tactics evolved. From more pernicious encryption in ransomware to massive DDoS attacks fueled by reflection, attackers showed that they still had options available in their bag of tricks. “As the criminals have become more savvy and more technically knowledgeable and understand the victims’ environments better, they are able to see opportunities that they might otherwise overlook,” says Jeff Williams, director of security strategy for the counter threat unit at Dell SecureWorks, a managed security provider. Based on interviews with experts, here are five advanced attacks from 2013 and the lessons for businesses from those events.

1. Cryptolocker and the evolution of ransomware

While many attackers create botnets to steal data or use victims’ machines as launching points for further attacks, a specialized group of attackers has used strong-arm tactics to extort money from victims. In the past, most of these attacks, referred to as ransomware, have been bluffs, but Cryptolocker, which started spreading in late summer, uses asymmetric encryption to lock important files. The group behind Cryptolocker likely infected between 200,000 and 250,000 computers in its first hundred days, according to researchers at Dell SecureWorks. Based on the number of payments made using Bitcoin, the company conservatively estimated that 0.4 percent of victims paid the attackers; the actual haul is likely many times the minimum estimate of $240,000, the company stated in an analysis. 
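The $240,000 floor follows directly from the figures in the analysis, if one assumes the widely reported ransom of roughly $300 per victim (the ransom amount is an assumption here, not stated in the article):

```python
# Reproducing the conservative Cryptolocker revenue estimate.
# Infection count and payment rate are from the article; the per-victim
# ransom of ~$300 is a commonly reported figure, assumed here.
infections_low = 200_000   # lower bound of infected machines
payment_rate = 0.004       # 0.4 percent of victims paid
ransom_usd = 300           # typical reported ransom (assumed)

minimum_take = infections_low * payment_rate * ransom_usd
print(f"Minimum estimated take: ${minimum_take:,.0f}")
```

Using the upper bound of 250,000 infections, or a payment rate even a few times higher, quickly pushes the estimate toward seven figures, which is why researchers treat $240,000 as a floor rather than a best guess.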
“What sets it apart is not just the size and the professional ability of the people behind it, but that–unlike most ransomware, which is a bluff–this one actually destroys your files, and if you don’t pay them, you lose the data,” says Keith Jarvis, senior security researcher with Dell SecureWorks. Companies should expect ransomware to adopt the asymmetric-key encryption strategy employed by the Cryptolocker gang.

2. New York Times “hack” and supplier insecurity

The August attack on The New York Times and other media outlets by the Syrian Electronic Army highlighted the vulnerability posed by service providers and technology suppliers. Rather than directly breach The New York Times’ systems, the attackers fooled the company’s domain registrar into transferring ownership of nytimes.com and other media firms’ domains to the SEA. The attack demonstrated the importance of working closely with any supplier that could be a “critical cog” in a company’s security strategy, says Carl Herberger, vice president of security solutions for Radware, a network security firm. “You need to have real-time, critical knowledge from your service providers to determine whether they are being attacked and whether you are the intended victim of that attack,” says Herberger.

3. Bit9 and attacks on security providers

In February, security firm Bit9 revealed that its systems had been breached to gain access to a digital code-signing certificate. By using such a certificate, attackers can create malware that would be considered “trusted” by Bit9’s systems. The attack, along with the breach of security company RSA, underscores that the firms whose job is to protect other companies are not immune to attack themselves. In addition, companies need to have additional layers of security and not rely on any one security vendor, says Vikram Thakur, a researcher with Symantec’s security response group. 
“The onus resides with the security firm to prevent successful attacks from happening, but when they fail, a victim should have a plan to bolster their defense,” Thakur says.

4. DDoS attacks get bigger, more subtle

A number of denial-of-service attacks got digital ink this year. In March, anti-spam group Spamhaus suffered a massive denial-of-service attack after it unilaterally blocked a number of online providers connected–in some cases tenuously–to spam. The Izz ad-Din al-Qassam Cyberfighters continued their attacks on U.S. financial institutions, causing scattered outages during the year. As part of those attacks and other digital floods, attackers put a greater emphasis on techniques designed to overwhelm applications. Such application-layer attacks doubled in frequency in the third quarter of 2013 compared with the same quarter a year before, according to denial-of-service mitigation firm Prolexic. Reflection attacks, in which attackers use incorrectly configured servers to amplify attacks, grew 265 percent in the same period, according to the firm. The attack against Spamhaus, which reportedly peaked at a collective 300 Gbps, used reflection via open DNS resolvers to generate the massive flood of traffic. “This technique is still an available option for attackers,” says Radware’s Herberger. “Because there are 28 million vulnerable resolvers, and every resolver needs to be fixed, this problem is not going away any time soon.”

5. South Korea and destructive attacks

Companies in both the Middle East and South Korea suffered destructive attacks designed to wipe data from computers. In 2012, Saudi Aramco and other companies in the Middle East were hit by a malicious attack that erased data from machines, leaving them unrecoverable. This year, South Korean firms were attacked in a similar manner in a multi-vector attack whose finale was the deletion of master boot records on infected computers. 
While such attacks have happened in the past, they seem to be more frequent, says Dell SecureWorks’ Williams. “The impact of these attacks have been pretty impressive–30,000 machines needed to be rebuilt in the Saudi Aramco case,” he says. Source: http://www.darkreading.com/advanced-threats/lessons-from-five-advanced-attacks-of-20/240165028
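The reflection math behind item 4’s Spamhaus flood illustrates why open resolvers matter so much. Only the 300 Gbps figure comes from the article; the ~40x DNS amplification factor is an assumed, commonly cited ballpark:

```python
# How much raw spoofed-query bandwidth is needed to reflect a
# 300 Gbps flood off open DNS resolvers, assuming ~40x amplification.
target_gbps = 300    # reported peak of the Spamhaus attack
amplification = 40   # assumed typical DNS amplification factor

query_gbps = target_gbps / amplification
print(f"~{query_gbps:.1f} Gbps of spoofed queries yields "
      f"~{target_gbps} Gbps at the victim")
```

A botnet that can source single-digit gigabits of spoofed queries can thus generate a flood two orders of magnitude larger, which is why Herberger stresses that every one of the millions of vulnerable resolvers needs to be fixed.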
