Tag Archives: ddos-defense

Legal blog site suffered Distributed Denial of Service ‘DDoS’ attack

When a blog that typically attracts 30,000 visitors a day is hit with 5.35 million, its operators had better have been prepared for what seems way too big to be called a spike. The popular SCOTUSblog, which provides news and information about the United States Supreme Court, was put to this test last week after the historic healthcare ruling and it passed with flying colors, thanks to months of planning and a willingness to spend $25,000. “We knew we needed to do whatever it took to make sure we were capable of handling what we knew would be the biggest day in this blog’s history,” says Max Mallory, deputy manager of the blog, who coordinates the IT. The massive traffic spike was somewhat of a perfect storm for SCOTUSblog, which Supreme Court litigator Tom Goldstein of the Washington, D.C., boutique Goldstein & Russell founded in 2002. Not only is the site a respected source of Supreme Court news and information, but in the days leading up to the ruling, buzz about the blog itself began picking up. President Barack Obama’s press secretary named SCOTUSblog as being one source White House officials would monitor to hear news from the court. When the news broke, two of the first media organizations to report it — Fox News and CNN — got the ruling wrong. Many media outlets cited SCOTUSblog as being the first to correctly report that the Supreme Court upheld the Affordable Care Act in a 5-4 decision. But even before “decision day,” as Mallory calls it, the small team at SCOTUSblog knew Thursday would put a lot of strain on the blog’s IT infrastructure. The first indications came during the health care arguments at the Supreme Court in March, when SCOTUSblog received almost 1 million page views over the three days of deliberations. The blog’s single server at Web hosting company Media Temple just couldn’t handle the traffic. 
“That was enough to crash our site at various points throughout those days and it just generally kept us slow for a majority of the time the arguments were going on,” Mallory says. In the weeks leading up to the decision, Mallory worked with a hired team of developers to optimize the website’s Java code, install the latest plugins and generally tune up the site. Mallory realized that wouldn’t be enough, though. No one knew for sure when the high court would release the most anticipated Supreme Court case in years, but each day it didn’t happen there was a greater chance it would come down the next day. Traffic steadily climbed leading up to the big day: The week before the ruling the site saw 70,000 visitors. Days before the decision, the site got 100,000. “It became clear we weren’t going to be able to handle the traffic we were expecting to see when the decision was issued,” Mallory says. A week before the decision, Mallory reached out to Sound Strategies, a website optimization company that works specifically with WordPress. The Sound Strategies team worked throughout the weekend recoding the SCOTUSblog site again, installing high-end caching plugins, checking for script conflicts and cleaning out old databases from previous plugins that had been removed. The team also installed Nginx, the open source Web server, to run on the Media Temple hardware. All of the improvements helped, but when the decision did not come on Tuesday, June 26, it became clear that Thursday, June 28, the last day of the court’s term, would be decision day. Mallory was getting worried: Earlier in the week SCOTUSblog suffered a distributed denial-of-service (DDoS) attack targeting the website. That couldn’t happen on Thursday, when the court would issue the ruling. “This was our time, it just had to work,” Mallory says. The night before decision day, Mallory and Sound Strategies took drastic measures.
Mallory estimated the site could see between 200,000 and 500,000 hits the next day, so the group decided to purchase four additional servers from Media Temple, which Sound Strategies configured overnight. SCOTUSblog ended up with a solution Thursday morning that had a main server acting as the centralized host of SCOTUSblog, with four satellite servers hosting cached images of the website that were updated every six minutes. A live blog providing real-time updates — which was the first to correctly report the news — was hosted by CoveritLive, a live blogging service. As 10 a.m. EDT approached, the system was put to the test. At 10:03, the site was handling 1,000 requests per second. By 10:04 it had reached 800,000 total page views. That number climbed to 1 million by 10:10, and by 10:30 the site had received 2.4 million hits. Because of the satellite caching, Mallory says, the site was loading faster during peak traffic than it ever had before. In post-mortem reviews, Sound Strategies engineers said they found evidence of two DDoS attacks, one at 9:45 a.m. and another at 10 a.m., which the servers were able to absorb. “We built this fortress that was used basically for two hours that morning,” Mallory says. “It worked and it never slowed down.” Since the healthcare decision, SCOTUSblog has seen higher-than-normal traffic, but nowhere near the 5 million page views the site amassed on the biggest day in the blog’s history. “It was a roller coaster,” Mallory says. “You can have the best analysis, the fastest, most accurate reporting, but if your website crashes and no one can see it at that moment, it doesn’t matter.” Source: http://www.arnnet.com.au/article/429473/how_legal_blog_survived_traffic_tidal_wave_after_court_healthcare_ruling/?fp=4&fpid=1090891289
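The satellite arrangement described above, a main origin server plus mirrors serving copies that are refreshed every six minutes, amounts to a simple time-based cache. Here is a minimal sketch of the idea; the names (`SatelliteCache`, `fetch_origin`) are hypothetical and this is not SCOTUSblog's actual code:

```python
import time

CACHE_TTL = 360  # the satellite copies were refreshed every six minutes

class SatelliteCache:
    """Serve a cached copy of the origin page, refetching only after
    the TTL expires. fetch_origin stands in for whatever pulls the
    page from the main server."""

    def __init__(self, fetch_origin, ttl=CACHE_TTL):
        self.fetch_origin = fetch_origin
        self.ttl = ttl
        self.copy = None
        self.fetched_at = 0.0

    def get(self, now=None):
        now = time.time() if now is None else now
        if self.copy is None or now - self.fetched_at >= self.ttl:
            # Hit the origin at most once per TTL window
            self.copy = self.fetch_origin()
            self.fetched_at = now
        return self.copy
```

The payoff is that every visitor inside a six-minute window is served the cached copy, so the origin sees only a handful of fetches per hour no matter how large the traffic spike gets.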

Read the original post:
Legal blog site suffered Distributed Denial of Service ‘DDoS’ attack

Banking Outage Prevention Tips

A series of fresh technology shutdowns this spring at banks around the world reveals the financial services industry still has a long way to go toward ensuring full uptime for networks, as well as communicating with the public about why tech glitches have happened and what is being done about them. In May, Santander, Barclays and HSBC were all hit by digital banking outages. Some customers of Barclays and Santander were unable to access accounts online for a time near the end of the month, an outage blamed largely on end-of-the-month transaction volume. At HSBC, an IT hardware failure temporarily rendered ATMs unable to dispense cash or accept card payments in the U.K. Barclays and Santander both apologized for the outages through statements, while HSBC’s approach revealed both the power and peril of social media in such cases. HSBC’s PR office took to social media to communicate updates on the outage, and also to receive criticism about the outage (HSBC, Santander and Barclays did not return queries for comment). After an earlier outage in November, HSBC had set up a social monitoring team to be more proactive about communicating with the public about tech glitches, a move that seemed to have some positive impact, as not all of the Twitter and Facebook postings about the most recent outage were complaints. The basic task of making sure the rails are working, and smoothing things over with customers when systems invariably shut down, is an even more pressing matter considering the propensity for outrage to spread quickly among the public via new channels. “One thing that’s true about outages is we’re hearing more about them. The prevalence of social media use by irate customers and even employees makes these outages more publicized,” says Jacob Jegher, a senior analyst at Celent. Jegher says the use of social media for outage communication is tough – balancing the need to communicate with customers with internal tech propriety is easier said than done.
“While it’s certainly not the institution’s job nor should it be their job to go into every technical detail, it’s helpful to provide some sort of consistent messaging with updates, so customers know that the bank is listening to them,” Jegher says. National Australia Bank, which suffered from a series of periodic online outages about a year ago that left millions of people unable to access paychecks, responded with new due diligence and communications programs. In an email response to BTN, National Australia Bank Chief Information Officer Adam Bennett said the bank has since reduced incident numbers by as much as 40 percent through a project that has aimed to improve testing. He said that if an incident does occur, the bank communicates via social media channels, with regular updates and individual responses to consumers where possible. The bank also issued an additional statement to BTN, saying “while the transaction and data demands on systems have grown exponentially in recent years led by online and mobile banking, the rate of incidents has steadily declined due to a culture of continuous improvement…The team tests and uses a range of business continuity plans. While we don’t disclose the specifics, whenever possible we will invoke these plans to allow the customer experience to continue uninterrupted.” While communicating information about outages is good, it’s obviously better to prevent them in the first place. Coastal Bank & Trust, a $66 million-asset community bank based in Wilmington, N.C., has outsourced its monitoring and recovery, using disaster recovery support from Safe Systems, a business continuity firm, to vet for outage threats, supply backup server support in the event of an outage, and contribute to the bank’s preparation and response to mandatory yearly penetration and vulnerability tests.
“Safe Systems makes sure that the IP addresses are accessible and helps with those scans,” says Renee Rhodes, chief compliance and operations officer for Coastal Bank & Trust. The bank has also outsourced security monitoring to Gladiator, a Jack Henry enterprise security monitoring product that scours the bank’s IT network to flag activity that could indicate a potential outage or external attack. The security updates include weekly virus scans and patches. Coastal Bank & Trust’s size – it has only 13 employees – makes digital banking a must for competitive reasons, which increases both the threat of downtime and the burden of maintaining access. “We do mobile, remote deposit capture, all of the products that the largest banks have. I am a network administrator, and one of my co-workers is a security officer. With that being said, none of us has an IT background,” Rhodes says. “I don’t know if I could put a number on how important it is to have these systems up and running.” Much of the effort toward managing downtime risk is identifying and thwarting external threats that could render systems inoperable for a period of time. Troy Bradley, chief technology officer at FIS, says the tech firm has noticed an increase in external denial of service attacks recently, which is putting the entire banking and financial services technology industries on alert for outage and tech issues with online banking and other platforms. “You’ll see a lot of service providers spending time on this. It’s not the only continuity requirement to solve, but it’s one of the larger ones,” he says. To mitigate downtime risk for its hosted solutions, FIS uses virtualization to backstop the servers that run financial applications, such as web banking or mobile banking. That creates a “copy” of that server for redundancy purposes, and that copy can be moved to another data center if necessary. 
“We can host the URL (that runs the web enabled service on behalf of the bank) at any data center…if we need to move the service or host it across multiple data centers we can do that…we think we have enough bandwidth across these data centers to [deal with] any kind of denial of service attack that a crook can come up with,” Bradley says. FIS also uses third party software to monitor activity at its data centers in Brown Deer, WI; Little Rock and Phoenix, searching for patterns that can anticipate a denial of service attack early and allow traffic connected to its clients to be routed to one of the other two data centers. For licensed solutions, FIS sells added middleware that performs a similar function, creating a redundant copy of a financial service that can be stored and accessed in the case of an emergency. Stephanie Balaouras, a vice president and research director for security and risk at Forrester Research, says virtualization is a good way to mitigate both performance issues, such as systems being overwhelmed by the volume of customer transactions, and operational issues such as hardware failure, software failure, or human error. “If it’s [performance], the bank needs to revisit its bandwidth and performance capacity. With technologies like server virtualization, it shouldn’t be all that difficult for a large bank to bring additional capacity online in advance of peak periods or specific sales and marketing campaigns that would increase traffic to the site. The same technology would also allow the bank to load-balance performance across all of its servers – non-disruptively. The technology is never really the main challenge, it tends to be the level of maturity and sophistication of the IT processes for capacity planning, performance management, incident management, automation, etc.,” she says. 
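The pattern-watching FIS describes, spotting an abnormal traffic surge early enough to reroute clients to another data center, can be sketched as a rolling-rate check: flag any second whose request count dwarfs the recent baseline. This is an illustrative sketch only; the class name and thresholds are invented for the example and are not FIS's actual monitoring logic:

```python
from collections import deque

class SurgeDetector:
    """Flag a possible DDoS when the per-second request rate far
    exceeds the recent baseline. Thresholds are illustrative."""

    def __init__(self, window=60, multiplier=10.0, floor=1000):
        self.window = window          # seconds of history to keep
        self.multiplier = multiplier  # how far above baseline is suspicious
        self.floor = floor            # ignore surges below this absolute rate
        self.history = deque(maxlen=window)

    def observe(self, requests_this_second):
        # Baseline is the average rate over the recent window,
        # computed before the new sample is added.
        baseline = (sum(self.history) / len(self.history)) if self.history else 0.0
        self.history.append(requests_this_second)
        surge = (requests_this_second >= self.floor and
                 baseline > 0 and
                 requests_this_second > self.multiplier * baseline)
        return surge  # True -> consider routing traffic to another data center
```

A detector like this only raises the flag; the mitigation step, shifting the hosted URL to a different data center, is the routing change Bradley describes in the quote that follows.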
In the case of operational issues, server virtualization is still a great technology, Balaouras says, adding it allows the bank to restart failed workloads within minutes on alternate physical servers in the environment, or even in another data center. “You can also configure virtual servers in high-availability or fault-tolerant pairs across physical servers so that one hardware failure cannot take down a mission-critical application or service,” Balaouras says. Balaouras says more significant operational failures, such as a storage area network (SAN) failure, pose a greater challenge to network continuity and backup efforts. “In this case, you would need to recover from a backup. But more than likely a bank should treat this as ‘disaster’ and failover operations to another data center where there is redundant IT infrastructure,” she says. Source: http://www.americanbanker.com/btn/25_7/online-banking-outage-prevention-strategies-1050405-1.html

View article:
Banking Outage Prevention Tips

Distributed Denial of Service ‘DDoS’ Attacks: The Zemra Bot

Symantec has become aware of a new Distributed Denial of Service (DDoS) crimeware bot known as “Zemra” and detected by Symantec as Backdoor.Zemra. Lately, this threat has been observed performing denial-of-service attacks against organizations with the purpose of extortion. Zemra first appeared on underground forums in May 2012 at a cost of €100. This crimeware pack is similar to other crime packs, such as Zeus and SpyEye, in that it has a command-and-control panel hosted on a remote server. This allows it to issue commands to compromised computers and act as the gateway to record the number of infections and bots at the attacker’s disposal. Similar to other crimeware kits, the functionality of Zemra is extensive:

– 256-bit DES encryption/decryption for communication between server and client
– DDoS attacks
– Device monitoring
– Download and execution of binary files
– Installation and persistence in checking to ensure infection
– Propagation through USB
– Self update
– Self uninstall
– System information collection

However, the main functionality is the ability to perform a DDoS attack on a remote target computer of the user’s choosing. Initially, when a computer becomes infected, Backdoor.Zemra dials home through HTTP (port 80) and performs a POST request sending the hardware ID, current user agent, privilege indication (administrator or not), and the version of the OS. This POST request gets parsed by gate.php, which splits out the information and stores it in an SQL database. It then keeps track of which compromised computers are online and ready to receive commands. Inspection of the leaked code allowed us to identify two types of DDoS attacks that have been implemented into this bot:

– HTTP flood
– SYN flood

The first type, HTTP flood, opens a raw socket connection, but has special options to close the socket gracefully without waiting for a response (e.g. SocketOptionName.DontLinger). It then closes the socket on the client side and launches a new connection with a sleep interval.
This is similar to a SYN flood, whereby a number of connection requests are made by sending multiple SYNs. No ACK is sent back upon receiving the SYN-ACK as the socket has been closed. This leaves the server-side Transmission Control Blocks (TCBs) in a SYN-RECEIVED state. The second type, SYN flood, is a simple SYN flood attack whereby multiple connects() are called, causing multiple SYN packets to be sent to the target computer. This is done in an effort to create a backlog of TCB creation requests, thereby exhausting the server and denying access to real requests. Symantec added detection for this threat under the name Backdoor.Zemra, which became active on June 25, 2012. To reduce the possibility of being infected by this Trojan, Symantec advises users to ensure that they are using the latest Symantec protection technologies with the latest antivirus definitions installed. Source: http://www.symantec.com/connect/blogs/ddos-attacks-zemra-bot
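The “close without waiting” behavior described above has a close analogue in the BSD socket API: setting SO_LINGER with a zero timeout makes close() skip the normal FIN handshake and send an RST, so the closing side walks away immediately while the peer is left holding state. A minimal Python illustration of the defender-visible effect (a sketch assuming loopback TCP on a POSIX system, not code from the bot itself):

```python
import socket
import struct

def abortive_close(sock):
    """Close a TCP socket without a graceful FIN exchange.

    SO_LINGER with onoff=1 and a zero timeout tells the kernel to
    drop unsent data and emit an RST on close(), comparable to the
    DontLinger-style close described in the article.
    """
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))  # onoff=1, linger=0 seconds
    sock.close()

def demo():
    """The server side sees the abrupt reset instead of a clean EOF."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.create_connection(srv.getsockname())
    conn, _ = srv.accept()
    abortive_close(cli)          # RST instead of FIN
    try:
        conn.recv(1)
        result = "clean close"
    except ConnectionResetError:
        result = "connection reset"
    finally:
        conn.close()
        srv.close()
    return result
```

The SYN flood works one layer lower: there the handshake itself is abandoned, so the server's backlog of half-open TCBs in the SYN-RECEIVED state fills up before any connection ever reaches the application.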

Read More:
Distributed Denial of Service ‘DDoS’ Attacks: The Zemra Bot

LulzSec Members Confess To Distributed Denial of Service ‘DDoS’ Attacks on SOCA, Sony and Others

Four alleged members of the LulzSec hacktivist group had their day in British court Monday. Two of the people charged–Ryan Cleary, 20, and Jake Leslie Davis, 19–appeared at Southwark Crown Court in England to enter guilty pleas against some of the charges against them, including hacking the public-facing websites of the CIA and Britain’s Serious Organized Crime Agency (SOCA). All told, Cleary, who’s from England, pleaded guilty to six of the eight charges lodged against him, including unauthorized access to Pentagon computers controlled by the U.S. Air Force. Meanwhile, Davis–who hails from Scotland’s Shetland Islands–pleaded guilty to two of the four charges made against him. The pair pleaded not guilty to two charges of violating the U.K.’s Serious Crime Act by having posted “unlawfully obtained confidential computer data” to numerous public websites–including LulzSec.com, PasteBin, and the Pirate Bay–to encourage or assist in further offenses, including “supplying articles for use in fraud.” They did, however, confess to launching numerous botnet-driven distributed denial-of-service (DDoS) attacks under the banners of Anonymous, Internet Feds, and LulzSec. According to authorities, the pair targeted websites owned by the Arizona State Police, the Fox Broadcasting Company, News International, Nintendo, and Sony Pictures Entertainment. The pair have also been charged with targeting, amongst other organizations, HBGary, HBGary Federal, the Atlanta chapter of Infragard, Britain’s National Health Service, the Public Broadcasting Service (PBS), and Westboro Baptist Church. The two other alleged LulzSec members charged Monday are England-based Ryan Mark Ackroyd, 25, as well as a 17-year-old London student who hasn’t been named by authorities since he’s a minor.
Both also appeared at Southwark Crown Court and pleaded not guilty to four charges made against them, including participating in DDoS attacks, as well as “encouraging or assisting an offense.” All four of the LulzSec accused are due to stand trial on the charges leveled against them–for offenses that allegedly took place between February and September 2011–on April 8, 2013. According to news reports, the court heard Monday that reviewing all of the evidence just for the charges facing Cleary will require 3,000 hours. Three of the accused have been released on bail. Cleary was not released; he had been released on conditional bail in June 2011, but violated his bail conditions by attempting to contact the LulzSec leader known as Sabu at Christmastime. LulzSec–at least in its original incarnation–was a small, focused spinoff from Anonymous, which itself sprang from the free-wheeling 4chan image boards. LulzSec was short for Lulz Security, with “lulz” (the plural of LOL or laugh out loud) generally referring to laughs gained at others’ expense. According to U.S. authorities, Davis often operated online using the handles topiary and atopiary, while Ackroyd was known online as lol, lolspoon, as well as a female hacker and botnet aficionado dubbed Kayla. What might be read into Ackroyd allegedly posing as a female hacker? According to Parmy Olson’s recently released book, We Are Anonymous, such behavior isn’t unusual in hacking forums, given the scarcity of actual women involved. “Females were a rare sight on image boards and hacking forums; hence the online catchphrase ‘There are no girls on the Internet,’ and why posing as a girl has been a popular tactic for Internet trolls for years,” wrote Olson. “But this didn’t spell an upper hand for genuine females. 
If they revealed their sex on an image board … they were often met with misogynistic comments.” In related LulzSec prosecution news, Cleary last week was also indicted by a Los Angeles federal grand jury on charges that overlap with some of the ones filed by British prosecutors. At least so far, however, U.S. prosecutors have signaled that they won’t be seeking Cleary’s extradition, leaving him to face charges in the United Kingdom. The shuttering of LulzSec both in the United States and Great Britain was facilitated by the efforts of SOCA, as well as the FBI, which first arrested Anonymous and LulzSec leader Sabu–real name, Hector Xavier Monsegur–in June 2011, then turned him into a confidential government informant before arresting him again, earlier this year, on a 12-count indictment. As revealed in a leaked conference call earlier this year, British and American authorities were working closely together to time their busts of alleged LulzSec and Anonymous operators on both sides of the Atlantic, apparently using evidence gathered by Monsegur. Source: informationweek

View post:
LulzSec Members Confess To Distributed Denial of Service ‘DDoS’ Attacks on SOCA, Sony and Others

Breaking Down a DDoS Attack

Distributed Denial of Service attacks have one goal: to make their target unavailable to its users. And there are certainly a number of different ways these attacks can be carried out. Some of the more common DDoS techniques involve the use of malware to infect computers that then attack the target from a variety of different sources. One of the most well-known examples of a Distributed Denial of Service attack is the infamous MyDoom worm, which was sent out by email spammers and infected recipients’ computers. The malware targeted domains with a flood of traffic at a predetermined date and time to bring the site down, as it could not handle the flood of incoming connections. More commonly, DDoS attacks make use of botnets, where computers are turned into zombies after being infected with malware and are controlled by a central computer. These botnets can then be used to launch the attack against a target of the attacker’s choosing.

The numbers inside an attack

But just what does it take to launch a successful DDoS attack? How many computers does an attacker use? How much bandwidth do they need to consume? What is the number of connections it takes to successfully bring a web application down? A recent attack gives us a look into these numbers. While it was not the largest DDoS attack ever launched against a website or web application, a week-long attack against an Asian e-commerce company in early November was the largest attack of 2011. So just what does it take to bring down an e-commerce platform? Let’s take a look: 250,000 zombie computers coming from a variety of botnets. This is an estimated number based on similar attacks in the past and on the amount of traffic and connections that were used to disable the e-commerce platform that was targeted. The number of computers used in previous attacks was easier to estimate as, oftentimes, one large botnet was used in the attack.
However, since large botnets like Rustock and Cutwail were taken down, cybercriminals have gotten wise to larger botnets attracting too much attention, so the trend is to use smaller botnets, under 50,000 infected computers, and combine them to launch large-scale attacks. 45 Gigabits per second. At its peak, this DDoS attack flooded the company’s site with up to 45 Gbps. To accomplish this, the botnets’ zombie computers sent an average of 69 million packets per second. While this number is rather disturbing for a network engineer, it isn’t the worst consumption of bandwidth ever used in a DDoS attack. In 2010 the 100 Gbps threshold was broken. If this doesn’t seem overly threatening, consider the fact that 100 Gbps used in a DDoS attack represents a 102% increase in bandwidth consumed by these threats over the course of one year and a 1,000% increase in bandwidth use since 2005. Yet while the bandwidth consumed in the largest attack of 2011 is significantly lower than that of the previous year’s attack, it doesn’t mean that the scope of the problem is decreasing. In fact, the 2011 attack was much more complex, as six different attack signatures were used to attack Layer 3, the network layer, and Layer 7, the application layer. The sophistication of this dual-layered attack required less bandwidth to do just as much damage. 15,000 connections per second. 15,000 connections equals that many people trying to connect to a website, or web application. Not even the most naive, or aggressive, company would think that they had that many people trying to connect to their e-commerce platform every second. This equals 1,296,000,000 connections in a 24-hour period. That much activity can bring some pretty impressive devices to their knees. So far, the name of the company has not been released due to confidentiality agreements. The reason for the attack also remains unclear.
Insiders do believe, however, that the attack was launched by a disgruntled user or by a competitor looking to gain an edge in the marketplace through industrial sabotage. Regardless of the reason, it is clear that the scale and sophistication of DDoS threats continue to grow. In cases like these, it’s always best to have strong DDoS protection in place.
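The figures quoted above hang together arithmetically, which a quick back-of-the-envelope check confirms (a sketch using only the article's numbers, reading the peak rate as 45 gigabits per second, the standard meaning of Gbps):

```python
# Numbers quoted in the article
connections_per_second = 15_000
seconds_per_day = 24 * 60 * 60           # 86,400 seconds
peak_bandwidth_bps = 45 * 1_000_000_000  # 45 Gbps in bits per second
packets_per_second = 69_000_000

# 15,000 connections/s sustained for a full day
connections_per_day = connections_per_second * seconds_per_day
assert connections_per_day == 1_296_000_000  # matches the article's figure

# Average packet size implied by 45 Gb/s spread over 69M packets/s:
# roughly 81 bytes, i.e. small flood-style packets rather than real page loads
avg_packet_bytes = peak_bandwidth_bps / packets_per_second / 8
```

The implied average packet size of around 81 bytes is itself telling: legitimate web traffic carries much larger payloads, so a stream of tiny packets at that rate is a strong flood signature.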

Read the original:
Breaking Down a DDoS Attack

“Armenpress” prevented Distributed Denial of Service ‘DDoS’ hacker attack

A DDoS (Distributed Denial of Service) attack was launched in an attempt to disrupt the website of the Armenian news agency “Armenpress,” but was prevented by the agency’s IT specialists. The Armenpress website had been attacked before. The agency learned of the hacker attack on the night of June 13 and informed law enforcement bodies. The Armenpress staff continues its work, and the agency’s customers are receiving the news service in full. Thanks to the efforts of Armenpress IT specialists, the agency’s security has been strengthened; work is currently under way to determine the origin of the attack. “Armenpress” expresses gratitude to its colleagues for their support and condemns any kind of hacker attack, qualifying it as a crime in all respects. Source: http://armenpress.am/eng/news/684393/%E2%80%9Carmenpress%E2%80%9D-prevented-ddos-hacker-attack.html

See the original article here:
“Armenpress” prevented Distributed Denial of Service ‘DDoS’ hacker attack

Security fears for ACT’s govt files

The ACT government’s computer systems fought off more than a million attempts to compromise their security in the nine months to April, the territory’s auditor-general has found. And despite a “denial of service” attack on a key government website just as the audit was coming to an end, auditor Maxine Cooper has found the territory’s information security system is “robust”. But Dr Cooper’s report found 95 per cent of the 1025 information management systems in the government’s sprawling network were not complying with the requirement to have a security plan, and even fewer had undertaken a threat-and-risk assessment. Dr Cooper’s office audited the government’s computer network over the nine months to March, but as the audit period came to a close, the Justice and Community Safety Directorate’s website came under successful attack. The department, which holds sensitive information from the city’s justice agencies, was targeted by the Anonymous group in what is believed to be a case of mistaken identity. The hackers appeared to believe they were attacking the Australian “justice department”, protesting the federal government’s attitude toward WikiLeaks founder Julian Assange. Dr Cooper warned that unauthorised accessing of information held by the government, including health and medical records, criminal records, case management records and sensitive government documents, could cause strategic damage. But Dr Cooper found successful external attacks were the exception in an otherwise good security record for the territory, one that could be improved further if all government websites were internally hosted. “The protection of the ACT government network is robust,” the Auditor-General said yesterday. “Shared Services ICT Security Section’s security regime has successfully defended against over one million attempts to access the ACT government’s network in the nine-month period to 31 March, 2012.
“Future similar breaches could be minimised if all directorate and agency websites were hosted on the ACT government network or run by an ACT government-endorsed supplier.” Dr Cooper also wants to see improvements, including more IT bureaucrats reading up on the essential documents governing security. “While the administrative structures and processes that support whole-of-government security policies and procedures are overall satisfactory, there are some shortcomings,” Dr Cooper said. “ICT security governance is based on the Protective Security Policy and Guidelines, which is the ACT government’s pre-eminent protective security document. “However it is unclear if the status of this document is well understood or if adequate processes exist to ensure that directorates and agencies are complying with it.” The auditor was also unhappy with a failure to put plans in place to secure information management systems in the government network. “Despite it being a requirement, only 5 per cent of the ACT government’s 1025 information management systems have a system security plan; and even fewer, some 2.24 per cent, have a threat-and-risk assessment,” she said. Source: http://www.canberratimes.com.au/act-news/security-fears-for-acts-govt-files-20120608-201v5.html

See the original article here:
Security fears for ACT’s govt files

Three-Quarters of IT Professionals Fear Negative Brand Impact or Customer Experience as a Result of DDoS Attacks

New Data from Neustar Finds DDoS Attacks Can Cost Retailers More Than $100,000 Per Hour May 15, 2012, 9:30 a.m. EDT STERLING, Va., May 15, 2012 (BUSINESS WIRE) — Neustar, Inc., a trusted, neutral provider of real-time information and analysis to the Internet, telecommunications, entertainment and marketing industries, today released the results of a survey asking 1,000 IT professionals across North America about the business impact associated with distributed denial of service (DDoS) attacks. Among the findings, three-quarters of those surveyed cited impact on customer experience and brand as their greatest fears about the possible implications of DDoS attacks. By unleashing extremely high volumes of malicious Internet traffic or surgically targeting Web applications, hackers seek to shut down a company’s Web resources — typically websites, but also email servers. When hackers unleash a DDoS attack, it carries the potential to exert lasting damage to customer service, online revenue streams and brand reputation. Neustar Survey Results: Executed in Q1 2012, the survey garnered responses from IT professionals in more than 25 industries such as finance and banking, retail, telecommunications, travel and IT. Notable findings include:

– More than 300 respondents reported they had been attacked
– The top concern was the impact attacks have on customer service — with 51 percent listing it as their greatest concern associated with the attacks
– 35 percent of those attacked said the attacks lasted more than 24 hours — with 11 percent of attacks lasting more than a week
– Specific to retailers, 67 percent who had experienced a DDoS attack pegged the costs of website outages at more than $100,000 per hour — equating to losses of more than $2 million a day

“The potential negative implications of DDoS attacks can be devastating for both marketers and IT professionals,” said Alex Berry, senior vice president, Enterprise Services, Neustar.
“Many companies have been hit hard – with consequences lasting far longer than the attacks themselves. It’s important that companies are proactive about protecting their online presence, as well as their customers, to ensure the constant delivery of online services and necessary brand vigilance.” Overall, the survey shows that a significant number of companies face the risks of DDoS attacks, yet few have solutions designed specifically to combat attacks, with many relying solely on firewalls and intrusion detection systems. Less than 5 percent of respondents have a purpose-built DDoS mitigation solution, for example, an on-premise DDoS mitigation appliance. This explains why so many attacks last days — in fact, 35 percent of respondents experienced attacks that lasted more than 24 hours. Without adequate protection, companies are unable to prevent losses from adding up. While many respondents are aware of the risks to their customer experience and public trust, they haven’t taken the next step to safeguard their reputation. Source: http://www.marketwatch.com/story/three-quarters-of-it-professionals-fear-negative-brand-impact-or-customer-experience-as-a-result-of-ddos-attacks-2012-05-15

View the original here:
Three-Quarters of IT Professionals Fear Negative Brand Impact or Customer Experience as a Result of DDoS Attacks