January 7, 2009
Several high profile Twitter accounts were recently hijacked:
An 18-year-old hacker with a history of celebrity pranks has admitted to Monday's hijacking of multiple high-profile Twitter accounts, including President-Elect Barack Obama's, and the official feed for Fox News.
The hacker, who goes by the handle GMZ, told Threat Level on Tuesday he gained entry to Twitter's administrative control panel by pointing an automated password-guesser at a popular user's account. The user turned out to be a member of Twitter's support staff, who'd chosen the weak password "happiness."
Cracking the site was easy, because Twitter allowed an unlimited number of rapid-fire log-in attempts.
"I feel it's another case of administrators not putting forth effort toward one of the most obvious and overused security flaws," he wrote in an IM interview. "I'm sure they find it difficult to admit it."
If you're a moderator or administrator it is especially negligent to have such an easily guessed password. But the real issue here is the way Twitter allowed unlimited, as-fast-as-possible login attempts.
Given the average user's password choices -- as documented by Bruce Schneier's analysis of 34,000 actual MySpace passwords captured from a phishing attack in late 2006 -- this is a pretty scary scenario.
Based on this data, the average MySpace user has an 8 character alphanumeric password. Which isn't great, but doesn't sound too bad. That is, until you find out that 28 percent of those alphanumerics were all lowercase with a single final digit -- and two-thirds of the time that final digit was 1!
Yes, brute force attacks are still for dummies. Even the typically terrible MySpace password -- eight characters, all lowercase, ending in 1 -- would require around 8 billion login attempts:
26 x 26 x 26 x 26 x 26 x 26 x 26 x 1 = 8,031,810,176
At one attempt per second, that would take more than 250 years. Per user!
But a dictionary attack, like the one used in the Twitter hack? Well, that's another story. The entire Oxford English Dictionary contains around 171,000 words. As you might imagine, the average person only uses a tiny fraction of those words, by some estimates somewhere between 10 and 40 thousand. At one attempt per second, we could try every word in the Oxford English Dictionary in slightly less than two days.
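Both figures above are easy to sanity-check with a few lines of arithmetic:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

# Brute force: seven lowercase letters plus a fixed trailing "1"
brute_force_space = 26 ** 7  # 8,031,810,176 combinations
brute_force_years = brute_force_space / SECONDS_PER_DAY / 365

# Dictionary attack: every word in the Oxford English Dictionary
oed_words = 171_000
dictionary_days = oed_words / SECONDS_PER_DAY

print(round(brute_force_years))   # -> 255 years, per user
print(round(dictionary_days, 2))  # -> 1.98 days
```

At one guess per second, the brute force attack is hopeless and the dictionary attack finishes in under two days.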
Clearly, the last thing you want to do is give attackers carte blanche to run unlimited login attempts. All it takes is one user with a weak password to provide attackers a toehold in your system. In Twitter's case, the attackers really hit the jackpot: the user with the weakest password happened to be a member of the Twitter administrative staff.
Limiting the number of login attempts per user is security 101. If you don't do this, you're practically setting out a welcome mat for anyone to launch a dictionary attack on your site, an attack that gets statistically more effective every day the more users you attract. In some systems, your account can get locked out if you try and fail to log in a certain number of times in a row. This can lead to denial of service attacks, however, and is generally discouraged. It's more typical for each failed login attempt to take longer and longer, like so:
1st failed login: no delay
2nd failed login: 2 sec delay
3rd failed login: 4 sec delay
4th failed login: 8 sec delay
5th failed login: 16 sec delay
And so on. Alternately, you could display a CAPTCHA after the fourth attempt.
There are endless variations of this technique, but the net effect is the same: attackers can only try a handful of passwords each day. A brute force attack is out of the question, and a broad dictionary attack becomes impractical, at least in any kind of human time.
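One variation of the doubling delay above can be sketched in a few lines. This is a minimal in-memory sketch, not Twitter's actual implementation; the function names and the per-account counter are my own:

```python
import time

failed_attempts = {}  # username -> consecutive failed logins

def throttle_delay(failures: int) -> int:
    """No delay on the first failure, then 2, 4, 8, 16... seconds."""
    return 0 if failures < 1 else 2 ** failures

def check_login(username, password, verify):
    """Sleep longer after each consecutive failure, reset on success."""
    failures = failed_attempts.get(username, 0)
    time.sleep(throttle_delay(failures))
    if verify(username, password):
        failed_attempts.pop(username, None)
        return True
    failed_attempts[username] = failures + 1
    return False
```

A real system would persist the counter (and probably cap the delay), but even this simple scheme caps an attacker at a handful of guesses per account per day.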
It's tempting to blame Twitter here, but honestly, I'm not sure they're alone. I forget my passwords a lot. I've made at least five or six attempts to guess my password on multiple websites and I can't recall ever experiencing any sort of calculated delay or account lockouts. I'm reasonably sure the big commercial sites have this mostly figured out. But since every rinky-dink website on the planet demands that I create unique credentials especially for them, any of them could be vulnerable. You better hope they're all smart enough to throttle failed logins -- and that you're careful to use unique credentials on every single website you visit.
Maybe this was less of a problem in the bad old days of modems, as there were severe physical limits on how fast data could be transmitted to a website, and how quickly that website could respond. But today, we have the one-two punch of naive websites running on blazing fast hardware, and users with speedy broadband connections. Under these conditions, I could see attackers regularly achieving up to two password attempts per second.
If you thought of dictionary attacks as mostly a desktop phenomenon, perhaps it's time to revisit that assumption. As Twitter illustrates, the web now offers ripe conditions for dictionary attacks. I urge you to test your website, or any websites you use -- and make sure they all have some form of failed login throttling in place.
Posted by Jeff Atwood
And all those websites have different rules. One requires no numbers, another requires one number, still another requires two numbers in the password. This site is case-sensitive, that one isn't. This site demands special characters, the next one forbids them.
I had to think up my first password in 1976. To this day, no account that I used with that password has been broken into. Only one other person knew that password, in case something should happen to me. I divorced her last year and have been using a new password scheme since before then.
Of course the other idiocy is requiring numbers in the USERNAME. I mean, if you want to, fine. But I have a pretty unique combination of letters that I've been using for a username for 28 years.
All these different unique and sometimes mutually-exclusive rules increase the risk of a person doing the number one worst thing you can do for security -- writing a password down.
At one attempt per second, that would take more than 250 years. Per user!
Who said that???
I could use a few hundred virtual machines, each trying to log in, or even use a grid to do it for me :-) I could ask users on the grid to download my small utility, and millions of users would do the hack for me in just a few hours.
It's hard... but not impossible ;-)
every rinky-dink website on the planet demands that I create unique credentials especially for them...
How would a rinky-dink website be able to check whether your credentials were created uniquely for them? I bet you could just use the same password for all your accounts.
My password everywhere is jeffattw00dsuX0rs
Now you just have to figure out my user IDs, and you can post indiscreet pictures of your wife in my name.
(Hint: one of my user IDs is jeffattw00dsuX0rs1)
You have to run a password checker against their accounts and force them to change lousy passwords.
An excellent point; you should *absolutely* dictionary attack your admins before someone else does!
You might also block regular users from entering extremely common passwords, gently prodding them to pick something else. Or add 1 to the end. :)
I could use a few hundred virtual machines, each trying to log in, or even use a grid to do it for me :-)
Each machine is still waiting about a second for the turnaround. Also, if you use a botnet you'll risk overloading the authentication server and thus drawing attention to yourself.
The real story at Twitter was that they didn't check their own passwords. You can't trust people to use good passwords, you have to run a password checker against their accounts and force them to change lousy passwords.
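A proactive password checker like the commenter describes can be as simple as screening each password (at registration or reset time, before it is hashed) against a wordlist. This is a sketch with a tiny stand-in list; a real check would load a large common-password file:

```python
# Stand-in wordlist; in practice, load a file of common passwords.
COMMON_PASSWORDS = {"password", "happiness", "letmein", "123456", "qwerty"}

def is_lousy(password: str) -> bool:
    """Flag dictionary words and trivial variants like a trailing '1'."""
    p = password.lower()
    return p in COMMON_PASSWORDS or p.rstrip("0123456789") in COMMON_PASSWORDS
```

Running this against every admin account would have flagged "happiness" immediately.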
For the variable delay, you could apply the delay only when the password is wrong.
For example, a wrong password on the first try would incur a 0-second wait.
Even if the hacker makes hundreds of attempts, paying the 2^X penalty each time, a user who logs in correctly on the first try never waits at all.
If we're using the ASP.NET Membership framework, we could simply look at the number of bogus tries and do a delay ( System.Threading.Thread.Sleep() ) in code instead of a redirect.
This limits the hacker but lets the legitimate user go by, oblivious.
I would argue that after some quantity of failed login attempts (let's say over 400) within a short time span (let's say 24 hours), *some* kind of alert should have gone to someone -- but maybe I'm just overly paranoid. I would further argue that the user should be notified as well, in case they need to investigate other sites. If a user is foolish and chooses one password for all their sites, then everything is now compromised.
I have used delays in the response period, but it was always geometric, not exponential. Just delay=attempts*seconds.
Which is... tada... ARITHMETIC PROGRESSION, not geometric (which is the same as exponential).
Thanks. I knew something didn't sound quite right.
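The distinction the two commenters are sorting out is easy to see in numbers. A quick sketch (function names are mine):

```python
def arithmetic_delay(attempts: int, step: int = 1) -> int:
    """delay = attempts * seconds -- grows linearly."""
    return attempts * step

def geometric_delay(attempts: int, base: int = 2) -> int:
    """delay = base ** attempts -- grows exponentially."""
    return base ** attempts

# After 10 failures: 10 seconds vs 1024 seconds.
```

Either progression defeats rapid-fire guessing; the exponential one just punishes persistence much harder.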
Salts don't stop attackers from using precomputed tables, per se, they just make the precomputed tables larger. The standard 12-bit salt makes the table 4096 times bigger, which looks like a lot, but really isn't.
They also don't do a whole lot for stopping brute force attacks, since even with an infinite length salt, you can still crack the passwords one at a time.
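To illustrate the salt point: a per-user salt defeats one shared precomputed table, but once the attacker has the salt (it's stored alongside the hash), each account can still be brute-forced individually at one hash per guess. A sketch -- SHA-256 is my stand-in for whatever hash a site actually uses; real systems should prefer a deliberately slow scheme:

```python
import hashlib
import os

def hash_password(password: str, salt: bytes = None):
    """Hash with a random per-user salt; return (salt, hex digest)."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

def guess(candidate: str, salt: bytes, digest: str) -> bool:
    """With the salt in hand, each guess still costs exactly one hash."""
    return hashlib.sha256(salt + candidate.encode()).hexdigest() == digest
```

Two users with the same password get different digests, which is what breaks the shared table; it does nothing to slow down a targeted one-at-a-time attack.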
Hey, the delay trick is pretty cool; why haven't I heard about it previously? I think this is way more user-friendly than disabling the account and requesting a new password. I think the best dictionary attack is a combination of dictionary and brute-force attacks, in the following way:
- take a word from the dictionary
- add a series of numbers and/or non-alphanumerics at the beginning and end:
  num/non-alphanum + dictionary word + num/non-alphanum
This increases the attempts per word, but most passwords are generated like this.
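That hybrid pattern amounts to a candidate generator. A minimal sketch, with a tiny affix list for illustration -- a real attack would use a much larger wordlist and affix set:

```python
from itertools import product

def hybrid_candidates(words, affixes=("", "1", "!", "123")):
    """Yield every prefix + word + suffix combination.

    The empty string in `affixes` covers the bare word and
    single-sided variants like 'happiness1'.
    """
    for word in words:
        for prefix, suffix in product(affixes, repeat=2):
            yield prefix + word + suffix
```

With four affixes, each dictionary word costs 16 guesses instead of 1 -- still trivially cheap against an unthrottled login form.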
No one else has mentioned it, so I will.
Dictionary Attack != Words from a real dictionary. It's even in the first two sentences of the Wikipedia link you use. A dictionary attack doesn't use the Oxford English, or Webster's, or anything else. It merely uses a pre-defined list. This predefined list is often composed of the most common passwords, weighted in some sort of priority based on what you know (or think you know) about the users.
So it is very likely that it did not take your theoretical two days. It probably started with 7-9 character passwords and took much less than two days to find the password for our clueless wonder.
Any IT person who gets caught with a password like happiness should be summarily fired and blacklisted, never to work in our field again. Talk about gross negligence....
This is not an easy problem at all. There is a balance between secure and unusable and no I'm not advocating being a twit about it either.
There are some real problems with some of the solutions mentioned:
* back off algorithms
Sounds nice, but it's just plain pants. How many times have you cussed the digital gods when you mistakenly typed your sudo password ( *nix heads ) wrong and had to wait that endless 3 seconds? Painful! OK, it slows down the attacker ( if based on account, not session -- see below ), but it will lose you some members for sure. Every usage report shows how much we hate waiting online...
* OpenID
CAN EVERYONE STOP SAYING THIS SOLVES THE PROBLEM! It potentially makes it worse. Great, I don't know what OpenID provider 'Bob@stackoverflow' uses. So why not simply DoS ALL the OpenID providers in the list? Get 1 OpenID account, get access to all these other sites. Cookie jar or what!
* CAPTCHA + google + yahoo
Simply clear your cookies and refresh the page after your 3rd attempt? Yes, that's right: it's a cookie-based check, not server-side, for obvious reasons... In case it's not obvious: if it were server-side, you would need to record each failed login. This would require writes [ to DB / cache ], which are expensive and would in fact make the DoS even more effective, especially when all the writes bring your site to a halt.
Code-wise, dumping cookies after the nth attempt is trivial.
The problem is distinguishing an attack from a genuine user with a sticky return key. You can't trust the remote machine, and server-side checks are expensive. Blocking by IP is a waste of space, as any hacker worth their salt will perform a distributed attack ( not to mention the dynamic IPs used by ISPs ). Plus the added fact that WE the consumers care less about passwords and more about what we want to consume, which is why I can't even remember the password to my own site.
Did I hear someone say remote pin pricks / breath analyzers Gattaca style?
update: Google's CAPTCHA at least is now also server / cache side as well. Bahh! They have the money to do that though.. still doesn't change my point.... mumble mumble
Hi, this is the first time I've posted my opinion on this blog. I'm from Argentina, so my English is not very good.
Many times when I search for something to download, it requires a login, and I'm so lazy that I always use passwords like:
the website name, e.g. www.website.com
Or passwords like:
Some passwords never change, but this is an administrator problem, not a user problem.
I hope my opinion is a good one.
If you've ever forgotten your password to a shell account on a *nix machine, you'll notice that it always takes a few seconds to return from a failed login attempt. It's definitely a great trick.
I've heard that most passwords are composed of 1-2 words and a symbol, either between the words or after them/it. Breaking away from that pattern can help ($w!ord%sent3nc_e:?!), but I've found myself generating 12-character randomized passwords and keeping them in Password Safe instead: http://www.schneier.com/passsafe.html
Just a minor note: one password guess per second is an extremely poor estimate.
The estimate is that a modern Core2Duo can try roughly 10,000,000 passwords a second.
I wrote about this issue a few weeks ago on my blog. However, my concern was the too-strict and stupid lockout policy implemented by the 'serious' systems (think banks, corporate intranets, etc.). For some reason the guys who design these systems think that the number of tries should be three. No more, no less. And that makes it impossible to have a few guesses. Add to this that these systems usually mandate a password change every now and then, which undermines using strong passwords. Most users will change just one digit anyway. However, if you can't remember whether your password is version one or two now, then you're out of luck with three tries. What do you do after the first failure? Try to retype, or try the alternate version? You can try both, but only if you don't make any more mistakes...
Giving 50 tries wouldn't decrease security, but the best solution is of course slowing down the attack (to avoid the DoS you mentioned).
I really don't see the point of having any kind of authentication at all on the vast majority of websites. For example, there is no benefit to the user for having an unguessable password on stackoverflow.com, as opposed to one that is published on Bugmenot.com. Likewise, there is no benefit to the user for giving out a verifiable, permanent e-mail address, as opposed to one that's going to be deleted as soon as the confirmation link arrives.
There is, however, a benefit to the Web site: The Web site's owner gets an opportunity to build a database of verified e-mail addresses, which can be sold to spammers for some extra income.
OpenID, then, is part of the problem, not the solution. When one OpenID provider becomes the most popular, that OpenID provider will have a database larger than those of all the small sites that currently have this data. This provider could command a hefty fee for a subscription to this database, but there would be no benefit to OpenID users.
For Web site users, the best solution would be if most Web functions were unauthenticated.
re: CAPTCHA + google + yahoo
(...) if it were server side you would need to record each failed login. This would require writes [ to DB / cache ] which are expensive and in fact would make the DOS even more effective (...)
Actually, that's not exactly the case. You don't log every failed attempt to the database, only the ones that 'count'. Meaning the ones that happen after the waiting period is over. Let's say I set a waiting period of 20 seconds that is activated after the 5th failed login attempt. Those five logins get stored to the database (an incrementing number of failed attempts and a timestamp logging the last failed attempt).
Now, during that period - the 20 seconds - I don't log anything. If the attacker, a botnet or the user tries to login during that time, I merely return an error message telling him to wait for a bit until the fog clears. I don't even care whether the password is correct or not, he has to wait 20 seconds either way.
Once the waiting period is over, the next login attempt will be authenticated as usual. If it checks out, the user (presumably) is logged in. If the password is incorrect, the account enters a new 20 second waiting period right away, since the user's 'failed logins' counter is still greater than 5 (or whatever my limit might be). That way, the 20 second limit thwarts rapid-fire dictionary and brute force attacks, and the server-side suspend period makes sure DoS attempts don't get to write to the database more than once every 20 seconds.... which isn't going to bring my server to a halt ;-)
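The scheme described in this comment fits in a short sketch. This is my own rendering of it, with an in-memory dict standing in for the database row and the same thresholds as above (5 free attempts, 20-second wait); `now` is injectable for clarity:

```python
import time

MAX_FREE_ATTEMPTS = 5
WAIT_SECONDS = 20

accounts = {}  # username -> {"failures": int, "last_failure": float}

def try_login(username, password, verify, now=None):
    """Return 'ok', 'bad-password', or 'wait'.

    During the waiting period nothing is verified and nothing is
    written, so a DoS attempt costs the server almost nothing.
    """
    now = now if now is not None else time.time()
    rec = accounts.setdefault(username, {"failures": 0, "last_failure": 0.0})
    if (rec["failures"] >= MAX_FREE_ATTEMPTS
            and now - rec["last_failure"] < WAIT_SECONDS):
        return "wait"
    if verify(username, password):
        accounts.pop(username)  # success resets the counter
        return "ok"
    rec["failures"] += 1       # a failure after the wait re-arms the lockout
    rec["last_failure"] = now
    return "bad-password"
```

Note that, exactly as the comment says, even the correct password gets a "wait" response during the lockout window.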
So which is better for the delay?
A: System.Threading.Thread.Sleep(delay) on the response
B: Map w/ datetime used to deny attempts for the delay period.
What software do you use to create your charts?
Probability of denial of service after failed login attempts would also be reduced by checking the IP address. While it would be possible for attackers to use multiple IP addresses, this wouldn't increase attack rates by more than a few bits -- at least in IPv4.
Increasing throttling each time still seems like it would make it easy for an attacker to forcibly lock someone else out of their account.
Inspired by the Twitter hack, I wrote up a technique for implementing rate limiting without having to store every attempted access in a database (using memcached counters) last night, along with an example implementation: http://simonwillison.net/2009/Jan/7/ratelimitcache/
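The linked approach boils down to incrementing a counter keyed on (identity, time bucket) in a cache instead of writing every attempt to the database. A cache-free sketch of the same idea -- a plain dict stands in for memcached, and the names are mine, not Simon's:

```python
import time

counters = {}  # (key, time_bucket) -> attempt count

def rate_limited(key, limit=5, per_seconds=60, now=None):
    """True once `key` exceeds `limit` attempts in the current window."""
    now = now if now is not None else time.time()
    bucket = int(now // per_seconds)   # e.g. one bucket per minute
    k = (key, bucket)
    counters[k] = counters.get(k, 0) + 1
    return counters[k] > limit
```

With memcached you would use its atomic increment and let expiring keys replace the stale buckets, so no database writes are needed at all.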
Progressive login delays are a good idea, but if you implement this naively in ASP.NET (i.e. using Thread.Sleep) then I think it will block, won't get returned to the pool, and you open up your whole site/app pool to a very easy DoS.
Presumably if you spin off a non-pooled thread, make that one wait, and have the ASP.NET worker thread wait on that, it will get returned to the pool and you won't have issues, but I'm not positive. Has anybody tried this, or does somebody know of a better way?
This is part of the reason I've switched to fingerprint readers on my computers. I can pick a unique, complex, lengthy password for each separate website, and the fingerprint scanner will remember it.
Not mentioned in the article is the effect of policies that force users to change passwords every 30 days. I believe these policies lead users to choose simpler, more predictable, more sequential passwords. I would rather have a user pick a strong password once, and guard it closely.
You may be limited to a few passwords a day on a single website, but don't forget most users will use the same password across multiple websites, and are likely to be registered on several of the same kind of websites. Think digg, reddit, slashdot, myspace, facebook, bebo, etc.
You get the idea.