More on DNS, I know; I may as well be another person beating a dead horse. But I give you pretty: http://www.doxpara.com/?p=1206
It is a video of the patched and unpatched servers worldwide. It intrigues me that there is a blinking light on the map of Australia about three hours north of Adelaide. I doubt it is Alice Springs, the light is too far south for that; maybe Coober Pedy, if my geography serves.
Onto some more supposition by me (mainly in reply to Dan [the guy who researched the DNS flaw] here):
I agree with what has been said, that we need more security on an inherently insecure network. But some (perceived) anonymity and some plain text is good, and what the internet is all about.
Could you imagine every site moving to https? For starters, what would be the point? Who needs to read my blog through an encrypted channel? Really, why? I don't have any direct posting functionality and only a handful of readers, and it is not like I am directing them to blindly do anything either.
Onto DNS: I was thinking the other day of another way to fix the issue. Deploy a port-knocking technique on the reply, based on the query, so that ports would have to be knocked in the correct order on the DNS server before it accepts the lookup back. Similar to the way a person gets into a safe: knowing the numbers isn't good enough, you need to know the sequence. This would stop NAT being an issue, as the DNS server can make the request out on all the ports, getting an automatic mapping back on them. And it would be more secure, as the attacker would have to guess the right ports to knock on the way back, or read the request and then generate the reply; but if they can do that, they are already in the middle and the game is over.
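To make the safe-combination idea concrete, here is a minimal sketch of how a knock sequence could be derived from the query itself. Everything here is hypothetical (the function name, the shared secret, the three-knock count are all my own inventions for illustration); the point is only that a keyed hash of the query gives both ends the same unpredictable port sequence while an off-path attacker, who never saw the request, cannot compute it.

```python
import hashlib

def knock_sequence(query_name: str, txid: int, secret: bytes, knocks: int = 3):
    """Derive a per-query sequence of ports the reply must 'knock' on.

    Hypothetical sketch: a keyed hash of the query name, transaction ID
    and a shared secret, chopped into ports in the ephemeral range.
    """
    material = secret + query_name.lower().encode() + txid.to_bytes(2, "big")
    digest = hashlib.sha256(material).digest()
    ports = []
    for i in range(knocks):
        # Take 2 bytes of digest per knock; map into ephemeral ports 49152-65535.
        raw = int.from_bytes(digest[2 * i: 2 * i + 2], "big")
        ports.append(49152 + raw % (65536 - 49152))
    return ports
```

The resolver and the authoritative server both compute the same sequence from the request, so no extra round trip is needed; the attacker has to guess all the knocks, not just one 16-bit transaction ID.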
What do you think?
Peace out all, especially Dan, good job.
Wednesday, 27 August 2008
Thursday, 14 August 2008
DNS woes continue... sorta
So as I said, and as the original discoverer Dan said, it was just a patch. Not a fix, not a be-all and end-all solution. A temporary patch. We already know some NAT devices break the patch's fix. But from the looks of it here and here, it can be broken. The first link even details how, but there is a caveat: it is not easy, and a lot of bandwidth with low latency is required.
The first article explains how they did it over GigE in 10 hours. Most DNS servers that are doing resolves for clients are probably not even on 20 Mb/s of bandwidth, with latency 10+ times that of local Ethernet, not including the clients themselves causing some load. So you could say it would take 10+ times longer to do this over the internet, so 100 hours. Someone will hopefully notice at around hour 20... But it isn't that simple; what if some baddie hits a server with a mere 100 clients? (Most botnets are 10 times this size.) Chaos again. We need a better fix. I mentioned before some kind of signed DNS. I am the first to admit I have gaps in my knowledge, as I had never heard of DNSSEC; now that I have listened to the Black Hat talk, I have. I had a quick look at Wikipedia and the official site and it is interesting. Of course Windows servers only support it as a secondary, and the glaring hole of non-NSEC3 servers allowing enumeration of a zone's names is just plain silly. Seriously, just hash the requested domain in the "Not Found" reply and add it to the RFC; done.
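The "just hash it" fix is essentially what NSEC3 does: the denial-of-existence record carries a salted, iterated hash of the name instead of the name itself, so walking the zone gives you digests rather than a directory listing. A rough sketch of the hashing step (note the caveat: NSEC3 per RFC 5155 hashes the name in DNS wire format and base32-encodes the result, so this simplified version won't reproduce real records; it only shows the shape of the idea):

```python
import hashlib

def nsec3_style_hash(name: str, salt: bytes, iterations: int) -> str:
    """NSEC3-style hashed denial of existence, conceptually:
    SHA-1 of the (lowercased) name plus salt, then re-hashed with the
    salt appended for `iterations` extra rounds. Simplified sketch, not
    wire-format-accurate.
    """
    h = hashlib.sha1(name.lower().encode() + salt).digest()
    for _ in range(iterations):
        h = hashlib.sha1(h + salt).digest()
    return h.hex()
```

Because the server signs the hash of the nonexistent name, a resolver can still verify "this name is not in the zone" without the zone handing over its full list of hostnames.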
I think it should also include the option of encrypting replies; may as well, and it could be useful for higher-security organisations.
This is a very real and very current threat; there are at least two pieces of software out there to attack it, one being the very good, but also very newbie-friendly, Metasploit.
Well, I am pretty much just reiterating and expanding on my comments on Darknet, but there you go.
Friday, 25 July 2008
DNS Vulnerability...again
There has been some speculation, and even backlash, on the internet about the recent DNS vulnerability; I posted about it here. Interestingly, some people are saying that the vulnerability should have been disclosed as soon as it was discovered.
This is plain silly. To put it in simple terms with a car analogy (I love car analogies): say a safety tester discovers that every single Toyota Corolla on the market (the number one selling car, 35 million worldwide) bursts into flames (props to Fight Club; note: Corollas don't, AFAIK) if you crash at exactly 35 kilometres per hour. If he just posts this on his blog, a few things will happen. Everyone will know in about two seconds. The next day, 35 million Corolla owners will demand a refund, either destroying or severely damaging Toyota and its employees, and hooligans will wander around car parks with sledgehammers hoping to hit one at the lucky 35 km/h. Basically what I am saying, in a rather confused and overly long analogy, is that if this had been disclosed before the vendor patch release, there would have been lost confidence in the whole internet, and lost jobs and money from the lost confidence alone. Then the real fun would begin: prior to the patch being released, someone would write a script to take advantage of the vulnerability, that script would be morphed into several GUI tools, and every script kiddie and his bot army would take down sites worldwide for fun and profit.
I am not saying it would have been an internet doomsday (it could have been, but the internet is pretty robust). It would, however, have been very damaging had the vendor patch not been released; there would have been loss of income and loss of jobs.
I agree with the way it was done, but maybe it could have been done a little sooner. If you do a Google search, DNS cache poisoning is not new in the slightest; have a look at the wiki article. Birthday attacks are a common similar variant. I have even been involved with a cache poisoning issue a couple of times, first back in 2003. Both times I couldn't capture the culprit, there were just too many packets to wade through, but the problems were solved.
I do agree with what I have now read: maybe we need to move across to some kind of signed DNS, either SSL DNS or some kind of signed cert, like GPG and its signed keys.
We could set up the root servers all with a cert or signed key that all DNS servers are set to trust, roll it into an update or into new DNS installs, then slowly cut over. Then if you want to, say, use your ISP's servers as forwarders, you could simply implicitly trust their key, or they could buy a signed cert (I can hear Verisign/Thawte licking their lips from here).
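The trust-chain shape of that idea can be sketched in a few lines. Big caveat: this toy uses HMAC with shared keys as a stand-in for real public-key signatures (DNSSEC actually uses RSA/ECDSA key pairs, so the root never shares a secret with anyone), and all the key names and the answer record are made up for illustration. What it shows is the delegation: a resolver that trusts only the root key can still validate an answer vouched for by the ISP.

```python
import hashlib
import hmac

def sign(key: bytes, message: bytes) -> bytes:
    # Toy "signature": keyed hash. Real signed DNS uses asymmetric keys.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(key, message), signature)

# Root key is pre-trusted by every resolver (shipped in an update).
root_key = b"root-trust-anchor"
isp_key = b"isp-resolver-key"

# Root vouches for the ISP's key; the ISP vouches for each answer.
isp_key_sig = sign(root_key, isp_key)
answer = b"www.example.com. A 192.0.2.10"
answer_sig = sign(isp_key, answer)

# A resolver that only trusts the root can validate the whole chain.
trusted = verify(root_key, isp_key, isp_key_sig) and verify(isp_key, answer, answer_sig)
```

A forged answer fails at the second link of the chain, and a forged ISP key fails at the first, which is exactly why a poisoned cache entry would no longer validate.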
Supposedly, due to some disclosure, there may be a script kiddie tool out soon to exploit this vulnerability, and with most NAT devices (see: routers) turning patched servers into vulnerable ones, and some of these routers not being patched/patchable, it is only a matter of time. So everyone, PATCH your servers please.
Monday, 21 July 2008
Here be dragons
If you haven't seen this yet, have a look. Yes, the brilliant webcomic xkcd some time ago did a Map of the Internet. I used to have it posted on my wall at work so the newer employees could have a look when they were visiting to ask a question; it really shows how immense it all is.
But then, while looking at one of my bookmarks on network security using darknets for a post on an internet forum, I found this: a map of maliciousness. Awesome. It really is interesting to see the concentrations of either compromised machines or general evil-doers in the world. The thing that gets me, and got me when I first looked at it, is: why does the 10.0.0.0 range have so many hits? It's a private range. Then I looked closer. Why are a few of the "bogon" address ranges getting hits? The only thing I can think of is IP spoofing, and if so, who would spoof a 10 address? Why not spoof 1.3.3.7 (fun) or something else? Everyone knows 10 is internal... anyway, post your thoughts.
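For anyone wanting to check their own logs for the same oddity, a quick filter for the most common bogon ranges is easy with Python's standard ipaddress module (this list covers only a handful of the reserved ranges, not the full bogon list the map would use):

```python
import ipaddress

# A few ranges that should never appear as source addresses on the
# public internet -- the kind of "bogon" hits the map was showing.
BOGON_NETS = [ipaddress.ip_network(n) for n in (
    "10.0.0.0/8",      # RFC 1918 private
    "172.16.0.0/12",   # RFC 1918 private
    "192.168.0.0/16",  # RFC 1918 private
    "127.0.0.0/8",     # loopback
    "169.254.0.0/16",  # link-local
)]

def is_bogon(addr: str) -> bool:
    """True if the address falls in one of the listed reserved ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BOGON_NETS)
```

Any hit from one of these ranges on an internet-facing interface is either a misconfiguration leaking out of someone's LAN or a spoofed packet, which is exactly the puzzle above.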
Oh yeah, we haven't quite won the DNS thing yet either. The multi-vendor patch was just that, a patch; there are still inherent flaws in the system. Like the newly disclosed issue with DNS that passes through NAT (see: most DNS servers, as NAT means some decent IP sharing); it is annoying, but it is a fight we have to keep fighting. See here for the article. It is basically NAT routers being lazy and not letting the source port be the random one that the DNS server wants it to be. The randomness doesn't make DNS invulnerable to the poisoning attack I mentioned earlier; it just makes it much, much harder. So to have some routers (the likes of Netgear don't release patches once a device is 5+ years old) destroy that hard work must be really annoying.
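The back-of-envelope numbers show why a lazy NAT box matters so much. With only the 16-bit transaction ID to guess, a blind spoofer searches 65,536 possibilities; add a random source port drawn from the roughly 16k-wide ephemeral range and the search space multiplies accordingly. (The exact port range varies by OS; the ephemeral range below is one common convention, used here just to illustrate the scale.)

```python
# Blind-spoofing search space, before and after source-port randomisation.
TXID_SPACE = 2 ** 16                 # 16-bit DNS transaction ID
PORT_SPACE = 65536 - 49152           # ephemeral ports 49152-65535 (one convention)

guesses_txid_only = TXID_SPACE                 # NAT rewrote the port: patch undone
guesses_with_ports = TXID_SPACE * PORT_SPACE   # patched resolver, ports left intact

print(guesses_txid_only)   # 65536
print(guesses_with_ports)  # 1073741824
```

A NAT router that rewrites every outgoing query to a predictable port collapses the right-hand number back to the left-hand one, which is why a patched server behind a bad router is effectively unpatched.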
Monday, 14 July 2008
DNS vulnerabilites and Sydney IT Security Group.
As you may or may not have heard, there was a big update released for basically the whole internet. See here and here for a test of your own DNS.
Basically it boils down to a bad guy being able to put incorrect entries into your ISP's or workplace's DNS cache that would point you to the wrong site. So instead of going to google.com, it could take you to a hacker's version, or whatever. This would also affect email.
Now, this kind of thing does happen occasionally, but this was seen as such a big issue (it could basically destroy the internet if unchecked and unpatched) that CERT, who handles these issues, let all the vendors and developers know, giving them time to write a patch for release on the same day. Very, very impressive.
Not only Microsoft but Unix, Linux, BSD, Cisco, Checkpoint: all of them released a patch for their varied DNS implementations. Yahoo, who use an older *nix implementation of DNS, BIND 8, simply committed to abandoning it in favour of the newer, patched BIND 9.
The question I put forward is: is this finally a time of security as an institution? Security how it should be done, globally. Sure, it still relies on admins at the other end, but with auto-updates being the norm, it should be fine. This to me seems a step in the right direction, and I am sure even a couple of years ago this wouldn't have happened. Will it one day lead us to a security utopia free of vulnerabilities and insecurities? No. But it may lead to cross-platform sharing and assistance.
Speaking of security, there is talk of an IT security group being started up in Sydney, and I may be taking the reins. It will be sponsored by Microsoft, but if I take the reins I plan on being vendor neutral, albeit Microsoft has some nice claims to fame; even with all their foibles and the hatred that is flung at them, they do try and do some stuff right. Operating systems are tools; you should use the right tool for the right job.