(Preface: I wrote this around January of 2007 and simply forgot about it. I wrote it around the time that Marty was writing these posts: here. Also when Richard was writing these posts here.)
I started playing with Sguil again recently. For the benefit of those who don’t know, Sguil is a Snort-based “NSM” (Network Security Monitoring) system. It brings Snort and several other tools together in one interface to provide better analysis and results. The defining feature of Sguil is that it runs something like Tcpdump, Snort, or Daemonlogger in order to dump ALL traffic to disk.
I bought my good friend Richard Bejtlich’s “The Tao of Network Security Monitoring” book earlier this year.
Richard’s theory is: “collect all packets, because without all packets the total picture isn’t seen.” In principle, I agree. I used this methodology heavily in my last job, and it worked quite well at the time.
He also goes on to say that while IDS “alerting” has its place, without “context” (the surrounding traffic on the network) the alert will make no sense. I don’t entirely agree with that statement as a whole. Let me explain my different take on “context.”
At my company, Sourcefire, we make a product called “RNA,” which stands for “Real-Time Network Awareness.” This product, coupled with our IPSs and Defense Center, makes an extremely powerful tool for analyzing “alert traffic.” Let me give you an example.
A hacker attacks your network with an exploit against IIS servers. If you have ever seen something like this in your analyst life, you probably know the attacker will either 1) pre-scan your network for open HTTP ports, or 2) just automate the attack so no pre-scan takes place: just the attack, very quickly.
If you have plain vanilla Snort, you will get an alert for every one of these attempts. Using the “collection” theory, we would also collect all traffic for these connections, and we are able to see which attacks got through the firewall (not which ones didn’t). You can even take this a step further and rebuild the session to see what took place (if anything). This is a lot of data. We’re talking about a pcap file containing not only all of these hundreds of potential connections, but every other connection taking place on the network at the same time.
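The session rebuild that full-content capture makes possible can be sketched roughly like this. To be clear, this is a toy illustration, not Sguil’s actual implementation; the packet records and addresses below are invented for the example.

```python
# Toy sketch of rebuilding one session out of a full-content capture.
# NOT Sguil's implementation; the packet records (5-tuple + payload)
# and all addresses here are invented for illustration.

def rebuild_session(packets, client, server):
    """Concatenate payloads belonging to one client/server conversation."""
    session = []
    for pkt in packets:
        ends = {(pkt["src"], pkt["sport"]), (pkt["dst"], pkt["dport"])}
        if ends == {client, server}:  # packet belongs to this conversation
            session.append(pkt["payload"])
    return b"".join(session)

# Hypothetical capture: the attack connection mixed with unrelated traffic.
capture = [
    {"src": "10.0.0.9", "sport": 31337, "dst": "192.168.1.5", "dport": 80,
     "payload": b"GET /scripts/..%c0%af../cmd.exe HTTP/1.0\r\n\r\n"},
    {"src": "192.168.1.7", "sport": 443, "dst": "10.0.0.2", "dport": 52001,
     "payload": b"unrelated traffic"},
    {"src": "192.168.1.5", "sport": 80, "dst": "10.0.0.9", "dport": 31337,
     "payload": b"HTTP/1.0 404 Not Found\r\n\r\n"},
]

stream = rebuild_session(capture, ("10.0.0.9", 31337), ("192.168.1.5", 80))
```

The point of the sketch: the answer to “did the attack work?” is in there, but only after you filter one conversation out of everything else the network was doing.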
Now, there is nothing wrong with that if:
- You have the hard drive space.
- You have the time.
- Your machines doing the sniffing can keep up.
- You have the personnel to manage all the time, data, and storage.
The problem is that at modern network speeds, given the rate at which a program has to write all of this to disk, something will give. I am not talking about your 500 Mbit/s networks; I’m talking about the majority of the networks I deal with, which run at more than 1 Gbit/s. Whether it’s the hard drive, the memory, or whatever, a buffer will fill up somewhere, and more than likely you are going to drop packets. Again, I’m not saying this is totally a bad idea; I’m just bringing up the cons alongside the pros.
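A quick back-of-the-envelope calculation shows why full-content capture gets painful at these rates. The utilization figure below is my own illustrative assumption, not a number from any particular network:

```python
# Back-of-the-envelope: disk consumed by full-content capture.
# The 30% average utilization is an illustrative assumption.

def capture_bytes_per_day(link_gbps, utilization):
    """Bytes written per day for a link at the given average utilization."""
    bits_per_sec = link_gbps * 1e9 * utilization
    return bits_per_sec / 8 * 86400  # 86400 seconds in a day

# A 1 Gbit/s link averaging 30% utilization:
per_day = capture_bytes_per_day(1, 0.30)
print(f"{per_day / 1e12:.2f} TB/day")  # 3.24 TB/day
```

Roughly 3 TB a day for one moderately busy gigabit link, and that is before you account for sustaining the write rate to disk without dropping packets.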
But let’s look at it a different way. RNA profiles the hosts on your network in real time, both before an attack and during it. RNA knows which machines are running IIS (if any) and which ones aren’t, so it already knows whether you will be affected by the IIS exploit attempt.
When these alerts come back to the DC (Defense Center), the DC correlates the RNA event with the Intrusion Sensor alert, and the “fat rises to the top,” as it were. The DC knows to say, “Hey, this attack affects IIS version 5, and only version 5, on Windows...” This is technology that Sourcefire has invented and patented.
So instead of having to analyze hundreds of alerts and thousands of packets, I know that only “these two machines” over here are running IIS, and the DC has told me to look at those alerts first. Are the other alerts still recorded? Yes, but now I know through the correlation which machines will receive a greater IMPACT from the attack: the two IIS machines. My other Apache boxes aren’t affected at all, so who really cares?
Let’s take it a step further. Say the exploit was against IIS 5.0, but our two machines are running IIS 6.0. (I’m inferring patch level with this example.)
So do we really care? We might like to know there was an attempt, sure, but it doesn’t affect us; we’re not vulnerable to it. Lower the IMPACT and let’s move on to the next alert.
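The correlation step described above can be sketched in a few lines. The host inventory, alert format, and impact levels below are my invention for illustration; this is not Sourcefire’s actual data model or algorithm.

```python
# Rough sketch of alert/host-profile correlation. NOT Sourcefire's actual
# data model: the inventory, alert fields, and impact levels are invented.

# What RNA-style passive profiling might know about the network:
host_profiles = {
    "192.168.1.5": {"service": "IIS", "version": "6.0"},
    "192.168.1.6": {"service": "IIS", "version": "6.0"},
    "192.168.1.7": {"service": "Apache", "version": "2.0"},
}

def impact(alert):
    """Rank an alert by comparing it against the target's known profile."""
    profile = host_profiles.get(alert["dst"])
    if profile is None or profile["service"] != alert["service"]:
        return "low"     # target doesn't even run the attacked service
    if profile["version"] != alert["version"]:
        return "medium"  # right service, wrong version: attempt, not a hit
    return "high"        # service and version match: look at this first

# The IIS 5.0 exploit from the example, aimed at a box running IIS 6.0:
alert = {"dst": "192.168.1.5", "service": "IIS", "version": "5.0"}
print(impact(alert))  # medium
```

The analyst never inspects the stream; the pre-attack profile already answers the “are we vulnerable?” question the full-capture approach answers by reading packets.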
If you were collecting packets using the “Immaculate Collection” theory, you’d have to analyze all of these streams to make sure each IIS/Apache/etc. box returned a 404 or whatever other error codes.
Could we do that with Snort? Yes, of course we could. But if RNA knows our network already, then is it important to us? Or is it just informational at this point?
Take it a step further. Think about the exploits that affect browsers, mail clients, and versions of SSH, Telnet, SNMP, and so on. RNA already knows these services and applications on your network, before the attack even takes place.
A single glance lets us look at these thousands of alerts and say, “Hey, these two machines are running IIS, but we’re not vulnerable to the attack,” in a matter of seconds.
If you’ve ever heard Marty Roesch speak, you’ll know it is his belief that humans basically can’t make the decisions for the IDS. Why don’t we let RNA tune it directly? But that’s for a totally different post, one that Marty has covered on his blog as well.
Of course there are strong points to both sides of the discussion. Share your thoughts in the comments.
® Snort, Daemonlogger, RNA, Defense Center, and Sourcefire are all registered trademarks of Sourcefire, Inc.