I started a new security podcast recently, and in the first episode I covered the security breach at DigiNotar, the Dutch certificate authority. One of the prominent findings in the publicly released forensic report was the unreliability and unavailability of logs: in one case, an administrator erased logs on a server to free up space, and in others, the attacker deleted logs to cover his tracks. Forwarding ALL logs, whether from Windows systems, firewalls, switches, or other devices, to a central logging server with very limited access is key both to forensic analysis after an attack and to proactive alerting that an attack may be underway, giving time to react before data is lost or damaged.
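As a concrete sketch, on Linux systems running rsyslog, forwarding everything to a central collector can be as simple as one line of configuration (the hostname "loghost" is a placeholder):

```
# /etc/rsyslog.conf on each client: forward all facilities and all
# priorities to the central log server ("loghost" is illustrative).
# "@@" forwards over TCP; a single "@" would use classic UDP syslog.
*.* @@loghost:514
```

Traditional syslogd and syslog-ng have equivalent directives; the point is that every system streams its logs off-box as they are generated, before anyone has a chance to erase them locally.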
Earlier in my career, I was the IT director for a medium-sized enterprise and had responsibility for information security, in addition to networking, server ops, help desk, and more. I was fortunate to be able to start with a mostly clean slate and had the help of many talented and energetic thinkers. The company was a manufacturer of network security software, and our flagship product was an intrusion prevention system (IPS), which meant cost wasn't a constraint in deciding how many IPS engines to place throughout the network. One of the debates we had was what to do about, and how to respond to, attacks coming from the Internet. We had implemented a very robust and restrictive firewall infrastructure that was very effective at keeping out attackers. The IPS engines on the Internet side of the firewalls reported a constant barrage of scans, malware propagation attempts and nearly every other type of attack the engines could detect.
Full of good intentions, we prioritized attacks and tried informing abuse contacts for assistance with stopping them. There were far too many to handle manually, and we quickly found that ISPs and network owners were either overwhelmed with complaints, didn't care, or were complicit in the attacks. Our well-intentioned efforts did absolutely nothing. I found that the volume of Internet-based attacks is directly proportional to the number of IP addresses an organization has, and nothing was going to change that. The only ones who cared about what happened to our network were us, and we needed to focus only on the things we truly had to care about. We decided we could only care about attacks that were successful; the firewalls were effective at their jobs, after all. We could roughly monitor successful attacks by looking at IPS engine traffic on the inside of the firewalls.
As our budding IT shop matured, we entered the age of metrics. We relied heavily on technology to keep us safe, and so didn't have many process-related metrics. To show the value of the security program, we began reporting the number of attack attempts and the number of intrusions. Being a technology company, and a security-oriented one at that, these numbers were very interesting to the management team at first. We would show reports with millions of attacks each month, and always zero successful intrusions. At some point it occurred to me: I didn't, nor was I expected to, do anything differently whether the number of attack attempts in a month was 100,000 or 100,000,000,000. It was a truly useless metric. The very few times there were security incidents, the management team was made aware and kept up to date in real time, so even the successful-intrusion metric was pretty useless.
I learned a lot as I matured alongside that IT organization, and one important lesson was this: monitor only things that will cause you to take action based on the resulting data. Anything else is simply trivia and probably not contributing anything useful to the organization. Syslog is very similar. There is generally good value in archiving logs for future reference, but for active monitoring, an assessment should be made of whether the types of events being monitored would actually trigger an action when a specific event is detected.
I have a few servers at a colocation datacenter for running a number of sites, including this one. I have written before about detecting brute force attacks in logs. I have been watching the attacks continue in my logs, and have noticed a few things:
1. The attacks, as before, are coming from many different sources, nearly simultaneously.
2. It’s interesting that the brute-force account and password guessing is so well coordinated – generally I see the same user name tried by multiple hosts sequentially, then the attackers move on to a new name, usually in alphabetical order.
3. The attacks are now coming in via multiple vectors. Previously, the attacks were carried out only over ssh connections, but they are now also using pop3, imap and ftp.
4. The attacks have started trying to intelligently guess the name of accounts on the server, based on the domain name. In my case, I am using cPanel/WHM, where each domain has a shell account, and generally the user name is some derivation of the domain name. Previously, I would only see a dictionary of names being sequentially tried, but now I see the user name “syslog” being tried many times. From the perspective of an attacker, it’s much better to only have one variable to brute force (the password) rather than two (user name and password).
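The patterns above can be spotted programmatically. Here is a minimal sketch, with an illustrative log format and made-up sample lines, that groups failed ssh logins by user name and flags names probed from several distinct source hosts – the signature of coordinated guessing:

```python
import re
from collections import defaultdict

# Illustrative sshd failure pattern; adjust the regex to your daemon's format.
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def coordinated_guesses(lines, min_sources=3):
    """Return user names tried from at least min_sources distinct hosts."""
    sources = defaultdict(set)
    for line in lines:
        m = FAILED.search(line)
        if m:
            user, ip = m.groups()
            sources[user].add(ip)
    # The same name probed from several hosts suggests botnet coordination.
    return {u: ips for u, ips in sources.items() if len(ips) >= min_sources}

sample = [
    "Feb  4 19:01:01 www sshd[100]: Failed password for invalid user aaron from 10.0.0.1 port 4000 ssh2",
    "Feb  4 19:01:02 www sshd[101]: Failed password for invalid user aaron from 10.0.0.2 port 4001 ssh2",
    "Feb  4 19:01:03 www sshd[102]: Failed password for invalid user aaron from 10.0.0.3 port 4002 ssh2",
    "Feb  4 19:01:04 www sshd[103]: Failed password for invalid user abby from 10.0.0.1 port 4003 ssh2",
]
print(coordinated_guesses(sample))
```

Here "aaron" is reported (three source hosts) while "abby" is not, mirroring the sequential, multi-host guessing described above.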
So, what can I infer about the attackers given the data I have seen?
1. There are some number of botnets being used to guess passwords of accounts, and the zombies are being very well coordinated by some command and control infrastructure.
2. The attackers are trying very hard to infect web servers. Once an attacker has access to modify a site, he can do a number of things, from sending spam to integrating browser exploit code into the normal site(s). The latter is particularly interesting, since there is substantial money to be made from installing adware, spyware and other malware on desktops.
What can be done to counter these threats?
1. Enforce minimum password complexity (an 8-character minimum is a good start)
2. Choose user names that can’t be directly linked to a domain. For instance, the user name “syslog” would be easily guessable for the domain www.syslog.org. A better name might be “syslog22”.
3. If unique user names are used as described in #2 above, logs can be monitored proactively for instances where the real account name is being tried unsuccessfully. If there is no externally discernible way to identify the user name, appearances of such entries in the logs could indicate that the system has been compromised in some other manner.
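Point #3 can be automated with a simple check. This sketch assumes the real account name ("syslog22" here, following the example in #2) and an illustrative log format; any failed attempt against that name warrants an alert:

```python
import re

# Hypothetical non-guessable account name chosen per tip #2.
REAL_USER = "syslog22"
# Illustrative sshd failure pattern; adjust to your daemon's format.
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def suspicious_attempts(lines, real_user=REAL_USER):
    """Return (source IP, log line) pairs where the real account was probed."""
    hits = []
    for line in lines:
        m = FAILED.search(line)
        if m and m.group(1) == real_user:
            hits.append((m.group(2), line))
    return hits

sample = [
    "Feb  5 02:10:11 www sshd[200]: Failed password for invalid user syslog from 192.0.2.7 port 4100 ssh2",
    "Feb  5 02:10:12 www sshd[201]: Failed password for syslog22 from 192.0.2.9 port 4101 ssh2",
]
print(suspicious_attempts(sample))
```

Only the second sample line is flagged: guessing "syslog" from the domain name is expected background noise, but someone trying the real "syslog22" account knows something they shouldn't.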
Ultimately, as with all IT security programs, log monitoring programs are designed to address risks to data confidentiality, integrity and availability. Risks come in many types:
- Hardware failure
- System compromise
- User error
- Rogue administrator
An organization’s program around log & event monitoring needs to be based on the specific risks that exist in that organization. Consider these two scenarios:
Scenario 1: The IT department of a large public corporation manages a large number of critical business systems, including systems that contain the company’s financial information, payroll information and client data. This company cares deeply about the confidentiality, integrity and availability of its applications.
Scenario 2: A small Internet-based retailer of custom soaps is owned by a family. Dave, a member of the family that owns the business, is technically savvy and has colocated a server to run the retailer’s web site. Dave manages the server and the web site. Dave also cares deeply about the confidentiality, integrity and availability of his company’s web site.
I have discussed in previous posts the importance of administrators using SUDO to provide individual accountability. SUDO provides command-by-command accounting of actions performed by administrators, with logs sent as standard syslog events looking like this:
Feb 4 19:23:23 bsd sudo: jerry : TTY=pts/0 ; PWD=/usr/home/jerry ; USER=root ; COMMAND=/bin/ps -x
Feb 4 19:23:34 bsd sudo: jerry : TTY=pts/0 ; PWD=/usr/home/jerry ; USER=root ; COMMAND=/usr/bin/vi /etc/passwd
Feb 4 19:23:59 bsd sudo: jerry : TTY=pts/0 ; PWD=/usr/home/jerry ; USER=root ; COMMAND=/usr/bin/tail -100 /var/log/messages
The log entries above show pretty clearly what happened: the user “jerry” performed a number of actions, including one that is potentially concerning – editing /etc/passwd with vi. That action requires some investigation.
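A watch for this kind of entry can be scripted. The sketch below parses sudo's syslog lines (the format matches the examples above) and flags commands that reference sensitive files; the watch list is illustrative:

```python
import re

# Matches the standard sudo syslog line: "sudo:  user : ... COMMAND=..."
SUDO = re.compile(r"sudo:\s+(\S+) : .*COMMAND=(.+)$")
# Illustrative watch list of files whose modification warrants review.
SENSITIVE = ("/etc/passwd", "/etc/shadow", "/etc/sudoers")

def flag_sensitive(lines):
    """Return (user, command) pairs where the command touches a watched file."""
    flagged = []
    for line in lines:
        m = SUDO.search(line)
        if m:
            user, command = m.groups()
            if any(path in command for path in SENSITIVE):
                flagged.append((user, command))
    return flagged

logs = [
    "Feb  4 19:23:23 bsd sudo:    jerry : TTY=pts/0 ; PWD=/usr/home/jerry ; USER=root ; COMMAND=/bin/ps -x",
    "Feb  4 19:23:34 bsd sudo:    jerry : TTY=pts/0 ; PWD=/usr/home/jerry ; USER=root ; COMMAND=/usr/bin/vi /etc/passwd",
]
print(flag_sensitive(logs))  # [('jerry', '/usr/bin/vi /etc/passwd')]
```

The `ps` invocation passes silently; only the edit of /etc/passwd is surfaced for investigation.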
First, we need to be sure that an administrator can’t cover his tracks by deleting logs. This is best accomplished by streaming the logs to a hardened syslog server, where the administrator doesn’t have the ability to delete logs.
In a previous post, I wrote about the general use of syslog logs as a method of ensuring compliance with policy. This is a specific example of how one might use syslog to do that.
As IT operations mature, particularly in regulated environments, it is not uncommon for an organization’s security policy to require controls on the use of privileged ID’s. Specifically, the use of the “root” or “administrator” account does not provide individual accountability. A policy requirement might be to require the use of sudo or equivalent technology to allow administrators to perform their jobs without losing individual accountability.
But how can you show that the policy is effective? Analyzing syslog logs is a good way. If it is a policy exception for an administrator to log into a server as “root”, then you simply have to look for cases of that occurring. An ssh root login on FreeBSD looks like this:
Aug 7 17:22:00 www2 sshd: Accepted keyboard-interactive/pam for root from 127.0.0.1 port 50496 ssh2
Building a query can be as simple as “grep root /var/log/messages”, or can be performed by any number of syslog analyzers. Of course, root would have the ability to remove the log evidence of his presence, which is a very good reason to relay logs to a central syslog server. It then becomes imperative that administrators with access to root accounts on systems do not also have administrative access to the central syslog server.
Pulling evidence of root logins out of syslog logs from a central server to which administrators do not have access is a good control for determining whether administrators are following policy requirements. This solution allows the administrators to retain the root password, but only use it in the event of an emergency. The policy should require reconciliation of any root logins found against known, documented issues. Use of the password without a valid reason will show up as evidence of a policy violation.
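A reconciliation report can start from a script like this sketch, which extracts (timestamp, host, source IP) for every root login found in the centralized logs. The regex matches the FreeBSD sshd line shown above and would need adjusting for other daemons:

```python
import re

# Matches e.g. "Aug 7 17:22:00 www2 sshd: Accepted keyboard-interactive/pam
# for root from 127.0.0.1 port 50496 ssh2" (FreeBSD sshd format).
ROOT_LOGIN = re.compile(
    r"(\w{3}\s+\d+ [\d:]+) (\S+) sshd.*Accepted \S+ for root from (\S+)"
)

def root_logins(lines):
    """Return (timestamp, host, source IP) for each root login line."""
    return [m.groups() for line in lines if (m := ROOT_LOGIN.search(line))]

logs = [
    "Aug 7 17:22:00 www2 sshd: Accepted keyboard-interactive/pam for root from 127.0.0.1 port 50496 ssh2",
]
print(root_logins(logs))
```

Each tuple in the output is then matched against the documented emergency changes; anything left over is a policy exception to follow up on.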
An obvious weakness of the syslog network protocol is the ease of spoofing messages into a central syslog server. The default use of UDP as a transport and lack of any sort of authentication, in fact, make it trivial to spoof any part of a syslog message.
The most concerning issue with spoofing is faking the sending host. An attacker can create a lot of chaos by stuffing log files with bogus errors, creating a denial of service potential or an opportunity for the attacker to distract administrators with false alarms while an attack takes place.
Alternatively, syslog can be tunneled over stunnel, as described here.
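For reference, a minimal stunnel setup along those lines might look like the following. All hostnames, ports and paths are illustrative, and note that stunnel carries TCP, so both syslog daemons must be configured to send and receive over TCP on the local ports shown:

```
# client side (/etc/stunnel/stunnel.conf): wrap locally generated
# syslog TCP traffic and forward it over TLS to the log server
client = yes
[syslog-forward]
accept  = 127.0.0.1:5140
connect = loghost.example.com:6514

# server side: terminate TLS and hand messages to the local syslog daemon
cert = /etc/stunnel/syslog.pem
[syslog-receive]
accept  = 6514
connect = 127.0.0.1:514
```

Because the tunnel authenticates and encrypts the transport, spoofed UDP datagrams aimed at the collector never enter the log stream.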
Maintaining a reliable and secure repository of logs is important for many reasons: establishing a forensic trail of evidence in the case of fraud or attack, and enabling event correlation across many devices, among others. Particularly in regulated industries, management should enact controls that prevent security, application and system logs from being tampered with.
Many organizations choose to consolidate their logs onto a centralized syslog server. Many devices and just about all UNIX-like operating systems (Linux, FreeBSD, NetBSD, OpenBSD, Solaris, AIX) support syslog natively. Windows-based systems require a tool to convert event logs to syslog.
Syslog is a simple protocol, and it is easy to wrap some very effective security around it. The goal is to remove as many opportunities as practical for the central syslog server to be compromised. There are four aspects to hardening a syslog server that we’ll cover:
- The operating system
- The network
- The application
- The users and administrators
I was just catching up on my reading on Technorati and came across this article that details the ways attackers can cover their tracks upon compromising a Windows server. This article should serve as a warning: if your logs are not moved off to a separate server, you will lose visibility and key evidence in the event of a successful attack. This applies to any type of system, whether Windows, Linux, BSD or any other OS.
I strongly encourage the practice of centralizing logs on a hardened log server. For Windows, there are a number of good applications that will export Windows Event Logs to syslog. I recently took a look at logging Windows Events to a syslog server using Snare.
It is important to note that in the event of a successful compromise, the attacker will likely still disable logging and auditing, which will cause the stream of logs to the syslog server to cease. The difference, though, is that the events which were captured during the attack remain on the log server, despite the attacker having deleted the local logs. In the ideal case, the attacker does not disable logging and auditing first, opting to clear the event logs later in the attack, providing more evidence in the centralized logs of what was accessed or modified by the attacker.
There are now a bunch of commercial and open source agents that can run on a Windows system to take in Windows Event Logs and send them off to a syslog server. We’ll be looking at the Snare agent in this post.
As of this writing, Snare is compatible with Windows NT, 2000, XP, 2003 and Vista. There is also an agent available for 64 bit Windows versions.
For my test, I am installing on a Windows XP system. Installation is quite straightforward. There are MSI and scripted installers available on the Snare web site for large-scale deployments.
The recommended installation has Snare take control of the Event Log configuration, in order to synchronize the configurable logging “Objectives” in Snare with the Event Log settings.