
Logging and Syslog Best Practices

In this post, I cover a basic set of best practices for managing logs. Depending on your specific objectives, regulatory requirements and business constraints, a number of additional best practices will likely apply.

  • Forward syslog messages from clients to a secure, central syslog server (see the configuration sketch after this
    list).
  • Enable NTP clock synchronization on all clients and on the syslog server. All systems reporting logs should use
    the same time source so that their timestamps agree; without this, it can be difficult or impossible to accurately
    determine the sequence of events across systems or applications.
  • Group “like sources” into the same log file (e.g., an MTA, SpamAssassin and an A/V scanner on a mail host all
    reporting to one file).
  • Use an automated tool to establish a baseline of your logs and escalate exceptions as appropriate.
  • Review your records retention policy, if applicable, and determine if anything kept in logs falls under that policy. If so, establish retention periods based on the records policy.  Legal requirements for keeping logs vary by jurisdiction and application.
  • The “sweet spot” for log retention appears to be one year. With less than a year of retention, key data may be
    unavailable in the wake of a long-running attack; keeping much more than a year most likely wastes disk space.
  • Include logs and log archives in a standard backup process for disaster recovery.
  • Change read/write permissions on log files so they are not accessible to unprivileged user accounts (also shown
    in the sketch below).
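
As a concrete sketch of the first and last items above, assuming a classic BSD-style syslogd and a central log host named loghost.example.com (a placeholder, not a real host), the client-side setup can be as simple as:

# /etc/syslog.conf on each client: keep a local copy and relay
# everything to the central syslog server (a plain @ means UDP relay
# on classic syslogd; rsyslog and syslog-ng use their own syntax)
*.*                                             /var/log/messages
*.*                                             @loghost.example.com

# Shell commands to tighten permissions so unprivileged accounts
# cannot read or modify the local logs (group names vary by OS)
chmod 640 /var/log/messages
chown root:wheel /var/log/messages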

Have more suggestions for logging best practices? Post them in a comment below.


Determining What To Monitor

Earlier in my career, I was the IT director for a medium-sized enterprise and had responsibility for information security, in addition to networking, server ops, help desk, etc.  I was fortunate to be able to start with a mostly clean slate and had the help of many talented and energetic thinkers.  The company was a manufacturer of network security software, and our flagship product was an intrusion prevention system (IPS), which meant cost was no object when deciding how many IPS engines to place throughout the network.  One of the debates we had was how to respond to attacks coming from the Internet.  We had implemented a very robust and restrictive firewall infrastructure that was very effective at keeping out attackers.  The IPS engines on the Internet side of the firewalls reported a constant barrage of scans, malware propagation attempts and nearly every other type of attack the engines could detect.

Full of good intentions, we prioritized attacks and tried informing abuse contacts for assistance with stopping them.  There were far too many to handle manually, and we quickly found that ISPs and network owners were either overwhelmed with complaints, didn’t care or were complicit in the attacks.  Our well-intentioned efforts accomplished absolutely nothing.  I found that the volume of Internet-based attacks is directly proportional to the number of IP addresses an organization has, and nothing was going to change that.  The only ones who cared about what happened to our network were us, and we needed to focus only on the things we actually had to care about.  We decided those were the attacks that succeeded; the firewalls were effective at their jobs, after all.  We could roughly monitor successful attacks by watching IPS engine traffic on the inside of the firewalls.

As our budding IT shop matured, we entered the age of metrics.  We relied heavily on technology to keep us safe, and so didn’t have many process-related metrics.  To show the value of the security program, we began reporting the number of attack attempts and the number of intrusions.  Because we were a technology company, and a security-oriented one at that, these numbers were very interesting to the management team at first.  We would show reports with millions of attacks each month, and always zero successful intrusions.  At some point it occurred to me that I didn’t, and wasn’t expected to, do anything different whether the number of attack attempts in a month was 100,000 or 100,000,000,000.  It was a truly useless metric.  The very few times there were security incidents, the management team was made aware and kept up to date in real time, so even the successful-intrusion metric was pretty useless.

I learned a lot as I matured alongside that IT organization, and one important lesson was to monitor only things that will cause you to take action based on the resulting data.  Anything else is simply trivia and probably not contributing anything useful to the organization.  Syslog is very similar.  There is generally good value in archiving logs for future reference, but for active monitoring, assess whether the types of events being watched would actually trigger an action when detected.


Designing A Log and Event Monitoring Program

Ultimately, as with all IT security programs, log monitoring programs are designed to address risks to data confidentiality, integrity and availability.  Risks come in many types:

  • Hardware failure
  • System compromise
  • User error
  • Rogue administrator

An organization’s program around log & event monitoring needs to be based on the specific risks that exist in that organization.  Consider these two scenarios:

Scenario 1: The IT department of a large public corporation manages a large number of critical business systems, including systems that contain the company’s financial information, payroll information and client data.  This company cares deeply about the confidentiality, integrity and availability of its applications.

Scenario 2: A small Internet-based retailer of custom soaps is owned by a family.  Dave, a member of the family, is technically savvy and has colocated a server to run the retailer’s web site, which he also manages.  Dave cares just as deeply about the confidentiality, integrity and availability of his company’s web site.


Configuring SUDO for Effective Activity Monitoring Via Syslog

I have discussed in previous posts the importance of administrators using SUDO to provide individual accountability.  SUDO provides command-by-command accounting of the actions administrators perform, with logs sent as standard syslog events that look like this:

Feb  4 19:23:23 bsd sudo:    jerry : TTY=pts/0 ; PWD=/usr/home/jerry ; USER=root ; COMMAND=/bin/ps -x
Feb  4 19:23:34 bsd sudo:    jerry : TTY=pts/0 ; PWD=/usr/home/jerry ; USER=root ; COMMAND=/usr/bin/vi /etc/passwd
Feb  4 19:23:59 bsd sudo:    jerry : TTY=pts/0 ; PWD=/usr/home/jerry ; USER=root ; COMMAND=/usr/bin/tail -100 /var/log/messages

We can see the actions I took above pretty clearly: the user “jerry” ran a number of commands as root, including a potentially concerning one, vi /etc/passwd, which warrants some investigation.

First, we need to be sure that an administrator can’t cover his tracks by deleting logs.  This is best accomplished by streaming the logs to a hardened syslog server, where the administrator doesn’t have the ability to delete them.
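
As a sketch of what that can look like, assuming a traditional syslogd (sudo typically logs via the authpriv facility, though the facility is configurable in sudoers, and loghost.example.com is a placeholder):

# /etc/syslog.conf on the managed server: keep sudo's messages in a
# root-readable local file and relay a copy to the hardened log host
authpriv.*                                      /var/log/auth.log
authpriv.*                                      @loghost.example.com

# In sudoers (edit with visudo), the facility can be pinned explicitly:
#   Defaults syslog=authpriv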


Building A Program To Manage And Monitor Administrators

Monitoring the activities of privileged users or server administrators is becoming a common requirement in many organizations for a few reasons:

  • Compliance with legal or regulatory requirements, such as PCI DSS or HIPAA
  • Outsourcing contracts with clients who require controls to prevent the service provider’s employees from causing
    harm to the client
  • A recent incident in which a trusted employee performed some malicious action

In this realm of managing administrators, there are two primary objectives:

  1. Individual accountability
  2. Proactive monitoring of actions taken

Many administrators hold the opinion that once you allow a person to act as root, all bets are off.  That is true to a large extent, and changing it will require a fundamental shift in thinking for some.  Controls need to be implemented to manage the actions of these privileged users in a manner commensurate with the risk of the systems, applications and data being managed, as sketched below.
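
One concrete control along these lines is sudo, which can grant named administrators full or limited root capability while keeping per-user accountability in syslog.  A minimal sudoers sketch follows; the user names and command paths are illustrative placeholders, not a recommendation for any specific system:

# /etc/sudoers (always edit with visudo)
# Give a named administrator full root capability; every command he
# runs is still logged under his own user name
jerry    ALL=(ALL) ALL

# Restrict an operator to a specific set of commands only
opsuser  ALL=(root) /usr/bin/tail, /sbin/shutdown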


Using Syslog Logs For Validation of Security Policy Compliance

In a previous post, I wrote about the general use of syslog logs as a method of ensuring compliance with policy.  This is a specific example of how one might use syslog to do that.

As IT operations mature, particularly in regulated environments, it is not uncommon for an organization’s security policy to require controls on the use of privileged IDs.  Specifically, the use of the “root” or “administrator” account does not provide individual accountability.  A policy might therefore require the use of sudo or an equivalent technology so that administrators can perform their jobs without losing individual accountability.

But how can you show that the policy is effective?  Analyzing syslog logs is a good way.  If administrators logging into a server as “root” is supposed to be a policy exception, then you simply have to look for cases of it occurring.  An ssh root login on FreeBSD looks like this:

Aug  7 17:22:00 www2 sshd[54036]: Accepted keyboard-interactive/pam for root from 127.0.0.1 port 50496 ssh2

Building a query can be as simple as “grep root /var/log/messages”, or the search can be performed by any number of syslog analyzers.  Of course, root would have the ability to remove the log evidence of his presence, which is a very good reason to relay logs to a central syslog server.  It then becomes imperative that administrators with root access on monitored systems do not also have administrative access to the central syslog server.
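
For example, a slightly more targeted search run on the central syslog server might look like the sketch below (the /var/log/remote path is a placeholder for wherever the relayed logs land):

# Find interactive root SSH logins across all relayed host logs;
# each match should be reconciled against a documented emergency
grep -E 'sshd\[[0-9]+\]: Accepted .* for root from' /var/log/remote/*/messages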

Pulling evidence of root logins out of syslog logs on a central server to which the administrators do not have access is a good control for determining whether administrators are following policy.  This approach allows the administrators to retain the root password but use it only in an emergency.  The policy should require that any root logins found be reconciled with known, documented issues; use of the password without a valid reason is evidence of a policy violation.


Creative Use of System Logs to Ensure Policy Compliance

Organizations that need to minimize the risks associated with managing technology infrastructure implement robust policies on access management, change management and the like.

Having robust and well-understood policies is important and expected of most organizations.  However, regulators such as the FFIEC expect financial institutions to apply detective controls that affirmatively identify policy violations wherever possible.

