2015-04 Logging, Splunk

Logging in Linux

Introduction

Linux systems use the syslog standard as the preferred method for system logs. An informal standard for many years, it was codified in RFC 3164 and then updated in RFC 5424. Various tools have been used to implement the syslog standard; for many years the most common was syslogd, from the sysklogd package. That tool had a number of shortcomings, which led to the development of alternative implementations, including syslog-ng and rsyslog. The latter is used on all of the images provided to the class, though in slightly different versions: CentOS 6.2 uses rsyslog 4.6.2, while Ubuntu 12.04 and Mint 13 use rsyslog 5.8.6.

Programs running in a Linux environment can dispatch a log message using syslog. Each message is assigned two values: a facility, which identifies the type of message, and a severity, which determines its importance. The facilities are

  • auth: Formerly used for security/authorization messages, but deprecated in favor of authpriv.
  • authpriv: Used for security/authorization messages.
  • cron: Used for time-based services, including the clock daemon, cron.
  • daemon: Used for system daemons without separate facility values.
  • ftp: Used for the ftp server.
  • kern: Used solely for kernel messages.
  • local0, local1, …, local7: Available for local system use.
  • lpr: Used for the print subsystem.
  • mail: Used for the mail server.
  • news: Used for the news server (NNTP; see e.g. RFC 977).
  • syslog: Used for messages generated by the syslog server itself.
  • user: Default facility; used for generic messages.
  • uucp: Used for the (now obsolete) Unix to Unix Copy system (UUCP).

The severities (often called priorities) are, in decreasing order of importance:

  • emerg (emergency)
  • alert
  • crit (critical)
  • err (error)
  • warning
  • notice
  • info (informational)
  • debug

Open the configuration file /etc/rsyslog.conf from your CentOS 6.2 machine, and take a look at the section marked rules.

#### RULES ####

# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.*                                                 /dev/console

# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none                /var/log/messages

# The authpriv file has restricted access.
authpriv.*                                              /var/log/secure

# Log all the mail messages in one place.
mail.*                                                  -/var/log/maillog

# Log cron stuff
cron.*                                                  /var/log/cron

# Everybody gets emergency messages
*.emerg                                                 *

# Save news errors of level crit and higher in a special file.
uucp,news.crit                                          /var/log/spooler

# Save boot messages also to boot.log
local7.*                                                /var/log/boot.log

Before we even look at the documentation for rsyslog, can you determine where most log messages are sent? Now do you know why, when we were setting up BIND, we looked through /var/log/messages for error reports?

Though this is how our Linux system was set up, we do not have to keep it this way. Create the file /var/log/testlog; then, based on your reading of the rsyslog.conf file, modify it so that all messages get sent to that file.
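
One rule that would do the job (a sketch; the catch-all selector *.* matches every facility at every priority) is

*.*                                                     /var/log/testlog

Save your changes, and restart the rsyslog service by running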

[root@delphi ~]# service rsyslog restart
Shutting down system logger:                               [  OK  ]
Starting system logger:                                    [  OK  ]

When this is complete, take a look at your file /var/log/testlog; it should contain (at least) the log message saying that rsyslog was restarted.

[root@delphi ~]# tail /var/log/testlog 
Feb  8 12:35:21 delphi kernel: imklog 4.6.2, log source = /proc/kmsg started.
Feb  8 12:35:21 delphi rsyslogd: [origin software="rsyslogd" swVersion="4.6.2" 
x-pid="2846" x-info="http://www.rsyslog.com"] (re)start

Also examine the file /var/log/messages to see if any messages were sent to both files. What conclusions do you draw?

Modify your /etc/rsyslog.conf file so that only messages from local1 of priority err or higher are sent to the file /var/log/testlog. Restart rsyslog, and check that things work as they ought.
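
If you get stuck, one rule that works (a sketch) is

local1.err                                              /var/log/testlog

since a selector of the form facility.priority matches messages of that priority or higher.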

Note that though rsyslog is quite complex, it is also solidly documented. You can take a look at the file /usr/share/doc/rsyslog-4.6.2/manual.html for details.

Sending Custom Log Messages

Take a look at the man page for the tool logger.

[zathras@delphi ~]$ man logger

LOGGER(1)                 BSD General Commands Manual                LOGGER(1)

NAME
     logger - a shell command interface to the syslog(3) system log module

SYNOPSIS
     logger [-isd] [-f file] [-p pri] [-t tag] [-u socket] [message ...]

DESCRIPTION
     Logger makes entries in the system log.  It provides a shell command
     interface to the syslog(3) system log module.

...

Using this tool, we can craft log messages with specified facilities and priorities. Hey, wait a minute. You mean any user can send any log message they want?

[zathras@delphi ~]$ logger -p local1.err 'I can writez logz?'

Yes!

[root@delphi ~]# tail -n1 /var/log/testlog 
Feb  8 12:42:30 delphi zathras: I can writez logz?

If you want even more control, you can also write your own program to interface directly with the logging system. Documentation is provided in the system man pages; take a look at man 3 syslog to see the details.

Consider the following C program

#include <syslog.h>

int main(int argc, char* argv[])
{
   /* The identity, options, and facility used for our messages */
   const char log_ident[] = "Log Testing Program";
   const int log_option = LOG_NDELAY | LOG_PID;
   const int log_facility = LOG_LOCAL1;

   /* Open a connection to the system logger */
   openlog(log_ident, log_option, log_facility);

   /* Send one message with severity crit */
   syslog(LOG_CRIT, "I just sent a critical error!");

   closelog();

   return(0);
}

Compile it and run it in the usual fashion (as a non-root user)

[zathras@delphi ~]$ gcc log.c -o log -Wall -pedantic
[zathras@delphi ~]$ ./log 

Interestingly, though the user zathras can write to the logs, that user cannot read them:

[zathras@delphi ~]$ tail /var/log/messages 
tail: cannot open `/var/log/messages' for reading: Permission denied

However, checking the logs as root shows that the message was sent

[root@delphi ~]# tail -n2 /var/log/messages 
Feb  8 12:42:30 delphi zathras: I can writez logz?
Feb  8 12:49:35 delphi Log Testing Program[3131]: I just sent a critical error!

Suppose we modify the program slightly

#include <syslog.h>

int main(int argc, char* argv[])
{
   /* A forged identity: the message will appear to come from named, PID 31337 */
   const char log_ident[] = "named [31337]";
   const int log_option = LOG_NDELAY;
   const int log_facility = LOG_SYSLOG;
   openlog(log_ident, log_option, log_facility);

   syslog(LOG_CRIT, "I just sent a critical error!");

   closelog();

   return(0);
}

Now when this is run, it will look to the sysadmin as though the named process, with PID 31337, just had a critical error:

[root@delphi ~]# tail -n3 /var/log/messages 
Feb  8 12:42:30 delphi zathras: I can writez logz?
Feb  8 12:49:35 delphi Log Testing Program[3131]: I just sent a critical error!
Feb  8 12:51:46 delphi named [31337]: I just sent a critical error!

Just think of the laughs you can have!

Aggregating Multiple Logs

Let’s learn how to centralize the logs from multiple sources on a single host. Before we learn how to do this, let us consider why. Suppose that you are managing a real network; then you are going to have many different systems. Without a centralized logging system, investigating an error or an intrusion means visiting the logs on each system individually; moreover, if the issue crosses multiple systems (as might be expected from a real intrusion), then you would have multiple files to examine and correlate. Putting the logs from multiple systems in a single place gives us much better insight into our network. Further, if an attacker does compromise a system, one thing that they want to do is to erase their tracks from the system logs. If these are kept on a different system, then the attacker’s job is much more difficult.

In this example, let’s configure an Ubuntu system (10.0.2.3) to send its logs to a CentOS system (10.0.2.13).

All of the different syslog implementations allow for remote logging; however, the structure of the configuration files varies with the particular distribution.

The CentOS 6.2 configuration is probably the simplest, as it is all contained in the single file /etc/rsyslog.conf.

Mint 13 and Ubuntu 12.04 end the file /etc/rsyslog.conf with a directive to include all of the files in the directory /etc/rsyslog.d:

#
# Include all config files in /etc/rsyslog.d/
#
$IncludeConfig /etc/rsyslog.d/*.conf

That directory contains two files: 50-default.conf, which determines where log messages of a given priority and facility are sent, and 20-ufw.conf, which sends all log messages that begin with the text "[UFW " to a separate file. UFW is the firewall configuration tool for Ubuntu machines. Note also that to start, stop, restart, or check the status of a service, the preferred method is to use the service command in the form

zathras@achilles:~$ sudo service rsyslog status
rsyslog start/running, process 876

Mint 13 and Ubuntu 12.04 systems store most log messages in the file /var/log/syslog.

The original standard for syslog only allowed for sending and receiving remote logs via UDP on port 514. To configure a system to forward its logs, instead of providing a file name as the destination, we provide the IP address, preceded by “@”; so if we add the line

*.*     @10.0.2.13

to our syslog configuration file, we are sending all messages, regardless of facility or priority, to the host 10.0.2.13 via UDP on port 514. Since the sending system is an Ubuntu system, add this line to the end of /etc/rsyslog.d/50-default.conf.

We want to test this setup, but because we are now going to be sending network traffic, we will use a packet sniffer like Wireshark to actually see the packets. Fire up Wireshark on the Ubuntu system.

Here is the packet capture for traffic observed after we restarted the log service; note that we used a Wireshark filter so that we only see the traffic that interests us.

[Screenshot: Wireshark packet capture of the syslog traffic]

There is a major issue and a minor one here; let’s start with the minor one. Note the highlighted packet; this is a syslog error. Checking the log, we see

zathras@achilles:~$ tail -n3 /var/log/syslog
Feb  8 17:04:11 achilles rsyslogd: rsyslogd's groupid changed to 103
Feb  8 17:04:11 achilles rsyslogd: rsyslogd's userid changed to 101
Feb  8 17:04:11 achilles rsyslogd-2039: Could not open output pipe 
'/dev/xconsole' [try http://www.rsyslog.com/e/2039 ]

This last error appears because the default Ubuntu configuration writes to /dev/xconsole, which is not present by default. Comment out the lines

daemon.*;mail.*;\
	news.err;\
	*.=debug;*.=info;\
	*.=notice;*.=warn	|/dev/xconsole

in /etc/rsyslog.d/50-default.conf to make it leave us alone.

The bigger issue is that if you examine the destination log server (10.0.2.13), you will see that nothing has appeared in its logs. Of course, this was expected; the packet capture clearly shows the ICMP messages telling 10.0.2.3 that the UDP port on 10.0.2.13 was closed.

To configure the server that will receive the log messages, we must load the required module to receive UDP traffic, then tell it to listen on port 514. In the CentOS 6.2, Mint 13, or Ubuntu 12.04 /etc/rsyslog.conf file, these lines already exist and simply need to be uncommented:

# Provides UDP syslog reception
$ModLoad imudp.so
$UDPServerRun 514
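
On CentOS 6, opening UDP port 514 in the firewall can be done with something like the following (a sketch; your firewall setup may differ):

[root@delphi ~]# iptables -I INPUT -p udp --dport 514 -j ACCEPT
[root@delphi ~]# service iptables save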

Restart the rsyslog service on the listening host, and be sure the proper port is open in the firewall (as sketched above). At this point, you will see all log messages from the first host arrive on the listening host. Done correctly, a message like

zathras@achilles:~$ logger -p local1.err 'This is a log message on achilles 
@ 10.0.2.3'

on the first system (achilles @ 10.0.2.3) will appear on the log server (delphi @ 10.0.2.13) as

[root@delphi ~]# tail -n1 /var/log/messages
Feb  8 17:21:05 achilles zathras: This is a log message on achilles @ 10.0.2.3

There is a downside to receiving remote logs, though. We have already seen how to spoof logs, at least if we can run programs on the system. If the system accepts logs remotely, we can do more.

Try the following on your Kali box, where the IP address is the address of your log server.

root@kali64:~# nc -w1 -u 10.0.2.13 514 <<< 'This should be fun'

What entries (if any) appear in your log?

nc is the netcat command; it is well worth learning what this command can do. If you want to specify the facility and/or priority via netcat, take a look at
http://linux.byexamples.com/archives/412/syslog-sending-log-from-remote-servers-to-syslog-daemon
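
For reference, a raw syslog datagram begins with a priority value <PRI> computed as facility × 8 + severity; local1 is facility 17 and err is severity 3, so local1.err is <139>. A sketch of a spoofed remote message using this encoding (the message text here is illustrative):

root@kali64:~# echo '<139>named[31337]: remote spoofing is fun too' | nc -w1 -u 10.0.2.13 514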

There are a few disadvantages to using UDP for logs, the first of which is that it does not guarantee delivery. We can use rsyslog to send logs via TCP; to do so, add the entry

facility.priority @@ip.address:port

to /etc/rsyslog.conf. Note that we now use "@@" instead of "@" to indicate TCP.
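
For example, a rule like the following (a sketch, using the same log server as above) would send everything via TCP:

*.*     @@10.0.2.13:514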

To enable rsyslog to receive remote logs (TCP), include the lines

# Provides TCP syslog reception
$ModLoad imtcp.so  
$InputTCPServerRun 514

in your rsyslog configuration files. (These lines already exist, commented out, in the provided distributions.) Note that we have selected TCP port 514 to receive the logs, but we could have selected other ports.

Again, open the firewall, and restart rsyslog; check that your log messages arrive.

What does a packet capture of TCP syslog traffic show?

What happens if you try

root@kali64:~# nc -w1 10.0.2.13 514 <<< "This should be more fun"

Remember that the log server is at 10.0.2.13!

Log Rotation

The logs generated cannot be kept indefinitely; as they continue to grow, they will eventually consume all available disk space. The logrotate tool is used on Linux systems to compress, archive, rotate, and delete log files.
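
To give a sense of the configuration format, here is an illustrative stanza (the path and values are invented for this sketch, not taken from the class systems):

# rotate this log weekly, keeping four compressed archives
/var/log/example.log {
    weekly
    rotate 4
    compress
    # do not complain if the log file is missing
    missingok
}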

Read the man page for logrotate.

Take a look at the file /etc/logrotate.conf, and answer the following:

  • How often are the log files rotated?
  • How long are log files kept as archives?
  • Are the log files compressed?
  • Are there any special features for the log rotation of the DNS server logs? Did you look in the logrotate.d subdirectory?
  • What about the web server logs?

Logging in Windows

Event Viewer

The primary tool for log viewing in Windows is the Event Viewer. Start by taking a look at a Windows 2012 system, say a domain controller. To start Event Viewer, choose one of

  • Start → All Apps → Event Viewer
  • Server Manager → Tools → Event Viewer
  • Run the command eventvwr.msc

[Screenshot: Event Viewer on the Windows Server 2012 domain controller]

The server shows three logs common to all Windows systems: the Application log, the Security log, and the System log. There are also two additional logs, the Setup log and the Forwarded Events log.

Take a look at the security log; you are likely to see a number of 4624 Logon events as well as a number of 4634 Logoff events. Microsoft has provided a description of the various security events in KB 947226. Because of the fine-grained nature of Windows logs, these entries do not correspond to a user sitting down at the machine and logging on; rather, these are typically service accounts logging on to perform a task and then logging right back out. You can find out more about each event by double-clicking it in the Event Viewer; here is a typical Windows 4624 Logon event:

[Screenshot: a typical 4624 Logon event]

To give you a better feel for the Windows logs, see what happens in the domain controller log when we do something relatively common. Select a workstation other than your domain controller(s) and log in to that machine as a domain user. Now go back to the security log and see what you find. (Be sure to refresh your log view!) Find the event that corresponds to your account logon; be sure to identify the EventID. You may wish to use the Find entry in the action pane.

Once you have found the logon entry, go ahead and log out of the workstation. Can you find a corresponding logoff entry in the domain controller logs? Are you sure you have the right event? Take a very close look at the time stamps of the logon event and the logoff event, and be sure to compare them to the actual time you logged off. Do you notice anything?

The question of exactly how Windows keeps track of account logons and logon/logoff events is more complex than would appear. Fortunately, Randy Franklin Smith has a nice article available on the EventTracker blog that explains some of these points. Unfortunately, we do not have sufficient class time to delve into these issues in detail.

You may wish to compare the log on and log off process noted above with what occurs when you log on from a Linux machine that you have connected to the domain.

Remote Viewing of Windows Logs

It is possible to use Event Viewer on one computer to view the logs on another Windows computer. From Event Viewer, select Action → Connect to Another Computer. Be sure that you have selected Event Viewer (Local) in the navigation pane, or the option to connect to another computer will not appear in the Action menu. Enter the name, and the account details (if different), for the other machine.

In general, this process will fail.

Did you say fail?

What could the problem be?

If you said “Firewall” pat yourself on the back.

To enable these connections, you must adjust the firewall on the remote computer.

One other fun feature. Windows can sometimes get confused whether you mean a local account or a domain account, even when it is explicitly specified. So, if you try to adjust the firewall settings as the domain user zathras, who is a member of the domain admins group, sometimes authentication will fail, presumably because there is a local user also named zathras. However, if you try to make the changes with the domain user mcole who is also a member of the domain admins group but not a local user, then life is good. I have seen the same issue affecting connections from Event Viewer back to the local system. The moral of the story is that you want to work with a domain admin account that is not zathras.

[Screenshot: adjusting the firewall rules on the remote computer]

Auditing Policies

Windows machines have an auditing policy that is set locally, as well as policies that they can inherit from GPOs, including domain policies as we have already seen.

To find the local auditing policy, navigate Control Panel → System and Security → Administrative Tools → Local Security Policy → Security Settings → Local Policies → Audit Policy.

What is the default local audit policy?

By default, significant auditing policies are set at the domain controller OU level. [Which override the local policies!]

On Windows Server 2008, navigate Start → Administrative Tools → Group Policy Management. What are the default domain level OU group policy settings for auditing?

With Windows Server 2008 and Windows Vista, Microsoft introduced a new hierarchy for organizing audit policies. Rather than the nine broad categories we have seen so far, we can instead manage policies on a much finer level. Unfortunately, Microsoft did not update the GUI tools in these operating systems to view these settings; the way to get at them is directly from the command line. Indeed, run the command

C:\Windows\system32>auditpol /get /category:*

What? “A required privilege is not held by the client”? Remember that, by default, a command prompt does not run with administrator privileges, even when it is started by an administrator. You must right-click the entry in the Start menu, select “Run as Administrator”, and pass through UAC first.

C:\Windows\system32>auditpol /get /category:*
System audit policy
Category/Subcategory                      Setting
System
  Security System Extension               No Auditing
  System Integrity                        Success and Failure
  IPsec Driver                            No Auditing
  Other System Events                     Success and Failure
  Security State Change                   Success
Logon/Logoff
  Logon                                   Success
  Logoff                                  Success
  Account Lockout                         Success
  IPsec Main Mode                         No Auditing
  IPsec Quick Mode                        No Auditing
  IPsec Extended Mode                     No Auditing
  Special Logon                           Success
  Other Logon/Logoff Events               No Auditing
  Network Policy Server                   Success and Failure
  User / Device Claims                    No Auditing
Object Access
  File System                             No Auditing
  Registry                                No Auditing
  Kernel Object                           No Auditing
  SAM                                     No Auditing
  Certification Services                  No Auditing
  Application Generated                   No Auditing
  Handle Manipulation                     No Auditing
  File Share                              No Auditing
  Filtering Platform Packet Drop          No Auditing
  Filtering Platform Connection           No Auditing
  Other Object Access Events              No Auditing
  Detailed File Share                     No Auditing
  Removable Storage                       No Auditing
  Central Policy Staging                  No Auditing
Privilege Use
  Non Sensitive Privilege Use             No Auditing
  Other Privilege Use Events              No Auditing
  Sensitive Privilege Use                 No Auditing
Detailed Tracking
  Process Creation                        No Auditing
  Process Termination                     No Auditing
  DPAPI Activity                          No Auditing
  RPC Events                              No Auditing
Policy Change
  Authentication Policy Change            Success
  Authorization Policy Change             No Auditing
  MPSSVC Rule-Level Policy Change         No Auditing
  Filtering Platform Policy Change        No Auditing
  Other Policy Change Events              No Auditing
  Audit Policy Change                     Success
Account Management
  User Account Management                 Success
  Computer Account Management             No Auditing
  Security Group Management               Success
  Distribution Group Management           No Auditing
  Application Group Management            No Auditing
  Other Account Management Events         No Auditing
DS Access
  Directory Service Changes               No Auditing
  Directory Service Replication           No Auditing
  Detailed Directory Service Replication  No Auditing
  Directory Service Access                No Auditing
Account Logon
  Kerberos Service Ticket Operations      No Auditing
  Other Account Logon Events              No Auditing
  Kerberos Authentication Service         No Auditing
  Credential Validation                   No Auditing

To manage the policies on a box with the command-line tool, start by running

C:\Windows\system32>auditpol /?
Usage: AuditPol command [<sub-command><options>]


Commands (only one command permitted per execution)
  /?               Help (context-sensitive)
  /get             Displays the current audit policy.
  /set             Sets the audit policy.
  /list            Displays selectable policy elements.
  /backup          Saves the audit policy to a file.
  /restore         Restores the audit policy from a file.
  /clear           Clears the audit policy.
  /remove          Removes the per-user audit policy for a user account.
  /resourceSACL    Configure global resource SACLs


Use AuditPol <command> /? for details on each command

to see the command line options. See also
http://technet.microsoft.com/en-us/library/cc836528.aspx

If we want to see the policies in the logon/logoff category, we can run

C:\Windows\system32>auditpol /get /category:Logon/Logoff
System audit policy
Category/Subcategory                      Setting
Logon/Logoff
  Logon                                   Success and Failure
  Logoff                                  Success
  Account Lockout                         Success
  IPsec Main Mode                         No Auditing
  IPsec Quick Mode                        No Auditing
  IPsec Extended Mode                     No Auditing
  Special Logon                           Success
  Other Logon/Logoff Events               No Auditing
  Network Policy Server                   Success and Failure

and if we want to see only the logoff subcategory, we run

C:\Windows\system32>auditpol /get /subcategory:logoff
System audit policy
Category/Subcategory                      Setting
Logon/Logoff
  Logoff                                  Success

If we want to change the settings so that we record both successful and failed attempts to log off, we run

C:\Windows\system32>auditpol /set /subcategory:logoff /failure:enable
The command was successfully executed.

A subsequent check of the policies will show the change:

C:\Windows\system32>auditpol /get /subcategory:logoff
System audit policy
Category/Subcategory                      Setting
Logon/Logoff
  Logoff                                  Success and Failure

Another note: if you want to modify these settings using Group Policy, then you need to do so on a Windows Server 2012 system; the older Windows Server 2008 Group Policy Management Editor does not have these settings. The recommended workaround was to first write a custom script that uses the auditpol tool; then, by using a netlogon share, you can have computers that log on to the domain run that script, and so set the policies to your preferred values. See http://support.microsoft.com/kb/921469 for the details.
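
A minimal sketch of such a script (the subcategories chosen here are only illustrative):

@echo off
rem Illustrative audit policy script, to be run from the netlogon share
auditpol /set /subcategory:"Logoff" /failure:enable
auditpol /set /subcategory:"File System" /success:enable /failure:enable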

Suppose that you want to modify the audit policy on the different machines in your domain. You cannot use Group Policy from a Windows Server 2008 system, as its GUI does not support this new audit policy hierarchy; a Windows Server 2012 system, however, does expose these settings:
[Screenshot: advanced audit policy configuration on a Windows Server 2012 system]

Auditing File Access

Unlike the generic Linux logging system, Windows can also generate log entries when particular files are accessed, modified, or deleted. To illustrate the process, create a file, say one on the Desktop named test.txt. Navigate test.txt (right-click) → Properties → Security → Advanced → Auditing → UAC

[Screenshot: the auditing entries for test.txt]

The window shows the user(s) and the privilege(s) that will be audited. In general you want to be broad here. If you specify that you will only audit file access from the user “Bill”, then no log entry will be generated when the user “Ted” accesses the file.

To illustrate, let us audit any attempt to access, modify, or delete our file, from anyone. Select the Add button; for the object name, select “Everyone”. For the auditing entry, select Full Control, both successful and failed. This means that an audit entry will be generated whenever any of the checked privileges are used by a user, either successfully or not. Apply your changes. Examine the security log; note that there are no entries about your file.

Open your file, modify it, and save the result. Once again examine the security log. Huh? Why are there no entries here; did we do something wrong? Although you set the auditing policy for that file, Windows does not actually pay attention to those settings unless you turn on file-level auditing in your audit policy. Run the command

C:\Windows\system32>auditpol /set /subcategory:"file system" /success:enable 
/failure:enable
The command was successfully executed.

Open your file again, then open your security log. [You may need to refresh your view (use the F5 key) if you still have the log open from above.] At this point, you should see Audit Success entries for the File System related to your audited file.

[Screenshot: Audit Success entries for the File System in the security log]

Time

Once you begin aggregating logs from multiple systems, it is essential that the time stamps from different systems match. If your systems are all on the same Windows domain, then Windows will provide a common time source for all systems. One catch, though, is that the Linux systems may be in the wrong time zone; this is easily fixed.

If the systems are not all joined to the same domain, then you will need to synchronize them all to a common time server. On the Internet this is relatively straightforward, and only slightly more complex in a private network.

Complete details on how to set up a private NTP server, how to configure Linux and Windows clients to synchronize their time, and how to update the time zone for a Linux system are all described in detail in Etudes 04, Private NTP.

Splunk

Splunk is a useful tool to manage the logs for large numbers of systems. Though we have seen how to combine the logs from multiple Linux systems into one location, and how to view the logs of one Windows system on another, we are still left with the problem of extracting useful information from those logs. Splunk is one of the tools that will let us do exactly that.

On CentOS, choose the right rpm (32 bit or 64 bit) and then install as usual:

[root@gaim ~]# rpm -ivh /home/local/CORP/mcole/Desktop/splunk-6.0.1
-189883-linux-2.6-x86_64.rpm 
warning: /home/local/CORP/mcole/Desktop/splunk-6.0.1-189883-linux
-2.6-x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 653fb112: NOKEY
Preparing...                ########################################### [100%]
   1:splunk                 ########################################### [100%]
complete

On Mint or Ubuntu, choose the right deb and install as usual:

valen@hyach:~$ sudo dpkg -i Desktop/splunk-6.0.1-189883-linux-2.6-intel.deb 
Selecting previously unselected package splunk.
(Reading database ... 141581 files and directories currently installed.)
Unpacking splunk (from .../splunk-6.0.1-189883-linux-2.6-intel.deb) ...
Setting up splunk (6.0.1-189883) ...
complete

To start Splunk for the first time, run (as root, or via sudo)

[root@gaim ~]# /opt/splunk/bin/splunk start

It does take some time to start the first time, as it needs to generate certain cryptographic keys.

To configure Splunk to automatically start on boot, run

[root@gaim ~]# /opt/splunk/bin/splunk enable boot-start
Init script installed at /etc/init.d/splunk.
Init script is configured to run at boot.

To install Splunk on a Windows system, select the appropriate installer and launch. Because Splunk runs services, you will be prompted to choose the user that will be used to run those services. If you use a domain account, then your Splunk install can access information about other domain members through Windows methods like WMI. We will use local system accounts for our installation. Although we won’t be able to use WMI, this means that both Linux and Windows systems will need to use (essentially the same) Splunk forwarders to collect data. Also, previous classes using older versions of Splunk had no end of trouble getting WMI to work. For simplicity then, let’s stick with local accounts and forwarders.

Once Splunk starts, you use the tool by pointing your browser at localhost, port 8000. The first time you do so, you will be presented with a screen like the following.

[Screenshot: the Splunk login page]

Be sure to change the password appropriately when prompted.

If you want to use a browser from a machine other than the host itself, you need to open the firewall. The decision to open the firewall to allow connections to the Splunk server is a trade-off between security and convenience. Hey, I wonder if the data passing to and from the server is encrypted. What do you think?

Splunk launches with a default license valid for 60 days and up to 500 MB of data per day. It also comes with a free license that allows it to be used provided certain conditions are met. You must accept the license agreement the first time you run the tool. To change the license (after the system is running) from the trial license to the free license, first start Splunk, and visit the Splunk web page. Navigate through Settings to Licensing.

[Screenshot: the Splunk licensing page]

From the resulting page, select Change License Group, then choose the Free License. You will need to restart Splunk. One thing though- once you change to the Free License, you no longer need to authenticate to gain access to the Administrator page, so if you opened up your firewall to allow remote access to the Splunk web page, you may be in for a real treat. There are some solutions; Marc Wickenden and EyeIS both provide mitigation recommendations via an SSH tunnel.

The first thing we want to do is to enable Splunk to begin recording data. Simply select the Add Data button from the Splunk home page.
[Screenshot: the Splunk home page]

On your Linux system, the first data set we will include will be (local) syslog data. You will need to provide the name of the file that contains your data; remember that we already know how to store our syslog data in arbitrary files, so Splunk cannot auto-detect this for us. On your CentOS system, you probably want to index /var/log/messages, while on the Ubuntu system you probably want /var/log/syslog. Once Splunk gets hold of the syslog file though, it does a good job of detecting the source type. After checking that the previewed data appears to be read correctly, you will be presented with a range of options for how Splunk will handle the data. The defaults for these options are reasonable, but you probably want to look them over before proceeding. Once you save the data source, you can add more data, or begin searching your logs for useful information.
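
As an alternative to the web interface, a file can also be added from the command line; a sketch (you will be prompted for your Splunk credentials):

[root@gaim ~]# /opt/splunk/bin/splunk add monitor /var/log/messages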

This is the essential power of Splunk; you can think of it as Google for your logs. The ability to search through multiple logs from multiple hosts for particular words, users, or events will be incredibly valuable when it comes time to do forensics. On the other hand, this does not happen instantaneously; it will take Splunk a fair amount of time to ingest the log data and index it for searching. Thus, don’t go running off to the search page to start testing out your analysis tools just yet!

Instead, go back to your Windows machines, and set them up to collect the Windows event logs from the system. Remember that as you decide what data to include in Splunk, more data is not always better, especially if it is not relevant. Thus, when choosing which Windows event logs to store in Splunk, ask yourself if it is really necessary to index, for example, the Setup log.

Once indexing has had a chance to catch up though, you can then use the search function. Remember how we used logger to write a log entry? Try it again; then use the search function in Splunk, and you will see it plain as day:
[Screenshot: Splunk search results showing the logger test message]

Splunk Apps

Splunk can be extended with a number of apps; these are available directly from Splunk. One valuable app for us is designed specifically to monitor the health and status of Unix and Linux hosts. If you are running your guest on a network with an Internet connection, you can find it and install it by navigating the Splunk Manager web page, Apps → Find more apps, then selecting Splunk for Unix and Linux.

In the classroom laboratory, select the App menu item from Splunk Manager, then Manage Apps. On that page, select Install App from File and provide it with the app, which is called splunk-app-for-unix-and-linux_501.tgz. These apps come as packaged .tgz files and do not need to be uncompressed first.

The installation of a Splunk app will require a Splunk restart; this can be done from within the Splunk Manager.

Once Splunk is restarted, return to the Splunk Manager web page; from the Apps menu select the Splunk for Unix Add-on. There you can specify the Linux/Unix specific data that you want Splunk to record. When that is complete, you can start the Splunk App for Unix. Your first stop will be the Settings page; here you can accept the defaults. It will take a short while for the data to propagate through to Splunk, but eventually you will be able to get graphical representations of some common data.

Experiment with the data available on your server, both from the log data source you originally specified as well as from the *nix app. Here is a sample search through the logs looking for the string "sshd"; you can see the error we discussed when joining an Ubuntu system to a domain last week.
[Screenshot: Splunk search for "sshd"]

If that is not valuable, consider a search for "mcole su"; here we can see that the user "mcole" attempted to su to root, and that this user is not allowed.
[Screenshot: Splunk search for "mcole su"]
The ability to mine this data, especially when aggregated across many systems, is why Splunk is valuable.

Not only is there an app for Unix/Linux systems, there is also an app for Windows systems. It is installed in the same fashion. Like the Linux/Unix app, the Windows app will also need to be configured when it is first run. Again, as is typical for Splunk, it will take some time to ingest the necessary data; for example, on my Windows system I received a number of notifications saying that the maximum number of concurrent searches had been reached and that new searches would be queued.

Eventually though, the process will finish, and you will be able to search your Windows logs just as easily. Take a look at the Interesting fields section; on my system the "Account_Name" field for example is quite useful:
[Screenshot: searching the Windows logs; note the Account_Name field]

Collecting Data from Multiple Machines

The ability to search the logs on one system is useful, but where Splunk will shine is when we collect all of the logs from all of our systems together on one or more log servers; then we’ll be able to search across our entire network for information.

To do so, we need to set up Splunk Forwarders (which send the data to the central server(s)) and Receivers (the central location(s) that record the results). I have had nothing but bad luck using Windows systems as receivers; after three hours I still could not figure out the problems. [And to those who are thinking, no, it was not the firewall :-)]. With this as background, let’s set up a Linux system as our receiver.

To set up the receiver, visit the Splunk Manager web page for that machine, select Settings, then Forwarding and receiving.
[Screenshot: the Splunk forwarding and receiving settings page]

Choose to Receive data, and select Add New. The receiver can listen on any unused TCP port, though TCP/9997 is the canonical choice. Choose your port(s) and select Save. Be sure to open your firewall to allow the incoming connections.
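
If you prefer the command line, the receiver can also be enabled there; a sketch, using the canonical port:

[root@gaim ~]# /opt/splunk/bin/splunk enable listen 9997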

On the forwarder, start by configuring the defaults; you probably want to save a copy of the indexed data. Then set up the forwarder. You can select either a host name or an IP address as the destination for the forwarded data.
[Screenshot: configuring the Splunk forwarder]
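
There is a command-line equivalent on the forwarder as well; a sketch (the receiver address here is illustrative):

valen@hyach:~$ sudo /opt/splunk/bin/splunk add forward-server 10.0.2.13:9997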

Test out the result, and notice that you can now see the logs from both machines from one location.

It will take some time for the data to be transferred to the receiver. You can expect the first bits of data to arrive within a few minutes, but a complete transfer of all of the data will take longer.

Remember- Splunk can forward to multiple machines, and receive from multiple machines- so you can construct a distributed / redundant Splunk network.

Other Options: Snare

Snare for Windows is a tool that can be used to convert Windows log entries into syslog format and then send them to other hosts via either the syslog protocol or the Snare protocol.

Installation of Snare for Windows proceeds in the usual fashion. I recommend that you install Snare as a local system user, and that you enable web access, at least for localhost, to allow you to more easily configure the tool.

When the installation is complete, use your browser and visit localhost, TCP port 6161. If you specified a password, remember that the user name will be “snare”.

To configure Snare to forward its log messages to another host using the syslog protocol, navigate to Network Configuration and enter the host name and the (UDP) port number that will receive the logs. You must also manually tell Snare to use syslog headers (instead of Snare native), and choose the facility and priority for all of the syslog messages your system will dispatch.
[Screenshot: the Snare network configuration page]

We can also collect Snare and syslog messages on a host, this time using Snare Backlog. It too is installed in the usual fashion. Once running, it listens on both the standard syslog (UDP 514) and the Snare native (UDP 6161) ports for messages. You can then configure either Linux machines (via syslog) or Windows machines (via syslog or Snare native) to send their results to this central server. Be sure you open the proper ports in the firewall!

Here you can see Snare Backlog on a Windows 8 machine receiving messages from the host on which it is running, as well as syslog messages sent by a Linux machine.
[Screenshot: Snare Backlog on a Windows 8 machine receiving local and remote messages]
