Monday, April 21, 2008

Introduction to Intrusion Detection (BASICS)

There is something even worse than having your computer hacked and abused for some malicious purpose: getting hacked and never finding out about it. There is no absolute protection against an intrusion. Someone might discover a new vulnerability and exploit it on your servers before it is publicly known that the vulnerability exists. This could happen even if you have applied all current security patches: if the vulnerability is not generally known, there is no patch to install against it. In general, however, an intruder leaves some traces behind. Making it hard for the intruder to obscure those traces, and interpreting any traces you find, is what intrusion detection is about.



The main objectives of this blog post are:

1. Log Files and Their Evaluation.
2. Host-Based Intrusion Detection.
3. Network-Based Intrusion Detection.
4. Incident Response.

Log Files and Their Evaluation

Log files are a valuable tool for detecting intrusions. However, once someone has taken over a computer and become root, she can manipulate the log files to obscure any traces of her activity. Manipulating log files can consist of deleting entries or trying to hide the relevant information by filling the log files with huge amounts of irrelevant entries. One way to prevent the manipulation of log files by deletion of entries is to not only log on the host itself, but to send the log information across the network to another host.
The intruder could stop the sending of logging information to the log host, but the lack of entries would alert you, as would any suspicious ones. Depending on the security needs, the log host can even write the logs onto a worm (write once, read many) medium, making it impossible to change them even if the log host is compromised. Another approach is to have a log host without any network connection. Log messages in this case would be transferred using a serial connection. This setup would make it impossible for an intruder to access the log host via the network.

But even if the incident is logged, you still have to become aware of the relevant log entries. There are tools that help filter relevant log entries and bring them to your attention. This objective explains how to:
■ Log to a Remote Host
■ Evaluate Log Files and Run Checks

Log to a Remote Host

The syslog-ng daemon used to log system messages on SUSE Linux Enterprise Server 10 can send messages to and receive messages from other computers. The receiving computer is sometimes referred to as the loghost. Configuration changes are necessary on both sides:
■ Client Side Configuration of Syslog-ng
■ Server Side Configuration of Syslog-ng


Client Side Configuration of Syslog-ng

Logging to other computers is configured in /etc/syslog-ng/syslog-ng.conf.in, as in the following example:

destination tologhost { udp("10.0.0.254" port(514)); };
log { source(src); destination(tologhost); };

The first line defines a destination tologhost, with protocol, IP address (you could also use the host name), and port. The second line configures logging to this destination. Because no filter is included, all messages from the source src get sent to the destination tologhost. After editing the file /etc/syslog-ng/syslog-ng.conf.in, run SuSEconfig --module syslog-ng to transfer the changes to /etc/syslog-ng/syslog-ng.conf, which is the actual configuration file used by syslog-ng.
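If you do not want to forward everything, syslog-ng filters can restrict what reaches the loghost. The following is a small sketch, reusing the source src and the destination tologhost from above, that forwards only authentication-related messages:

```
filter f_auth { facility(auth, authpriv); };
log { source(src); filter(f_auth); destination(tologhost); };
```

As with any change to syslog-ng.conf.in, run SuSEconfig --module syslog-ng afterwards to regenerate the actual configuration file.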

Server Side Configuration of Syslog-ng

To receive messages, syslog-ng has to listen on a port, usually port 514. Unlike syslogd, which only supports UDP connections, syslog-ng also supports TCP.

To bind to a port, you have to add a line within a source section. You can either add it to the existing source src section or create a new one, as in the following:

source network {
udp(ip("10.0.0.254") port(514));
};

Using 0.0.0.0 as the IP address causes syslog-ng to bind to all available interfaces.
Then a destination and a log entry are required:

destination digiair { file("/var/log/$HOST"); };
log { source(network); destination(digiair); };

The first line defines the destination digiair; each host’s log entries get written to a file with the hostname as file name. The log entry directs messages from the source network to the destination digiair.
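Because syslog-ng also supports TCP, the same setup can be sketched over TCP, which, unlike UDP, detects a broken connection (the messages remain unencrypted, though):

```
# On the loghost: accept TCP connections on port 514
source network { tcp(ip("10.0.0.254") port(514)); };

# On the client: send to the loghost via TCP instead of UDP
destination tologhost { tcp("10.0.0.254" port(514)); };
```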

Evaluate Log Files and Run Checks

Depending on the activity on the computer, log files can grow by several megabytes each day. The problem you are faced with is how to find any suspicious entries within this huge amount of data. Without specific tools, it is very difficult to spot unusual activities or security violations. Several tools help to extract relevant entries from log files and other sources. There are tools to

■ Extract Information from Log Files
■ Run Security Checks

You can also write scripts of your own that alert you when certain entries appear in a log file.
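As a minimal sketch of such a script: the following counts failed SSH logins in a log file and mails a summary to root. The log path, the match pattern, and the use of the mail command are assumptions; adapt them to your environment.

```shell
#!/bin/sh
# Minimal log watchdog (sketch): count failed SSH logins and
# alert root by mail if any are found. Path and pattern are
# assumptions -- adjust them for your system.
LOGFILE="${1:-/var/log/messages}"
PATTERN="Failed password"

# grep -c prints the number of matching lines (0 if none)
HITS=$(grep -c "$PATTERN" "$LOGFILE" 2>/dev/null)

if [ "${HITS:-0}" -gt 0 ] && command -v mail >/dev/null 2>&1; then
    echo "$HITS failed login attempts found in $LOGFILE" |
        mail -s "Login failures on $(hostname)" root
fi
```

Called from cron every few minutes, a script like this turns a passive log file into an active alarm.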

Extract Information from Log Files

The following introduces two tools used to extract relevant information from log files:

■ logcheck
■ logsurfer

logcheck :-

logcheck parses system logs and generates email reports based on anomalies. In the beginning, logcheck produces long reports. You have to modify the configuration of logcheck so that entries that are harmless do not turn up in the report anymore. After that initial phase, the reports mailed by logcheck should contain only relevant information on security violations and unusual activities. logcheck should be called regularly via cron to parse log files. Because logcheck remembers the point up to which a log file was scanned previously, only the new section is scanned on the next call.

Logcheck is not included with SUSE Linux Enterprise Server 10. You can
get it at http://logcheck.org/.
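logcheck is typically driven from cron. A minimal sketch of an hourly entry follows; the install path and the dedicated logcheck user are assumptions, so adapt them to where and how you installed the tool:

```
# /etc/cron.d/logcheck (sketch): run once per hour
2 * * * *   logcheck   /usr/sbin/logcheck
```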

logsurfer

The disadvantage of parsing log files line by line is that each line is independent of all other lines. However, you might want additional information when a certain entry is found in the log file. logsurfer offers this functionality with contexts: Several lines matching a pattern can be stored in memory as a context. Depending on further patterns in the log file, such a context could be mailed to you for your inspection, or some other action, such as starting a program, could be triggered. The context could be deleted again if this pattern is not found within a certain time or number of lines. It is also possible to dynamically create or delete matching rules, depending on entries in the log file. Because logsurfer can be configured in great detail, the configuration is not trivial. However, the advantage is that you can configure precisely what should happen under what circumstances.

Logsurfer is not included with SUSE Linux Enterprise Server 10. It can be found at http://www.cert.dfn.de/eng/logsurf/.

Run Security Checks

In addition to checking configuration files, you can run other checks to find out about system configurations that could constitute a security risk. Automated checks usually consist of scripts that are called at regular intervals and inform you, for instance by email, of what was found. While such scripts can also detect a system compromise under certain circumstances, be aware that a successful intruder can modify the scripts themselves to avoid detection.

The following are described:
■ seccheck
■ Custom Scripts
■ Monitor Login Activity from the Command Line

seccheck

The package seccheck offers a series of scripts that check the
computer regularly (daily, weekly, and monthly) and send reports to
you. Items checked by the scripts are

■ Kernel modules loaded
■ SUID files
■ SGID files
■ Bound sockets
■ Users with accounts who never logged in
■ Weak passwords
An email is sent to root detailing what was found.

Custom Scripts
There is no limit to what you can check or have sent to you using
shell or Perl scripts.
Some possibilities are
■ Output of last
■ Output of df
■ Output of netstat
■ Output of ps
■ /etc/passwd to check if an account other than root has UID 0.
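The last item on the list, for instance, can be checked with a single awk command; this is a sketch, and any output it produces deserves immediate investigation:

```shell
# List accounts with UID 0 other than root (field 3 of
# /etc/passwd is the numeric UID); a second UID-0 account
# is a classic sign of a planted backdoor.
awk -F: '$3 == 0 && $1 != "root" { print $1 }' /etc/passwd
```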

Monitor Login Activity from the Command Line

Monitoring tasks include evaluating login activity for signs of a security breach, such as multiple failed logins. To monitor login activity, you can use the following commands:

■ who. This command shows who is currently logged in to the system, along with information such as the login time. You can use options such as -H (display column headings), -r (display the current runlevel), and -a (display the information provided by most other options).

For example, entering who -H returns information similar to the
following:

da10:~ # who -H
NAME     LINE    TIME          COMMENT
root     0       Aug 23 05:41  (console)
geeko    pts/2   Aug 24 02:32  (10.0.0.50)
da10:~ #


■ w. This command displays information about the users currently on the computer and the processes they are running. The first line includes information such as the current time, how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes.

Below the first line is an entry for each user that displays the login name, the tty name, the remote host, login time, idle time, JCPU, PCPU, and the command line of the user’s current
process. The JCPU time is the time used by all processes attached to the tty. It does not include past background jobs but does include currently running background jobs.

The PCPU time is the time used by the current process, the one named in the WHAT field.

You can use options such as -h (don’t display the header), -s (don’t display the login time, JCPU, and PCPU), and -V (display version information). For example, entering w returns information similar to the following:

da10:~ # w
USER     TTY    LOGIN@  IDLE   JCPU   PCPU   WHAT
root     0      Mon05   ?xdm?  1:48   0.02s  -0
geeko    pts/2  02:32   0.00s  0.10s  0.02s  ssh: geeko [priv]
da10:~ #

You could also use w within scripts.
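As a sketch of such a script: the following uses who to warn when the number of interactive sessions exceeds a threshold (the limit of 5 is an arbitrary assumption; pick a value that matches normal activity on your host).

```shell
#!/bin/sh
# Warn when more than MAX interactive sessions are open;
# each line of who's output is one login session.
MAX=5
SESSIONS=$(who | wc -l)

if [ "$SESSIONS" -gt "$MAX" ]; then
    echo "Warning: $SESSIONS users logged in on $(hostname)"
fi
```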

■ finger. This command displays information about local and
remote system users. By default, the following information is
displayed about each user currently logged in to the local host:
❑ Login name
❑ User's full name
❑ Associated terminal name
❑ Idle time
❑ Login time and location
You can use options such as -l (long format) and -s (short
format).
For example, entering finger -s returns information similar to
the following:

da10:~ # finger -s
Login   Name                Tty    Idle  Login Time  Where
geeko   The SUSE Chameleon  pts/2  -     Tue 02:32   10.0.0.50
root    root                0      54d   Mon 05:41   console
da10:~ #

■ last. This command displays a list of the last logged-in users. last searches back through the file /var/log/wtmp (or the file designated by the -f option) and displays a list of all users logged in (and out) since the file was created.
You can specify names of users and ttys to show only information for those entries. You can use options such as -num (where num is the number of lines to display), -a (display the hostname in the last column), and -x (display system shutdown entries and runlevel changes).

For example, entering last -ax returns information similar to the following:

da10:~ # last -ax
root     pts/0       Thu May  4 12:00   still logged in    da10.digitalairlines.com
runlevel (to lvl 5) Thu May 4 11:45 - 12:03 (00:17) 2.6.16.11-7-smp
reboot system boot Thu May 4 11:45 (00:17) 2.6.16.11-7-smp
shutdown system down Thu May 4 10:26 - 12:03 (01:37) 2.6.16.11-7-smp
runlevel (to lvl 0) Thu May 4 10:26 - 10:26 (00:00) 2.6.16.11-7-smp
...
wtmp begins Tue May 2 12:20:52 2006
da10:~ #

■ lastlog. This command formats and prints the contents of the last login log file (/var/log/lastlog). The login name, port, and last login time are displayed. Entering the command without options displays the entries sorted by numerical ID.
You can use options such as -u login_name (display information for the designated user only) and -h (display a one-line help message). If a user has never logged in, the message **Never logged in** is displayed instead of the port and time.

For example, entering lastlog returns information similar to the following:

da10:~ # lastlog
Username Port Latest
at **Never logged in**
bin **Never logged in**
...
ntp **Never logged in**
postfix **Never logged in**
root pts/0 Thu May 4 12:00:45 +0200 2006
sshd **Never logged in**
suse-ncc **Never logged in**
uucp **Never logged in**
wwwrun **Never logged in**
geeko :0 Wed May 3 09:14:55 +0200 2006
squid **Never logged in**
da10:~ #

■ faillog. This command formats and displays the contents of the failure log (/var/log/faillog) and maintains failure counts and limits. You can use options such as -u login_name (display information for the designated user only) and -p (display in UID order).
faillog only lists users with no successful login since their last failure. To list a user who has had a successful login since the last failure, you must explicitly name the user with the -u option.

Entering faillog returns information similar to the following:

da10:~ # faillog
Login Failures Maximum Latest On
geeko 1 3 03/03/06 13:33:25 +0100 /dev/tty2
da10:~ #

The faillog functionality has to be enabled by adding the module pam_tally.so to the respective file in /etc/pam.d/, for instance /etc/pam.d/login:

#%PAM-1.0
auth required pam_securetty.so
auth required pam_tally.so no_magic_root per_user
auth include common-auth
auth required pam_nologin.so
account required pam_tally.so no_magic_root

The rest of the file does not need to be changed.

Reviewing files such as /var/log/messages also gives you information about
login activity.

To Be Continued ...

DOUBTS and COMMENTS are WELCOME :)
