Friday, May 29, 2009

Conky is Cool

Conky is a tool that displays performance metrics of a Linux box in a window or right on the desktop background. If you’ve used Sysinternals’ BGInfo it’s kinda like that, but better because it updates in real time. It’s very configurable, but usually looks something like this:

[screenshot: Conky running on the desktop]

That’s a bit hard to read so here is an example of the kind of data it can provide:

[screenshot: close-up of Conky’s output]

The configuration for this is pretty easy on CentOS. Here’s how I did it:

yum install libX11-devel libXext-devel libXdamage-devel libXft-devel glib2-devel

* watch the wrap…that’s all 1 line

Then download conky here. Extract it somewhere like /usr/bin or ~ and run:

  • ./configure
  • make
  • make install

Now you need to make a file named .conkyrc and drop it in your home directory. I used one I found online, but there are a ton of them available here, with screenshots showing what they look like.

To start it under Gnome, open a command line and type conky. If it bothers you that it’s tied to the terminal window, add this script to your machine (save it as /usr/bin/startconky.sh and chmod it to 755) and put a link to it in your panel or on your desktop.

#!/bin/sh
# by: ??
# click to start, click to stop

if /sbin/pidof conky | grep '[0-9]' > /dev/null
then
    exec killall conky
else
    sleep 1
    conky
    exit
fi


*Note…script blatantly stolen from here.


Now when you click on the conky icon in your panel it will start and stop conky.


Pretty cool stuff.




Friday, May 22, 2009

How I Backup MySQL

I’m documenting this mostly for my own benefit, but I figured it may be of use to others. This is how I back up my MySQL servers.

  1. Create a backup directory. I use /backup
  2. Verify mysqldump exists under /usr/bin
  3. Verify you have a user account with rights in MySQL to the database you want to back up. In this example I will back up a database called bluedb.  For this demo I’ll use the username “Joe” and the password “Shmoe”
  4. Create a script directory under /backup
  5. Create the backup script. Here’s mine (largely written by my friend Jason):

    #!/bin/sh

    BACKUP_DIR=/backup
    DUMP=/usr/bin/mysqldump
    DATE=`date +%Y%m%d`

    # DATABASE INFO
    DB=bluedb

    # First we will backup the structure of the database
    ${DUMP} --user=Joe --password=Shmoe --no-data ${DB} > ${BACKUP_DIR}/${DATE}_${DB}_backup_structure.sql

    # Now we will backup the database itself
    ${DUMP} --user=Joe --password=Shmoe --add-drop-table ${DB} > ${BACKUP_DIR}/${DATE}_${DB}_backup.sql

    # Now we will remove files that are older than 3 days
    find ${BACKUP_DIR} -type f -mtime +3 -exec rm {} \;


    *Note: There is some serious line wrapping going on above.

  6. Name the script something like mysqlbackup.sh and drop it in /backup/script
  7. Change the rights on mysqlbackup.sh so it can execute (chmod 755 mysqlbackup.sh)
  8. Now you can test it by running:  /backup/script/mysqlbackup.sh
  9. Next I generally edit my cron jobs (crontab -e) and add the line   0 0 * * * /backup/script/mysqlbackup.sh

That’s it. The script dumps both the data and the structure of the database, and it keeps 3 days’ worth of backups.  From here my main backup script for the box picks these up during its normal daily file-level backup.
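The retention step at the end can be sanity-checked without touching MySQL. Here’s a throwaway sketch (the file names are just dummies) that fakes a backup directory and applies the same find rule the script uses:

```shell
#!/bin/sh
# Simulate the backup directory with dummy dump files,
# then apply the same retention rule as the backup script.
BACKUP_DIR=$(mktemp -d)
DATE=$(date +%Y%m%d)

# A "fresh" dump from today and a "stale" one from 5 days ago
touch "${BACKUP_DIR}/${DATE}_bluedb_backup.sql"
touch -d "5 days ago" "${BACKUP_DIR}/old_bluedb_backup.sql"

# Same find expression the backup script uses:
# remove anything last modified more than 3 days ago
find "${BACKUP_DIR}" -type f -mtime +3 -exec rm {} \;

ls "${BACKUP_DIR}"
```

Only today’s dump should survive the find, which is exactly the behavior you want from the nightly run.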

Enjoy the backup goodness!!!

 


Wednesday, May 20, 2009

Data Domain is the Bee’s Knees

Last year, when I was budgeting for 2009, I decided to take a leap and move our online backups from regular disk storage (a SATABeast) over to a compressed and deduped device.  After looking at a few options (namely Exagrid and Data Domain) I chose Data Domain, basically because the landing zone concept of Exagrid gets under my skin.  So far I’ve been very impressed with the Data Domain system.  It’s behaving better than I had expected.  Here’s a graph from one of our sites showing how well it is working:
[graph: raw vs. compressed vs. deduplicated backup sizes]

So here we see that the raw backup (the red line) is about 15.6TB.  After compression (the blue line) the data takes up about 10.6TB.  After deduplication (the green line) it’s only 2.8TB.  As I’m replicating this across the country it sure makes life better.  So yes, I love my Data Domain system.

Tuesday, May 19, 2009

Doing more with less

If you are like me, a Windows convert, you are probably popping open nano, emacs or some other command line editor a lot as you look through configuration files.  Yeah, I can use vi when I have to, but I’m happier and more productive in a full screen text editor.  Anyhoo, I’ve used the “more” command in Linux and Windows for quite a long time and I like it for quick checks, but now that I’ve been playing with Linux a lot I’ve seen how much more powerful the “less” command is.

To see the contents of a file just type “less filename.txt”.  It automatically displays the file on the screen, and magically the page up, page down, home and end keys all work for navigation.  Even better, you can run a simple search by preceding the search term with a forward slash.  For example, to search a file for the word “apple” while in a less session just type “/apple”. Cool.  You even get the benefit of text highlighting to make the search more effective. (If you hate the highlighting just hit Esc-u to turn it off.) Once you are in a search, the letter n takes you to the next found word and a capital N takes you to the previous found word.  Pretty cool.  To exit a less session just type q for quit.  Of course the beauty of all this is that you aren’t actually editing the file, so you can do no harm.

To see what line you are on or how far you are reading into a file, start the program with the -M switch (less -M filename.txt).  Here are some other cool tricks (from a shameless cut and paste):

Quit at end-of-file
To make less automatically quit as soon as it reaches the end of the file (so you don't have to hit "q"), set the -E option.
Verbose prompt
To see a more verbose prompt, set the -m or -M option. You can also design your own prompt; see the man page for details.
Clear the whole screen
To make less clear and repaint the screen rather than scrolling when you move to a new page of text, set the -C option.
Case-less searches
To treat upper-case and lower-case letters the same in searches, set the -I option.
Start at a specific place in the file
To start at a specific line number, say line 150, use "less +150 filename". To start where a specific pattern first appears, use "less +/pattern filename". To start at the end of the file, use "less +G filename".
Scan all instances of a pattern in a set of files
To search multiple files, use "/*pattern" instead of just "/pattern". To do this from the command line, use "less '+/*pattern' ...". Note that you may need to quote the "+/*pattern" argument to prevent your shell from interpreting the "*".
Watch a growing file
Use the F command to go to the end of the file and keep displaying more text as the file grows. You can do this from the command line by using "less +F ...".
Change keys
The lesskey program lets you change the meaning of any key or sequence of keys. See the lesskey man page for details.
Save your favorite options
If you want certain options to be in effect whenever you run less, without needing to type them in every time, just set your "LESS" environment variable to the options you want. (If you don't know how to set an environment variable, consult the documentation for your system or your shell.) For example, if your LESS environment variable is set to "-IE", every time you run less it will do case-less searches and quit at end-of-file.
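For example, to make case-insensitive searching and the verbose prompt your defaults (a sketch; pick whichever options you actually want):

```shell
# Add to ~/.bashrc (or your shell's equivalent) so the defaults
# stick across sessions:
export LESS="-IM"

# Now a plain "less filename.txt" behaves like "less -IM filename.txt":
# case-insensitive searches plus the verbose prompt. The one-off tricks
# from the list above still work on top of it, e.g.:
#   less +150 /etc/services       # start at line 150
#   less +F /var/log/messages     # follow the file as it grows
```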

Happy viewing!!!

Friday, May 15, 2009

Diving into Wikis

A recent project popped up at the Firm regarding the storage and retrieval of unstructured, uncategorized data.  The request was for a piece of software to help keep track of websites, contact information, client history and government programs related to a specific area of law.  After looking at numerous products including mind mapping tools, data tree tools, $harePoint and databases, I picked a wiki as the best way to get started.  After reading and reviewing what was out there, I decided to try both phpwiki and MediaWiki.  After a failed attempt at getting phpwiki going I tried MediaWiki and had things up in minutes.  No surprise here…there’s just a ton more information available about MediaWiki so I was able to get it going faster. I probably could have gotten phpwiki going given enough time, but MediaWiki was good enough.

The entire MediaWiki install took roughly 15 minutes. I won’t blog about the process because it’s already in plain English here.  What I really liked about MediaWiki was (1) the fact that it’s what Wikipedia is hosted on, (2) there are a ton of already developed plug-ins, (3) the documentation is really good and abundant, and (4) any idiot can edit it, as proven by Wikipedia (shout out to Jerry for phrasing that point so eloquently!).

So far so good.  The basic product is pretty simple to use, although the Wiki editing requires some inline html/WordStar-like editing skills.  I’ll let you know how it goes, but so far it’s been pretty cool.

Tuesday, May 12, 2009

OSSEC Active Responses

So I’ve been playing around a lot with OSSEC.  Active responses are one of my favorite features.  They remind me of the old firewall days when countermeasures existed whereby the firewall would detect an ongoing attack and fight back by flooding the source IP with syn attacks or malformed packets.  Yeah…sneaky goodness…but in the world of botnets and script kiddies those kinds of things are no longer effective.  However, one use of active responses is to let you “tune” your firewall to the situation at hand.  It seems a little easier to work in local mode, as the active responses are already set up, but it is possible to get them to work in the agent-server configuration as well.

Here is how they work…

The ossec.conf file lists both commands and active-responses. I’m going to describe how the firewall-drop active response detects and drops attempts to log in too many times to your server.  The command configuration is shown here:

<command>
  <name>firewall-drop</name>
  <executable>firewall-drop.sh</executable>
  <expect>srcip</expect>
  <timeout_allowed>yes</timeout_allowed>
</command>

Now this is stock code included with OSSEC so you can get this working in minutes.  Notice that the file structure is XML.  That not only makes it easy to find and understand the code but it makes it simple to extend with any text editor as well. 

Here we see the declaration of a command called firewall-drop.  It’s tied to a shell script called firewall-drop.sh (which lives at /var/ossec/active-response/bin on my server).  This script, like pretty much all active-response scripts, receives a number of variables for you to use.  The first is the action (add or delete), which tells the script to add a firewall rule or delete a firewall rule.  The second variable is the username (user), which isn’t actually used in this specific script. The third variable is the source IP address (srcip), which in our case is the IP of the guy trying to log in unsuccessfully.   The timeout_allowed setting goes hand in hand with the action variable.  A timeout lets you put a rule in for, say, 600 seconds and then remove it again.  This will thwart attackers’ automated attacks while not completely locking out the forgetful or fat-fingered admin.

The active response configuration in the ossec.conf file looks like this:

<active-response>
  <!-- Firewall Drop response. Block the IP for
     - 600 seconds on the firewall (iptables,
     - ipfilter, etc).
    -->
  <command>firewall-drop</command>
  <location>local</location>
  <level>6</level>
  <timeout>600</timeout>
</active-response>

The first part, in between the <!-- and -->, is just a descriptive comment.  The command matches up with the command defined above.  The location tells the script where to run the rule; in this case this is a local installation of OSSEC, so it’s set to local.  The level is the trigger which kicks off this event.  There is a set of predefined rules (in /var/ossec/rules) which categorize different detected events.  One of these triggers is the detection of a brute force ssh password attack.  There are numerous levels, so I can’t go into all of them here, but you can easily take a look at the rules files on a standard installation for more info.  Additionally, you can trigger on multiple types of things, not just levels.  There is a predefined rules group named “sshd”, so I could have used something like that as well.  Unfortunately, the sshd rules group fires for both successful and unsuccessful attempts, so I couldn’t use it in this example.
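For illustration, an active response keyed to a rules group rather than a level would look something like this (a hypothetical sketch using OSSEC’s rules_group tag; as noted, the sshd group is too broad to actually use for this case):

```xml
<active-response>
  <!-- Hypothetical: fire on any rule in the "sshd" group
     - instead of on an alert level.
    -->
  <command>firewall-drop</command>
  <location>local</location>
  <rules_group>sshd</rules_group>
  <timeout>600</timeout>
</active-response>
```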

Once things are configured a simple restart of the ossec system (/var/ossec/bin/ossec-control restart) will put it all into action.

So here is how it works.  Someone tries to brute force the server over ssh and log entries are generated in the secure log.  OSSEC watches those logs, uses the rules files to detect the attack and sets the rule level.  Then the rule level fires off the active response script firewall-drop.sh, which adds a rule to the iptables config, effectively blocking the source IP address.  After 600 seconds, the script is called again with the “delete” action, the rule is removed and all is well.
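The add/delete flow above can be sketched as a tiny handler script. This is not the real firewall-drop.sh (that one does logging, sanity checks and per-OS firewall support); here the iptables command is only echoed so you can see what would run:

```shell
#!/bin/sh
# Sketch of an OSSEC active-response handler. OSSEC invokes these
# scripts as:  script <action> <user> <srcip>
# The iptables command is echoed rather than executed.

ar_handler() {
  ACTION=$1     # "add" or "delete"
  AR_USER=$2    # passed by OSSEC, unused by firewall-drop
  SRCIP=$3      # the offending source address

  case "$ACTION" in
    add)    echo iptables -I INPUT -s "$SRCIP" -j DROP ;;
    delete) echo iptables -D INPUT -s "$SRCIP" -j DROP ;;
    *)      echo "usage: add|delete user srcip" >&2; return 1 ;;
  esac
}

# Example invocation, as OSSEC would make it on a level-6 alert:
ar_handler add - 203.0.113.7
```

The same handler gets called a second time with the delete action when the 600-second timeout expires, which is what removes the block.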

OSSEC is very powerful and this example just scrapes the surface. This has rapidly become one of my favorite open source tools of all time.

Thursday, May 07, 2009

Building a BDD/Microsoft Deployment USB Boot drive from XP

In my earlier post about using Microsoft Deployment off a USB drive I gave instructions for using the Diskpart tool to format, partition and set the USB drive as active.  Turns out that only works under Vista.  Under XP, when you do a “list disk” in diskpart it can’t even see the drive.  So…as a workaround, here is what we did.  First off, I’m using a SanDisk Cruzer 8GB stick, but they are all about the same.  First I uninstall the U3 autoloading nonsense.  This ends up reformatting the drive, which is just fine. Then I use a copy of bootsect.exe from the WAIK (installed under c:\Program Files\Windows AIK\Tools\PETools\x86) and I run:

bootsect /nt60 f:

Then I copy the contents of the .iso created from the Microsoft Deployment point (the media one) onto the drive.  From here it boots and all is well.

Tuesday, May 05, 2009

SSH Protection in IPTables

So now that we are off of Windoze IIS as our main production web server and onto Linux, I’ve been watching the logs very closely to verify all is well.  I’ve got OSSEC, Logwatch and a few “custom” scripts installed for this purpose.  One thing I noticed was the daily SSH brute force/dictionary attacks.  I’m only password protecting the service because I’ve still got a few developers working remotely, and dealing with them and certificates is a little more pain than I want right now.  So, to slow down the attacks I’ve added two lines to my iptables config to keep attempted logins down to 3 per minute.  Here are the two lines (watch the wrap):

iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --set

iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --update --seconds 60 --hitcount 4 -j DROP

Essentially, a timer is kicked off at each attempt to log in.  On the fourth attempt, the IP is banned until the first attempt ages out past 60 seconds. If someone runs a script against the server, the first 3 attempts will be denied because of incorrect passwords and the rest will be banned because of repeated attempts.  Since the timer resets with each attempt, they can keep sending user/pass combos, but until they back off for 60 seconds they will just be denied.  Not a perfect solution, but one that certainly stops the madness I’m currently seeing.  After we get out of developer mode I’ll probably increase it to 10 minutes.

I also looked at two scripts: fail2ban and one called protect-ssh. Both looked like they worked OK, but they were a little more difficult than the two lines above, which did pretty much what I wanted. When I get some more time I’ll probably look into both of them again.

Enjoy.

Friday, May 01, 2009

GMER Rootkit detection tool

I ran across this cool little free tool for Windows-based rootkit detection today.  There are actually two tools on the site, catchme.exe and gmer.exe.  Catchme.exe seems to be a command line tool for rootkit detection, and gmer.exe (whose name changes on each download to thwart malware from detecting it on the way down) is a GUI app.  As I’ve never had a rootkit infection I can’t comment on how well they work, but they look like pretty good tools, and they are recommended in the book on OSSEC, so they can’t be all that bad. :)

Enjoy!