Any Unix/Linux shell scripting gurus out there?


Casper


Okay, I'm writing a script to run as a cron job. The script will run df -Pm, sort the output in reverse numeric order on column 4, then use awk to format the spacing. The results are written to a file, then emailed.

Here's what I've got for the df:

df -Pm | sort +4rn | awk '{printf "%-22s %-15s %-15s %-15s %-10s %s\n", $1, $2, $3, $4, $5, $6}' >> $LOGFILE

Problem is, when the report is sent to our email client (GroupWise), the default font for viewing emails is not a fixed-width font, so the columns are all fucked up. I'm using sendmail, so I can send HTML emails. Sweet. Ah, but another problem.

What I would need is something like this:

df -Pm | sort +4rn | awk '{sub(/$/, "<br>"); printf "%-22s %-15s %-15s %-15s %-10s %s\n", $1, $2, $3, $4, $5, $6}' >> $LOGFILE

This would, in theory, put <br> at the end of every row. This is needed if I'm sending html emails. Otherwise, all of the rows continue on one line. However, <br> is not allowed because < and > are operators. How can I get around this?


Just like you did with the [, doesn't it work if you surround the operator with some backslashes, like \<br\>? The backslash is usually the escape character that makes operators literal, right?

Edit: too late, by the time I came back and posted, you already got a solution!

I think you're right. I think using "/<br>/" would work. But it was a lot less code to use <pre> once instead.
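For anyone curious, here's a minimal sketch of the <br> approach. For what it's worth, < and > are only shell operators outside of quotes; inside the single-quoted awk program they're literal text:

```shell
# Appending <br> per line: inside single quotes the shell passes
# < and > through untouched, so awk can print them literally.
echo "hello world" | awk '{print $0 "<br>"}'
# -> hello world<br>
```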

For anybody interested, here's the finished working script:


#!/bin/ksh

## Set Environment
umask 022
LOGDIR=/log_file_directory
LOGFILE=$LOGDIR/disk_usage.html
export MAILTO='jdoe@email.com,jblow@email.com,test@email.com'
export CONTENT=$LOGFILE
export SUBJECT="Daily Disk Usage Report"
DATE=`date`

## Begin SCRIPT

# Validations

if [ ! -d "$LOGDIR" ]; then
    echo "$LOGDIR does not exist, exiting."
    exit 1
fi

cd "$LOGDIR"

if [ -f "$LOGFILE" ]; then
    cp "$LOGFILE" "${LOGFILE}.prev"
    rm "$LOGFILE"
fi

touch "$LOGFILE"

if [ ! -f "$LOGFILE" ]; then
    echo "Unable to create $LOGFILE, exiting."
    exit 1
fi

chmod 666 "$LOGFILE"

# Main Script
echo "<html>" >> $LOGFILE
echo "<pre>" >> $LOGFILE

echo "Date executed: $DATE " >> $LOGFILE
echo " " >> $LOGFILE
echo "Daily Disk Usage Report" >> $LOGFILE
echo "*************************************" >> $LOGFILE
echo "Filesystem Total Used Available Capacity Mount" >> $LOGFILE

df -Pm | sort +4rn | awk '{printf "%-22s %-15s %-15s %-15s %-10s %s\n", $1, $2, $3, $4, $5, $6}' >> $LOGFILE

echo "</pre>" >> $LOGFILE
echo "</html>" >> $LOGFILE

## Email results
(
echo "Subject: $SUBJECT"
echo "MIME-Version: 1.0"
echo "Content-Type: text/html"
echo "Content-Disposition: inline"
cat $CONTENT
) | /usr/sbin/sendmail $MAILTO

It runs df -Pm to get the disk usage info, sorts it so the highest percentage used is at the top, prints labels for the columns, formats the columns, then emails it. It checks to make sure the LOGDIR is there. If there is already a LOGFILE, it copies it to .prev and touches a new LOGFILE.
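One portability note: the sort +4rn form is the old-style zero-based field syntax, which newer sort implementations have dropped. The POSIX -k equivalent, a sketch assuming the same df -Pm column layout, is:

```shell
# Old style:   sort +4rn   (skip 4 fields, reverse numeric)
# POSIX style: -k5 starts the key at the 5th field (Capacity),
#              -r reverses, -n compares numerically.
df -Pm | sort -rn -k5
```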


The only suggestion I would give is to append the system date to the log file name. Then each time it's run, instead of renaming to .prev, just have it check for a file with the current system date and, if that doesn't exist, create it. This would let you historically track utilization and graph it if you ever wanted to, not to mention that appending the date to the filename is always helpful. Just a thought.
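That idea could be sketched like this; the directory is a stand-in and the naming scheme is my own assumption, matching the layout of the posted script:

```shell
# Hypothetical variant: one log per day, named by date, so runs
# accumulate instead of overwriting a single .prev copy.
LOGDIR=/tmp/disk_usage_demo          # stand-in for the real log directory
mkdir -p "$LOGDIR"
LOGFILE=$LOGDIR/disk_usage.$(date +%Y%m%d).html

# Only create the file if today's report doesn't exist yet.
if [ ! -f "$LOGFILE" ]; then
    touch "$LOGFILE"
fi
echo "$LOGFILE"
```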


We don't want to keep copies. We use Nimbus for statistics tracking. This is just to send a daily report. We have a problem with disks filling up. This is the Oracle box, so archive logs have a tendency to get a bit out of hand when huge batch jobs are run. This script is just to alert us of the disks running out of space. We have alerts set at 90% and 95%, but this script will give us more of a heads-up. I have it running at 7 AM every day so it'll be in my email when I get to work. Your idea would be great if we weren't already using Nimbus.
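For reference, the 7 AM daily schedule in cron looks like this (the script path is a made-up example):

```
# m h dom mon dow  command
0 7 * * * /usr/local/bin/disk_usage_report.ksh
```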


Ah, gotcha... yeah, Oracle logs can be a big bitch. I used to have the servers set up to page me (yeah, an actual pager) back when I ran the labs at UD for various performance issues. I had the servers and the firewalls doing round-robin checks on each other, just in case one actually crashed, in which case I would get a page also. The shady part was that I also had cron jobs set to page me at certain times during my classes, so if I was bored I had an excuse to leave class without repercussions. Since I worked for the engineering dept, they knew I had to go when the pager went off. :)


Dude, you should use Nagios (http://www.nagios.org/) to monitor all of that. You can set up different services to check a million different things, then have it page/email you if something goes down.

...but you would need to either install it on that server or get a separate one to run it. It also has a few third-party web interfaces that are great for configuring it if you don't want to keep editing files.

...it's a great app!
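A disk check in Nagios along those lines is typically a service object pointing at the stock check_disk plugin. A rough sketch, where the host name, thresholds, and command name are assumptions based on the sample configs:

```
define service{
    use                     generic-service
    host_name               oracle-box
    service_description     Disk Usage
    check_command           check_local_disk!90%!95%!/
    }
```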


Nimbus > Nagios

At least in my opinion.


Well, I can't disagree; I've never used Nimbus. I'm all about the free open source stuff that you can write your own plugins for.

Are you able to do that with Nimbus? I might have to give it a try.


I wrote a script to flush my log files every month or so.
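A cron-able flush along those lines can be as small as a find one-liner. A sketch, where the log path and the 30-day cutoff are made-up examples:

```shell
# Delete *.log files untouched for more than 30 days.
# /tmp/myapp_logs is a stand-in path for illustration.
LOGROOT=/tmp/myapp_logs
mkdir -p "$LOGROOT"
find "$LOGROOT" -name '*.log' -type f -mtime +30 -exec rm -f {} \;
```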


I like Zabbix for monitoring... I had some experience with Nagios and didn't think I could do enough with it, with what little time I had to get something going on all the disparate OSes in the environment...

good to see there are others that live in the CLI .... :D


you tech guys make my freakin head hurt.... I'll stick to swinging a hammer and building stuff that I can see.... but I'm glad you guys are around to figure this complex shit out....

Yeah, the Geeks come out... and here I thought I was nearly all alone. Guess again.


Nimbus is EXPENSIVE. I think we paid around $75k for our setup. We monitor 100+ Windows servers, 5 AIX servers, and a handful of Linux servers, plus the Oracle and SQL servers.


daaaamn!! :eek:

Yeah, I think I will stick with Nagios...

We monitor about 100 different hosts, including servers, routers, etc., and about 300 services (disks, pings, memory, etc.). It works great for what we need.
