Thursday, May 15, 2008

New laptop

I'm getting a new laptop at work on Monday. Started a list of the things to do right after I get it. In the order they hit the electronic page (but not the order they'll happen -- more on that in a sec):
  • Create utils and docs folders
  • docs should be a junction of c:\documents and settings\tss04\my documents (powershell: New-Junction c:\docs 'C:\Documents and Settings\tss04\My Documents')
  • bring over My documents and docs folder contents (currently separate directories -- fixing that w/ the junction)
  • bring over oreilly folder (put in docs?)
  • install ant
  • install perl
  • install filezilla
  • install firefox
  • install powershell and the PowerShell Community Extensions
  • install vim, configure _vimrc, vimrc_example.vim, mswin.vim, john.vim, filetype.vim
  • bring over utils
  • selectively bring over d:\ftp
  • bring over d:\powershell
  • bring over notes journal, archive
So the obvious out of order one is that I'm using PowerShell to configure my docs directory as a junction of My Documents before I've installed PowerShell... Reading the list, I know I've missed cygwin and all of my firefox add-ons, bookmarks, profile, etc. Oh, and VMWare -- that's going on this time. Maybe 7-zip. Maybe...

What's the first thing you install on a new machine? What are you going to leave off when you replace your current box?

Monday, May 5, 2008

Powershell template

I'm working on a template for all powershell scripts in our environment -- basically something everyone can take as a starting point for their scripts, one that guarantees a consistent set of common functions (logging, debug-level messages, email output, etc.).

Below is what I have so far. Any thoughts on improvements/additions?


#######################################################################
## Script Name : template.ps1                                        ##
## Created     : 05/02/2008                                          ##
## Author      : John McDevitt                                       ##
## Function    : sample script to be used as a start for all scripts ##
##             : in production.                                      ##
##             :                                                     ##
## Usage       : how to call this script (e.g. arguments required or ##
##             : accepted)                                           ##
##             :                                                     ##
## Host/path   : where is this script located                        ##
##             :                                                     ##
## Notes       : Update the help message with meaningful text for    ##
##             : your script.                                        ##
##             :                                                     ##
##             : include debugging messages by calling debug_msg     ##
##             : e.g., debug_msg("about to do something weird")      ##
##             : include log messages by calling log_msg. Log file   ##
##             : is configurable, but defaults to a scriptname.log   ##
##             : in the current directory (will be a share soon).    ##
##             : All debug messages go into the log prefaced with    ##
##             : DEBUG:                                              ##
##             :                                                     ##
##             : you will probably need to update the param block,   ##
##             : even though it is above the "your code here" block  ##
##             :                                                     ##
## Update Log  :                                                     ##
##             :                                                     ##
#######################################################################
param (
    [switch]$debug,
    [string]$mailto,
    [string]$logfile
)

function Usage
{
    ""
    "Describe the purpose of this script"
    ""
    "Usage: template.ps1 -option <value>"
    ""
    "Required Parameters:"
    "  -option <value>: Describe the options and their expected values here"
    ""
    "Optional Parameters:"
    "  -mailto user@domain: User/group to send a copy of any debugging messages"
    "                       or log info to."
    ""
    "  -debug: Enables debug messages -- useful for tracing code execution"
    ""
    "  -? : Display this usage information"
    ""
    ""
    exit
}

function log_msg ($message) {
    $message >> $logfile
    if ($mailto) { $script:email_body = $script:email_body + $message + "`n" }
}

function debug_msg ($message) {
    if ($debug) {
        $message = "DEBUG: " + $message
        $message
        log_msg $message
    }
}

## default the log file name before anything is logged
if ($logfile -eq "") {
    $logfile = $($MyInvocation.mycommand.name) + ".log"
}

log_msg ("started execution at " + (get-date))

debug_msg "logfile is $logfile"

#######################################################################
## YOUR CODE BEGINS BELOW THIS LINE                                  ##
#######################################################################

if (($args -eq '-?') -or ($args -match "help")) {
    Usage
}

#######################################################################
## YOUR CODE ENDS ABOVE THIS LINE                                    ##
#######################################################################

log_msg ("completed execution at " + (get-date))
if ($mailto) {
    send-smtpmail -to $mailto -smtphost mailhost.yourdomain.com -from $mailto -subject $($MyInvocation.mycommand.name) -body $email_body
}

Thursday, May 1, 2008

Reach out and touch someone

A follow-on to my previous post. It's great to be able to find a specific copy of a process running on a remote machine (UserA's copy of Notepad, for instance), but the real benefit is being able to do something about it.

Below is kill_processes.ps1. You'll note that it's nowhere near as clean as my pkill for windows script. It turns out that to do much of anything on a remote machine with PowerShell, you need to go through WMI. Basically, this script is a more flexible version of the kill scripts. This one takes four parameters:
  • computername -- defaults to localhost

  • notuser -- this is a pattern for users to ignore. defaults to SERVICE or SYSTEM

  • user -- this is a pattern for users to find. It is required, and throws an exception if not provided

  • name -- this is a pattern for the process name to find and kill. It is required, and throws an exception if not provided.

Having both the notuser and user parameters is pretty redundant, but it's protection against a bad pattern for the user parm. I didn't want to put this script in the wild and have someone run it like .\kill_processes.ps1 -computername domaincontroller -user "ser" -name "*" when trying to kill all of sergey's processes -- bye-bye domain controller is NOT my goal.

param (
    [string]$computername = "localhost",
    [string]$notuser = "SERVICE|SYSTEM",
    [string]$user = $(throw "enter the user name that started the process"),
    [string]$name = $(throw "enter the process name to kill")
)
gwmi win32_process -computername $computername |
    where {($_.getowner().User -notmatch $notuser) -and ($_.getowner().user -match $user) -and ($_.name -match $name)} |
    foreach {$_.Terminate() > $null}

Tuesday, April 29, 2008

Remote process list

I have frequently looked for a good way to be able to tell someone what processes were running on a remote (windows) machine. The pslist command from sysinternals got close, but it was hard (impossible?) to show who was running a given process, or to ignore (or see) only a given user's processes.

For example, if you're interested in userA's perl command on machine1, pslist will tell you this:

C:\utils\pstools>pslist.exe \\machine1 perl

PsList 1.26 - Process Information Lister
Copyright (C) 1999-2004 Mark Russinovich
Sysinternals - www.sysinternals.com

Process information for machine1:

Name                Pid  Pri  Thd  Hnd   Priv      CPU Time    Elapsed Time
perl               4444    8    1   28    928   0:00:00.015     3:18:10.509
perl               5680    8    1   29    928   0:00:00.046     2:18:14.806
perl               5112    8    1   30    928   0:00:00.015     0:18:21.197


So which one is userA's? And who do the others belong to? Re-enter powershell. Below is the start of a script for hunting down this info. It defaults to looking at the current machine and filtering out system processes and services, but can be run against any machine and ignore any given user. I'll probably alter it to pay attention to a user instead of ignoring a user or users before I'm done.

param ( [string]$computername = "localhost", [string]$ignore = "SERVICE|SYSTEM" )
gwmi win32_process -computername $computername|
where {$_.getowner().User -notmatch $ignore} |
foreach {write-host ($_.getowner().User),$_.processid,$_.name,$_.commandline}


Running it against machine1 gives this:

[utils:\testing\powershell]> utils:\testing\powershell\processes.ps1 -computername machine1
0 System Idle Process
userA 5128 cmd.exe cmd /c C:\esp.bat "\\server1\sharea\bin\perl.exe \\server1\sharea\file_watch.pl \\server2\shareb\file1.txt 1020M"
userA 4444 perl.exe \\server1\sharea\bin\perl.exe \\server1\sharea\file_watch.pl \\server2\shareb\file2.txt 1020M
userA 5208 cmd.exe cmd /c C:\ESP.BAT "\\server1\sharea\bin\perl.exe \\server1\sharea\file_watch.pl \\server2\shareb\file3.txt 360m"
userA 5680 perl.exe \\server1\sharea\bin\perl.exe \\server1\sharea\file_watch.pl \\server2\shareb\file4.txt 360m
userA 6040 cmd.exe cmd /c C:\esp.bat "\\server1\sharea\bin\perl.exe \\server1\sharea\file_watch.pl \\server2\shareb\file5.txt 240m"
userA 5112 perl.exe \\server1\sharea\bin\perl.exe \\server1\sharea\file_watch.pl \\server2\sharea\file6.txt 240m


Pretty interesting, given that it shows six perl-related processes where pslist only showed three. The difference is that pslist was good about only getting the perl commands, but ignored the .bat files that turned around and called perl.

Friday, April 25, 2008

pkill for windows

One of my absolute favorite tools on solaris is pkill. Without it, when you want to send a signal to a process (most likely to kill it) you need to figure out the process ID and then use that:

kill -HUP 1234

If you want to kill all of a type of process (e.g. all the sshd's), you end up doing something like this:

for i in `/usr/bin/ps -o pid,args -ef |grep sshd |awk '{ print $1 }'`; do kill $i;done

With pkill this becomes:

pkill sshd

Understand why I like it? Well, I just learned the powershell equivalent, and am PSYCHED.

Stop-Process (Get-Process winword).id

Bye-Bye Word.
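If you want to play with this name-based kill pattern without risking anything real, a throwaway process makes a safe target. A sketch using pgrep + kill (the same pkill family, split into its two halves), with a disposable sleep standing in for sshd:

```shell
# throwaway target process standing in for sshd
sleep 12345 &
target=$!
# the pgrep half of the pattern: find PIDs by name/args, then signal them
kill $(pgrep -f 'sleep 12345')
wait "$target" 2>/dev/null || true
```

Once you trust the pattern you feed pgrep, collapsing the two steps into a single pkill (or Stop-Process) call is the natural next move.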

"Powerful" end of lines

A follow up to this post.

Translate end of lines to windows style:

ConvertTo-WindowsLineEnding unixfile.txt windowsfile.txt

Need to convert it back?

ConvertTo-UnixLineEnding windowsfile.txt unixfile.txt


Think I'm going to like this tool...
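For comparison, here's what those cmdlets are doing under the hood, sketched with the classic Unix-side tools (tr for stripping CRs; the \r escape in the sed replacement is a GNU sed feature):

```shell
# a small file with Windows (CRLF) line endings
printf 'line1\r\nline2\r\n' > /tmp/windowsfile.txt
# to Unix: drop the carriage returns
tr -d '\r' < /tmp/windowsfile.txt > /tmp/unixfile.txt
# back to Windows: re-add a CR before each newline (GNU sed understands \r)
sed 's/$/\r/' /tmp/unixfile.txt > /tmp/roundtrip.txt
```

The round trip gets you back a byte-identical copy of the original CRLF file.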

Thursday, April 24, 2008

Powershell

I'm in a class for the second half of the week, the upshot of which is I've got a new scripting language to blog about... :-)

Here's the start of a script I'm converting to Powershell from perl. It examines my IP address, and determines what should be started (or stopped) when I'm at work.

The perl variant looks like this:

$address = gethostbyname("L353K79XPL");
$address = join(".",unpack("C"x4,$address));
if ($address =~ /123.321/) { ## fake subnet, sorry
    debug_msg("on work network");
    StartService("", "Wuser32");
    StartService("", "ASMAgent");
    StopService("", "RapApp");
    StopService("", "VPatch");
    StopService("", "BlackICE");
    StopService("", "tunnelguardservice");
}


Here's the powershell version. It might not look it, but getting the IP is much more understandable to me.

if ((get-wmiobject win32_networkadapterconfiguration -filter "IPEnabled = true")[0].IPAddress -match "123.321.\d+.\d+")
{
    write-host "at work"
    start-service Wuser32
    start-service asmagent
    stop-service rapapp
    stop-service vpatch
    stop-service blackice
    stop-service tunnelguardservice
}


More fun to come.

Monday, April 21, 2008

The script I want to write...


while (work_to_do) {
    set distractions = 0;
    set speed_of_execution = high;
    do_work();
    set not_at_work = soon;
}
while (not_at_work) {
    select {}
    when (sleeping) {
        slow_down_clock();
        set distractions = 0;
        set phone = off; ## should be covered by above, but let's be safe...
    }
    when (awake) {
        set distractions = 1;
        set fun = high;
    }
    always {
        set work_to_do = false;
    }
}

Saturday, April 19, 2008

You've got to be thinking that I'm milking this topic to get more posts out there. You'd only be partially right -- there's a lot going on in this thing, so I'm trying to keep it in manageable pieces... This will be the last (and easiest) of the sections.

You've already seen how we go from sar (cpu and memory) data on a bunch of machines to graphs in pictures of that data. Now I just have to show you how that updates a website with the current data.

The last chunk of the collector.sh script looks like this:

for k in webservera webserverb
do
  /usr/bin/tar cf - images |/usr/bin/ssh $k 'cd /data/websites/solarisreporting/production/root;/usr/bin/tar xf -;chmod 664 images/*'
done



Translation: gather up all the new images, and push them (securely) to the images directory of the two web servers hosting the reporting site.
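The tar-pipe half of that is worth trying on its own. A local sketch of the same pattern, minus the ssh hop (made-up temp directories and file name):

```shell
# local stand-in for the push: same tar-pipe pattern, minus the ssh hop
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir "$src/images"
echo fake-png-bytes > "$src/images/host1_sar_daily.png"
# "tar cf -" writes the archive to stdout; the second subshell unpacks it elsewhere
( cd "$src" && tar cf - images ) | ( cd "$dst" && tar xf - )
```

Swap the second subshell for `ssh host '...'` and you have the push above; the tar on the far end never knows the archive crossed the network.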

Tuesday, April 15, 2008

What's harder to read?

So the whole goal behind the scripts covered in this series of posts was to provide an easier-to-read view of the utilization data coming out of sar. That's great, but in the interim, you and I have to look at some MUCH more difficult to read commands.

More excerpts from collector.sh. These come right after gathering the cpu data from sar and feeding it into rrd_cpu_collector.sh:

rrdtool graph /data/reporting/html/images/${i}_sar_thumb.png -u 100 -j -w 200 -h 80 -s `perl -e 'print time-86400'` DEF:linea=${i}_sarcpu.rrd:user:AVERAGE DEF:lineb=${i}_sarcpu.rrd:sys:AVERAGE AREA:lineb#00FF00:"Sys" LINE2:linea#FF0000:"User" |grep -v 200
rrdtool graph /data/reporting/html/images/${i}_sar_daily.png -u 100 -t "last updated at $UPDATED" -s `perl -e 'print time-86400'` DEF:linea=${i}_sarcpu.rrd:user:AVERAGE DEF:lineb=${i}_sarcpu.rrd:sys:AVERAGE AREA:lineb#00FF00:"Sys" LINE2:linea#FF0000:"User" |grep -v 481
rrdtool graph /data/reporting/html/images/${i}_sar_weekly.png -u 100 -s `perl -e 'print time-604800'` DEF:linea=${i}_sarcpu.rrd:user:AVERAGE DEF:lineb=${i}_sarcpu.rrd:sys:AVERAGE AREA:lineb#00FF00:"Sys" LINE2:linea#FF0000:"User" |grep -v 481
rrdtool graph /data/reporting/html/images/${i}_sar_monthly.png -u 100 -s `perl -e 'print time-2592000'` DEF:linea=${i}_sarcpu.rrd:user:AVERAGE DEF:lineb=${i}_sarcpu.rrd:sys:AVERAGE AREA:lineb#00FF00:"Sys" LINE2:linea#FF0000:"User" |grep -v 481
rrdtool graph /data/reporting/html/images/${i}_sar_yearly.png -u 100 -s `perl -e 'print time-31536000'` DEF:linea=${i}_sarcpu.rrd:user:AVERAGE DEF:lineb=${i}_sarcpu.rrd:sys:AVERAGE AREA:lineb#00FF00:"Sys" LINE2:linea#FF0000:"User" |grep -v 481


I warned you, it's ugly. Okay, let's dissect one of those lines -- I'll pick out the daily one, since I showed you that graph a few posts ago.

rrdtool graph /data/reporting/html/images/${i}_sar_daily.png -u 100 -t "last updated at $UPDATED" -s `perl -e 'print time-86400'` DEF:linea=${i}_sarcpu.rrd:user:AVERAGE DEF:lineb=${i}_sarcpu.rrd:sys:AVERAGE AREA:lineb#00FF00:"Sys" LINE2:linea#FF0000:"User" |grep -v 481

So we call rrdtool and tell it we're going to generate a graph. We're saving it in /data/reporting/html/images/XXX_sar_daily.png, where XXX is the hostname we're iterating over. So far, not so bad. The -u 100 forces the graph to scale to 100%. The -t is the title we put on the graph (and $UPDATED is set earlier in the script to the current time). The -s is the start time for the graph (how far to go back). The perl scriptlet that's called prints the current time minus 24 hours. The DEF: bits say that the first line is the user cpu and the second is the system cpu, taken as averages from the rrd. We then pipe the output (a message saying a 481x??? pixel png graph was created) to grep -v 481 so we don't have pesky messages coming out of the script. The graph this generates is far easier to read than what goes into it:

Monday, April 14, 2008

Spring, time for (round) robin (databases)

On Friday I tried to tantalize with the ability to take some brutally dry numbers and automate graphing them so we could report on the utilization of all of our boxes. That post showed a rolling 24 hour graph of cpu and memory usage. Admittedly, that's usually what we're most interested in seeing. However, we also create rolling weekly, monthly, and yearly reports on the same site. So what? Well, that detail is important as part of the introduction to how we're generating the graphs.

Storing one day's worth of cpu and memory numbers collected every 5 minutes takes roughly 20K. If we needed to keep a year's worth of files around to do the trending graphs described above we'd have roughly 7M of data per host (and 730 files -- one each day for both cpu and memory). Instead, using rrdtool (http://oss.oetiker.ch/rrdtool), we store two 47K files for each host (one for cpu and one for memory). Rrdtool allows us to set up aggregation functions in a round robin database. Basically, we setup the database ahead of time, telling it how many data points to keep, and what kind of aggregation we want to use on that data. For our graphs, we feed data in every 5 minutes. For the rolling 24 hour graph, we display the data as it comes in, and store 600 datapoints. For our "weekly" graph, rrdtool averages 6 of those readings (30 minutes) and stores 700 of those aggregates. The monthly one averages 24 readings (2 hours) and stores 775 averages. The yearly one averages 288 entries (24 hours) and stores 797 averages.
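Those row counts give comfortable headroom over each graph's window. Plugging the numbers above into shell arithmetic shows how much history each RRA actually covers (300 seconds is our 5-minute sample rate):

```shell
step=300   # seconds between sar samples (5 minutes)
# each spec is samples-averaged-per-row : rows-kept
for spec in 1:600 6:700 24:775 288:797; do
  agg=${spec%:*}    # samples averaged into one row
  rows=${spec#*:}   # rows kept in the RRA
  echo "avg of $agg sample(s), $rows rows ~ $(( step * agg * rows / 86400 )) days"
done
```

That works out to roughly 2, 14, 64, and 797 days of history for the daily, weekly, monthly, and yearly RRAs, i.e. about double each graph's window.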

Everyone I haven't bored to tears is now asking why we store significantly more data than we show in a graph. The answer is easy. I anticipated the request to compare today to yesterday, this week to last, etc., and I didn't want to have to go to backups to make that comparison.

Enough background. Let's look at some code. Here's an excerpt from my collector.sh script, which iterates over the hosts I report on:


for i in host1 host2 host3
do
  for j in `/usr/bin/find /var/adm/sa -newer ${i}_sarcpu.rrd -name sa* -type f -exec /usr/bin/ls -ct {} \;`
  do
    /usr/bin/ssh $i sar -f $j |grep \:|grep -v usr|perl -pe 's/ +/,/g' |/usr/local/bin/rrd_cpu_collector.pl -host $i -
  done
done


Translation -- for each host:
  • get the file names of all sar datafiles that have been modified more recently than my rrd database
  • ssh to the host and run sar against the file(s)
  • strip out all of the lines that don't have data (i.e. the headers) and all the extraneous whitespace, making it comma delimited
  • pass the data into rrd_cpu_collector.pl
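The whitespace-to-comma step in that pipeline is easy to try standalone. A sketch with tr standing in for the perl s/ +/,/g one-liner, on a sample line borrowed from the sar output shown a few posts back:

```shell
# one line of sar output (spacing approximated)
line='01:40:00        0       2      24      74'
# squeeze each run of spaces down to a single comma
echo "$line" | tr -s ' ' ','
# -> 01:40:00,0,2,24,74
```

Same idea either way: turn sar's column padding into something a split(/,/) can chew on.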


So, what's in rrd_cpu_collector.pl? Two main things. First, there's logic to create the rrd if it doesn't exist:

if (-f $rrd_file) {
    debug_msg("$rrd_file exists");
    $rrd_last_updated=`rrdtool last $rrd_file`;
} else {
    $rrd_time=timelocal(0,0,0,$mday,$mon,$year);
    debug_msg("rrdtool create $rrd_file --start $rrd_time DS:user:GAUGE:1800:0:100 DS:sys:GAUGE:1800:0:100 RRA:AVERAGE:0.5:1:600 RRA:AVERAGE:0.5:6:700 RRA:AVERAGE:0.5:24:775 RRA:AVERAGE:0.5:288:797");
    `rrdtool create $rrd_file --start $rrd_time DS:user:GAUGE:1800:0:100 DS:sys:GAUGE:1800:0:100 RRA:AVERAGE:0.5:1:600 RRA:AVERAGE:0.5:6:700 RRA:AVERAGE:0.5:24:775 RRA:AVERAGE:0.5:288:797`;
    $rrd_last_updated=$rrd_time;
}

Second, there's logic to update the rrd with the data passed in from sar:

foreach $entry (<>){
    chomp ($entry);
    debug_msg("the entry is $entry");
    ###################################################
    ## update the hours and minutes based on what we ##
    ## get from sar. use this to generate the time   ##
    ## for the rrd command                           ##
    ###################################################
    ($sar_time,$user_cpu,$sys_cpu,$wio,$idle)=split(/,/,$entry);
    ($hours,$min,$sec)=split(/:/,$sar_time);
    debug_msg("sar hours min sec are $hours $min $sec");
    $rrd_time=timelocal(0,$min,$hours,$mday,$mon,$year);
    debug_msg("rrd_time is $rrd_time");
    ####################################################
    ## this check is here to allow us to iterate over ##
    ## the full output of sar several times a day w/o ##
    ## reprocessing an entry.                         ##
    ####################################################
    if ($rrd_time > $rrd_last_updated) {
        debug_msg("rrdtool update $rrd_file $rrd_time:$user_cpu:$sys_cpu");
        `rrdtool update $rrd_file $rrd_time:$user_cpu:$sys_cpu`;
    } else {
        debug_msg("rrd file updated more recently than this entry. skipping");
    }
}


Up next, generating the cpu graphs now that the data is in place.

Friday, April 11, 2008

Pushmi-pullyu

It turns out that the infrastructure supporting our push process is also very handy for pulling data to a central location. The next couple posts are going to talk about a reporting process I've built for monitoring general utilization of boxes in the environment. Central to the process is the ability to reach out to all of the boxes and run commands, piping the data from those commands back to the central server. For the reports I'm creating, that data is the output of sar (the System Activity Reporter).

The raw cpu data looks something like this:

SunOS hostname.domain.com 5.9 Generic_122300-12 sun4u 04/11/2008
00:00:00 %usr %sys %wio %idle
01:40:00 0 2 24 74
01:45:00 0 2 24 74
01:50:00 0 2 24 74
01:55:00 0 2 24 74
02:00:00 0 2 24 74
02:05:00 1 1 26 72
02:10:00 0 1 25 74
02:15:00 0 2 24 74
02:20:00 0 2 24 74
02:25:00 0 2 24 74
02:30:00 0 2 24 74

Raw memory data looks something like this:

SunOS hostname.domain.com 5.9 Generic_122300-12 sun4u 04/11/2008
00:00:00 freemem freeswap
01:40:00 948072 15057081
01:45:00 948002 15055725
01:50:00 948028 15056531
01:55:00 948021 15056523
02:00:00 947853 15052417
02:05:00 947898 15052517
02:10:00 947826 15051640
02:15:00 947744 15049178
02:20:00 947905 15052163
02:25:00 947799 15050548
02:30:00 947914 15053483

That is only moderately useful, though. Far more pleasant to view are some simple graphs like these:



The next few posts will show you how to go from cpu percentages and counts of free memory pages to graphs of those values. They'll also talk about how doing that can (and does) regularly update a reporting website so you can provide up to date utilization graphs. Most important, all of this complies with my general philosophy -- automate everything that you have to do repeatedly.
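As an aside on those raw memory numbers: sar's freemem column is in pages, so a quick conversion makes it human-readable. A sketch assuming the 8 KB page size typical of sun4u (confirm with pagesize(1) on your box):

```shell
freemem_pages=948072   # one freemem reading from the sar output above
page_size=8192         # assumed sun4u page size, in bytes
free_mb=$(( freemem_pages * page_size / 1024 / 1024 ))
echo "freemem ~ ${free_mb} MB"
```

Under that assumption the sample reading works out to roughly 7.2 GB free, which is the kind of number you'd actually want on a graph axis.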

Thursday, April 10, 2008

You used it for what?

I'm the king of using tools for purposes they were not intended to serve, so (almost) no apologies for this one. I was trying to learn some about ant, and was flipping through some doc looking for ways it would make my life easier. I don't have large builds that need to be automated, or significant deploys, or basically any of the reasons a normal person would use ant. What I do have is the occasional "need" to email a file to someone coupled with a strong dislike for how the navigation to that file works when attaching it in Notes. Enter ant.

First, I set up Windows to open the .ant extension with the ant interpreter:
  • Open Explorer
  • Select Tools->Folder Options
  • Select the File types tab
  • If ant is not present, click New.
  • Select the ant entry, and click advanced
  • In the edit file type dialog, select new action
  • Enter "open" (no quotes) for the action
  • For application used, enter (with quotes) "C:\apache-ant-1.6.5\bin\ant.bat" -file "%1"
I have a shortcut with the following target in my Windows Explorer Send To menu:
C:\apache-ant-1.6.5\bin\ant.bat -f c:\utils\testing\ant\ant_mail.ant -Dfilename= 

It's important to note that the -Dfilename= intentionally doesn't hard-code a file name; Windows appends the selected file's path to the shortcut target when you use Send To.

Here's the script it calls (ant_mail.ant):

<project name="ant_mail_file" default="mail_success" basedir="c:\utils\testing\ant">
    <property file="ant_mail.properties" />
    <description>Email a file passed as a parameter</description>
    <target name="mail_success">
        <mail mailhost="mailhost.mydomain.com" mailport="25" subject="${filename} ${subject}">
            <from address="${from_address}"/>
            <replyto address="${from_address}"/>
            <to address="${to_address}"/>
            <message>Attached is ${filename}</message>
            <fileset file="${filename}">
            </fileset>
        </mail>
    </target>
</project>


Finally, I have the following ant_mail.properties file in the same directory as ant_mail.ant:

subject=
from_address=john.mcdevitt@mydomain.com
to_address=john.mcdevitt@mydomain.com


If I'm planning ahead (rarely), I can mess with the addresses or the subject properties to make the email a little more meaningful than just having a subject line of the file name. Either way, the main goal is served: I can easily forward the message to whomever I wanted to send the doc to, and I get to avoid the ugly Notes attachment interface.

Wednesday, April 9, 2008

Expect-ing some changes

We recently put a process in place to monitor the configuration of our fibre channel switches for unauthorized changes. Definitely a good idea; however, it was potentially a very tedious process. The initial plan went something like this:
  • login to a switch
  • run configupload, sending the config to an ftp server
  • pore over the config, comparing it to the "gold" copy
  • repeat for the other 3 switches
My general philosophy is to automate anything I have to do repeatedly, so this was an unacceptable plan (even though I wasn't going to be doing the review -- it was the principle). The problem was that the switches are configured to only allow ssh logins, and we couldn't use our normal ssh key to login to them without providing a password (vendor box that doesn't allow that change). Enter expect.

Expect is a tool for "programmed dialogue with interactive programs." In our case that means supplying passwords to programs (ssh) that prompt for them. The new plan looks something more like this:
  • write two scripts
    • one that does a login to a switch and a configupload from that switch
    • one that iterates over the 4 switch names, runs the other script for each, and then runs diff to compare this config to the previous one, reporting on changes
  • take a vacation
I much prefer that plan.

Here are the scripts. I've pulled comments from them to shorten this post a bit, and changed the names to protect the guilty. The first of those scripts (the one that runs configupload):

#!/opt/sfw/bin/expect
spawn /usr/bin/ssh -l root [lindex $argv 0]
expect "password"
send "XXXXXXXXXX\r"
expect "root>"
send "configupload -p scp HOSTNAME,USERNAME,/data/switch_config/[lindex $argv 1]\r"
expect "root>"
send "exit\r"
expect "logout"
exit 0


and here's the second one that calls the expect script above:

#!/usr/bin/bash
DATE=`/usr/bin/date +%m%y`
for i in switch1a switch2a switch1b switch2b; do
  /usr/local/bin/switch_backup.exp $i $i.$DATE
  echo Switch comparison for $i >> /tmp/$$.tmp
  echo >> /tmp/$$.tmp
  echo >> /tmp/$$.tmp
  /usr/bin/diff /data/switch_config/${i}_baseline /data/switch_config/$i.$DATE >> /tmp/$$.tmp
  echo >> /tmp/$$.tmp
  echo >> /tmp/$$.tmp
  /usr/bin/mv /data/switch_config/$i.$DATE /data/switch_config/${i}_baseline
done
/usr/bin/cat /tmp/$$.tmp |/usr/bin/mailx -s "switch comparison on `/usr/bin/date +%m/%d/%Y`" mygroup@mycompany.com
/usr/bin/rm /tmp/$$.tmp


Now we just have to look at an email that only contains the changes (and accepted changes don't continue to show up -- this config becomes the baseline for the next compare).

Must have been in the twilight-zone

Meant to post this yesterday, and I don't have much of an excuse for why I didn't. Here's the lowdown on how we build the zone_map file (and an example of the things we put in the nightly script instead of the frequent push).

From update_push_nightly.sh:

for i in `awk '{ print $1 }' /usr/local/bin/host_list`; do
  for j in `ssh $i '/usr/sbin/zoneadm list' |grep -v global`
  do
    echo $i,$j >> /tmp/nightly.tmp
  done
done
/usr/bin/mv /tmp/nightly.tmp /usr/local/bin/zone_map


Why only generate it nightly? Well, while we can rip off a zone pretty quickly, we try not to do it all day every day... And it's easy enough to update if we do.

Monday, April 7, 2008

Get the map, we're in the zone...

As indicated previously, we push around /usr/local/bin/zone_map every 10 minutes. What's up with that? And more basically, what is that?

zone_map and the associated scripts zone_to_physical.pl and physical_to_zones.pl are used to remind users/admins of the global to local (and vice versa) zone relationships (http://www.sun.com/bigadmin/content/zones/ -- the totally inadequate summary being "zone =~ virtualized solaris environment").

zone_map is generated dynamically (future (brief) post), but ends up looking something like this:

global_zone1,local_zone1a
global_zone1,local_zone1b
global_zone2,local_zone2a
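Given that three-line comma-delimited format, the two lookups the perl script performs can also be sketched as awk one-liners (a temp file stands in here for /usr/local/bin/zone_map):

```shell
# rebuild the sample zone_map in a temp file (the real one lives in /usr/local/bin)
zm=$(mktemp)
cat > "$zm" <<'EOF'
global_zone1,local_zone1a
global_zone1,local_zone1b
global_zone2,local_zone2a
EOF
# physical-to-zones: list the local zones for one global zone
awk -F, '$1 == "global_zone1" { print $2 }' "$zm"
# zone-to-physical: find the global zone hosting one local zone
awk -F, '$2 == "local_zone2a" { print $1 }' "$zm"
```

The perl version earns its keep with the hard-link trick, the no-argument listing, and friendlier error messages; the awk form is handy when you just need a quick answer at a prompt.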

physical_to_zones.pl and zone_to_physical.pl are hard links to the same file, the guts of which look like this:


open (ZONEMAP,"/usr/local/bin/zone_map") or die;
foreach $line (<ZONEMAP>){
    chomp $line;
    ($global_zone,$zone)=split(/,/,$line);
    debug_msg("working with global zone $global_zone and local zone $zone");
    push(@{ $global_zones{$global_zone} }, $zone);
}
close (ZONEMAP);
if ($#ARGV == 0) {
    debug_msg("ARGV defined");
    $input = shift(@ARGV);
    chomp($input);
    debug_msg("input is $input");
}
if ($0 =~ /physical_to_zones/) {
    debug_msg("called as physical_to_zones");
    if ($input){
        if ($global_zones{$input}){
            print "$input runs:\n";
            foreach $zone (sort @{$global_zones{$input}}) {
                print "\t$zone\n";
            }
        } else {
            print "zone information for $input not defined\n";
        }
    } else {
        foreach $global_zone (sort keys %global_zones) {
            print "$global_zone runs:\n";
            foreach $zone (sort @{$global_zones{$global_zone}}) {
                print "\t$zone\n";
            }
        }
    }
} else {
    debug_msg("must have been called as zone_to_physical");
    unless ($input) {
        print "zone_to_physical.pl requires a zone name as input\n";
        print "maybe you wanted physical_to_zones.pl?\n";
        exit;
    }
    foreach $global_zone (keys %global_zones) {
        @local_zones = @{$global_zones{$global_zone}};
        foreach $zone (@local_zones){
            if ($zone =~ /$input/i){
                print "$input runs on $global_zone\n";
                exit;
            }
        }
    }
}

If you run zone_to_physical.pl local_zone1a you get "local_zone1a runs on global_zone1".
If you run physical_to_zones.pl you get:

global_zone1 runs:
        local_zone1a
        local_zone1b
global_zone2 runs:
        local_zone2a

Friday, April 4, 2008

Getting a little pushy...

In our Solaris environment, we use a pair of machines for jumpstart, home directory hosting, and (here's the potentially unique bit) pushing configuration files or changes to the rest of the environment. The push process is the first component of a strategy for keeping OS configuration consistent across the environment. It certainly has its limitations, but that's fodder for another post...

The setup
Since these boxes are our jumpstart servers, they know about all of the other Solaris machines in the environment. We leverage that to make sure we are pushing to everyone:

...
open (ETHERS,"/etc/ethers");
foreach $line(<ETHERS>){
$hostname = (split(/ /,$line))[1];
chomp $hostname;
...
}
close (ETHERS);
...
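The hostname extraction in that loop boils down to grabbing the second whitespace-separated field. A one-liner sketch, with fabricated /etc/ethers-style content (MAC then hostname):

```shell
# fabricated /etc/ethers-style content for illustration
cat > /tmp/ethers.sample <<'EOF'
8:0:20:aa:bb:cc host1
8:0:20:dd:ee:ff host2
EOF
# the perl split above boils down to: take field two
awk '{ print $2 }' /tmp/ethers.sample
```

The perl version wins once you want to do real work per host inside the loop, but this is the whole parsing trick.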


The second core piece is our ssh configuration. To do a hands-off push to all of our hosts, we need to have the ability to login to them without entering a password on each. To accomplish that, the jumpstart servers both have the private portion of an ssh-keygen generated key pair. As part of the jumpstart process, they populate root's .ssh/authorized_keys file with the public portion of that key pair. They also put a modified version of the sshd_config file on each box during the jump:

...
#PermitRootLogin no
PermitRootLogin without-password
...
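The one-time key setup behind that config can be sketched in a few commands. All paths here are illustrative (the real private key lives on the jumpstart servers, and the append happens during the jump, not locally):

```shell
# illustrative paths only; the real private key stays on the jumpstart servers
tmp=$(mktemp -d)
# generate the push key pair with an empty passphrase (hands-off logins)
ssh-keygen -q -t rsa -N '' -f "$tmp/push_key"
# the jumpstart process appends the public half to each client's authorized_keys
mkdir -p "$tmp/client_root_ssh"
cat "$tmp/push_key.pub" >> "$tmp/client_root_ssh/authorized_keys"
```

With PermitRootLogin set to without-password, only the key gets root in; password guessing against root is off the table.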


What, when, and how
We now have an infrastructure that both gives us the names of all the hosts in the environment and guarantees we can do an administrative login to them over ssh. Sounds like a hacker's dream, but I had to sign an agreement to only use my power for good, so it's time to start pushing. Another snippet from a push script:

$ssh_pid = open (SSH,"|/usr/bin/sftp $hostname >/dev/null 2>&1");
print SSH "put /etc/group /etc/group\n";
print SSH "put /data/passwd.$hostname /etc/passwd\n";
print SSH "put /etc/shadow /etc/shadow\n";
print SSH "put /etc/project /etc/project\n";
print SSH "put /etc/auto_home /etc/auto_home\n";
print SSH "put /usr/local/bin/zone_map /usr/local/bin/zone_map\n";
print SSH "put /opt/sfw/etc/sudoers /opt/sfw/etc/sudoers\n";
print SSH "quit\n";
debug_msg("closing connection to $hostname");
close (SSH);


That's (part of) the "What and the How," and here's the "When"

37 2 * * * /usr/local/bin/update_push_nightly.sh
0,10,20,30,40,50 * * * 1-5 /usr/local/bin/update_push_frequent.pl


Next week we'll talk about what would go into the nightly instead of the frequent push. Also, we'll look at what's in the zone_map, how it got there, and what uses it.

Thursday, April 3, 2008

SOS

- .... .. ... .-. .- -. -.- ... .--. .-. . - - -.-- .... .. --. .... --- -. - .... . .-.. .. ... - --- ..-. ..- ... . .-.. . ... ... ... -.-. .-. .. .--. - ... .. .----. ...- . .-- .-. .. - - . -. .-.-.- -.. --- -. .----. - .-- .- -. - - --- - .... .. -. -.- - --- --- -- ..- -.-. .... .- -... --- ..- - .-- .... .- - -- .. --. .... - -... . .... .. --. .... . .-. --- -. - .... . .-.. .. ... - -....- -....- -- -.-- -... --- ... ... -- .. --. .... - .-. . .- -.. - .... .. ... .- -. -.. .. .----. -.. .... .- ...- . ... --- -- . .----. ... .--. .-.. .- .. -. .. -. --. - --- -.. --- .-.-.- .-.-.- .-.-.-

#!/usr/bin/perl
##########################################################################
## Script Name : morse.pl ##
## Created : ##
## Author : John McDevitt ##
## Function : convert between morse and ascii ##
## : ##
## Usage : interactive -- run and follow prompts ##
## : ##
## : ##
## Notes : ##
## : ##
## Update Log : ##
## : ##
##########################################################################
use strict;
use Convert::Morse qw(as_ascii as_morse is_morse);
print "enter text for conversion: ";
my $input = <STDIN>;
chomp $input;
if (is_morse($input)) {
print as_ascii($input),"\n";
} else {
print as_morse($input),"\n";
}

Wednesday, April 2, 2008

Going on a trip? Pack and unpack to find your route...

I hate puns. Excuse me while I slap myself for the title on this post...

Today's entry is a script I wrote some time ago after I'd explained how subnets work, and how to determine whether two IP addresses are on the same network or have to be routed. It was also an opportunity for me to use the pack and unpack functions of Perl, which I hadn't done before. The script takes three arguments -- two IP addresses (or host names, if your box will resolve them) and a netmask (in either decimal or hex notation -- both 255.255.254.0 and fffffe00 are acceptable). It then shows you the hexadecimal and ASCII representations of the network each host/IP is on, and tells you whether you would be routing between the two (originally it only returned "route" or "no routing," but I wanted to see the network numbers, so the output is currently a bit redundant).

Without further ado:


#!/usr/bin/perl
use strict;
my ($address1, $address2, $mask, $and1, $and2);
if (@ARGV != 3) { die "should be called with three arguments: source, destination, and netmask\n"}

$address1 = $ARGV[0];
$address2 = $ARGV[1];
$mask = $ARGV[2];
unless ($address1 =~ /^\d+\.\d+\.\d+\.\d+$/) { # not a dotted quad, so resolve it
$address1 = join(".",unpack("C"x4,gethostbyname($address1)));
}
unless ($address2 =~ /^\d+\.\d+\.\d+\.\d+$/) {
$address2 = join(".",unpack("C"x4,gethostbyname($address2)));
}
if ($mask =~ /^[0-9a-f]{8}$/i) { # hex formatted mask (e.g., fffffe00)
$mask = join(".", map { hex } $mask =~ /../g); # two hex digits per octet
}

# unpack to hex format so it is printable -- e.g., aa0bc000 (just doing for debugging)
$and1 = unpack "H*",(pack("C"x4,(split(/\./,$address1))) & pack("C"x4,(split(/\./,$mask))));
$and2 = unpack "H*",(pack("C"x4,(split(/\./,$address2))) & pack("C"x4,(split(/\./,$mask))));
print "network 1 $and1 " . hex(substr($and1,0,2)) . "." . hex(substr($and1,2,2)) . "." . hex(substr($and1,4,2)) . "." . hex(substr($and1,6,2)) . "\n";
print "network 2 $and2 " . hex(substr($and2,0,2)) . "." . hex(substr($and2,2,2)) . "." . hex(substr($and2,4,2)) . "." . hex(substr($and2,6,2)) . "\n";
if ($and1 eq $and2) { print "no routing\n" } else { print "route\n"}
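The core of the trick distills to a few lines: pack each dotted quad into four raw bytes, bitwise AND it with the packed mask, and compare the results. A minimal sketch:

```perl
use strict;
use warnings;

# reduce an IP to its network: pack four octets into raw bytes,
# AND with the packed mask, render as 8 hex digits
sub network {
    my ($ip, $mask) = @_;
    return unpack("H8", pack("C4", split(/\./, $ip)) & pack("C4", split(/\./, $mask)));
}

# 10.1.2.3 and 10.1.3.9 land on the same /23 (third octet: 2 & 254 == 3 & 254)
if (network("10.1.2.3", "255.255.254.0") eq network("10.1.3.9", "255.255.254.0")) {
    print "no routing\n";
} else {
    print "route\n";
}
```

Perl's `&` on two packed strings does a byte-by-byte AND, which is exactly the masking operation a router performs.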

Tuesday, April 1, 2008

Formatting

Sorry about the formatting of the script on the previous post. I'll try to make it look nicer (and more cross browser consistent), but if you want a copy I can email it to you. Just let me know.

Always have a backup...

I don't know about you, but I've been known to make a mistake every now and again. After a particularly egregious oops moment, I wrote stamp.pl. This script doesn't do anything particularly impressive -- it just makes a date-stamped copy of any file you pass in -- but it's saved my bacon more than once. I find it particularly useful to have in the "Send To" right-click menu in Windows Explorer (although it's in all of my home directories, and in my path on Solaris -- I use it everywhere).

I'm attaching the whole script -- it's bigger than it strictly needs to be, but you get to see my template for building anything in perl. If you want to use it, you'll need to replace XYZ with a valid hostname for your environment. You can also run perldoc or pod2html on the script to see the documentation.

#!/usr/bin/perl

=head2 stamp.pl

=head4 Main Comment Block

##########################################################################
## Script Name : stamp.pl (may be called stamp w/o the .pl) ##
## Created : 01/31/2006 ##
## Author : John McDevitt ##
## Function : date stamp a file ##
## : ##
## Usage : stamp.pl file ##
## : ##
## : ##
## Notes : ##
## : include debugging messages by calling debug_msg ##
## : e.g., debug_msg("about to do something weird"); ##
## : can have multiple levels of debug info -- see the ##
## : mailto code in the do_setup routine for examples. ##
## : ##
## Update Log : 2/6/2005 -- add $ to pattern match to anchor it to ##
## : the end of the file path. ##
## : ##
##########################################################################

=cut

use strict;
use Getopt::Long;
use IO::File;
use Fcntl;    # supplies the O_RDWR|O_CREAT|O_EXCL constants used below
use POSIX qw(tmpnam);
use File::Copy;

my ($opt_debug,$opt_mailto,$opt_help);
my ($hostname,$temp_file_name,$temp_file_handle);
my ($oldfile,$newfile,$filedir,$filebase,$fileext,$date);
my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst);
my ($smtp);
do_setup();

##########################################################################
## Real code follows ##
##########################################################################

$oldfile = shift(@ARGV) || die "usage: stamp filename\n";
if ($oldfile =~ /(\w+)\.(\w+)$/){
$filedir = $`;
$filebase = $1;
$fileext = $2;
} elsif ($oldfile =~ /(\w+)$/){
$filedir = $`;
$filebase = $1;
}
if ($fileext) {
debug_msg("directory is $filedir, base is $filebase, extension is $fileext");
} else {
debug_msg("directory is $filedir, base is $filebase");
}
($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time);
$year = $year + 1900;
$mon = sprintf("%02d", $mon + 1); # localtime months are 0-11
$mday = sprintf("%02d", $mday);
$hour = sprintf("%02d", $hour);
$min = sprintf("%02d", $min);
if ($fileext) {
$newfile = $filedir . $filebase . "_" . $year . $mon . $mday . "_" . $hour . $min . "." . $fileext;
} else {
$newfile = $filedir . $filebase . "_" . $year . $mon . $mday . "_" . $hour . $min;
}
debug_msg("new file is $newfile");
copy($oldfile, $newfile);

##########################################################################
## Real code precedes ##
##########################################################################

##########################################################################
## Setup code follows ##
##########################################################################

sub do_setup{
######################################################################
## GetOptions is a function exported from Getopt::Long. it is like ##
## using the perl -s flag, but is more robust in its parsing, and ##
## simpler to implement. Options must be specified to uniqueness. ##
## A colon makes a value optional and = makes a value required. ##
## Absence of either is for a switch. i and s after a colon or = ##
## indicate integer and string types. If a value isn't provided for##
## a string type, the variable gets '', if one isn't provided for an##
## integer, it gets 0. ##
######################################################################
GetOptions ('debug+' => \$opt_debug,'mailto:s' => \$opt_mailto, 'help' => \$opt_help);

debug_msg("debugging enabled");
if ($opt_mailto) {
$hostname = `hostname`;
if ($opt_debug > 1) { debug_msg("in mailto code before resetting stdout/stderr"); }
if ($opt_debug > 1) { debug_msg("debugging level is $opt_debug\n"); }
###################################################
## get a file handle to a temp file where we know##
## the file name. set it to autoflush so output ##
## isn't buffered -- need this so we know if any ##
## prints have happened when getting ready to ##
## shut down. ##
###################################################
do { $temp_file_name = tmpnam() } until $temp_file_handle = IO::File->new($temp_file_name, O_RDWR|O_CREAT|O_EXCL);
$temp_file_handle->autoflush(1);
if ($opt_debug) { debug_msg("temp file name is $temp_file_name"); }

###################################################
## the END{} block is executed when perl exits ##
## even if it is as a result of a die function or##
## from an internally generated exception, e.g. ##
## when you try to call an undefined function ##
###################################################
END {
if ($temp_file_name && -s $temp_file_name) {
use Net::SMTP;
$smtp = Net::SMTP->new('mailhost.XYZ.com');
if ($ENV{USERNAME}){
$smtp->mail($ENV{USERNAME});
} else {
$smtp->mail("$hostname");
}
$smtp->to($opt_mailto);
$smtp->data();
$smtp->datasend("To: $opt_mailto\n");
$smtp->datasend("\n");
seek($temp_file_handle,0,0) or die "seek: $!";
$smtp->datasend(<$temp_file_handle>);
$smtp->dataend();
$smtp->quit;
}
if ($temp_file_name && -e $temp_file_name) {unlink($temp_file_name) or die "Couldn't unlink $temp_file_name : $!";}
}
###################################################
## redirect standard out and error to temp file ##
###################################################
*STDOUT = *$temp_file_handle;
*STDERR = *$temp_file_handle;
debug_msg("in mailto code. sending to $opt_mailto");
}
if ($opt_help) {
print "Usage: $0 [-d|--debug] [-m=address|-mailto=address] filename\n";
print "-d options enable debugging. Can be repeated for more verbosity\n";
print "-m options enable emailing stdout and stderr\n";
print "-h Print this message and exit\n";
exit;
}
}

sub debug_msg{
if ($opt_debug) {
print "DEBUG: @_\n";
}
}

##########################################################################
## Setup code precedes ##
##########################################################################

=head2 SYNOPSIS

stamp.pl [-d] [-m john.mcdevitt@XYZ.com] filename

=head2 DESCRIPTION

creates a copy of the input file in the same location with a date/time stamp

=head2 EXAMPLES

=over 4

=item 1.

C<./stamp.pl -h>


Returns help message:
Usage: ./stamp.pl [-d|--debug] [-m=address|-mailto=address] filename
-d options enable debugging. Can be repeated for more verbosity
-m options enable emailing stdout and stderr
-h Print this message and exit


=item 2.

C<./stamp.pl test.txt>


copies test.txt to test_20060131_1349.txt

=item 3.

C<./stamp.pl test>


copies test to test_20060131_1349

=back

=cut

Monday, March 31, 2008

Generating a 64 digit hex key...

In fewer than 64 keystrokes.
perl -e 'print ((0..9,a..f)[rand(16)]) for 1..64; print "\n"'

That's all one line in case it wraps for you. Credit to Brian for the code.
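Spelled out as a script, the same idea looks like this:

```perl
use strict;
use warnings;

# the one-liner above, expanded: draw 64 digits at random from the
# sixteen hex characters
my @hex_digits = (0 .. 9, 'a' .. 'f');
my $key = join '', map { $hex_digits[rand @hex_digits] } 1 .. 64;
print "$key\n";
```

`rand @hex_digits` evaluates the array in scalar context (16), and the fractional result is truncated when used as an index, so each digit is an even draw from the sixteen.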

Friday, March 28, 2008

Aren't all end of lines equal?

If you've ever opened a text file from Unix in Notepad, you already know the answer: no, they are not. Windows represents the end of line (EOL) with CRLF (0x0d 0x0a), whereas Unix's EOL is just LF (0x0a). Usually this isn't a problem -- view your log files over Samba using TextPad or another editor that's smarter than Notepad and you're good to go. But what if you need to convert from one format to the other? Enter Perl. On a Unix box, to "correct" the Windows EOL, run perl -pi -e "s/\r//" filename. Translation: run perl, doing an in-place edit on the input file (the -i), printing each line of the file after it's edited (the -p), executing (the -e) a substitution (s///) of every carriage return (\r) with nothing. On Windows, to "correct" the Unix EOL, run perl -pi.bkp -e '' filename. This translation is a bit odder. The Windows version of perl won't do an in-place edit without a backup (the .bkp). Windows Perl recognizes both versions of EOL but always prints the Windows one, so the edit boils down to reading and printing each line.
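The substitution is easy to sanity-check in a few lines of Perl:

```perl
use strict;
use warnings;

# what `perl -pi -e "s/\r//"` does to each line, shown on one sample
my $windows_line = "some log entry\r\n";   # CRLF, as Windows would write it
(my $unix_line = $windows_line) =~ s/\r//; # copy, then strip the carriage return
print $unix_line eq "some log entry\n" ? "converted\n" : "unchanged\n";
```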