How to mount dd images under Linux

For a raw filesystem try:

fdisk -l harddrive.img
mount -o ro,loop,offset=xxxxxxxxx harddrive.img /mnt/loop
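
The offset is the partition's start sector (from the fdisk output) multiplied by the sector size, usually 512 bytes. A minimal sketch of the arithmetic, assuming a hypothetical partition starting at sector 2048:

```shell
# Hypothetical values — read the real start sector and sector size
# from your own `fdisk -l harddrive.img` output.
START_SECTOR=2048
SECTOR_SIZE=512
OFFSET=$((START_SECTOR * SECTOR_SIZE))
echo "mount -o ro,loop,offset=$OFFSET harddrive.img /mnt/loop"
# prints: mount -o ro,loop,offset=1048576 harddrive.img /mnt/loop
```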

Or, for filesystems with volume groups etc., try:

losetup /dev/loop0 disk.img
kpartx -a /dev/loop0

Then to mount the first partition:

mount /dev/mapper/loop0p1 /mnt

Or to activate the volume group then mount the logical volume:

vgscan
vgchange -ay vg
mount /dev/vg/lv /mnt

Hope that helps.

Enable Apache’s inbuilt chroot functionality

This works on Apache versions 2.2.10 and later.
I'll presume you have a current working version of Apache serving files from /var/www/.

mkdir -p /chroot/var/

Required for PHP5 compatibility:

mkdir -p /chroot/var/lib/php5 /chroot/etc /chroot/usr/share
chown root:www-data /chroot/var/lib/php5
chmod 770 /chroot/var/lib/php5
cp /etc/localtime /chroot/etc/localtime
cp -R /usr/share/zoneinfo /chroot/usr/share/zoneinfo
cp -R /usr/share/apache2 /chroot/usr/share/apache2
mv /var/www /chroot/var/

To preserve compatibility and meet user/sysadmin expectations, symlink the old path back into place:

ln -s /chroot/var/www /var/www 

Enable Apache’s in-built chroot (Debian)

echo "ChrootDir /chroot" > /etc/apache2/conf.d/chroot
service apache2 restart
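
For context: as I understand the ChrootDir directive, httpd reads its configuration and opens its log files at startup and only then performs the chroot(2), so DocumentRoot is resolved inside the chroot at request time. That is why the existing config can stay as-is — a sketch of the resulting conf.d snippet:

```apache
# /etc/apache2/conf.d/chroot
ChrootDir /chroot
# DocumentRoot /var/www now resolves to /chroot/var/www when requests
# are served; no DocumentRoot change is needed.
```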

Enable Apache’s in-built chroot (Redhat/CentOS/Fedora)

echo "ChrootDir /chroot" >> /etc/httpd/conf/httpd.conf
semanage fcontext -a -t httpd_sys_content_t "/chroot/var/www(/.*)?"
restorecon -R /chroot/var/www
service httpd restart

Now test your damn website! Logfiles are your friend for troubleshooting any bugs :-)

How to measure IOPS with Linux

So many times I need to measure the IOPS of a Linux disk/storage system. While there are many tools for the job, they just don't seem to give you a single number. For example, Splunk indexers require 1200+ IOPS according to the hardware recommendation guides, but how do you find out if you're anywhere close to that number? Use 'bonnie++', 'iozone' or perhaps 'fio'? Any of those tools will create the type of read/write sequence you would like to replicate - but where is the damn magic number???

The two easiest ways are:
Method #1:
Run iozone -a (or bonnie++) in one screen, then in another session/terminal run nmon, pressing D (capital D) to get disk stats, and take the number from the Xfers column. This is your magic number (your IOPS reading).

┌nmon─14i─────────────────────Hostname=reddragon─────Refresh= 2secs ───19:51.57─
│ Disk I/O ──/proc/diskstats────mostly in KB/s─────Warning:contains duplicates─
│DiskName Busy    Read    Write       Xfers   Size  Peak%  Peak-RW    InFlight
│sda       99%    699.9     14.0KB/s  178.0   4.0KB  493%    3658.8KB/s   1   
│sda1       0%      0.0      0.0KB/s    0.0   0.0KB    0%       0.0KB/s   0   
│sda2      99%    699.9     14.0KB/s  178.0   4.0KB  493%    3658.8KB/s   1   
│dm-0       0%      0.0      0.0KB/s    0.0   0.0KB    0%       0.0KB/s   0   
│dm-1      99%    699.9     14.0KB/s  178.5   4.0KB  494%    3658.8KB/s   1   
│dm-2       0%      0.0      0.0KB/s    0.0   0.0KB   76%    2553.5KB/s   0   
│Totals Read-MB/s=2.1      Writes-MB/s=0.0      Transfers/sec=534.4 

In the above example I'm getting about 178 IOPS for my disk 'sda'.
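
As a sanity check, the Xfers figure is consistent with the other columns: transfers per second is just throughput divided by the average transfer size.

```shell
# sda read 699.9 KB/s and wrote 14.0 KB/s in 4.0 KB transfers:
awk 'BEGIN { printf "%.0f\n", (699.9 + 14.0) / 4.0 }'
# prints: 178
```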

Method #2:
Run fio with an appropriate workload (google how to use fio); while it's running it will actually tell you the IOPS.
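
The job file itself isn't shown below, so here's a sketch consistent with the output (the size and directory values are assumptions — adjust for your own system):

```ini
; random-read-test.fio — a minimal 4 KiB random-read job
[random-read]
rw=randread
bs=4k
size=128m
directory=/tmp
ioengine=sync
iodepth=1
```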

[root@reddragon ~]# fio random-read-test.fio 
random-read: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
fio-2.0.13
Starting 1 process
Jobs: 1 (f=1): [r] [85.1% done] [736K/0K/0K /s] [184 /0 /0  iops] [eta 00m:28s]

In this example I am getting 184 IOPS. Also, if you wait until fio finishes its run, you can read the IOPS figure from the summary. Eg.

random-read: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
fio-2.0.13
Starting 1 process
Jobs: 1 (f=1): [r] [98.9% done] [2224K/0K/0K /s] [556 /0 /0  iops] [eta 00m:02s]
random-read: (groupid=0, jobs=1): err= 0: pid=7239: Tue Feb 25 16:49:16 2014
  read : io=131072KB, bw=747406 B/s, iops=182 , runt=179578msec
    clat (usec): min=107 , max=117530 , avg=5473.62, stdev=4112.08
     lat (usec): min=107 , max=117531 , avg=5473.93, stdev=4112.08
    clat percentiles (usec):
     |  1.00th=[  245],  5.00th=[  302], 10.00th=[  370], 20.00th=[ 2480],
<SNIP>

As you can see: iops=182 - pretty consistent with the other results!
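
The summary numbers also cross-check: IOPS is just the reported bandwidth divided by the block size.

```shell
# fio reported bw=747406 B/s with a 4 KiB (4096-byte) block size:
echo $((747406 / 4096))
# prints: 182
```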

Stop puppet spamming /var/log/messages

Hate spam? Hate Puppet spam in /var/log/messages even more? Quick, buy now!

# Edit /etc/puppet/puppet.conf
# In the [agent] section add:
syslogfacility = local6

# Edit /etc/rsyslog.conf
# Do a quick check that local6 isn't already used somewhere else first:
grep -r local6 /etc/rsyslog.conf /etc/rsyslog.d/ 2>/dev/null
# Then add the line:
local6.*    /var/log/puppet/puppet.log
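
To actually stop the spam, also exclude local6 from the /var/log/messages rule in /etc/rsyslog.conf, otherwise the messages land in both files. The selector list below is the RedHat default; yours may differ:

```
*.info;mail.none;authpriv.none;cron.none;local6.none    /var/log/messages
```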

# Set up the log file and its permissions
mkdir -p /var/log/puppet
touch /var/log/puppet/puppet.log
chown puppet:puppet /var/log/puppet/puppet.log
chmod 640 /var/log/puppet/puppet.log
service rsyslog reload
# Restart the agent so it re-reads puppet.conf
service puppet restart

And you're done!

GlusterFS: host, is not befriended at the moment

When attempting to expand the distributed replicated file system across our cluster nodes, I struck this error message:

[root@index03 ~]# gluster volume replace-brick drvol01 index04:/srv/gluster/drbrk02 index05:/srv/gluster/drbrk01 start
index05, is not befriended at the moment

Which was strange, considering the previous 'peer probe' from another host to join the new server (index05) had been successful:

[root@index04 ~]# gluster peer probe index05
Probe successful 

I think the only clue was (on the last line):

[root@index03 drbrk02]# gluster peer status
Number of Peers: 4

Hostname: index02
Uuid: 31c0f246-0a8b-4c68-8ffa-42b2e6bb42ce
State: Peer in Cluster (Connected)

Hostname: index04
Uuid: ca61b983-b8fd-4fcc-848a-6c33f102cd5c
State: Peer in Cluster (Connected)

Hostname: index01
Uuid: b9bc474c-0017-4191-82b9-5760ba0d00fc
State: Peer in Cluster (Connected)

Hostname: index05
Uuid: c6a7cb83-4c1f-42ff-8421-bea9cc911d0f
State: Accepted peer request (Connected)
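
Any peer in a state other than "Peer in Cluster" can trigger errors like this. A quick hypothetical filter for spotting such peers (shown here against a saved snippet of the output; normally you would pipe `gluster peer status` straight in):

```shell
# Print any peer whose State line is not "Peer in Cluster".
awk -F': ' '/^Hostname:/ { h = $2 }
            /^State:/ && $2 !~ /^Peer in Cluster/ { print h " -> " $2 }' <<'EOF'
Hostname: index04
State: Peer in Cluster (Connected)

Hostname: index05
State: Accepted peer request (Connected)
EOF
# prints: index05 -> Accepted peer request (Connected)
```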

Anyway, I confirmed that there were no firewall rules blocking the requests (doco: Gluster Firewall Ports) and restarted the glusterd service on index05, which seems to have resolved the problem.

[root@index03 ~]# gluster peer status
Number of Peers: 4

Hostname: index02
Uuid: 31c0f246-0a8b-4c68-8ffa-42b2e6bb42ce
State: Peer in Cluster (Connected)

Hostname: index04
Uuid: ca61b983-b8fd-4fcc-848a-6c33f102cd5c
State: Peer in Cluster (Connected)

Hostname: index01
Uuid: b9bc474c-0017-4191-82b9-5760ba0d00fc
State: Peer in Cluster (Connected)

Hostname: index05
Uuid: c6a7cb83-4c1f-42ff-8421-bea9cc911d0f
State: Peer in Cluster (Connected)

In summary, I believe there were firewall rules in place when 'gluster peer probe index05' was run. The rules were removed, but the gluster daemon on index05 then required a restart to become active in the cluster.