Friday, September 11, 2015

Iostat


A lot of the time we need to find out disk I/O utilization, measure whether all disks are performing well, and monitor system input/output device loading by observing the time the physical disks are active in relation to their average transfer rates. In the example below I ran dd against disk "sdb" and ran iostat in the background; it can be seen that the disk is busy, i.e. %util is almost 100%.

dd bs=1M count=4096 if=/dev/zero of=/dev/sdb oflag=direct    ( Directly writing to disk )

iostat was run with the options below:
 -m = display numbers in MB
 -t = print a timestamp
 -x = display extended statistics

 iostat -m -t -x 1
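
One way to run this capture ( a minimal sketch; /tmp/iostat.log is only an example path ) is to put iostat in the background while dd runs:

 iostat -m -t -x 1 > /tmp/iostat.log &                         # collect stats every second in the background
 dd bs=1M count=4096 if=/dev/zero of=/dev/sdb oflag=direct     # WARNING: overwrites /dev/sdb
 kill %1                                                       # stop the background iostat
 less /tmp/iostat.log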

09/11/2015 02:28:39 PM
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    2.27   41.48    0.00   56.25

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb               0.00     0.00    0.00   44.00     0.00    22.00  1024.00     1.57   34.82  22.68  99.80
sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sde               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00



Line 1 : The first line prints the timestamp
Lines 2 and 3 : Print CPU stats/utilization; here the CPU itself is still mostly idle (%idle 56.25), though a large share of time is spent waiting on I/O (%iowait 41.48)

Lines 5 to n : Print various stats for each disk ( each stat is explained below ); I will shortly explain which ones matter and what to infer from these numbers.


  • rrqm/s : The number of read requests merged per second that were queued to the hard disk
  • wrqm/s : The number of write requests merged per second that were queued to the hard disk
  • r/s : The number of read requests per second
  • w/s : The number of write requests per second
  • rsec/s : The number of sectors read from the hard disk per second
  • wsec/s : The number of sectors written to the hard disk per second (with -m, as used above, these show up as rMB/s and wMB/s instead)
  • avgrq-sz : The average size (in sectors) of the requests that were issued to the device.
  • avgqu-sz : The average queue length of the requests that were issued to the device. If someone complains about I/O performance issues while avgqu-sz is low, it is application-specific and can usually be resolved with more aggressive read-ahead, fewer fsyncs, etc. One interesting point: avgqu-sz, await, svctm and %util are interdependent ( await = avgqu-sz * svctm / (%util/100) ); see the worked example after this list.
  • await : The average time (in milliseconds) for I/O requests issued to the device to be served. This includes the time the requests spend in the queue and the time spent servicing them. If there is not a lot of I/O being generated but requests are still pending, it could be because the disk is slow due to a hardware issue.
  • svctm : The average service time (in milliseconds) for I/O requests that were issued to the device
  • %util : Percentage of CPU time during which I/O requests were issued to the device (bandwidth utilization for the device). Device saturation occurs when this value is close to 100%. Also, this value excludes any kind of cache: if a request can be served from cache, the chance is negligible that it will show up in %util, unlike in the other values.
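
As a quick sanity check of that relation, plugging in the sdb numbers from the sample output above ( avgqu-sz 1.57, svctm 22.68, %util 99.80 ) gives a value close to the reported await of 34.82 ms; a throwaway awk one-liner for the arithmetic:

 awk 'BEGIN { print 1.57 * 22.68 / (99.80/100) }'     # prints ~35.68, close to the reported await of 34.82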


If the below values from the iostat output are high, it means the specific disk is under pressure; clearly, in the above example, sdb is being utilized heavily. A small filter sketch follows the list.
  1. The average service time (svctm)
  2. The percentage of CPU time during which I/O requests were issued (%util)
  3. A hard disk consistently reporting high reads/writes (r/s and w/s)
  4. await is continuously high ( very important )
  5. avgqu-sz is continuously high ( very important )
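
As a rough sketch ( assuming device names start with "sd" and %util is the last column, as in the layout above ), disks under pressure can be picked out of a running iostat like this:

 iostat -x 1 | awk '/^sd/ && $NF+0 > 90 { print $1, "%util=" $NF }'     # flag devices busier than 90%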

Note :- The " -n " option displays the network filesystem (NFS) report
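
For example, to watch the NFS report with a timestamp every second ( assuming the installed sysstat build supports -n ):

 iostat -n -t 1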


Commonly accepted averages:

Rotational Speed (rpm)    IOPS
5400                      50-80
7200                      75-100
10k                       125-150
15k                       175-210
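
To compare a disk against these numbers, the observed IOPS can be approximated as r/s + w/s from the extended output; a rough sketch assuming the column layout shown above ( r/s and w/s are the 4th and 5th fields ):

 iostat -x 1 | awk '/^sd/ { print $1, "IOPS=" ($4 + $5) }'     # e.g. sdb above shows 0 + 44 = 44 IOPS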

