RAID not spinning down – is mdadm issuing syncs?

I have an issue with an archival server running a software RAID 5 (md). The server is accessed only every couple of days, so I want its disks to spin down after a period of inactivity.
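For context, the spin-down mechanism itself is not the issue. A standby timeout can be set per disk with hdparm; a minimal sketch of what I mean (device name and timeout value are illustrative):

hdparm -S 242 /dev/sdb   # 242 = 2 x 30 min, i.e. spin down after 1 hour idle

With writes arriving every few seconds, as shown in the trace below, such a timeout is never reached.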

Disclaimer: I understand that spinning down disks is generally considered bad practice. I am not asking for advice on disk lifespan; I am asking for help making the spin-downs happen. Thank you.

The file system is ext4. I have increased the ext4 commit interval via the commit= mount option and verified that there is no activity from jbd2. I have also configured systemd-journald for volatile storage and disabled all other non-essential logging. I have verified that no log files are being written and that no user-space process generates any I/O. Swap is off.
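For reference, the relevant bits of the configuration look roughly like this (mount point, device, and the commit interval are illustrative):

# /etc/fstab entry: raise the ext4 journal commit interval (default is 5 seconds)
/dev/md0  /srv/archive  ext4  defaults,commit=600  0  2

# /etc/systemd/journald.conf: keep the journal in RAM only
[Journal]
Storage=volatile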

Still, iosnoop shows periodic writes to sectors 2056, 2064, and 2088 on the disks in the array. I suspect this is where the md superblock or related metadata is stored. My working theory is that md periodically marks the array as in-sync (or something similar), but I have not been able to find any relevant information online.
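To check where the md metadata lives and whether a write-intent bitmap is in play, the array can be examined like this (device names are examples; I have omitted the output):

mdadm --examine /dev/sdb   # superblock version, offsets, and any internal bitmap on one member
mdadm --detail /dev/md0    # array-wide state
cat /proc/mdstat           # quick overview; shows a "bitmap:" line if one is active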

Does anyone have an alternative theory, or an idea of how I can stop this I/O?

Here is an iosnoop trace for the first disk in the array (device 8,16, i.e. /dev/sdb):

# iosnoop-perf -s -d "8,16"
Tracing block I/O. Ctrl-C to end.
STARTs          COMM         PID    TYPE DEV      BLOCK        BYTES     LATms
5068.962692     md0_raid5    249    FF   8,16     18446744073709551615 0          0.35
5068.963054     <idle>       0      WFS  8,16     2064         4096      21.28
5068.990201     md0_raid5    249    FF   8,16     18446744073709551615 0          0.40
5068.990619     <idle>       0      WFS  8,16     2056         512       18.70
5069.017432     kworker/1:1H 216    FF   8,16     18446744073709551615 0          0.42
5069.017866     <idle>       0      WFS  8,16     2088         3072      24.86
5069.442687     md0_raid5    249    FF   8,16     18446744073709551615 0          0.40
5069.443104     <idle>       0      WFS  8,16     2064         4096       7.90
5069.467942     md0_raid5    249    FF   8,16     18446744073709551615 0          0.40
5069.468360     <idle>       0      WFS  8,16     2056         512       57.62
5074.578771     md0_raid5    249    FF   8,16     18446744073709551615 0          0.41
5074.579195     <idle>       0      WFS  8,16     2064         4096      21.82
5084.818728     md0_raid5    249    FF   8,16     18446744073709551615 0          0.41
5084.819146     <idle>       0      WFS  8,16     2088         3072      31.92
5125.794841     md0_raid5    249    FF   8,16     18446744073709551615 0          0.35
5125.795205     <idle>       0      WFS  8,16     2064         4096      22.49
5125.823437     md0_raid5    249    FF   8,16     18446744073709551615 0          0.41
5125.823855     <idle>       0      WFS  8,16     2056         512       18.83
5125.850640     kworker/1:1H 216    FF   8,16     18446744073709551615 0          0.42
5125.851071     <idle>       0      WFS  8,16     2080         4096       8.33
5125.859599     kworker/1:1H 216    FF   8,16     18446744073709551615 0          0.42
5125.860026     <idle>       0      WFS  8,16     2064         4096       7.67
5126.146833     md0_raid5    249    FF   8,16     18446744073709551615 0          3.50
5126.150353     <idle>       0      WFS  8,16     2064         4096       8.98
5126.159498     md0_raid5    249    FF   8,16     18446744073709551615 0          4.39
5126.163913     <idle>       0      WFS  8,16     2056         512       53.75
5131.410989     md0_raid5    249    FF   8,16     18446744073709551615 0          0.41
5131.411412     <idle>       0      WFS  8,16     2064         4096      22.99
5141.650858     md0_raid5    249    FF   8,16     18446744073709551615 0          0.41
5141.651276     <idle>       0      WFS  8,16     2064         4096      16.40
5141.667708     <idle>       0      FF   8,16     18446744073709551615 0          0.29
5141.668012     <idle>       0      WFS  8,16     2080         4096       7.95
