How does this benchmark calculation work?

I am testing disk I/O performance on a server of mine that will eventually run PostgreSQL. I am following this web site to perform my benchmarks.

The benchmark consists of running dd to read and write N blocks of 8 kB each (the block size PostgreSQL uses). N is calculated as follows:

N = 250,000 * gigabytes of RAM 

So, with 16 GB of RAM, I get 4 million blocks to read/write. That is fine, but…

Where does the magic number of 250,000 come from?
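
For reference, the kind of dd run this implies looks roughly like the following. This is only a sketch; the exact flags and file paths used on the site may differ.

    # Sketch of the dd test described above; flags and paths are assumptions,
    # not the site's exact invocation.
    RAM_GB=16
    N=$(( RAM_GB * 250000 ))      # 250,000 blocks per GB of RAM -> 4,000,000 here

    # Write test: N blocks of 8 kB, flushed to disk before dd exits
    dd if=/dev/zero of=/tmp/ddtest bs=8k count=$N conv=fdatasync

    # Read test: read the same file back, discarding the data
    dd if=/tmp/ddtest of=/dev/null bs=8k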

Answer

Edit: Corrected per Eviler_Elf:

That’s for converting between blocks and GB:

    1 GB / 8 kB per block = 125,000 blocks; 125,000 × 2 = 250,000 blocks
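
Spelled out, assuming decimal units (1 GB = 1,000,000 kB), and assuming the factor of 2 is there so the test file ends up twice the size of RAM and cannot be served from the OS cache:

    # Arithmetic behind the 250,000-per-GB figure (decimal units assumed).
    RAM_GB=16
    BLOCKS_PER_GB=$(( 1000000 / 8 * 2 ))   # 1 GB / 8 kB = 125,000; x2 = 250,000
    N=$(( RAM_GB * BLOCKS_PER_GB ))
    echo "$N"                              # 4000000 for a 16 GB machine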

Attribution
Source: Link, Question Author: Eviler_Elf, Answer Author: gsiems
