Can a large number of network hardware interrupts cause a machine to be underutilized?

I don’t know much about networking, sorry.
I’m building a client/server application where the server streams a lot of data to the client, and I’m trying to maximise throughput (over TCP).

If I run both client and server on the same machine, I get pretty high throughput. However, when going over the network card (the two machines are on the same LAN), performance drops a lot. The thing is, neither the bandwidth nor the CPU cores are fully utilised.

I was wondering whether the large number of hardware interrupts (several thousand per second) could be what’s causing the resources to be underutilised. The client performs some processing on the incoming stream, so I would like to fully utilise the CPU and/or bandwidth.

How would you go about diagnosing and improving that situation?
Does the size of the messages sent by the server impact the number of interrupts?

Answer

Unfortunately that’s a bit vague to start with. Define “performance drops a lot”.
This is one of those situations where there are about 20 different things to check, both hardware and software.

Hardware:
– do you have a decent NIC on both ends (Intel/Broadcom, not Realtek)?
– do you have a managed switch?
– could the switch be struggling to keep up with switching lots of small packets?
– have you tried swapping network cables?
– have you confirmed the link is gigabit?
– perhaps your hard disks can’t keep up with the data stream? Basic consumer hard drives can be maxed out by gigabit speeds.
– do you have a network card that does TOE (TCP offload engine)? Offload reduces per-packet work on the host (see the interrupt-rate sketch after this list).
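Since the original question was specifically about interrupt load: on Linux you can get a rough interrupt rate for the NIC out of /proc/interrupts, and ethtool -C lets you adjust interrupt coalescing if the rate really turns out to be the problem. A minimal Python sketch of the measurement, assuming Linux; “eth0” is a placeholder for your actual interface name (check with `ip link`):

```python
# Rough sketch: sample /proc/interrupts twice and report the NIC's
# interrupt rate. Linux-only; "eth0" is a placeholder interface name.
import time

def nic_interrupt_count(ifname: str) -> int:
    """Sum interrupt counts across all CPUs for lines mentioning ifname."""
    total = 0
    with open("/proc/interrupts") as f:
        cpu_count = len(f.readline().split())  # header row: CPU0 CPU1 ...
        for line in f:
            if ifname in line:
                fields = line.split()
                # fields[0] is the IRQ number ("42:"); the next
                # cpu_count fields are the per-CPU counters.
                total += sum(int(x) for x in fields[1:1 + cpu_count])
    return total

if __name__ == "__main__":
    before = nic_interrupt_count("eth0")
    time.sleep(5)
    after = nic_interrupt_count("eth0")
    print(f"~{(after - before) / 5:.0f} interrupts/sec for eth0")
```

A few thousand interrupts per second is not unusual for a busy gigabit NIC; what matters is whether the rate drops (and throughput rises) when you coalesce more aggressively.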

Software:
– are you using MTU 9000, a.k.a. jumbo frames, on the cards?
– have you looked at tuning receive windows or buffers? (see the socket-buffer sketch after this list)
– What OS?
– If Windows, do you have AV/firewall software running?
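On the buffer question: a quick experiment is to request a bigger receive buffer on the client socket before connecting. A minimal sketch, assuming Python on the client side; the 4 MB figure and the address/port are arbitrary examples, and on Linux the kernel clamps the request to net.core.rmem_max:

```python
# Minimal sketch of enlarging the client's receive buffer.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Request a larger kernel receive buffer *before* connecting, so the
# TCP window can scale up from the start of the connection.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)

sock.connect(("192.0.2.10", 9000))  # placeholder server address/port

# Read back what the kernel actually granted (Linux reports double
# the requested value because it includes bookkeeping overhead).
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"effective receive buffer: {granted} bytes")
```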

Application level/Data:
– is it being encrypted? (tunneled over SSH?)
– What protocol? FTP/CIFS/rsync/HTTP/NFS?
– what is the size of the files? Thousands of small files, or one really large one? (see the batching sketch after this list)
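On the asker’s question about message size: yes, it can matter. Each tiny send can end up as its own undersized packet, and more packets generally means more receive-side interrupts; batching records into large writes lets TCP fill full-size segments. A hypothetical sender-side sketch (the names, sizes, and address here are illustrative, not from the original post):

```python
# Sketch: batch many small records into large writes instead of one
# send() per record, so TCP can emit full-size segments.
import socket

CHUNK = 256 * 1024  # flush threshold; tune for your workload

def stream_records(sock: socket.socket, records):
    """Accumulate small records and send them in ~CHUNK-sized writes."""
    buf = bytearray()
    for rec in records:
        buf += rec
        if len(buf) >= CHUNK:
            sock.sendall(buf)
            buf.clear()
    if buf:  # flush whatever is left at the end
        sock.sendall(buf)

# Example: 100k tiny 64-byte records become ~25 large writes.
if __name__ == "__main__":
    s = socket.create_connection(("192.0.2.10", 9000))  # placeholder
    stream_records(s, (b"x" * 64 for _ in range(100_000)))
    s.close()
```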

There are so many places to start, but those are some questions you need to answer for yourself.

Once you get to the software level, I would recommend running iperf between the two machines to see the maximum raw throughput you can get. That will tell you the highest possible speed. Then you can compare it to what your application is giving you.
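If you don’t have iperf handy, a crude raw-TCP test is easy to improvise. The sketch below is a stand-in for the same idea, not a replacement for iperf; port 5001 just echoes iperf’s traditional default. Run it with the argument `server` on one machine and `client <host>` on the other:

```python
# Quick-and-dirty raw TCP throughput check, in the spirit of iperf.
import socket
import sys
import time

PORT = 5001           # iperf's traditional default port
BLOB = b"\0" * 65536  # 64 KiB writes

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        total, start = 0, time.monotonic()
        while data := conn.recv(65536):  # b"" when the client closes
            total += len(data)
        secs = time.monotonic() - start
        conn.close()
        print(f"received {total / 1e6:.1f} MB in {secs:.1f}s "
              f"= {total * 8 / secs / 1e6:.0f} Mbit/s")

def client(host, seconds=10):
    with socket.create_connection((host, PORT)) as s:
        end = time.monotonic() + seconds
        while time.monotonic() < end:
            s.sendall(BLOB)

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[1])
```

If this (or iperf) saturates the link while your application doesn’t, the bottleneck is in your application or its protocol, not the network.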

Attribution
Source: Link, Question Author: Clement, Answer Author: DuPie