latency / bandwidth optimization
19 years 9 months ago #7435
by carmatic
latency / bandwidth optimization was created by carmatic
hello
in a network connection, can latency be optimized at the cost of bandwidth, and vice versa?
19 years 9 months ago #7439
by nske
Replied by nske on topic Re: latency / bandwidth optimization
I don't see how these two would be considered inversely related quantities at the network level (I mean as a direct result of the way IP over Ethernet works, since you are probably referring to that kind of network).
On the contrary, while they do not depend on exactly the same factors, the two quantities tend to move in the same direction. For example, a sudden increase in latency (resulting from collisions, say, or from inadequate system resources on one end) can limit the maximum usable bandwidth between the parties involved. Similarly, too much traffic relative to the capacity of the link introduces higher latency for everyone.
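To make that concrete: for a window-based protocol like TCP, the throughput of a single connection is capped at roughly the window size divided by the round-trip time, so higher latency directly shrinks the usable bandwidth no matter how fat the pipe is. A rough Python sketch (the 64 KB window and the RTT values are just example numbers, not measurements from any real link):

# Rough illustration: single-connection TCP throughput is bounded by
# window / RTT, so rising latency eats into usable bandwidth.

def max_tcp_throughput(window_bytes, rtt_seconds):
    """Upper bound on throughput for one TCP connection, in bits per second."""
    return (window_bytes * 8) / rtt_seconds

window = 64 * 1024           # classic 64 KB receive window (example value)
for rtt_ms in (1, 20, 100):  # example round-trip times
    bps = max_tcp_throughput(window, rtt_ms / 1000.0)
    print("RTT %4d ms -> at most %.1f Mbit/s" % (rtt_ms, bps / 1e6))

With a 64 KB window, going from 1 ms to 100 ms of round-trip latency drops the ceiling from about 524 Mbit/s to about 5 Mbit/s, even though the link itself is unchanged.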
Moving up to the higher OSI layers, things are less clear-cut: more protocol implementations exist, and in some of them we see features that could produce what the end user would perceive as a "trade-off between bandwidth and latency".
Obviously, every protocol introduces some overhead, which in many cases could be interpreted as latency. Some protocols use or support compression to pass more data using less bandwidth. This saves bandwidth at the cost of delay (incurred by compressing and decompressing), but in most cases the delay is too small to be of concern, and you can't do much about it anyway. Whenever you have the option of whether to use data compression, enabling it implies some increase in latency for the benefit of bandwidth, but it is usually in your interest to use it.
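If you want to measure that trade-off yourself, here is a quick Python sketch using the standard zlib module (the payload is an arbitrary, very repetitive sample; real application data will compress differently):

import time
import zlib

payload = b"some fairly repetitive application data " * 5000  # arbitrary sample

start = time.perf_counter()
compressed = zlib.compress(payload, 6)         # CPU time spent here is the added latency
elapsed = time.perf_counter() - start

print("original:   %d bytes" % len(payload))
print("compressed: %d bytes (%.0f%% of original)"
      % (len(compressed), 100.0 * len(compressed) / len(payload)))
print("compression took %.2f ms" % (elapsed * 1000))

assert zlib.decompress(compressed) == payload  # the data survives the round trip

The milliseconds spent in compress/decompress are the latency you pay; the reduction in bytes on the wire is the bandwidth you save.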
Another technique that could be considered to provide decreased latency to the end user "at the cost of bandwidth" is pre-fetching/pre-caching: roughly, fetching data before you request it, so that it is already there when (and if) you do, usually along with other data that you will never request at all. So you can consider this a waste of bandwidth in favour of potentially decreased latency.
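As a toy illustration of the idea (the fetch function here just sleeps to fake network delay; it is a hypothetical stand-in for whatever your application actually retrieves):

import time
from concurrent.futures import ThreadPoolExecutor

def fetch(resource):
    """Stand-in for a real network fetch; the 100 ms sleep fakes network delay."""
    time.sleep(0.1)
    return "data for %s" % resource

executor = ThreadPoolExecutor(max_workers=2)

# The user asks for page 1; we speculatively fetch page 2 in the background.
# If they never ask for page 2, that bandwidth was spent for nothing.
current = fetch("page-1")
prefetched = executor.submit(fetch, "page-2")

time.sleep(0.2)  # ... user reads page 1 for a while ...

# When (if) they ask for page 2, it is already there: near-zero perceived latency.
start = time.perf_counter()
next_page = prefetched.result()
print("page-2 delivered in %.1f ms" % ((time.perf_counter() - start) * 1000))

executor.shutdown()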
A technique that does not exactly exchange bandwidth for latency, but can be used to balance and guarantee some quality in both, is QoS. But that's another chapter.
Hopefully you get the idea: bandwidth and latency are not inversely related by nature. Using certain techniques, one can be given priority over the other, or be spent to benefit the other, but those techniques are definitely not simple or general-purpose, and their effectiveness is quite limited and focused.
I think that is as specific as it can get, since we are talking in theory. If you have something specific in mind that you want to do, you'll probably get more specific replies if you describe it.