Taking this discussion to another level.
I once tried to ask my networking professor this question. I don't believe she knew the answer, because she danced around the issue for five minutes talking about anything but what I asked about. :) Let's see if anyone else can do any better. The program that puts out an ICMP packet (ping) has a timer in it, right? When it gets a reply packet from the computer it was sent to, it stops the timer and figures out how long the response took. This leaves one area open for dispute: does the timer start when the software initiates the ping packet, or when the packet actually leaves the computer?
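For what it's worth, here's a minimal sketch of how a userspace ping could work (this is my own toy, not code from any real ping implementation; it assumes Python, raw sockets, and root privileges, and the function names and 32-byte payload are made up). Notice that both timestamps are taken in software, on either side of the send and receive calls:

    import os, socket, struct, time

    def checksum(data):
        # Internet checksum (RFC 1071); assumes even-length data.
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) + data[i + 1]
        total = (total >> 16) + (total & 0xFFFF)
        total += total >> 16
        return ~total & 0xFFFF

    def ping_once(host):
        sock = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                             socket.getprotobyname("icmp"))
        ident = os.getpid() & 0xFFFF
        payload = b"x" * 32                     # even length keeps checksum simple
        header = struct.pack("!BBHHH", 8, 0, 0, ident, 1)  # type 8 = echo request
        packet = struct.pack("!BBHHH", 8, 0, checksum(header + payload),
                             ident, 1) + payload

        t_start = time.perf_counter()   # timer starts HERE, in software,
        sock.sendto(packet, (host, 0))  # before the packet ever reaches the NIC
        sock.recvfrom(1024)             # blocks until the echo reply comes back
                                        # (a real ping would check ident/sequence)
        t_stop = time.perf_counter()    # and stops here, back in software
        sock.close()
        return (t_stop - t_start) * 1000.0      # milliseconds

So in this sketch, at least, the answer is "when the software initiates it": any time the packet spends queued inside the OS or behind the NIC is inside the measured interval.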
NICs and Ethernet (802.3) have an algorithm built into them to avoid packet collisions. If a NIC tries to send a packet but the line is busy (perhaps another computer on the same hub as our above-mentioned test machine was transmitting at the time), it waits a random amount of time, then tries to transmit again. If the line is still busy, it bumps up the maximum random wait time, then waits again. This process goes on and on until a predefined retry ceiling is reached, or the NIC can finally transmit its packet. So, is the ping packet timer ticking while the NIC waits to transmit? Good question.
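For the curious, that scheme is called truncated binary exponential backoff, and here's a toy sketch of the rule (my assumptions: the 51.2-microsecond slot time of 10 Mbit Ethernet, and the standard's limits of 10 doublings and 16 total attempts; the real thing lives in NIC hardware, not in software like this):

    import random

    SLOT_TIME_US = 51.2   # one slot time on 10 Mbit/sec Ethernet
    MAX_ATTEMPTS = 16     # 802.3 gives up and reports an error after 16 collisions

    def backoff_delay_us(attempt):
        # After the Nth collision, wait a random number of slot times drawn
        # from [0, 2**min(N, 10) - 1]. The range doubles each time but stops
        # growing ("truncates") after 10 doublings.
        k = min(attempt, 10)
        return random.randint(0, 2 ** k - 1) * SLOT_TIME_US

    # Worst-ish case: a packet that collides on every attempt can rack up
    # tens of milliseconds of waiting before the NIC ever gets it out.
    total_us = sum(backoff_delay_us(n) for n in range(1, MAX_ATTEMPTS + 1))
    print("total backoff this run: %.1f ms" % (total_us / 1000.0))

If the ping timer started before all of that waiting, every one of those slot times shows up in your reported ping.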
Furthermore, if you play and watch your F6 screen at the same time (hey, stop looking at me like that! I really do watch it. :P) you may notice that your ping jumps up when you die (at least it does on my machine). So I am also led to believe that your ping depends on how busy your processor is. I have run games of UT across a 100 Mbit/sec Ethernet switch (by Netgear, real nice model: metal case, heavy power supply, neat blue color, and it goes ARG! ARG! ARG! when you turn it on... er... um... back to the topic) using a Pentium 233 with 72 MB of RAM in software rendering mode, connected to an AMD K6-2 450 running the UT server (IN LINUX! Yeah baby, yeah!). And the pings of the 233 were close to 100. Across the LAN!? It must have something to do with total processor utilization.
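Here's a toy way to convince yourself that a software timer can blame the network for the CPU's sins (entirely hypothetical, nothing from UT's actual code). The sleep stands in for the wire round trip, and the busy loop stands in for the game engine hogging the processor before our code gets scheduled to read the reply and stop the clock:

    import time

    def measured_ping_ms(network_delay_ms, cpu_busy_ms):
        t0 = time.perf_counter()
        time.sleep(network_delay_ms / 1000.0)    # stand-in for the real round trip
        busy_until = time.perf_counter() + cpu_busy_ms / 1000.0
        while time.perf_counter() < busy_until:  # stand-in for a pegged CPU
            pass
        return (time.perf_counter() - t0) * 1000.0

    print(measured_ping_ms(5, 0))    # ~5 ms: idle machine, the ping is honest
    print(measured_ping_ms(5, 40))   # ~45 ms: same wire, busy machine

Same 5 ms of "network" both times, but the busy machine reports a ping nine times higher.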
I remember reading some article once, written by some guy (pretty descriptive, huh!?), about optimizing UT. This nameless article drew a correlation between frames per second at the lower resolutions (640x480) and the CAS setting (latency) of your system RAM. He hypothesized (makes him seem smarter than saying "he guessed") that the core of UT was just large enough not to fit completely in cache RAM (unlike Q3), which required frequent access to system RAM. And at the lower resolutions, your processor becomes the bottleneck, not your graphics card (I think). If your RAM is CAS 2, you will get slightly higher fps than if it were CAS 3 at those resolutions. So what I'm trying to say is that UT is 1) processor intensive, and 2) memory intensive (which adds wait states as the processor accesses RAM). So when the ping packet is received, these two things may keep the processor busy just long enough to add a few milliseconds to the timer before it can be stopped to calculate the ping's response time. (And this may apply to the server as well. Who knows how much time the ICMP packet spends inside the deepest, darkest regions of the server's CPU before it is returned to the sending computer?)
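If you want to see the cache-versus-RAM effect that article was talking about, here's a crude sketch (again my own toy, and Python's interpreter overhead blurs it badly, so treat the numbers as directional at best). It chases a random chain of pointers through a small table that fits in cache, then through a big one that doesn't; the big walk pays a trip to RAM on nearly every step:

    import random, time

    def chase(n_slots, n_steps=2_000_000):
        # Build a random cycle so the CPU can't prefetch the next slot.
        order = list(range(n_slots))
        random.shuffle(order)
        nxt = [0] * n_slots
        for a, b in zip(order, order[1:] + order[:1]):
            nxt[a] = b
        i, t0 = 0, time.perf_counter()
        for _ in range(n_steps):
            i = nxt[i]                  # each step is one dependent memory read
        return (time.perf_counter() - t0) / n_steps * 1e9   # ns per step

    print("fits in cache: %.1f ns/step" % chase(4_000))
    print("spills to RAM: %.1f ns/step" % chase(4_000_000))

Each extra nanosecond per step is exactly the kind of wait state he was blaming CAS 3 for.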
In conclusion, I would like to wish the best of luck to the 2000 graduating class of SOB Institute of Technology... er, sorry, wrong speech. Well, you get what I am trying to say. And to those of you who understood what I just tried to convey, I congratulate you. Now could you try to explain it to me? J/K
Peace Love & Valvoline
Greg ;)