View Full Version : Very strange FPS behaviour!

1st Dec 1999, 04:31 PM
I don't know if this is a problem, but it is strange somehow. I've run UTbench.dem on my machine (Athlon 500 / Gigabyte 7IX / 128MB generic PC100 / TNT1 PCI (109 core | 116 mem)) and here are the results (16-bit / high / high):

320x240: 26.93 FPS
512x384: 25.67 FPS
640x480: 25.75 FPS
800x600: 25.81 FPS
1024x768: 19.41 FPS

So how can it be that 320x240 and 800x600 are nearly identical? Unreal didn't recognize my TNT1, so I just forced it to use D3D; OpenGL is even worse (HD swapping etc.). So what is the limiting factor here, the Athlon 500 or the TNT1? I just don't know, I have never seen something like this before! Please help me with some good advice.

1st Dec 1999, 06:53 PM
Up to 800x600, the limiter is your CPU; that's why the framerate barely changes with resolution. The first value you list is either random experimental error or some kind of cache effect. The difference is only 4%, so it doesn't matter anyway.

Once you hit 1024x768, your video card's fill rate becomes the limiter. All of these numbers would improve if you knock the quality levels down a bit.
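One way to see which side is the bottleneck from benchmark numbers alone is to check how FPS scales with pixel count. This is a rough heuristic sketched from the reasoning in this thread, not anything the posters ran; the 0.9 cutoff is an arbitrary illustrative threshold.

```python
# Heuristic bottleneck check: if FPS barely drops when the resolution
# rises a lot, the CPU is the bottleneck; if FPS falls roughly with the
# pixel count, the video card is. The 0.9 cutoff is a made-up value.
def likely_bottleneck(fps_low_res, fps_high_res):
    scaling = fps_high_res / fps_low_res
    return "CPU-bound" if scaling > 0.9 else "video-card-bound"

# The original poster's 16-bit high/high UTBench averages:
print(likely_bottleneck(26.93, 25.81))  # 320x240 vs 800x600 -> CPU-bound
print(likely_bottleneck(25.81, 19.41))  # 800x600 vs 1024x768 -> video-card-bound
```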

You might want to make sure that you are running the latest Athlon version of the nVidia drivers; the numbers seem a little low. Do you have detail textures turned on in the advanced options?

1st Dec 1999, 07:11 PM
thx for your help so far.

I'm using the 2.17 Detonator drivers at the moment, but I'm downloading the 3.62 drivers right now. I have everything set to high (world detail high and skin detail high). I will set them both to medium, and I've read about some tweaks for UT (cache size etc.), so I will give them a try. I will post the results here.

1st Dec 1999, 08:46 PM
OK, I have installed the 3.62 drivers and set both world and skin detail to medium. I changed the CacheSize in the Advanced Options from 4 to 8; I don't know if this is good or not, maybe someone knows some good tweaks for the Advanced Options. Here are the results using UTbench.dem (16-bit / med / med):

Res. : min|average|max

640x480: 16.03|25.61|36.97
800x600: 15.47|25.53|36.97
1024x768: 14.85|23.99|35.46

No performance gain at 640x480 and 800x600, but a noticeable gain at 1024x768; I don't know if that's the new drivers or the medium detail textures. All three resolutions get nearly identical FPS now. So it seems the graphics card is the limiting factor and not the CPU, but I'm just guessing, because I'm getting more and more confused. I recorded another demo which is not as hardware-hungry as UTBench, using the KGalleon map. Here are the results:

640x480: 25.35|41.60|63.74
800x600: 23.88|38.37|54.86
1024x768: 17.95|33.04|54.33

The difference between the resolutions is bigger than in UTBench.dem, which indicates IMO that the graphics card is the limiting factor at high resolutions. It must be the TNT1, because I can't believe a TNT1 PCI is too fast for an Athlon 500; that sounds like bull to me.

Please help me with some good advice, or just an opinion on this one.

[This message has been edited by cobold (edited 12-01-1999).]

1st Dec 1999, 10:23 PM
OK, see how your framerate increased at high resolutions when you turned detail down to medium? That is because the CPU has to send less data down the geometry pipeline. It has fewer polygons to light, transform, and cull (remove non-visible geometry).

The ratios of max/average and average/min stayed about the same between your experiments. The numbers have all been shifted up the graph by the lower levels of scene complexity.
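The ratio claim can be checked directly against the figures posted above; this is just arithmetic on the thread's own med/med numbers, nothing new measured.

```python
# Check that max/avg and avg/min stay roughly constant across the two
# med/med runs posted in this thread (tuples are min, avg, max).
utbench  = [(16.03, 25.61, 36.97), (15.47, 25.53, 36.97), (14.85, 23.99, 35.46)]
kgalleon = [(25.35, 41.60, 63.74), (23.88, 38.37, 54.86), (17.95, 33.04, 54.33)]

for name, runs in [("UTBench", utbench), ("KGalleon", kgalleon)]:
    for mn, avg, mx in runs:
        print(f"{name}: max/avg = {mx/avg:.2f}, avg/min = {avg/mn:.2f}")
```

The max/avg ratios cluster around 1.4-1.6 in both demos, which is what "shifted up the graph" means here: the shape of the framerate distribution stays similar while the whole curve moves.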

The results are kind of curious. I think this is an example of your graphics card sometimes being limited by its max fill rate in one case and by its texturing rate in the other.

Anyway, the simple answer is that you need a faster graphics card to increase your framerates. The card only generates one texel per clock (I think), and the clock is only 109MHz.
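That fill-rate argument can be sketched as a back-of-envelope calculation, using the thread's own figures (109 MHz core, one texel per clock as the poster guesses; the real chip may output more). The overdraw factor is an assumed, illustrative value, not a measurement.

```python
# Back-of-envelope fill-rate ceiling for the card described in this thread.
CORE_CLOCK_HZ = 109e6    # 109 MHz core clock, from the thread
TEXELS_PER_CLOCK = 1     # the poster's guess; the actual chip may do more
OVERDRAW = 2.5           # assumed average times each pixel is drawn (made up)

def fill_rate_fps_ceiling(width, height):
    texels_per_frame = width * height * OVERDRAW
    return CORE_CLOCK_HZ * TEXELS_PER_CLOCK / texels_per_frame

for w, h in [(640, 480), (800, 600), (1024, 768)]:
    print(f"{w}x{h}: ~{fill_rate_fps_ceiling(w, h):.0f} FPS ceiling")
```

Under those assumptions the theoretical ceiling at 1024x768 is around 55 FPS, which is at least consistent with the max-FPS figures reported above dropping at that resolution.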

2nd Dec 1999, 04:18 PM
thx again.

I have more numbers to bother you with. :)
I added the 32-bit-benchmarks for UTBench (using med/med again) and here are the results:

640x480: 10.36|23.12|36.21
800x600: 8.49|19.61|35.84
1024x768: 5.74|11.07|26.10

At 640x480 there is not much difference between 16-bit and 32-bit except the max FPS!
The difference gets larger at 800x600 and 1024x768. In 32-bit mode, does the graphics card or the CPU get more work to do?
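A rough way to see where the extra 32-bit work goes: each pixel written to the frame buffer costs twice as many bytes of card memory bandwidth, while the CPU-side geometry work is unchanged, so the cost lands on the card. A sketch of the frame-buffer write traffic alone, ignoring texture fetches and Z-buffer traffic (so treat these as lower bounds):

```python
# Frame-buffer write traffic per frame at each color depth -- a rough
# illustration of why 32-bit hurts more at high resolutions. Ignores
# texture and Z-buffer traffic, so these are lower bounds only.
def framebuffer_mb_per_frame(width, height, bits_per_pixel):
    return width * height * (bits_per_pixel / 8) / 1e6

for w, h in [(640, 480), (800, 600), (1024, 768)]:
    mb16 = framebuffer_mb_per_frame(w, h, 16)
    mb32 = framebuffer_mb_per_frame(w, h, 32)
    print(f"{w}x{h}: {mb16:.2f} MB at 16-bit vs {mb32:.2f} MB at 32-bit per frame")
```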

And I have made another benchmark using the DM-Stalwart map. Here are the results (med/med):

16-bit:
640x480: 22.42|40.35|54.51
800x600: 22.55|39.51|54.92
1024x768: 18.96|32.46|48.58

32-bit:
640x480: 12.37|33.53|54.53
800x600: 8.98|26.84|46.97
1024x768: 5.29|14.31|32.72

I really appreciate your help, so maybe you could give me a final comment on this, something like a summary.