Forcing PhysX on NON-GeForce 8 cards


neilthecellist

Renegade.
May 24, 2004
2,306
0
0
San Diego, California
If you google PhysX comparisons (I did this many months ago) you'll find some performance comparisons between using a PhysX card and relying solely on the CPU. The gist I got was that a CPU could accomplish the same as an AGEIA PhysX card.

That said, an AGEIA PhysX card is arguably obsolete as the GeForce 8 cards now have that CUDA support thing.


If someone can find out how to run all heavy PhysX on the CPU, then I would like to see for myself and compare the results. For instance, I can't play CTF-Lighthouse when everything falls apart. Perhaps it would help if I lowered all the detail settings; I have to try that too. For those who don't know, I'm running a Quad Core processor with a GeForce 9800GTX+ and it still won't cut it on that particular map. Maybe 2GB of RAM ain't enough, but that's almost silly.

I've noticed that with other users too. One of my friends is on a GeForce 8800GTS, has the latest CUDA drivers, and he says CTF-Lighthouse craps out. LegoLand (a custom map) barfs out. He had to replace his nvcuda.dll file with the same DLL file I got from the internet (which can just as easily be obtained by expanding it out of \system32 via the command prompt) to get acceptable performance.
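If you're going to fiddle with nvcuda.dll, it's worth checking that whatever copy ends up on the search path actually loads and initialises before blaming the game. Here's a minimal, untested sketch using the CUDA driver API on Windows; cuInit and cuDriverGetVersion are real driver-API entry points, but everything else (the messages, the bare-bones error handling) is just illustration:

// Sketch: load whichever nvcuda.dll is on the search path and ask it
// for its driver version via the CUDA driver API.
#include <windows.h>
#include <cstdio>

typedef int (*cuInit_t)(unsigned int);        // CUresult is int-compatible
typedef int (*cuDriverGetVersion_t)(int*);

int main() {
    HMODULE cuda = LoadLibraryA("nvcuda.dll");
    if (!cuda) { std::printf("nvcuda.dll did not load\n"); return 1; }

    cuInit_t init = (cuInit_t)GetProcAddress(cuda, "cuInit");
    cuDriverGetVersion_t getVersion =
        (cuDriverGetVersion_t)GetProcAddress(cuda, "cuDriverGetVersion");
    if (!init || !getVersion) { std::printf("entry points missing\n"); return 1; }

    if (init(0) != 0) {                        // 0 == CUDA_SUCCESS
        std::printf("cuInit failed - no CUDA-capable GPU?\n");
        return 1;
    }
    int version = 0;
    getVersion(&version);
    std::printf("CUDA driver version: %d\n", version);  // e.g. 2000 for CUDA 2.0
    return 0;
}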
 

flapjackboy

New Member
Aug 13, 2008
9
0
0
This forum newcomer sure sounds like an nVidia fan, shareholder, employee or official.

Nope, I'm platform agnostic when it comes to graphics cards.

If nVidia wanted to keep PhysX completely to themselves, they really shouldn't release source code and development kits the way they have done.

Actually, nVidia have stated that they don't have a problem with people porting the PhysX drivers over to other platforms, hence their releasing the dev kits.
 

Severin

New Member
Feb 8, 2008
199
0
0
I see. Thanks for the tips. But how is it that the older PhysX drivers (previously owned by AGEIA) worked fine with UT3 without crashes, whereas the new "CUDA" nVidia drivers are suddenly causing marked instability? Is it, again, because the DLLs are searching for something that I don't have on my GeForce 7 card that only a GeForce 8 would have?

Also, I noticed on some other discussion boards that some people don't get crashes PERIOD on their cards, and some of them are on GeForce 7 or even older ATIs that don't support that alternate-not-CUDA-but-like-CUDA-somewhat feature.

I would have thought the best bet would be to run the standard 175.xx drivers along with the Ageia drivers supplied with UT3, not the 177.xx. As you say, they are aimed at GeForce 8 and above. I don't have UT3 installed anymore so I cannot test Lighthouse, but I didn't have any problems with LegoLand in software mode with pre-PhysX drivers on my 8800GTS when I did.

As for what you're saying about the older cards: if they don't support CUDA or the (Nvidia-aided) ATI hack version, then the card is only a display device, so it will have no effect on the stability or speed of the physics in game. (Hacked-together DLLs probably will affect stability, though.)

I don't follow AMD/ATI cards very closely, but I think anything older than an X1900 series card falls into the display-device category.

Just as background:

GeForce 5-7 cards contained small circuits known as pixel shaders and vertex shaders; they did a fixed job, and that was that. In GeForce 8 and above, Nvidia swapped to a different approach, which they called stream processors. These processors could be made to work as both pixel and vertex shaders, as well as being exploited for other purposes, such as physics via CUDA. ATI started to take a similar approach around the X1900 series of cards.
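Just to make the "other purposes such as physics" bit concrete: below is a toy CUDA sketch that integrates a few thousand falling particles on those same stream processors. The kernel and all the numbers are made up for illustration - this is not how PhysX itself works, just the kind of general-purpose job a GeForce 8's hardware can take on.

// Toy sketch: one Euler integration step for n particles, run on the GPU.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void integrate(float* pos, float* vel, float dt, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        vel[i] -= 9.81f * dt;   // gravity pulls velocity down
        pos[i] += vel[i] * dt;  // step position by velocity
    }
}

int main() {
    const int n = 4096;
    float *pos, *vel;
    cudaMalloc((void**)&pos, n * sizeof(float));
    cudaMalloc((void**)&vel, n * sizeof(float));
    cudaMemset(pos, 0, n * sizeof(float));
    cudaMemset(vel, 0, n * sizeof(float));

    integrate<<<(n + 255) / 256, 256>>>(pos, vel, 0.016f, n);  // one 16 ms frame
    cudaDeviceSynchronize();

    float y = 0.0f;
    cudaMemcpy(&y, pos, sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("first particle after one step: y = %f\n", y);

    cudaFree(pos);
    cudaFree(vel);
    return 0;
}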
 

Crowze

Bird Brain
Feb 6, 2002
3,556
1
38
40
Cambridgeshire, UK
www.dan-roberts.co.uk
Not exactly. The GeForce 2 had non-programmable pixel shader units which no one ever used, but the GeForce 3 and newer had programmable shader units, albeit to a fairly limited degree. Stream processors came about in the GeForce 8/ATI HD 2 series to support Direct3D 10. Technically I don't see why PhysX can't be run on any card supporting DX9c - as has been said earlier, it's quite possible with ATI X1-series cards, but the benefits are questionable given that for modern apps those cards will have enough graphics processing on their plate, so to speak.
 

Fuzz

Enigma
Jan 19, 2008
1,120
0
0
Universe
The GeForce 8400GS has a 16:8:4 core configuration. The GeForce 7900GS has 7:20:20:16. There are apparently more processors there, but if they are only able to do graphics calculations, they are useless for other purposes. GeForce 7 cards have processors that are task-specific; each can only do one of the following things, specialised so to say: vertex, pixel, texture or output. GeForce 8 has unified shaders that can do any of these tasks, among others.

One day PhysX and Havok might be included in DirectX, with the CUDA and CTM platforms ready to program any supported ATI or nVidia card, AMD or Intel processor. SLI/CF available for any combination of cards. For example, run CUDA on a GeForce GTX 260, a GeForce 9800GTX+ and a Radeon HD 4870 plugged into three PCIe 2.0 slots, sharing resources, rendering graphics and processing PhysX or Havok together at the same time.

It would be sweet if you could run the entire OS on a GPU - one of those GeForce 350 2GB cards. I wonder what kind of diabolical games a GPU like that could run anyway. At least they are making progress.
 

neilthecellist

Renegade.
May 24, 2004
2,306
0
0
San Diego, California
Crowze:

Not exactly. The GeForce 2 had non-programmable pixel shader units which no one ever used, but the GeForce 3 and newer had programmable shader units, albeit to a fairly limited degree. Stream processors came about in the GeForce 8/ATI HD 2 series to support Direct3D 10. Technically I don't see why PhysX can't be run on any card supporting DX9c - as has been said earlier, it's quite possible with ATI X1-series cards, but the benefits are questionable given that for modern apps those cards will have enough graphics processing on their plate, so to speak.

I'm with you. The ATI cards that somehow were hacked to support PhysX were designed WITHOUT "Ageia" or "PhysX" in mind by the tech dudes at ATI.

Hell, even the guys at Nvidia created all the GeForce 8 cards without a thought of AGEIA at the time (AGEIA was its own company back then). Then all of a sudden, only GeForce 8 cards supported PhysX? All of a sudden, I no longer get PhysX capabilities through the CPU without replacing a DLL file? It's very confusing to me.
 

flapjackboy

New Member
Aug 13, 2008
9
0
0
OK, it's not possible to run the hacked PhysX drivers on any ATI card below the HD 2xxx series either, because anything prior to the R600 chipset designs did not have a unified shader architecture.

I'm going to say this one more time.

PhysX works in hardware mode when the following criteria are met:

1: You have an Ageia PhysX card

2: You have a GeForce 8 series or later graphics card

3: You have a Radeon HD 2 series or later card and a copy of the hacked PhysX drivers

Anything else and you're running in software mode only, because previous cards do not support GPGPU (general-purpose computing on graphics processing units).

Anyone who claims to have gotten hardware PhysX acceleration running on anything less than a GF8/HD 2 series card is talking out of their backside.

nVidia developed the GF8 cards so that they didn't have discrete vertex and pixel shaders, instead having flexible "stream processors" that could be programmed to perform either task depending on need, or indeed perform any task which requires a high throughput of data, such as processing physics calculations.

ATI also developed their similar Close To Metal technology around the same time.

This is why previous generations of cards cannot be made to do hardware PhysX processing, because their architecture does not have the general purpose design of the GF8/HD 2 cards.

So, to recap...

It is NOT physically possible to run hardware PhysX on anything less than a GF8/HD 2 card because the hardware does not support it.

EDIT: correction, X1900 series cards can run CTM, but would still not be capable of running PhysX because they do not have the stream processors of the later chipsets.
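For what it's worth, this is easy to check from code. Here is a small sketch against the CUDA runtime API (the calls are real, the program itself is just an illustration): a card meeting criterion 2 above enumerates with a compute capability, while a pure display device simply doesn't show up.

// Sketch: list CUDA-capable devices and their compute capability.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable GPU found - physics stays on the CPU.\n");
        return 0;
    }
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        // GeForce 8 series and later report compute capability 1.0 or higher.
        std::printf("%s: compute capability %d.%d, %d multiprocessors\n",
                    prop.name, prop.major, prop.minor, prop.multiProcessorCount);
    }
    return 0;
}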
 

Severin

New Member
Feb 8, 2008
199
0
0
Crowze:

Not exactly. ... Technically I don't see why PhysX can't be run on any card supporting DX9c - as has been said earlier, it's quite possible with ATI X1-series cards, but the benefits are questionable given that for modern apps those cards will have enough graphics processing on their plate, so to speak.


I don't see how you're going to get a fixed-function processor to compute general-purpose code. Some links would be appreciated.

Yes, vertex/pixel shaders can be programmed, but only for pixel shading and vertex manipulation as far as I'm aware.

I agree that the ATI cards from the X1xxx series can at least be programmed to some extent; I remember them doing Folding@home, while Nvidia cards of the same generation were considered incapable of such a thing.

neilthecellist:

I'm with you. The ATI cards that somehow were hacked to support PhysX were designed WITHOUT "Ageia" or "PhysX" in mind by the tech dudes at ATI.

Hell, even the guys at Nvidia created all the GeForce 8 cards without a thought of AGEIA at the time (AGEIA was its own company back then). Then all of a sudden, only GeForce 8 cards supported PhysX? All of a sudden, I no longer get PhysX capabilities through the CPU without replacing a DLL file? It's very confusing to me.

ATI cards can do physics; they just can't run CUDA and Ageia PhysX, as this is Nvidia tech, not AMD tech. AMD are, or will be, doing something with Havok. If AMD/ATI were to use PhysX, they would need to pay their rival for the privilege. Nvidia helped out a bunch of guys writing a port of CUDA/PhysX (can't remember which) to AMD cards, but it's not an official version and not sanctioned by AMD.

Again, the cards that can support physics have more generally programmable hardware, so they can run 'programs' to do things other than produce graphics; those that can't are not programmable in a way that is useful for general tasks like computing physics. It does not matter what hacked software or drivers you use if the card cannot do what is being asked of it.

You're trying to state that your car has a jet engine and wings and that you can make it fly, despite the fact that it has neither and cannot fly.
One final thought: if the GeForce 7 and below series of cards were capable of physics, why hasn't Nvidia enabled it, especially when they are happy to enable it on their competitors' cards (that are capable)?
 

flapjackboy

New Member
Aug 13, 2008
9
0
0
neilthecellist:

Look at it this way.


The way factory production lines used to work is that each individual stage in the production line had a worker trained to perform that stage of the assembly. This had the advantage that each worker excelled at their particular task, but it didn't make for a very flexible workforce, because nobody knew how to do another worker's task and the unions prevented cross-skilling to protect each individual worker's job.

That's an analogy for how the old GPU architecture worked. The pixel shaders were only able to process pixel shading instructions, the vertex shaders could only process vertex calculations, and neither could do any other form of computation.

Compare that to a modern factory environment where every worker is given equal training on all areas of the production line so that they can fill in for any workers who are off ill, or double up on sections that are backlogged.

That is how modern GPUs work. The stream processors are general-purpose processing units, able to perform a variety of functions as demand arises, whether that be pixel shading, vertex shading, texture mapping or physics calculations.
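To put the cross-trained-workforce analogy in code: here are two throwaway CUDA kernels, one doing a pixel-shading-style job and one doing a physics-style job, launched back to back on the very same stream processors. Both kernels are invented for illustration - real shading and real PhysX are far more involved.

// Sketch: the same hardware runs a shading-style task, then a physics-style task.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void shade(float* rgb, int n) {          // "worker" shading pixels
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) rgb[i] = rgb[i] * 0.5f + 0.25f;      // trivial tone adjustment
}

__global__ void step(float* pos, const float* vel, float dt, int n) {  // "worker" doing physics
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) pos[i] += vel[i] * dt;
}

int main() {
    const int n = 4096;
    float *rgb, *pos, *vel;
    cudaMalloc((void**)&rgb, n * sizeof(float));
    cudaMalloc((void**)&pos, n * sizeof(float));
    cudaMalloc((void**)&vel, n * sizeof(float));
    cudaMemset(rgb, 0, n * sizeof(float));
    cudaMemset(pos, 0, n * sizeof(float));
    cudaMemset(vel, 0, n * sizeof(float));

    shade<<<(n + 255) / 256, 256>>>(rgb, n);              // same units shade...
    step<<<(n + 255) / 256, 256>>>(pos, vel, 0.016f, n);  // ...then compute physics
    cudaDeviceSynchronize();
    std::printf("both kernels ran on the same stream processors\n");

    cudaFree(rgb);
    cudaFree(pos);
    cudaFree(vel);
    return 0;
}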