
Forcing PhysX on NON-GeForce 8 cards

Discussion in 'Unreal Tournament 3' started by neilthecellist, Sep 20, 2008.

  1. neilthecellist

    neilthecellist Renegade.

    Joined:
    May 24, 2004
    Messages:
    2,306
    Likes Received:
    0
    If you Google PhysX comparisons (I did this many months ago) you'll find performance comparisons between using a PhysX card and relying solely on the CPU. The gist I got was that a CPU could accomplish the same as an AGEIA PhysX card.

    That said, an AGEIA PhysX card is arguably obsolete now that the GeForce 8 cards support CUDA.


    I've noticed this with other users too. One of my friends is on a GeForce 8800GTS with the latest CUDA drivers, and he says CTF Lighthouse craps out and LegoLand (a custom map) barfs out. He had to replace his nvcuda.dll with the same DLL file I got from the internet (which you can easily obtain by expanding it out of \system32 via the command prompt) to get acceptable performance.
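
    As a sanity check, here's a minimal sketch of my own (not from the PhysX SDK or UT3, and it assumes you have the CUDA toolkit installed and compile with nvcc) that simply asks the driver what it can see. If it reports no devices, swapping nvcuda.dll around won't get you hardware PhysX anyway.

    Code:
        #include <cstdio>
        #include <cuda_runtime.h>

        // Minimal device check: asks the CUDA runtime which cards it can see.
        // If nothing shows up here, hardware PhysX isn't happening on this box.
        int main() {
            int count = 0;
            cudaError_t err = cudaGetDeviceCount(&count);
            if (err != cudaSuccess || count == 0) {
                printf("No CUDA device visible (count=%d, status=%s)\n",
                       count, cudaGetErrorString(err));
                return 1;
            }
            for (int i = 0; i < count; ++i) {
                cudaDeviceProp prop;
                cudaGetDeviceProperties(&prop, i);
                // GeForce 8 series and later report compute capability 1.x or higher.
                printf("Device %d: %s, compute capability %d.%d, %d multiprocessors\n",
                       i, prop.name, prop.major, prop.minor, prop.multiProcessorCount);
            }
            return 0;
        }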
     
  2. flapjackboy

    flapjackboy New Member

    Joined:
    Aug 13, 2008
    Messages:
    9
    Likes Received:
    0
    Nope, I'm platform agnostic when it comes to graphics cards.

    Actually, nVidia have stated that they don't have a problem with people porting the PhysX drivers over to other platforms, hence their releasing the dev kits.
     
  3. Severin

    Severin New Member

    Joined:
    Feb 8, 2008
    Messages:
    199
    Likes Received:
    0
    I would have thought the best bet would be to run the standard 175.xx drivers along with the Ageia drivers supplied with UT3, not the 177.xx; as you say, those are aimed at the GF8 and above. I don't have UT3 installed anymore so I can't test Lighthouse, but I didn't have any problems with LegoLand in software mode with the pre-PhysX drivers on my 8800GTS when I did.

    As for what you're saying about the older cards: if they don't support CUDA or the (Nvidia-aided) ATI hack version, then the card is only a display device and will have no effect on the stability or speed of the physics in game. (Hacked-together DLLs probably will affect stability.)

    I don't follow AMD/ATI cards very closely, but I think anything older than a 1900 series card falls into the display device category.

    Just as background:

    GeForce 5-7 cards contained small fixed-function circuits known as pixel shaders and vertex shaders; each did one fixed job and that was that. In the GeForce 8 and above, Nvidia swapped to a different approach which they called stream processors. These processors can be made to work as both pixel and vertex shaders, and can also be exploited for other purposes, such as physics via the CUDA programming language. ATI started to take a similar approach around the 1900 series of cards.
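
    To make the "same hardware, different job" point concrete, here's a rough CUDA sketch of the kind of non-graphics work those stream processors can be given: one toy integration step for a batch of particles, one thread per particle. The kernel and the numbers in it are made up for illustration and have nothing to do with the actual Ageia/PhysX code.

    Code:
        #include <cstdio>
        #include <cuda_runtime.h>

        // Toy kernel: one Euler integration step for n particles.
        // Each thread handles one particle -- exactly the data-parallel
        // shape the unified stream processors are built for.
        __global__ void eulerStep(float3 *pos, float3 *vel, int n, float dt) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            vel[i].y -= 9.81f * dt;   // simple gravity
            pos[i].x += vel[i].x * dt;
            pos[i].y += vel[i].y * dt;
            pos[i].z += vel[i].z * dt;
        }

        int main() {
            const int n = 1024;
            const float dt = 1.0f / 60.0f;
            const size_t bytes = n * sizeof(float3);

            // Host-side particle data.
            float3 *h_pos = new float3[n];
            float3 *h_vel = new float3[n];
            for (int i = 0; i < n; ++i) {
                h_pos[i] = make_float3(0.0f, 10.0f, 0.0f);
                h_vel[i] = make_float3(1.0f, 0.0f, 0.0f);
            }

            // Copy to the card, run one step on the GPU, copy back.
            float3 *d_pos, *d_vel;
            cudaMalloc((void **)&d_pos, bytes);
            cudaMalloc((void **)&d_vel, bytes);
            cudaMemcpy(d_pos, h_pos, bytes, cudaMemcpyHostToDevice);
            cudaMemcpy(d_vel, h_vel, bytes, cudaMemcpyHostToDevice);
            eulerStep<<<(n + 255) / 256, 256>>>(d_pos, d_vel, n, dt);
            cudaMemcpy(h_pos, d_pos, bytes, cudaMemcpyDeviceToHost);

            printf("particle 0 after one step: (%.3f, %.3f, %.3f)\n",
                   h_pos[0].x, h_pos[0].y, h_pos[0].z);

            cudaFree(d_pos);
            cudaFree(d_vel);
            delete[] h_pos;
            delete[] h_vel;
            return 0;
        }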
     
    Last edited: Sep 22, 2008
  4. Crowze

    Crowze Bird Brain

    Joined:
    Feb 6, 2002
    Messages:
    3,556
    Likes Received:
    1
    Not exactly. The Geforce 2 had non-programmable pixel shader units which no-one ever used, but the Geforce 3 and newer had programmable shader units, albeit to a fairly limited degree. Stream processors came about in the Geforce 8/ATI HD 2 series to support Direct3D 10. Technically I don't see why PhysX can't be run on any card supporting Dx9c - as has been said earlier it's quite possible with ATI X1-series cards, but the benefits are questionable given that for modern apps those cards will have enough graphics processing on their plate, so to speak.
     
  5. Fuzz

    Fuzz Enigma

    Joined:
    Jan 19, 2008
    Messages:
    1,120
    Likes Received:
    0
    The GeForce 8400GS has a 16:8:4 core configuration. The GeForce 7900GS has 7:20:20:16. There are apparently more processors there, but if they are only enabled to do graphics calculations they are useless for other purposes. GeForce 7 cards have processors that are task-specific; each can only do one of the following things, specialised so to speak: vertex, pixel, texture and output. GeForce 8 cards have unified shaders that can do any of these tasks, among others.

    One day PhysX and Havok might be included in DirectX, with the CUDA and CTM platforms ready to program any supported ATI or nVidia card and AMD or Intel processor, and SLI/CF available for any combination of cards. For example, run CUDA on a GTX260, a 9800GTX+ and an HD 4870 plugged into three PCIe 2.0 slots, sharing resources and rendering graphics while processing PhysX or Havok at the same time.

    It would be sweet if you could run the entire OS on a GPU, one of those GeForce 350 2GB cards. I wonder what kind of diabolical games a GPU like that could run anyway. At least they are making progress.
     
  6. neilthecellist

    neilthecellist Renegade.

    Joined:
    May 24, 2004
    Messages:
    2,306
    Likes Received:
    0
    I'm with you. The ATI cards that somehow got hacked to support PhysX were designed WITHOUT "Ageia" or "PhysX" in mind by the tech dudes at ATI.

    Hell, even the guys at Nvidia created all the GeForce 8 cards without any thought of AGEIA at the time (AGEIA was its own company back then). Then all of a sudden, only GeForce 8 cards supported PhysX? All of a sudden, I no longer get PhysX capabilities through the CPU without replacing a DLL file? It's very confusing to me.
     
  7. flapjackboy

    flapjackboy New Member

    Joined:
    Aug 13, 2008
    Messages:
    9
    Likes Received:
    0
    OK, it's not possible to run the hacked PhysX drivers on any ATI card below the HD 2xxx series either, because anything prior to the R600 chipset design did not have a unified shader architecture.

    I'm going to say this one more time.

    PhysX works in hardware mode when the following criteria are met:

    1: You have an Ageia PhysX card

    2: You have a GeForce 8 series or later graphics card

    3: You have a Radeon HD 2 series or later card and a copy of the hacked PhysX drivers

    Anything else and you're running in software mode only, because earlier cards do not support GPGPU (general-purpose computing on graphics processing units).

    Anyone who claims to have gotten hardware PhysX acceleration running on anything less than a GF8/HD 2 series card is talking out of their backside.

    nVidia designed the GF8 cards to do away with discrete vertex and pixel shaders and instead have flexible "stream processors" that can be programmed to perform either task depending on need, or indeed any task which requires a high throughput of data, such as physics calculations.

    ATI also developed their similar Close To Metal technology around the same time.

    This is why previous generations of cards cannot be made to do hardware PhysX processing, because their architecture does not have the general purpose design of the GF8/HD 2 cards.
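
    To put that hardware mode / software mode split in concrete terms, here's a rough sketch of my own (it has nothing to do with how UT3 or the PhysX runtime actually pick a device): if the runtime can't find a GPGPU-capable card, the exact same maths simply runs on the CPU instead.

    Code:
        #include <cstdio>
        #include <cuda_runtime.h>

        // Illustrative only -- not how the PhysX runtime actually selects a device.
        // The same toy maths runs in two places: on the card's stream processors
        // ("hardware mode") or in a plain CPU loop ("software mode").
        __global__ void stepOnGpu(float *height, float *speed, int n, float dt) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) {
                speed[i] -= 9.81f * dt;
                height[i] += speed[i] * dt;
            }
        }

        void stepOnCpu(float *height, float *speed, int n, float dt) {
            for (int i = 0; i < n; ++i) {
                speed[i] -= 9.81f * dt;
                height[i] += speed[i] * dt;
            }
        }

        int main() {
            const int n = 4096;
            const float dt = 1.0f / 60.0f;
            const size_t bytes = n * sizeof(float);
            float *height = new float[n];
            float *speed = new float[n];
            for (int i = 0; i < n; ++i) { height[i] = 10.0f; speed[i] = 0.0f; }

            // "Hardware mode" only if the runtime actually sees a CUDA-capable card.
            int devices = 0;
            bool hardwareMode = (cudaGetDeviceCount(&devices) == cudaSuccess && devices > 0);

            if (hardwareMode) {
                float *dH, *dS;
                cudaMalloc((void **)&dH, bytes);
                cudaMalloc((void **)&dS, bytes);
                cudaMemcpy(dH, height, bytes, cudaMemcpyHostToDevice);
                cudaMemcpy(dS, speed, bytes, cudaMemcpyHostToDevice);
                stepOnGpu<<<(n + 255) / 256, 256>>>(dH, dS, n, dt);
                cudaMemcpy(height, dH, bytes, cudaMemcpyDeviceToHost);
                cudaFree(dH);
                cudaFree(dS);
            } else {
                // "Software mode": the CPU does the same work, element by element.
                stepOnCpu(height, speed, n, dt);
            }

            printf("%s mode, height[0] = %.3f\n",
                   hardwareMode ? "hardware" : "software", height[0]);
            delete[] height;
            delete[] speed;
            return 0;
        }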

    So, to recap...

    It is NOT physically possible to run hardware PhysX on anything less than a GF8/HD 2 card because the hardware does not support it.

    EDIT: correction, X1900 series cards can run CTM, but would still not be capable of running PhysX because they do not have the stream processors of the later chipsets.
     
    Last edited: Sep 23, 2008
  8. Severin

    Severin New Member

    Joined:
    Feb 8, 2008
    Messages:
    199
    Likes Received:
    0

    I don't see how you're going to get a fixed-function processor to compute general-purpose code. Some links would be appreciated.

    Yes, vertex/pixel shaders can be programmed, but only for pixel shading and vertex manipulation as far as I'm aware.

    I agree that the ATI cards from the 1xxx series can at least be programmed to some extent; as I remember, they could do Folding@home, while Nvidia cards of the same generation were considered incapable of such a thing.

    neilthecellist:

    ATI cards can do physics; they just can't run CUDA and Ageia PhysX, as that is Nvidia tech, not AMD tech. AMD are, or will be, doing something with Havok. If AMD/ATI were to use PhysX then they would need to pay their rival for the privilege. Nvidia helped out a bunch of guys writing a port of CUDA/PhysX (can't remember which) to AMD cards, but it's not an official version and not sanctioned by AMD.

    Again, the cards that can support physics have more generally programmable hardware, so they can run "programs" to do things other than produce graphics; those that can't are not programmable in a way that is useful for general tasks like computing physics. It does not matter what hacked software or drivers you use if the card cannot do what is being asked of it.

    You're trying to state that your car has a jet engine and wings and that you can make it fly, despite the fact that it has no jet engine or wings and cannot fly.
    One final thought: if the GeForce 7 series and below were capable of physics, why hasn't Nvidia enabled it, especially when they are happy to enable it on their competitor's cards (the ones that are capable)?
     
    Last edited: Sep 23, 2008
  9. flapjackboy

    flapjackboy New Member

    Joined:
    Aug 13, 2008
    Messages:
    9
    Likes Received:
    0
    neilthecellist:

    Look at it this way.


    The way factory production lines used to work is that each individual stage in the production line had a worker trained to perform that stage of the assembly. This had the advantage of having each worker excelling at their particular task, but it didn't make for a very flexible workforce because nobody knew how to do another worker's task and the unions prevented cross-skilling to protect each individual worker's job.

    That's an analogy for how the old GPU architecture worked. The pixel shaders were only able to process pixel shading instructions, the vertex shaders could only process vertex calculations and neither could do any other form of computation.

    Compare that to a modern factory environment where every worker is given equal training on all areas of the production line so that they can fill in for any workers who are off ill, or double up on sections that are backlogged.

    That is how modern GPUs work. The stream processors are general purpose processing units, able to perform a variety of functions as the demand arises, whether that be pixel shading, vertex shading, texture mapping or physics calculations.
     
  10. Crowze

    Crowze Bird Brain

    Joined:
    Feb 6, 2002
    Messages:
    3,556
    Likes Received:
    1
    There are hundreds of tutorials on programmable graphics hardware; they're not hard to find, but the Wikipedia article on HLSL has a nice comparison of each version's features and limitations. The best example I can think of to back up my point is Folding@home, who released a client that runs on ATI X1 cards or newer. Their GPU FAQ is quite an interesting read.
     
  11. neilthecellist

    neilthecellist Renegade.

    Joined:
    May 24, 2004
    Messages:
    2,306
    Likes Received:
    0
    Thanks Crowze and Severin.
     
