
UT3 + Ageia "PhysX". Does Crysis have it too?

Discussion in 'Unreal Tournament 3' started by neilthecellist, Feb 15, 2008.

  1. neilthecellist

    neilthecellist Renegade.

    Joined:
    May 24, 2004
    Messages:
    2,306
    Likes Received:
    0
    Ok, I just learned that Crysis doesn't use any AGEIA PhysX. (It says so on the official Crysis forum; took me a while to find it on a dial-up modem.) If Crysis doesn't need AGEIA and it still looks/feels/plays great, why would UT3 need it? And why, if it's so needed for a better gameplay experience, is AGEIA PhysX exclusive to GeForce 8 cards only?
     
  2. haslo

    haslo Moar Pie!

    Joined:
    Jan 21, 2008
    Messages:
    363
    Likes Received:
    0
    Aah, fond memories... pass-through cables... those were the days... it's really a shame graphics cards got more powerful, or I could still use it... I had to buy, like, a dozen or more new graphics cards since! All the money I wasted, when they could've just stopped developing graphics chips and features!

    As for Crysis, it would probably benefit from having PhysX support as well, considering how much physics simulation is going on there. The devs opted not to support a card that has an install base of way less than 1% though, and optimized CPU physics instead. Sensible decision in my opinion; maybe with a broader install base Crysis will get patched for PhysX support as well.

    As for the "only series 8" thing, it's because only the newest series has the newest features. In this particular case, it's a special chipset feature (CUDA, apparently, whatever that means) that can easily duplicate what the PhysX chip does, seemingly, particularly now that they have access to both all the patents and all the know-how from Ageia.

    No, you won't have access to the newest features with a less-than-newest card, ever. In fact, I'm mightily impressed by Nvidia porting PhysX to series 8, I'd have expected that only series 9 and onwards would benefit from the deal they made with Ageia.
     
    Last edited: Feb 17, 2008
  3. Grobut

    Grobut Комиссар Гробут

    Joined:
    Oct 27, 2004
    Messages:
    1,822
    Likes Received:
    0
    You can make all the advanced physics you want without PhysX, but the problem will always be performance, since it is mainly your CPU that has to pull this stuff, and it's already got its hands full.

    The big idea here is taking all the physics stuff away from the CPU, and dumping it onto a processor that does nothing else, taking the load off your CPU and Vid-card, so they are free to do other things.
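    To put that in concrete terms, here's a minimal toy sketch (all the names and numbers are invented for illustration; this is not PhysX or UE3 code): the same gravity-and-velocity step written once as a plain CPU loop and once as a CUDA kernel, so you can see exactly which work gets taken away from the CPU.

    ```cpp
    // Toy sketch of the idea (made-up example, not PhysX or UE3 code): the same
    // gravity + velocity integration step, once as a plain CPU loop and once as
    // a CUDA kernel, so the per-body math moves off the CPU onto the graphics chip.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    struct Body { float x, y, z, vx, vy, vz; };

    // CPU version: this is the loop that keeps the CPU's hands full when
    // thousands of objects are flying around.
    void stepOnCpu(std::vector<Body>& bodies, float dt) {
        for (size_t i = 0; i < bodies.size(); ++i) {
            Body& b = bodies[i];
            b.vy -= 9.81f * dt;   // gravity
            b.x  += b.vx * dt;
            b.y  += b.vy * dt;
            b.z  += b.vz * dt;
        }
    }

    // GPU version: one thread per body. The CPU only launches the kernel and is
    // then free to do AI, sound, game logic and so on.
    __global__ void stepOnGpu(Body* bodies, int n, float dt) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        Body b = bodies[i];
        b.vy -= 9.81f * dt;
        b.x  += b.vx * dt;
        b.y  += b.vy * dt;
        b.z  += b.vz * dt;
        bodies[i] = b;
    }

    int main() {
        const int n = 100000;
        const float dt = 1.0f / 60.0f;
        Body start = { 0.0f, 100.0f, 0.0f, 1.0f, 0.0f, 0.0f };
        std::vector<Body> bodies(n, start);

        stepOnCpu(bodies, dt);  // the "CPU does everything" path

        // The "offload it" path: copy over, run one step on the GPU, copy back.
        Body* d_bodies = 0;
        cudaMalloc(&d_bodies, n * sizeof(Body));
        cudaMemcpy(d_bodies, &bodies[0], n * sizeof(Body), cudaMemcpyHostToDevice);
        stepOnGpu<<<(n + 255) / 256, 256>>>(d_bodies, n, dt);
        cudaMemcpy(&bodies[0], d_bodies, n * sizeof(Body), cudaMemcpyDeviceToHost);
        cudaFree(d_bodies);

        printf("body 0 is now at y = %f\n", bodies[0].y);
        return 0;
    }
    ```

    In the CPU version the processor grinds through every body itself; in the GPU version it only launches the kernel and copies data, which is the whole "take the load off your CPU" argument.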
     
  4. Complete

    Complete New Member

    Joined:
    Feb 17, 2008
    Messages:
    2
    Likes Received:
    0
    Crysis delivers the best graphics and engine yet today. It's still impossible to max out the ridiculous system requirements... rumor has it that there is no machine yet (!) capable of running Crysis at its fullest... Why would Crytek release a game that no one can enjoy at its fullest... too bad :(
     
  5. Sahkolihaa

    Sahkolihaa Ow...

    Joined:
    Dec 29, 2004
    Messages:
    1,277
    Likes Received:
    0
    I have a feeling it's an attempt to get the hardware companies out of their little wars and think more about optimising their hardware and drivers.
     
  6. Complete

    Complete New Member

    Joined:
    Feb 17, 2008
    Messages:
    2
    Likes Received:
    0
    Well if that's the case, they have to do it quick, 'cuz nobody feels like waiting 3-5 years to get a decent system :eek:
     
  7. Jonathan

    Jonathan New Member

    Joined:
    Mar 19, 2006
    Messages:
    542
    Likes Received:
    0
    Crysis uses its own physics engine, and like other games, a lot of times you just make a proxy/collision mesh (in UnrealEd, you save the hidden collision mesh as UCX_meshname, and in CryEngine 2, you set it in the material properties in the 3DS Max Crytek exporter).

    I like the physics in Unreal Tournament 3 (when it comes to rag dolls) a lot better than Crysis, but I like the large amount of interactivity in Crysis (breaking buildings, etc.). Unreal Engine 3 can surely do it (those Brothers in Arms: HH videos from Gearbox Software show this, with the breakable fences, etc.), but I guess since in UT3 and other UE3 games the light is usually baked in, you can't always have everything be breakable.
     
  8. Phopojijo

    Phopojijo A Loose Screw

    Joined:
    Nov 13, 2005
    Messages:
    1,458
    Likes Received:
    0
    Actually I didn't encounter anyone who had that problem until you came around...

    Is it possible that you found people with similar issues as you *because* you had the issue and looked for people with your problem?

    Kinda a self-selected survey.

    You enjoy trolling anyone who doesn't see the same things as you or doesn't experience the same issues as you, don't you?

    In closing -- play Battlefield 2... you'll enjoy UnrealEngine 1/2/3's stability.

    But yes -- I later found out the issue doesn't apply to Vista, only XP. So it might be more widespread than I experienced.

    CUDA is nVidia's programming language for GPUs.

    Videocards used to be fixed-function cards with dedicated functions -- this chip did texture assignment, this chip did N-dot-L linear algebra lighting calculations... eventually people wanted the GPU to do more and more -- some materials in UnrealEngine3 alone end up being a few hundred shader instructions for a single pixel.

    nVidia decided "Hey -- if we're going to generalize the GPU... why don't we make a C-esque programming language to communicate with GPUs for anything we want?"

    And there came CUDA... sure it's mostly used for shader ops... but you can use it for Physics if you wanted to -- why not? It's generalized (there's a rough sketch of what it looks like at the end of this post).

    I've even seen a videocard run a program to turn it -- basically -- into a soundcard. Fourier analysis and waveform blending, all while outputting the final waveform to the screen in a cool little oscilloscope thing.

    Crytek used their own method for physics (I believe based off Havok 3 or 4 -- but I dunno for sure)... it was a design tradeoff they made. More control, but more development time. You don't *need* middleware to make every videogame... Insomniac made a point of NOT licensing middleware, and Resistance and Ratchet & Clank Future: Tools of Destruction weren't failures by any stretch of the imagination. That being said -- look at all of UE3's satisfied customers.

    It's all in what you need, what you can accomplish, and what you want to spend time doing.
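    For anyone wondering what that "C-esque programming language" actually looks like, here's a minimal made-up sketch (not anything nVidia or Epic ships) that just scales and adds two arrays. There's nothing graphical about it -- it's ordinary C-style code plus a __global__ keyword, the blockIdx/threadIdx built-ins, and the <<<blocks, threads>>> launch -- which is exactly why it can be pointed at physics, audio, or anything else.

    ```cpp
    // Minimal made-up CUDA sketch: scale-and-add two arrays ("SAXPY").
    // The only GPU-specific parts are __global__, the blockIdx/threadIdx
    // built-ins, and the <<<blocks, threads>>> launch syntax.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // one array element per thread
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        // Plain C-style setup on the CPU side.
        float* x = new float[n];
        float* y = new float[n];
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // Copy to the card, run the kernel, copy the result back.
        float *dx = 0, *dy = 0;
        cudaMalloc(&dx, bytes);
        cudaMalloc(&dy, bytes);
        cudaMemcpy(dx, x, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, y, bytes, cudaMemcpyHostToDevice);
        saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
        cudaMemcpy(y, dy, bytes, cudaMemcpyDeviceToHost);

        printf("y[0] = %f (expect 5.0)\n", y[0]);   // 3*1 + 2
        cudaFree(dx); cudaFree(dy);
        delete[] x; delete[] y;
        return 0;
    }
    ```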
     
    Last edited: Feb 17, 2008
  9. Grogan

    Grogan New Member

    Joined:
    Jan 20, 2008
    Messages:
    105
    Likes Received:
    0
    I don't see the point of having the driver allow PhysX processing by the GPU. The GPU should be busy enough rendering graphics (my 8800GTS sure as heck is). What do we have overpowered multi core processors for? It would make more sense to use the CPU to process physics using the PhysX software library if there isn't going to be a dedicated physics processor on the card. I would have thought that with Nvidia's acquisition, they'd make provisions for that on future cards. Doesn't help current ones of course.

    I just have a Core2 duo E6600 and even it isn't maxed out by games. (I can install a Quad core processor on this motherboard if I need to at any moment... bring it on)
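    A rough sketch of that "use the spare cores" idea (my own toy code, not the actual PhysX software solver): the same kind of integration loop, just split across however many hardware threads the CPU reports.

    ```cpp
    // Toy sketch of the "let the spare cores do it" idea (made-up code, not the
    // PhysX software solver): split the same integration loop across however many
    // hardware threads the CPU reports. A dual core gets 2 workers, a quad gets 4.
    #include <cstdio>
    #include <functional>
    #include <thread>
    #include <vector>

    struct Body { float x, y, z, vx, vy, vz; };

    void integrateRange(std::vector<Body>& bodies, size_t begin, size_t end, float dt) {
        for (size_t i = begin; i < end; ++i) {
            Body& b = bodies[i];
            b.vy -= 9.81f * dt;   // gravity
            b.x  += b.vx * dt;
            b.y  += b.vy * dt;
            b.z  += b.vz * dt;
        }
    }

    int main() {
        const size_t n = 200000;
        const float dt = 1.0f / 60.0f;
        Body start = { 0.0f, 100.0f, 0.0f, 1.0f, 0.0f, 0.0f };
        std::vector<Body> bodies(n, start);

        unsigned workers = std::thread::hardware_concurrency();
        if (workers == 0) workers = 2;   // fall back to 2 if the runtime can't tell

        // Give each worker its own contiguous slice of the body array.
        std::vector<std::thread> pool;
        size_t chunk = (n + workers - 1) / workers;
        for (unsigned w = 0; w < workers; ++w) {
            size_t begin = w * chunk;
            size_t end   = begin + chunk < n ? begin + chunk : n;
            if (begin < end)
                pool.emplace_back(integrateRange, std::ref(bodies), begin, end, dt);
        }
        for (size_t t = 0; t < pool.size(); ++t) pool[t].join();

        printf("stepped %u bodies on %u threads, body 0 y = %f\n",
               (unsigned)n, workers, bodies[0].y);
        return 0;
    }
    ```

    On a Core 2 Duo this spawns two workers, on a quad four; whether that keeps up with a dedicated physics processor depends entirely on how many objects and contacts you throw at it.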
     
  10. kafros

    kafros F1 manta tryouts

    Joined:
    Jan 21, 2005
    Messages:
    331
    Likes Received:
    0
    PhysX is an API like DirectX. You don't NEED DirectX to display graphics. You can use another API like OpenGL, or go the very old hardcore way and create a separate rendering lib for every card (ouch). Since physics are not a big thing in games yet, you see many games having proprietary physics engines (id's Doom 3 for example, and Crysis too from what you say - I could be wrong - see the wiki)

    PhysX is exclusive to the 8 series because they are the only ones that support an API for general (non-graphics) application programming. See the info below:

    Huang revealed that Nvidia's strategy is to take the PhysX engine and port it onto CUDA. For those not in the know, CUDA stands for Compute Unified Device Architecture, and it's a C-like application programming interface Nvidia developed to let programmers write general-purpose applications that can run on GPUs. All of Nvidia's existing GeForce 8 graphics processors already support CUDA, and Huang confirmed that the cards will be able to run PhysX.
     
  11. haslo

    haslo Moar Pie!

    Joined:
    Jan 21, 2008
    Messages:
    363
    Likes Received:
    0
    Usually it is; that's highly configurable of course, and you can have an entire level without a single light map if you wish, but the thing is that dynamic lights use a whole lot more processing power than light-mapped (baked-in) ones, since for light maps the entire load of calculating the lighting (at a higher quality, with way more lights) is handled at build time.
     
  12. ambershee

    ambershee Nimbusfish Rawks

    Joined:
    Apr 18, 2006
    Messages:
    4,519
    Likes Received:
    7
    You can have different light maps on different objects, and thus when you break something, you can then drop the light map and light it dynamically.
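    Roughly like this, as a completely made-up mini example (not UE3 code): each surface either reads a value baked at build time or pays for N-dot-L math against every dynamic light each frame, and breaking the object just flips the flag.

    ```cpp
    // Made-up mini example (not UE3 code): a surface either reads a value baked
    // at build time, or computes N-dot-L against live lights every frame.
    // Breaking the object just stops it trusting its baked value.
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    struct Surface {
        Vec3 normal;
        float bakedLight;     // computed once at level build time
        bool usesLightmap;    // flips to false when the object breaks/moves
    };

    // Per-frame cost: either a lookup, or real math against every dynamic light.
    float shade(const Surface& s, const Vec3* lightDirs, int numLights) {
        if (s.usesLightmap)
            return s.bakedLight;                  // cheap: precomputed
        float total = 0.0f;
        for (int i = 0; i < numLights; ++i) {     // expensive: per light, per frame
            float ndotl = dot(s.normal, lightDirs[i]);
            if (ndotl > 0.0f) total += ndotl;
        }
        return total;
    }

    int main() {
        Vec3 lights[2] = { {0.0f, 1.0f, 0.0f}, {0.7071f, 0.7071f, 0.0f} };
        Surface wall = { {0.0f, 1.0f, 0.0f}, 0.9f, true };

        printf("intact wall: %f (lightmap)\n", shade(wall, lights, 2));
        wall.usesLightmap = false;                // the wall just got blown up
        printf("broken wall: %f (dynamic N-dot-L)\n", shade(wall, lights, 2));
        return 0;
    }
    ```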
     
  13. BobTheBeheader

    BobTheBeheader New Member

    Joined:
    Aug 31, 2005
    Messages:
    799
    Likes Received:
    0
    Two reasons, I'll bet. Well, 3 tbh...

    1. The PhysX API/library is probably better suited to their needs. The package is designed in a way that happened to be more compatible with the engine.

    2. Either that or Epic liked AGEIA PhysX so much from the get-go that they decided to design portions of the game around it.

    3. They have a business deal, UT3 helps to promote AGEIA (theoretically speaking). That is to say, in layman's terms, they are in bed together.


    It could be that the specific hardware, the processors themselves, on the non-8000 series may not be physically able to accept the new software. They might not be wired in a way that could potentially support PhysX.
     
  14. ambershee

    ambershee Nimbusfish Rawks

    Joined:
    Apr 18, 2006
    Messages:
    4,519
    Likes Received:
    7
    Using Ageia was a way of implementing a strong, easy to employ physics engine for a low development cost. Win-win, really.

    Crysis has its own proprietary physics simulation. Probably because their engine is set up too awkwardly to employ a more generic one.
     
  15. BobTheBeheader

    BobTheBeheader New Member

    Joined:
    Aug 31, 2005
    Messages:
    799
    Likes Received:
    0
    After reading that CUDA article, I'm beginning to wonder if CPUs and GPUs will one day be merged into a single unit. There certainly isn't anything wrong with putting dedicated linear algebra functions on a CPU, for instance, and I imagine it would come in handy sometimes.
     
  16. haslo

    haslo Moar Pie!

    Joined:
    Jan 21, 2008
    Messages:
    363
    Likes Received:
    0
    Right now, GPUs have faster access to dedicated memory; that's one defining difference. There are also plenty of algorithms that are hardwired into both GPUs and CPUs, each for their specific purposes (multimedia extensions and number crunching for CPUs, vector maths and matrix calculations for GPUs). This hardwiring is done in ROM, and could theoretically all be done in EEPROM (which is flashable and reprogrammable). However, ROM is cheaper to produce than EEPROM, so with all the cost pressure it's somewhat improbable that they'll be merged and we'll only see one type of generic processor in the future, unless processors become so fast that those hardwired structures are no longer necessary (which I don't believe; it seems we always need more processing power, however much we have). Both kinds of processors might be moved onto single chips if processor and graphics card manufacturers cooperate more tightly, though. Of course, having dedicated processors for certain tasks has huge speed advantages as well.

    Technically feasible to have just one kind of multi-core xPU? Sure :) It's all binary operations anyway.
     
  17. ambershee

    ambershee Nimbusfish Rawks

    Joined:
    Apr 18, 2006
    Messages:
    4,519
    Likes Received:
    7
    Consumers who are into their hardware will like having them separate anyway. It's good to have control and to be able to switch things in and out :)
     
  18. BobTheBeheader

    BobTheBeheader New Member

    Joined:
    Aug 31, 2005
    Messages:
    799
    Likes Received:
    0
    Don't worry, I have several EEPROMs and a programmer sitting in a box under my staircase. ;)
     
  19. Phopojijo

    Phopojijo A Loose Screw

    Joined:
    Nov 13, 2005
    Messages:
    1,458
    Likes Received:
    0
    That's what we call "The Cell" or AMD's "Torrenza"/"Fusion"
     
  20. ambershee

    ambershee Nimbusfish Rawks

    Joined:
    Apr 18, 2006
    Messages:
    4,519
    Likes Received:
    7
    Cell is not a GPU/CPU hybrid, it's just a particular form of microprocessor architecture. It's genuinely not special. Torrenza is the same. Fusion is a future attempt at doing it however, having separate cores for GPU/CPU specific applications (but each core will not do both, and the GPU component will be very basic). Trouble is, it'll likely use a completely custom instruction set, giving it a very low adoption rate for software developers, and thus either a very low compatibility rate with software, or a lot of software that just won't use it properly.
     
