UT3 + Ageia "PhysX". Does Crysis have it too?


neilthecellist

Renegade.
May 24, 2004
2,306
0
0
San Diego, California
Ok, I just learned that Crysis doesn't use any AGEIA PhysX. (It says so on the official Crysis forum; it took me a while to find on a dial-up modem.) If Crysis doesn't need AGEIA and it still looks/feels/plays great, why would UT3 need it? And why, if it's so needed for a better gameplay experience, is AGEIA PhysX exclusive to GeForce 8 cards only?
 

haslo

Moar Pie!
Jan 21, 2008
363
0
0
Bern CH
www.haslo.ch
And I have the feeling I bought my 3Dfx Voodoo 2 for nothing. :(

Aah, fond memories... pass-through cables... those were the days... it's really a shame graphics cards got more powerful, or I could still use it... I had to buy, like, a dozen or more new graphics cards since! All the money I wasted, when they could've just stopped developing graphics chips and features!

Ok, I just learned that Crysis doesn't use any AGEIA PhysX. (It says so on the official Crysis forum; it took me a while to find on a dial-up modem.) If Crysis doesn't need AGEIA and it still looks/feels/plays great, why would UT3 need it? And why, if it's so needed for a better gameplay experience, is AGEIA PhysX exclusive to GeForce 8 cards only?

As for Crysis, it would probably benefit from having PhysX support as well, considering how much physics simulation is going on there. The devs opted not to support a card that has an install base of way less than 1%, though, and optimized CPU physics instead. Sensible decision in my opinion; maybe with a broader install base, though, Crysis will be patched to get PhysX support as well.

As for the "only series 8" thing, it's because only the newest series has the newest features. In this particular case, it's a special chipset feature (CUDA, apparently, whatever that means) that can easily duplicate what the PhysX chip does, seemingly, particularly now that they have access to both all the patents and all the know-how from Ageia.

No, you won't have access to the newest features with a less-than-newest card, ever. In fact, I'm mightily impressed by Nvidia porting PhysX to series 8, I'd have expected that only series 9 and onwards would benefit from the deal they made with Ageia.
 
Last edited:

Grobut

Commissar Grobut
Oct 27, 2004
1,822
0
0
Soviet Denmark
Ok, I just learned that Crysis doesn't use any AGEIA PhysX. (It says so on the official Crysis forum; it took me a while to find on a dial-up modem.) If Crysis doesn't need AGEIA and it still looks/feels/plays great, why would UT3 need it? And why, if it's so needed for a better gameplay experience, is AGEIA PhysX exclusive to GeForce 8 cards only?

You can make all the advanced physics you want without PhysX, but the problem will always be performance, since it is mainly your CPU that has to pull this stuff, and it's already got its hands full.

The big idea here is taking all the physics stuff away from the CPU, and dumping it onto a processor that does nothing else, taking the load off your CPU and Vid-card, so they are free to do other things.
 

Complete

New Member
Feb 17, 2008
2
0
0
Crysis delivers the best graphics and engine yet today. It's still impossible to max out the ridiculous system requirements... rumor goes that there is no machine yet (!) capable of running Crysis at its fullest... Why would Crytek release a game that no one can enjoy at its fullest... too bad :(
 

Complete

New Member
Feb 17, 2008
2
0
0
Well if that's the case, they have to do it quick because nobody feels like waiting 3-5 years to get a decent system :eek:
 

Jonathan

New Member
Mar 19, 2006
542
0
0
Crysis uses its own physics engine, and like other games, a lot of times you just make a proxy/collision mesh (in UnrealEd, you save the hidden collision mesh as UCX_meshname, and in CryEngine 2, you set it in the material properties in the 3DS Max Crytek exporter).

I like the physics in Unreal Tournament 3 (when it comes to rag dolls) a lot better than Crysis, but I like the large amount of interactivity in Crysis (breaking buildings, etc.). Unreal Engine 3 can surely do it (those Brothers in Arms: HH videos from Gearbox Software show this, with the breakable fences, etc.), but I guess since in UT3 and other UE3 games the light is usually baked in, you can't always have everything be breakable.
 

Phopojijo

A Loose Screw
Nov 13, 2005
1,458
0
0
37
Canada
Yeah, sucks how "specific setups" covers such a wide number of systems in this case, huh?
Actually I didn't encounter anyone who had that problem until you came around...

Is it possible that you found people with similar issues as you *because* you had the issue and looked for people with your problem?

Kinda a self-selected survey.

You enjoy trolling anyone who doesn't see the same things as you or doesn't experience the same issues as you, don't you?

In closing -- play Battlefield 2... you'll enjoy UnrealEngine 1/2/3's stability.

But yes -- I later found out the issue doesn't apply to Vista, only XP. So it might be more widespread than I experienced.

As for Crysis, it would probably benefit from having PhysX support as well, considering how much physics simulation is going on there. The devs opted not to support a card that has an install base of way less than 1%, though, and optimized CPU physics instead. Sensible decision in my opinion; maybe with a broader install base, though, Crysis will be patched to get PhysX support as well.

As for the "only series 8" thing, it's because only the newest series has the newest features. In this particular case, it's a special chipset feature (CUDA, apparently, whatever that means) that can easily duplicate what the PhysX chip does, seemingly, particularly now that they have access to both all the patents and all the know-how from Ageia.

No, you won't have access to the newest features with a less-than-newest card, ever. In fact, I'm mightily impressed by Nvidia porting PhysX to series 8, I'd have expected that only series 9 and onwards would benefit from the deal they made with Ageia.
CUDA is nVidia's programming language for GPUs.

Videocards used to be fixed-function cards with dedicated functions -- this chip did texture assignment -- this chip did N-dot-L linear algebra lighting calculations... eventually people wanted the GPU to do more and more -- with some materials in UnrealEngine3 alone ending up at a few hundred shader instructions for a single pixel.

nVidia decided "Hey -- if we're going to generalize the GPU... why don't we make a C-esque programming language to communicate with GPUs for anything we want?"

And there came CUDA... sure it's mostly used for shader ops... but you can use it for Physics if you wanted to -- why not? It's generalized.

I've even seen a videocard run a program to turn it -- basically -- into a soundcard. Fourier analysis and waveform blending, all while outputting the final waveform to the screen in a cool little oscilloscope thing.
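Just to make the "you can use it for Physics" bit concrete, here's a toy sketch of what a GPU physics step could look like in CUDA. Every name, the particle layout and the launch numbers are made up by me for illustration -- this is not Ageia's or Nvidia's actual code, just the general pattern:

// Toy example: integrate a batch of particles on the GPU instead of the CPU.
// Struct layout, kernel name and launch configuration are illustrative only.
#include <cuda_runtime.h>

struct Particle {
    float3 pos;
    float3 vel;
};

// One thread per particle: apply gravity and integrate the position.
__global__ void stepParticles(Particle* particles, int count, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;

    particles[i].vel.y -= 9.81f * dt;                // gravity
    particles[i].pos.x += particles[i].vel.x * dt;
    particles[i].pos.y += particles[i].vel.y * dt;
    particles[i].pos.z += particles[i].vel.z * dt;
}

void stepOnGpu(Particle* d_particles, int count, float dt)
{
    int threads = 256;
    int blocks  = (count + threads - 1) / threads;
    // The launch is asynchronous, so the CPU is free to keep preparing the
    // next frame while the GPU crunches through the particle batch.
    stepParticles<<<blocks, threads>>>(d_particles, count, dt);
}

Real PhysX-on-CUDA is obviously far more involved (collision detection, constraint solving, getting results back for rendering), but that's the basic appeal: thousands of small, independent calculations that a GPU chews through in parallel.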

Crytek used their own method for physics (I believe based off Havok 3 or 4 -- but I dunno for sure)... it was a design tradeoff they made: more control, but more development time. You don't *need* middleware to make every videogame... Insomniac made a point of NOT licensing middleware, and Resistance and Ratchet & Clank Future: Tools of Destruction weren't failures by any stretch of the imagination. That being said -- look at all of UE3's satisfied customers.

It's all in what you need, what you can accomplish, and what you want to spend time doing.
 
Last edited:

Grogan

New Member
I don't see the point of having the driver allow PhysX processing by the GPU. The GPU should be busy enough rendering graphics (my 8800GTS sure as heck is). What do we have overpowered multi core processors for? It would make more sense to use the CPU to process physics using the PhysX software library if there isn't going to be a dedicated physics processor on the card. I would have thought that with Nvidia's acquisition, they'd make provisions for that on future cards. Doesn't help current ones of course.

I just have a Core2 duo E6600 and even it isn't maxed out by games. (I can install a Quad core processor on this motherboard if I need to at any moment... bring it on)
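For what it's worth, the "let the spare core do it" idea looks roughly like this in code. This is a rough sketch with invented names (World, simulatePhysics, renderFrame), not how UE3 or the PhysX software library is actually structured, just the pattern of a second core stepping the simulation while the first keeps feeding the video card:

#include <atomic>
#include <thread>

// Hypothetical stand-ins for whatever the engine and physics library provide.
struct World { /* rigid bodies, ragdolls, ... */ };
void simulatePhysics(World& world, float dt) { /* step the simulation */ }
void renderFrame(const World& world)         { /* issue draw calls    */ }

int main()
{
    World world;
    std::atomic<bool> running{true};

    // Dedicate a second core to physics; a real engine would double-buffer
    // or lock the shared state instead of touching it from both threads.
    std::thread physicsThread([&] {
        const float dt = 1.0f / 60.0f;
        while (running.load())
            simulatePhysics(world, dt);
    });

    for (int frame = 0; frame < 1000; ++frame)
        renderFrame(world);    // main thread keeps the video card busy

    running.store(false);
    physicsThread.join();
}

Whether that beats a dedicated PhysX chip depends entirely on how well the simulation scales across spare cores, which is exactly the trade-off being argued here.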
 

kafros

F1 manta tryouts
Jan 21, 2005
331
0
0
49
Under Articstronghold's bridge
Ok, I just learned that Crysis doesn't use any AGEIA PhysX. (It says so on the official Crysis forum; it took me a while to find on a dial-up modem.) If Crysis doesn't need AGEIA and it still looks/feels/plays great, why would UT3 need it? And why, if it's so needed for a better gameplay experience, is AGEIA PhysX exclusive to GeForce 8 cards only?

PhysX is an API like DirectX. You don't NEED DirectX to display graphics. You can use another API like OpenGL, or go the very old hardcore way and create a separate rendering lib for every card (ouch). Since physics are not a big thing in games yet, you see many games having proprietary physics engines (id's Doom 3 for example, Crysis too from what you say - I could be wrong - see the wiki).

PhysX is exclusive to the 8 series because they are the only cards that support an API for general (non-graphics) application programming. See the info below:

Huang revealed that Nvidia's strategy is to take the PhysX engine and port it onto CUDA. For those not in the know, CUDA stands for Compute Unified Device Architecture, and it's a C-like application programming interface Nvidia developed to let programmers write general-purpose applications that can run on GPUs. All of Nvidia's existing GeForce 8 graphics processors already support CUDA, and Huang confirmed that the cards will be able to run PhysX.
 

haslo

Moar Pie!
Jan 21, 2008
363
0
0
Bern CH
www.haslo.ch
but I guess since in UT3 and other UE3 games the light is usually baked in, you can't always have everything be breakable.

Usually it is; that's highly configurable of course, and you can have an entire level without a single light map if you wish, but the thing is that dynamic lights use a whole lot more processing power than light mapped (baked-in) ones, as for the latter the entire load of calculating lighting (at a higher quality, with way more lights) is processed at build time.
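To put that in rough code terms (a toy sketch of my own, nothing to do with UE3's actual lighting pipeline): a baked light is a one-off cost at build time that turns into a cheap texture lookup, while a dynamic light pays the full shading cost per pixel, per light, every frame:

// Illustrative only: a bare N-dot-L diffuse term, no shadows or attenuation.
struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Build time: run once per lightmap texel and store the result in a texture.
float bakeTexel(Vec3 normal, Vec3 lightDir)
{
    float nDotL = dot(normal, lightDir);
    return nDotL > 0.0f ? nDotL : 0.0f;    // written into the lightmap
}

// Run time, baked path: one texture fetch per pixel, no per-light math.
float shadeBaked(float lightmapSample)
{
    return lightmapSample;
}

// Run time, dynamic path: the same math as the bake, but per pixel and per
// light, every single frame. That's the price you pay so lighting can react
// to geometry that moves or breaks.
float shadeDynamic(Vec3 normal, const Vec3* lightDirs, int numLights)
{
    float total = 0.0f;
    for (int i = 0; i < numLights; ++i) {
        float nDotL = dot(normal, lightDirs[i]);
        total += nDotL > 0.0f ? nDotL : 0.0f;
    }
    return total;
}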
 

BobTheBeheader

New Member
Aug 31, 2005
799
0
0
36
Warshington State
If Crysis doesn't need AGEIA and it still looks/feels/plays great, why would UT3 need it?
Two reasons, I'll bet. Well, three, tbh...

1. The PhysX API/library is probably better suited to their needs. The package is designed in a way that happened to be more compatible with the engine.

2. Either that or Epic liked AGEIA PhysX so much from the get-go that they decided to design portions of the game around it.

3. They have a business deal, UT3 helps to promote AGEIA (theoretically speaking). That is to say, in layman's terms, they are in bed together.


And why, if it's so needed for a better gameplay experience, is AGEIA PhysX exclusive to GeForce 8 cards only?

It could be that the specific hardware, the processors themselves, on the non-8000 series may not be physically able to accept the new software. They might not be wired in a way that could potentially support PhysX.
 

ambershee

Nimbusfish Rawks
Apr 18, 2006
4,519
7
38
37
Nomad
sheelabs.gamemod.net
Using Ageia was a way of implementing a strong, easy to employ physics engine for a low development cost. Win-win, really.

Crysis has its own proprietary physics simulation. Probably because their engine is set up too awkwardly to employ a more generic one.
 

BobTheBeheader

New Member
Aug 31, 2005
799
0
0
36
Warshington State
Using Ageia was a way of implementing a strong, easy to employ physics engine for a low development cost. Win-win, really. Crysis has its own proprietary physics simulation. Probably because their engine is set up too awkwardly to employ a more generic one.

After reading that CUDA article, I'm beginning to wonder if CPUs and GPUs will one day be merged into a single unit. There certainly isn't anything wrong with putting dedicated linear algebra functions on a CPU, for instance, and I imagine it would come in handy sometimes.
 

haslo

Moar Pie!
Jan 21, 2008
363
0
0
Bern CH
www.haslo.ch
After reading that CUDA article, I'm beginning to wonder if CPUs and GPUs will one day be merged into a single unit. There certainly isn't anything wrong with putting dedicated linear algebra functions on a CPU, for instance, and I imagine it would come in handy sometimes.

Right now, GPUs have faster access to dedicated memory; that's one defining difference. There's also plenty of algorithms that are hardwired into both GPUs and CPUs, each for their specific purposes (multimedia extensions and number crunching for CPUs, vector maths and matrix calculations for GPUs). This hardwiring is done in ROM, and could theoretically all be done in EEPROM (which is flashable and reprogrammable). However, ROM is cheaper to produce than EEPROM, so with all the cost pressure it's somewhat improbable that they'll be merged and we'll only see one type of generic processor in the future, unless processors become so fast that those hardwired structures are no longer necessary (which I don't believe; it seems we always need more processing power, however much we have). Both kinds of processors might be moved onto single chips if processor and graphics card manufacturers cooperate more tightly, though. Of course, having dedicated processors for certain tasks has huge speed advantages as well.

Technically feasible to have just one kind of multi-core xPU? Sure :) It's all binary operations anyway.
 

BobTheBeheader

New Member
Aug 31, 2005
799
0
0
36
Warshington State
Right now, GPUs have faster access to dedicated memory; that's one defining difference. There's also plenty of algorithms that are hardwired into both GPUs and CPUs, each for their specific purposes (multimedia extensions and number crunching for CPUs, vector maths and matrix calculations for GPUs). This hardwiring is done in ROM, and could theoretically all be done in EEPROM (which is flashable and reprogrammable). However, ROM is cheaper to produce than EEPROM, so with all the cost pressure it's somewhat improbable that they'll be merged and we'll only see one type of generic processor in the future, unless processors become so fast that those hardwired structures are no longer necessary (which I don't believe; it seems we always need more processing power, however much we have). Both kinds of processors might be moved onto single chips if processor and graphics card manufacturers cooperate more tightly, though. Of course, having dedicated processors for certain tasks has huge speed advantages as well.

Technically feasible to have just one kind of multi-core xPU? Sure :) It's all binary operations anyway.
Don't worry, I have several EEPROMs and a programmer sitting in a box under my staircase. ;)
 

Phopojijo

A Loose Screw
Nov 13, 2005
1,458
0
0
37
Canada
After reading that CUDA article, I'm beginning to wonder if CPUs and GPUs will one day be merged into a single unit. There certainly isn't anything wrong with putting dedicated linear algebra functions on a CPU, for instance, and I imagine it would come in handy sometimes.
That's what we call "The Cell", or AMD's "Torrenza"/"Fusion".
 

ambershee

Nimbusfish Rawks
Apr 18, 2006
4,519
7
38
37
Nomad
sheelabs.gamemod.net
Cell is not a GPU/CPU hybrid; it's just a particular form of microprocessor architecture. It's genuinely not special. Torrenza is the same. Fusion, however, is a future attempt at doing it, having separate cores for GPU/CPU-specific applications (but each core will not do both, and the GPU component will be very basic). Trouble is, it'll likely use a completely custom instruction set, giving it a very low adoption rate among software developers, and thus either a very low compatibility rate with software, or a lot of software that just won't use it properly.