
NVIDIA, CUDA and PhysX

Music is my hot PhysX.


3D card manufacturers shouldn't take this the wrong way, but it takes a lot to make us crawl out of the communal Eurogamer bed (yes, all the Eurogamer writers share a single large bed - we do it for frugality and communality, which remain our watchwords) and go to a hardware presentation. There's a nagging fear someone may talk maths at us and we'd come home clutching the local equivalent of magic beans. And then we'll be laughed at by our fellow writers and made to sleep in the chilly end where the covers are thin and Tom left dubious stains. That's no fun at all.

Then again, there are some things you can't help but go and have a gawk at. So when an invite claims, "All too often new hardware brings with it a small performance increase - maybe a 5-10 percent over the previous fastest thing. Wouldn't it be far more exciting to see a speed increase of x20 or even x100... well, we'll be happy to show just that on Friday," you have to wander along. Even though you suspect it may be a trap and they're going to attack you with ill-shaped blades, you have to find out what on earth they're talking about.

As we suspected, it wasn't quite what we were hoping for. Sure, there are programs which gain a x100 increase via the methods NVIDIA talked about on this particular Friday, but unless you're working in economics or astrophysics modelling, it's not exactly that relevant. However, something more quietly astounding was explained. Namely, that despite the fact that no-one you know bought a PhysX card, if you're a PC gamer with a relatively recent NVIDIA card, you've already got one. Or, at least, you will soon. Spooks.

Get him!

The primary idea NVIDIA was trying to push was the Optimised PC - the approach discussed in Rob Fahey's interview with Roy Taylor the other day. The idea being that the traditional PC approach, where you buy the fastest processor you can afford, doesn't actually produce the best results in most situations. Spend more on - predictably - a GPU-driven 3D card instead and, in an increasing number of areas, you're going to get much higher performance. If the program is using the GPU in a meaningful way, anyway. NVIDIA highlights areas like image-processing and HD video-encoding, as well as - natch! - games. You lose in single-threaded activities - like, say, just booting up a program - but NVIDIA argues a small delay in opening a Word document is less noticeable than dropped frames in a game.

Where it starts getting interesting is NVIDIA's development language, CUDA. The problem with multi-threaded programming is that it's radically different to single-threaded programming (and, yes, we're getting into "why would anyone but a programmer care about this?" territory, but it's background for the key point later). It's hard to do, and CUDA is basically a way to make it more accessible.

NVIDIA claims anyone experienced in C or C++ will be able to get a grip on it (i.e. not us, but the aforementioned programmers). This means that anyone who codes in CUDA can program the GPU to do pretty much whatever they like; it's by turning the 3D card into a bank of processors that the financial analysts and the astrophysics guys are getting such impressive results. And impressive savings, as it's a lot cheaper to do it this way.
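For the curious (and the aforementioned programmers), here's a rough sketch of what CUDA code looks like - a hypothetical, minimal example of our own, not anything NVIDIA showed on the day. The function marked __global__ is ordinary C with a couple of extra keywords, and the card runs it across thousands of threads at once, one per element of the array - the "bank of processors" idea in practice.

    #include <cstdio>
    #include <cstdlib>

    // Runs on the GPU: each thread handles one element of the arrays.
    __global__ void add(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;                // a million floats
        const size_t bytes = n * sizeof(float);

        // Ordinary C on the CPU side: allocate and fill the input arrays.
        float *a = (float *)malloc(bytes);
        float *b = (float *)malloc(bytes);
        float *c = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        // Copy the data to the card, launch the kernel across thousands of
        // threads (blocks of 256), then copy the result back.
        float *da, *db, *dc;
        cudaMalloc(&da, bytes);
        cudaMalloc(&db, bytes);
        cudaMalloc(&dc, bytes);
        cudaMemcpy(da, a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, b, bytes, cudaMemcpyHostToDevice);

        add<<<(n + 255) / 256, 256>>>(da, db, dc, n);

        cudaMemcpy(c, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", c[0]);          // prints 3.000000

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(a); free(b); free(c);
        return 0;
    }

Compile that with NVIDIA's nvcc compiler and it runs: the GPU-specific bits are a handful of keywords and copy calls rather than a whole new way of thinking, which is the accessibility NVIDIA is banging on about.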