Limitations of modern realtime physics engines

WhiteDragon103
Posts: 2
Joined: Mon Apr 06, 2009 10:28 am

Limitations of modern realtime physics engines

Post by WhiteDragon103 »

Hello.

I'd like to know about the latest and greatest of realtime physics simulations.

Specifically, I want to know about ways that they can be improved. What limitations are there? What problems need solutions? Is there room for improvement in the algorithms?

I ask this because I am a physics engine hobbyist. I want to invent new ways to improve the performance, accuracy and efficiency of realtime physics engines.

I have my own optimization concepts which I am implementing, but the last thing I'd want to do is reinvent the wheel. However, in case my ideas are original I'd like to keep them to myself.

I'd appreciate any information which you think would be helpful.
Erin Catto
Posts: 316
Joined: Fri Jul 01, 2005 5:29 am
Location: Irvine
Contact:

Re: Limitations of modern realtime physics engines

Post by Erin Catto »

WhiteDragon103 wrote:I have my own optimization concepts which I am implementing, but the last thing I'd want to do is reinvent the wheel. However, in case my ideas are original I'd like to keep them to myself.
So you want us to help you to develop some ideas that you won't share with us? That's a hard pill to swallow.
WhiteDragon103
Posts: 2
Joined: Mon Apr 06, 2009 10:28 am

Re: Limitations of modern realtime physics engines

Post by WhiteDragon103 »

Lol, no, that's not what I meant. I wanted to know about the limitations and such of modern physics engines, things that can be improved, and so on. For instance, after glancing around the forum, some people found that dropping heavy objects on light objects causes glitches.
I figured that a physics engine discussion community would bring me up to date with, perhaps, a brief summary of what kinds of things are being worked on.

Also, if by chance I did have some original ideas, telling people too much about them - especially people interested in developing this kind of stuff - would effectively be shooting myself in the foot. I don't want to take the chance that someone will steal my ideas in case I can profit from them. I understand if you cannot help me without me telling you exactly what my ideas are.
User avatar
projectileman
Posts: 109
Joined: Thu Dec 14, 2006 4:27 pm
Location: Colombia
Contact:

Re: Limitations of modern realtime physics engines

Post by projectileman »

The major limitation of current realtime physics engines is the Lack of Determinism.

A physics engine is deterministic if, given the same initial parameters, it produces exactly the same results every time the simulation is run for a fixed interval of time.

Bullet doesn't exhibit determinism at all; it always gives approximate results.
Same for Nvidia PhysX.
I don't know anything about Havok.

Do you know of any physics engine that is truly deterministic?
User avatar
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA
Contact:

Re: Limitations of modern realtime physics engines

Post by Erwin Coumans »

projectileman wrote:Bullet doesn't show determinism at all. Always it gives approximated results.
Bullet is deterministic when using the same machine and executable, if you reset the simulation properly, disable solver randomization, and use a fixed timestep (check the Bullet demos' restart function). On different machines, compilers, etc., the simulation will obviously be different. If you have problems with determinism, please modify a demo that shows the problem, and report it in the Bullet forum.

If you want to do networked physics, this means you have to synchronize and send the state of rigid bodies over the network, not only the user input.
See also http://gafferongames.com/gdc2009 for networked game physics.
Hope this helps,
Erwin
User avatar
lazalong
Posts: 10
Joined: Fri Mar 06, 2009 6:55 am
Location: Australia - Canberra
Contact:

Re: Limitations of modern realtime physics engines

Post by lazalong »

Erwin Coumans wrote:Bullet is deterministic when using the same machine and executable, if you reset the simulation properly, disable solver randomization, and use a fixed timestep (check the Bullet demos' restart function). On different machines, compilers, etc., the simulation will obviously be different.
Why?

If you use the same time steps, force inputs, and the same seed for the randomizer, you should be able to get identical results on any machine.

Shouldn't this be a TODO: Similar results irrespective of the machine?
bone
Posts: 231
Joined: Tue Feb 20, 2007 4:56 pm

Re: Limitations of modern realtime physics engines

Post by bone »

lazalong wrote:
Erwin Coumans wrote:Bullet is deterministic when using the same machine and executable, if you reset the simulation properly, disable solver randomization, and use a fixed timestep (check the Bullet demos' restart function). On different machines, compilers, etc., the simulation will obviously be different.
Why?

If you use the same time steps, force inputs, and the same seed for the randomizer, you should be able to get identical results on any machine.

Shouldn't this be a TODO: Similar results irrespective of the machine?
Compiler optimizations for MSVC include three different floating-point modes: strict, precise, and fast. They affect various things like IEEE conformance, whether intermediate results are stored in 32, 64, or 80 bits, equality tests, etc. So even with a single compiler on a single machine, you are likely able to achieve at least 3 different results.

Some FPUs do not support 80-bit precision at all, others might store up to 128 bits. Floating-point rounding modes could conceivably have different defaults on different CPUs. Now throw in the ones on video cards and I'm pretty sure you can get a wide variety of results.

In other words, I'm not convinced that this problem is the physics engine's fault in this case. Erwin's caveats still apply, of course (plus your caveat about inputs).

As you note, randomizing with the exact same seed isn't really randomizing. But that assumes the implementation of rand() is identical for each compiler, and I don't know whether it is; I believe its hardcoded constants could differ between implementations. Of course, you could also write your own pseudo-random number generator that behaves the same on any machine. In either case, you would have to guarantee that no other mechanism is using the same seeded random function (for example, if your graphics is running at a variable framerate and making an undetermined number of calls to it). Disabling randomization completely in the physics engine is your safest bet.
User avatar
lazalong
Posts: 10
Joined: Fri Mar 06, 2009 6:55 am
Location: Australia - Canberra
Contact:

Re: Limitations of modern realtime physics engines

Post by lazalong »

bone wrote:Compiler optimizations for MSVC include three different floating-point modes: strict, precise, and fast. They affect various things like IEEE conformance, whether intermediate results are stored in 32, 64, or 80 bits, equality tests, etc. So even with a single compiler on a single machine, you are likely able to achieve at least 3 different results.

Some FPUs do not support 80-bits at all, others might store up to 128 bits. Floating-point rounding modes could conceivably have different defaults on different CPUs. Now throw in the ones on video cards and I'm pretty sure you can get a wide variety of results.
I see.
In other words, if we want to make Bullet deterministic, we would first need "test suites with basic mathematical operations" to identify the compiler parameters we need for each compiler and CPU combination.

I wouldn't be surprised if such a study exists somewhere... the problem is finding it :D
Erin Catto
Posts: 316
Joined: Fri Jul 01, 2005 5:29 am
Location: Irvine
Contact:

Re: Limitations of modern realtime physics engines

Post by Erin Catto »

lazalong wrote:I wouldn't be surprised if such a study exists somewhere... the problem is finding it
You will likely find many failures. I've never heard of this being successful. I know of at least one high-profile game that intended to make its floating-point computations deterministic but failed. In the end they had to write an entire floating-point unit in software using integer operations. It was too late to convert the code to fixed point.
jorjee
Posts: 7
Joined: Fri Mar 27, 2009 12:14 am

Re: Limitations of modern realtime physics engines

Post by jorjee »

Erin Catto wrote:In the end they had to write an entire floating point unit in software using integer operations. It was too late to convert the code to fixed point.
I wonder what that must have done to the game performance.
raigan2
Posts: 197
Joined: Sat Aug 19, 2006 11:52 pm

Re: Limitations of modern realtime physics engines

Post by raigan2 »

Erin Catto wrote:In the end they had to write an entire floating point unit in software using integer operations. It was too late to convert the code to fixed point.
I might be showing my ignorance, but would the former really take less effort than the latter? That seems crazy!
bone
Posts: 231
Joined: Tue Feb 20, 2007 4:56 pm

Re: Limitations of modern realtime physics engines

Post by bone »

It has occurred to me that there may be another obstacle to deterministic behavior. Multi-threaded optimizations of an iterative solver can be non-deterministic, because you can't predict the order in which a particular body's constraints are solved when they run on different threads. One possible workaround is to group constraints by relation (like an island) and then solve each group in its own thread.
RobW
Posts: 33
Joined: Fri Feb 01, 2008 9:44 am

Re: Limitations of modern realtime physics engines

Post by RobW »

That's the way it already works - Bullet already solves one island per thread. In a Gauss-Seidel scheme each row depends on the previous row, so you can't solve rows in parallel. It would be OK for a Jacobi solver, but that would require more iterations. Generally, a single constraint is too granular to constitute a job for a CPU thread anyway, but the story might be different on a GPU.

The problem with solving an island per thread is that in pathological cases (e.g. tech demos!) many objects end up in the same island, so the parallelism is poor. On the flip side, often there are just a couple of objects in an island (imagine an object just rolling/sliding on the ground), so the island is really a little too small to constitute a job for a thread. When I was working with Bullet in my last job, I grouped up islands until the total number of bodies reached a threshold, then sent them off to be processed on another CPU. That fixed the problem of tiny islands, but not of monolithic ones, though those don't really happen in real games.
Erwin Coumans wrote:Bullet is deterministic when using the same machine and executable, if you reset the simulation properly, disable solver randomization, and use a fixed timestep (check the Bullet demos' restart function). On different machines, compilers etc, the simulation will obviously be different. If you have problems with determinism, please modify a demo that shows the problem, and report it in the Bullet forum.
One bug we found with determinism in Bullet is as follows:

Gravity is only applied when you enter "stepSimulation". If "stepSimulation" runs several internal timesteps, and on one of those steps a sleeping object is hit and activates, then it will not have gravity applied if another internal timestep is executed before leaving "stepSimulation". This doesn't induce non-determinism by itself, but imagine that you are doing an "action replay" by recording inputs, resetting the world, and playing the inputs back. If for some other reason the replay runs at a different framerate (even for a single frame), then the number of internal timesteps relative to the number of "stepSimulation" calls might differ, and you start to get divergence due to the gravity issue. This sounds very obscure, but it certainly caused us problems and took a long while to track down! :)

Other than that, there were a few bugs relating to the order of contact manifolds but I'm pretty sure they have been fixed. After a few days work we got 100% determinism, and we had extensive logging to verify it. Our simulation involved lots of bodies, moving fast and hitting each other, and ending in resting contact. I think it was a pretty brutal test of determinism so I'm confident it can be achieved with Bullet. This included multi-threaded dynamics, collision detection, and constraint solving! (not the SPU version, it was my own modifications to the standard version of Bullet).

When it comes to determinism over different floating point implementations, that isn't going to be easy to solve without software emulation or fixed point.
bone
Posts: 231
Joined: Tue Feb 20, 2007 4:56 pm

Re: Limitations of modern realtime physics engines

Post by bone »

RobW wrote:That's the way it already works - Bullet already solves one island per thread. In a Gauss-Seidel scheme each row is dependent on the previous row, so you can't solve rows in parallel. It would be ok for a Jacobi solver but it would require more iterations. Generally, a single constraint is too granular to constitute a job for a cpu thread anyway; but the story might be different on on a GPU.
I wasn't talking about Bullet specifically. And I'm talking more about the Sequential Impulses method than direct PGS, even though Erin Catto has proven that they are mathematically equivalent.

One could extend the common iterative methods to solve several sections of a single island simultaneously, making sure not to have two constraints working on the same body at the same time (note: that can be difficult to do efficiently). I don't think anybody in their right mind would suggest breaking up the system constraint-by-constraint, though.

In any case, as I previously noted, this would break determinism unless done very, very carefully.
User avatar
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA
Contact:

Re: Limitations of modern realtime physics engines

Post by Erwin Coumans »

bone wrote: And I'm talking more about the Sequential Impulses method than direct PGS, even though Erin Catto has proven that they are mathematically equivalent.
Projected Gauss-Seidel is the name of the algorithm, and Erin Catto introduced this PGS scheme in a more intuitive way under the name Sequential Impulses. AFAIK Erin hasn't provided any proof of equivalence, because they are essentially the same thing ;-)
RobW wrote: In a Gauss-Seidel scheme each row is dependent on the previous row, so you can't solve rows in parallel. It would be ok for a Jacobi solver but it would require more iterations.
We showed at GDC 2009 that by re-ordering the constraints and batching independent groups of constraints, PGS can be parallelized. See attached Takahiro Harada's GDC slides, or the precompiled 2D or 3D Win32 demos using CUDA 2.1.
The parallel constraint solver CPU and CUDA source code is available in Bullet 2.75 (beta is available for download), both for 2D and 3D, see btCudaDemoDynamicsWorld3D::createBatches in Bullet/Demos/Gpu3dDemo.
The OpenCL port will be released soon, and the solver inner loop will be made fully general PGS/SI (including accumulated impulse for clamping and warm starting).

All constraints are gathered together (independent of island), and split into independent batches, typically a maximum of 10 large batches. The synchronization between batches removes the order-dependency, so this way of parallel solving doesn't introduce non-determinism.
Hope this helps,
Erwin
Attachments
takahiroGDC09s_1.zip
Takahiro Harada GDC 2009 slides on GPU physics
(1.08 MiB) Downloaded 905 times