Physics Simulation Forum

 

All times are UTC




Post new topic Reply to topic  [ 3 posts ] 
Author Message
PostPosted: Tue Jan 03, 2017 7:23 pm 
Offline

Joined: Tue Jan 03, 2017 6:35 pm
Posts: 1
So I've seen lots of YouTube videos of physics simulations. Many of them have huge particle counts: thousands, sometimes tens of thousands.
Some are blocks (thousands of objects); some are fluid-like simulations (tens of thousands of objects).

I am working on a simulation of my own. I've got a long way to go, but I've already done a lot of work. Right now my sim runs well with about 5,000 objects in JavaScript, and that's just doing basic physics (my simulation will eventually need special if statements and triggering conditions). I need more objects, and I need my program to be more efficient, so I want to rewrite it in C++.

Here's my main concern. I know I'll get more raw speed, but it would really help to know how much my simulation's features will slow the program down. In fluid or basic physics simulations, all objects interact with each other in roughly the same way. In the simulation I'm trying to build, each particle has unique data associated with it: it still has physical interactions, but it also stores a record and has if conditions that fire when certain things happen. Are these unique, step-like functions for each particle going to make my simulation much more expensive to process?

I know I can't ask for hard numbers; I just need a general idea of whether this is something that would increase the processing cost 100-fold or not.


PostPosted: Fri Jan 06, 2017 11:31 am 
Offline

Joined: Mon Jun 06, 2016 9:34 pm
Posts: 6
Quote:
I just need a general idea of whether this is something that would increase the processing cost 100-fold or not.


Short answer: no.

Having first learned to program on an 8051, I still find it jarring when languages that 'do more' turn out to be as performant as those that run closer to the metal. Micro-benchmarks can be manipulated to show almost anything, of course. The only way you'll know for sure is to profile your simulation, find the 20% of the code that takes 80% of the time, port it to C++, and measure again.

However, with caveats...
http://unriskinsight.blogspot.co.uk/201 ... olves.html

...I doubt you'd get anywhere near a 100x speedup...
http://onlinevillage.blogspot.co.uk/201 ... han-c.html
https://www.dinochiesa.net/?p=920

(If you do profile, post back and let us know what the results were!)

Sj


PostPosted: Fri Jan 06, 2017 5:35 pm 
Offline

Joined: Tue Feb 20, 2007 4:56 pm
Posts: 228
My opinion is that you could only achieve a 100x speedup if your algorithm could run on the GPU. But you mention "if" conditions and unique per-particle data, which will make a GPU solution not only difficult (or impossible) but also much slower than typical GPU-based particle systems.

Now, could you achieve a 10x speedup with a skilled rewrite in C++? Probably not, although it's *conceivable*. I agree with sebjf's whole post, and his links are useful for some rough ideas.

