I'm not able to answer most of your questions (so I will leave them unanswered), but you asked:
Should I be using a low-res model for feeding to Bullet?
Well, of course the answer should be no, unless you can handle the overhead of vertex mapping and interpolation (to recover the positions of the vertices you have excluded from the Ogre model).
But I think that to improve performance you could try to:
1) Remove duplicate vertices from your Ogre model before feeding it to the Bullet engine, and "restore" them when you update the Ogre mesh. (.mesh and .x models usually need many duplicated vertices to perform texture mapping; that does not happen with plain OpenGL models.)
2) Refresh the Ogre mesh (i.e. the positions of the vertices) less frequently. (This will help if Ogre is the bottleneck.)
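A minimal sketch of step 2, assuming you drive the sync from your frame loop (the function and parameter names here are made up for illustration, not part of Ogre or Bullet):

```cpp
#include <cassert>

// Decide whether this frame should copy vertex positions from Bullet
// back into the Ogre hardware buffer. Refreshing every N frames instead
// of every frame divides the buffer-upload cost roughly by N, at the
// price of the graphics mesh lagging the physics by up to N-1 frames.
bool shouldRefreshMesh(unsigned long frameIndex, unsigned int refreshInterval)
{
    // refreshInterval == 0 means "never refresh".
    return refreshInterval != 0 && frameIndex % refreshInterval == 0;
}
```

In your frame listener you would then copy the Bullet positions into the Ogre vertex buffer only when `shouldRefreshMesh(frame, 3)` (or whatever interval looks acceptable) returns true.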
To achieve step 1, you could, for example, move the duplicate vertices to the end of the mesh (updating the indices, and re-optimizing them at the end by shuffling them with an Ogre API method). This way you have a mesh that you can easily split: feed the leading unique vertices to the Bullet engine, then copy the updated positions back from Bullet to Ogre (assigning the correct values to the trailing duplicate vertices).
I believe this is one of the easiest ways of doing it (although it is NOT easy at all, IMO), because you can use a single std::vector< unsigned short > to map the trailing vertices of the mesh to their "unique" counterparts at the beginning, and you don't need a std::map<> at all.
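A sketch of that bookkeeping, using plain std::vector so it compiles without Ogre or Bullet; in real code the positions would live in the Ogre hardware vertex buffer and be updated by btSoftBody, and all names here are illustrative:

```cpp
#include <cstddef>
#include <vector>

// Simplified vertex: position only. A real Ogre vertex also carries
// normals/UVs, which is why the duplicates exist in the first place.
struct Vec3 { float x, y, z; };

// Precondition: the mesh has been reordered so that the uniqueCount
// "single" vertices come first and the duplicates sit at the end.
// duplicateSource[i] is the index (into the unique block) that the
// trailing vertex at position (uniqueCount + i) is a copy of.
void restoreDuplicates(std::vector<Vec3>& vertices,
                       std::size_t uniqueCount,
                       const std::vector<unsigned short>& duplicateSource)
{
    // Bullet has updated vertices[0 .. uniqueCount-1]; copy those
    // positions onto the duplicate vertices at the end of the buffer
    // before uploading the whole buffer back to Ogre.
    for (std::size_t i = 0; i < duplicateSource.size(); ++i)
        vertices[uniqueCount + i] = vertices[duplicateSource[i]];
}
```

You would feed only the first uniqueCount vertices (plus the remapped indices) to Bullet, and after each physics step call restoreDuplicates before refreshing the Ogre mesh. The flat vector works because the mapping is fixed once the reorder is done, so there is nothing to look up dynamically.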
Hope it helps.
P.S. If your mesh has multiple submeshes, you need to move all the vertices to the "shared vertices" area, updating the indices accordingly, before starting step 1.