Yes, I've looked into that quite a bit. Let's say you have a computer with a native resolution of 1366 x ?
Often it makes a huge difference if you pick 1280 x ? instead, which is surprising in a way; you would think manufacturers match the GPU to the native resolution to produce a good product (Apple is great at this).
So "slow" computers are often just mistreated hardware: their GPU doesn't match their native resolution.
Once you start respecting that fact and find the resolution a machine is actually comfortable rendering at, it makes an enormous difference.
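On Android, for instance, you can render into a smaller surface and let the hardware scaler stretch it to the screen. Here's a minimal sketch of that idea; setFixedSize is the real Android call, but the helper class, field names, and the 1280 target are just my illustration:

```java
import android.opengl.GLSurfaceView;
import android.view.SurfaceHolder;

public final class SurfaceScaling {
    // Render into a fixed-size surface below native resolution; the
    // compositor's hardware scaler stretches it to the full screen for free.
    // targetW is whatever width you found the device comfortable with
    // (e.g. 1280 on a 1366-wide panel).
    public static void useFixedRenderSize(GLSurfaceView view,
                                          int nativeW, int nativeH,
                                          int targetW) {
        int targetH = (int) (nativeH * (targetW / (float) nativeW)); // keep aspect ratio
        SurfaceHolder holder = view.getHolder();
        holder.setFixedSize(targetW, targetH);
    }
}
```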
I have source code that runs incredibly nicely (a steady 60 fps) across many different devices, and then I try some super-simple demo that's not written correctly and get reminded of how bad it can get. The cost of execution is well hidden in patterns you have to learn for yourself, per platform.
Android is the only one still giving me a hard time, so it stands out as particularly difficult.
I think part of it is because it runs Java, so you have to think a lot about the actual data structures you use.
For instance, I can tell you right away that tiling more than 8000 tiles within one frame is not possible. Maybe not because the graphics chip can't handle it, but because even the quickest way you can go through an array in Java is not quick enough.
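For reference, this is roughly what that "quickest way" tends to look like: flat primitive arrays and a plain indexed loop, so there is no Iterator allocation and no Integer autoboxing. The tile fields and drawTile() below are my illustration, not any particular engine's API:

```java
public final class TileLoop {
    static final int COUNT = 8000;
    // "Structure of arrays": one primitive array per field, no object per tile.
    static final int[] tileX  = new int[COUNT];
    static final int[] tileY  = new int[COUNT];
    static final int[] tileId = new int[COUNT];

    static void drawTile(int x, int y, int id) {
        // stand-in for the real per-tile draw call
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        for (int i = 0; i < COUNT; i++) {          // plain indexed loop over int[]
            drawTile(tileX[i], tileY[i], tileId[i]);
        }
        long t1 = System.nanoTime();
        System.out.println("one pass: " + (t1 - t0) / 1_000_000.0 + " ms");
        // If even this loop blows your frame budget, individually drawing
        // 8000+ tiles per frame simply won't fit; you have to batch.
    }
}
```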
EDIT: By the way, I've checked how CX translates its code to Java, and CX handles Java arrays in one of the best ways (as far as I can see).
The true fill rate on any Android device is easy to spot, though: it always lies around 2-3x the native resolution (they're supposed to handle 4x, but that last bit gets lost somewhere). I normally aim for 2x; that seems to work perfectly smoothly even when you put the phone in battery-saving mode.
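In numbers, aiming for 2x just means budgeting the pixels you touch per frame (overdraw included) at about twice the native pixel count. A back-of-the-envelope sketch; all figures here are illustrative:

```java
public final class FillBudget {
    public static void main(String[] args) {
        int nativeW = 1080, nativeH = 1920;            // example phone panel
        long budget = 2L * nativeW * nativeH;          // ~2x native pixels per frame

        // Suppose we draw a full-screen background plus ~1.5 screens of sprites:
        long drawn = (long) nativeW * nativeH          // background layer
                   + (long) (1.5 * nativeW * nativeH); // sprite overdraw

        System.out.println("budget: " + budget + " px, drawn: " + drawn + " px");
        if (drawn > budget) {
            System.out.println("over budget: expect dropped frames, "
                    + "especially in battery-saving mode");
        }
    }
}
```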
I hope this information is of some help and that you find a way to paint that huge scene as you wish!