GPU ?

JWB

New member
Joined
Oct 22, 2021
Messages
19
Hi,

I'm just curious to know whether Cerberus-X/Mojo uses the GPU rather than the CPU for more speed.

Thanks
 
Mojo(1), definitely not. On some platforms it is just OpenGL ES 1. Mojo2 uses OpenGL ES 2, and you can use shaders.
What do you have in mind?
 

Thanks Mike for your speedy reply.
I was just curious because I recently discovered what a huge difference the GPU makes over the CPU.
I was watching a video of a speed test for PyGame - one test showed a game like your Bouncy Aliens
getting only 450 Bouncy Aliens on the screen while still maintaining 60 FPS when using the CPU.
Then they repeated the test but using the GPU, and they got 5000 Bouncy Aliens while still maintaining 60 FPS !

So I was wondering if you also use the GPU for rendering on the screen.
 
I think you need to post a link to this video for clarification.
When someone speaks of CPU rendering, they usually mean software rendering. GPU rendering means hardware acceleration: the CPU is still used to set up the draw commands, and the GPU then uses those to render the output.
Pygame, if I remember correctly, is a Python wrapper around SDL 1.2, and the later version uses SDL2. The desktop version of Cerberus uses GLFW.
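To illustrate the distinction above: with software rendering the CPU writes every destination pixel itself, while with hardware acceleration the CPU only records draw commands that the GPU later executes. A minimal sketch of the CPU side of both paths (the function names and command format here are made up for illustration, not any real API):

```python
# Software rendering: the CPU touches every destination pixel itself.
def software_blit(framebuffer, sprite, x, y):
    """Copy a 2D grid of pixels into the framebuffer at (x, y)."""
    for row_idx, row in enumerate(sprite):
        for col_idx, pixel in enumerate(row):
            framebuffer[y + row_idx][x + col_idx] = pixel

# Hardware rendering: the CPU only records a command; the GPU does the fill.
def record_draw_command(command_list, texture_id, x, y):
    """Queue a draw; a real backend would submit this via e.g. glDrawArrays."""
    command_list.append(("draw_sprite", texture_id, x, y))

framebuffer = [[0] * 8 for _ in range(8)]
software_blit(framebuffer, [[1, 1], [1, 1]], 2, 3)  # CPU cost grows per pixel

commands = []
record_draw_command(commands, texture_id=7, x=2, y=3)  # CPU cost per command
```

The point of the sketch: the software path scales with the number of pixels drawn, while the hardware path's CPU cost scales only with the number of commands submitted.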
 
Hi Mike,

Here is the link - what do you think ?

 
I want to show what can be done with web graphics. This is the state of the art until WebGPU is standardised everywhere (WebGL 3 won't happen). I'm interested in this (and I use Mojo2 quite a lot), so I felt I could give you my two cents.


It will easily allow you 20k sprites at 60 FPS (my 2015 Mac begins to struggle around 40k; that's where it starts to go down). Mojo2 would be able to do pretty much the same if it were not for some details that change the numbers.

Saying that something uses the GPU/hardware is not that specific; you need to say *how* it is used. You can use it indirectly, or more directly via VBOs, shaders, etc. Not everyone who uses the GPU uses it as well as what you can see in that video. There are subtleties that make a big difference.

I'm sure you will feel this gives you one of the best experiences you have ever had in a browser. I can get amazingly close on Android browsers using Mojo2. Not quite, but that is because of those subtleties; they can make a big difference.

Unreal Engine and LibGDX are my top favourite ways of doing graphics right now. Mojo2 comes in at a close shared second place, and I love to use it.

As a simple web developer I code against the canvas directly (which has indeed been GPU-accelerated since 2012); it just has no shaders, which makes you turn to WebGL for things like shaders and that extra speed (there is some extra speed to be had, and Mojo2 is very good at getting it).
I digress; you might not be interested in web technology, but it actually shows how capable the web is today, to the degree that it can easily outshine a native app if the native app is done badly.
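Much of the speed described above comes from batching: instead of issuing one draw call per sprite, the CPU packs all sprite quads into a single vertex buffer and submits one draw. A rough sketch of that packing step (the two-floats-per-vertex layout is an assumption for illustration, not Mojo2's or WebGL's actual format):

```python
def batch_quads(sprites):
    """Pack (x, y, w, h) sprites into one flat vertex array.
    Each quad becomes two triangles = 6 vertices of (x, y)."""
    verts = []
    for x, y, w, h in sprites:
        corners = [(x, y), (x + w, y), (x + w, y + h),
                   (x, y), (x + w, y + h), (x, y + h)]
        for vx, vy in corners:
            verts.extend((vx, vy))
    return verts

# 20,000 sprites still produce just one buffer -> one draw call,
# instead of 20,000 separate submissions.
sprites = [(i % 100, i // 100, 16, 16) for i in range(20000)]
vertex_data = batch_quads(sprites)
```

In a real renderer this buffer would be uploaded to a VBO and drawn with a single `glDrawArrays` call; the per-sprite CPU work is reduced to filling the array.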
 
Thanks for the interesting information. I've made some games with GDevelop 5, and I'm amazed at how fast it can render thanks to this new web technology.
 
I agree. If I had my way, I would turn CX into web only, using three.js or Babylon.js as the render backend.
 
Of course it does. Mojo1 is OpenGL, and OpenGL *is* hardware accelerated. Just because it doesn't use shaders doesn't mean the workload falls onto the CPU.

Look at the code - it uses glDrawArrays and glDrawElements - it uses GLFW.
 
ROFL. That has nothing to do with CPU/GPU rendering. The function the guy used in Raylib builds a TEXTURE ATLAS out of all the sprites - so there is no 'texture swap' when drawing them.

Texture swapping is the #1 cause of slowdowns in any game - that's why games tend to bundle everything into a single image and draw only the necessary parts of it - the moment you have to change from texture A to texture B, you lose a few ms.
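The atlas trick works because every sprite's pixels live in one texture; each sprite just gets its own UV sub-rectangle, so no rebinding is ever needed between draws. A small sketch of the UV math (using normalised 0..1 coordinates, which is how OpenGL addresses textures):

```python
def atlas_uv(atlas_w, atlas_h, x, y, w, h):
    """Return (u0, v0, u1, v1): the normalised texture coordinates
    of a w x h pixel sub-rectangle at (x, y) inside the atlas."""
    return (x / atlas_w, y / atlas_h,
            (x + w) / atlas_w, (y + h) / atlas_h)

# A 64x64 sprite stored at pixel (128, 0) inside a 512x512 atlas:
uv = atlas_uv(512, 512, 128, 0, 64, 64)
```

Drawing that sprite then means submitting a quad with these UVs against the one bound atlas texture, rather than binding a separate 64x64 texture.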
 
@SLotman I don't think that Mojo 1 or 2 are fully hardware accelerated. Without going through both mojo graphics modules, I'm sure that the CPU is used for the matrix translations.
 
It's all about implementation, and how they *use* the GPU.

I would say that in a way it *is* software vs hardware, because everything you do badly in hardware (say, for instance, you do not put all your sprites in one big texture, so you have to swap textures all the time) means the software (CPU) has to step in for a moment, change that texture, and hand control back to the GPU.

One could easily argue that the GPU is doing everything in this case, but we have readPixels and writePixels, for instance, where we know the CPU must step in. It's not about CPU vs GPU but about implicit actions that happen in secret because of the implementation, versus you knowing about them and having the expertise to (maybe) combat them.
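One concrete example of such an implicit action: if the draw list is not grouped by texture, the driver must rebind textures far more often than necessary. A toy sketch that counts binds for an unsorted versus a texture-sorted draw list (texture IDs here are arbitrary illustrative values):

```python
def count_texture_binds(draw_list):
    """Count how often the bound texture changes while walking a draw list
    of texture IDs in submission order."""
    binds, current = 0, None
    for texture_id in draw_list:
        if texture_id != current:
            binds += 1          # a rebind: the costly CPU/driver step
            current = texture_id
    return binds

unsorted_draws = [1, 2, 1, 2, 1, 2]   # alternating textures: worst case
sorted_draws = sorted(unsorted_draws) # grouped by texture: best case
```

Sorting (or atlasing, which reduces everything to one texture ID) is exactly the kind of detail that separates "uses the GPU" from "uses the GPU well".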

I can tell you when Mojo1 is king over Mojo2 and vice versa; it's not about one being better, but about how things were implemented at some level - at the source level of Mojo1/CX, I would guess.

That is what I sometimes mean by implementation: half is caused by the programmer, half is under the surface of the library/commands themselves. The structures of Mojo1 and Mojo2 are different, and reflect the needs of the standards and common practices at the point in time they were conceived, would be my guess.
 
SDL is a perfect example of bad implementation, because it is fully hardware accelerated and still you won't get the quality you might expect. This example is perfect because, if you do not know better, you would think that it *is* quite good.

But look around the internet and you will see lots of concerned people who got past that point, and they are trying to fix everything on *their* side to mend those issues. It's not their fault; it's the implementation. You have great programmers on one end, you have great hardware on the other, and then you have shite in the middle.
 