• bitcrafter@programming.dev
    ·
    9 months ago

    Does it really make sense to have access to the entire C++ language to run on GPUs? Aren't basic constructs that you take for granted in a complicated general-purpose language like C++ super expensive on a GPU? For example, my understanding is that whenever threads in a GPU thread group diverge at a branch, the hardware essentially has to run that part of the code twice: once for each side of the branch, with the non-participating threads masked off each time.
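
    The divergence cost being described can be sketched with a toy CPU-side model (this is illustrative pseudocode-style C++, not actual GPU code; the warp size and lane logic are just stand-ins): when lanes of a warp disagree on a branch, SIMT hardware executes *both* paths serially, masking off the inactive lanes on each pass.

    ```cpp
    #include <array>
    #include <cassert>

    constexpr int WARP = 8; // illustrative warp size; real NVIDIA warps are 32 lanes

    // Toy model of SIMT execution: all lanes run in lockstep, so one
    // divergent branch costs two serialized passes over the warp.
    // Returns the number of passes the warp consumed for the branch.
    int run_divergent_branch(std::array<int, WARP>& x) {
        int passes = 0;
        // Pass 1: the 'then' path; lanes where the condition is false are masked off.
        for (int lane = 0; lane < WARP; ++lane)
            if (lane % 2 == 0) x[lane] = lane * 10;
        ++passes;
        // Pass 2: the 'else' path; lanes where the condition is true are masked off.
        for (int lane = 0; lane < WARP; ++lane)
            if (lane % 2 != 0) x[lane] = -lane;
        ++passes;
        return passes;
    }

    int main() {
        std::array<int, WARP> x{};
        int passes = run_divergent_branch(x);
        assert(passes == 2);               // one branch, two passes: the divergence penalty
        assert(x[2] == 20 && x[3] == -3);  // both paths' results are present
    }
    ```

    So the cost is roughly the sum of both paths' lengths, not double the whole program, and if all lanes take the same side there is no penalty at all.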