CPU Architecture and C++

Are there different C++ language implementations for different CPU architectures? Or is there a one-size-fits-all implementation of C++ regardless of CPU architecture? Does the same apply to GPUs?
GPU is for intensive graphics tasks (mainly in games).
CPU is for all general-purpose tasks.
The compiler needs to be aware of the target architecture in order to be able to generate code for it.
There are special circumstances that allow some degree of abstraction. For example, a C++ compiler might generate only LLVM IR, and an LLVM backend can then lower that single intermediate representation into machine code for a specific architecture. That allows the C++ compiler itself (the frontend, as it's called in compiler parlance) to be largely platform-agnostic and moves the architecture-specific details into the LLVM backend. LLVM still needs to know the platform that's being targeted, though.
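As an illustration (my own sketch, not tied to LLVM or any particular compiler), even perfectly portable source still has to be compiled for some target, because details such as pointer size, the size of long, and the predefined architecture macros differ from target to target:

#include <cstdio>

int main() {
    // The source is portable, but the answers depend on the target
    // the compiler's backend was asked to generate code for.
    std::printf("pointer size: %zu bytes\n", sizeof(void*));
    std::printf("long size:    %zu bytes\n", sizeof(long));
#if defined(__x86_64__) || defined(_M_X64)
    std::printf("built for x86-64\n");
#elif defined(__aarch64__) || defined(_M_ARM64)
    std::printf("built for AArch64\n");
#else
    std::printf("built for some other architecture\n");
#endif
}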
In science and engineering a lot of the heavy number-crunching is done by parallel processing. Because GPUs tend to offer more raw throughput per dollar than CPUs (as well as for other reasons associated with their original function), there are now a lot of "big-data" applications doing parallel processing on large numbers of GPUs. One widely used platform here is CUDA (see https://en.wikipedia.org/wiki/CUDA ). It has bindings to the big number-crunching languages such as C++ and Fortran.
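To make "number-crunching" concrete, here is the classic SAXPY operation (y = a*x + y) written as an ordinary C++ loop; every iteration is independent of the others, which is exactly the kind of per-element work that CUDA or OpenCL spreads across thousands of GPU threads. (The function name and signature here are just illustrative, not part of any particular API.)

#include <cstddef>
#include <vector>

// SAXPY: y[i] = a * x[i] + y[i] for every element.
// Each iteration is independent, which is why this kind of loop
// maps so well onto massively parallel hardware.
void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    for (std::size_t i = 0; i < x.size() && i < y.size(); ++i)
        y[i] = a * x[i] + y[i];
}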

GPUs are being used for a lot more than just graphics now.
The very purpose of a programming language is to provide an abstraction. We, the developers, use C++ through its interface, and that interface does not specify an architecture. The implementers of the compiler and standard library, of course, have to work in terms of a specific architecture.

A compiler (e.g. GCC) can support multiple architectures, but it won't use the same binary for all of them.


GPGPU (the use of GPUs for general-purpose computing) is almost as old as the GPU itself.
CUDA is NVIDIA's product. There is also OpenCL: https://en.wikipedia.org/wiki/OpenCL
GPUs differ from CPUs quite a lot, but CUDA and OpenCL try to minimize that gap.
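As a rough illustration of how that gap is narrowing (a sketch of my own, not CUDA- or OpenCL-specific code): even standard C++17 can express the same data-parallel pattern through parallel algorithms, and some vendor toolchains can offload such algorithms to a GPU.

#include <algorithm>
#include <execution>
#include <vector>

int main() {
    std::vector<float> x(1'000'000, 1.0f), y(1'000'000, 2.0f);
    const float a = 3.0f;

    // Parallel per-element update (SAXPY again); the runtime decides
    // how to spread the work across the available hardware.
    std::transform(std::execution::par_unseq,
                   x.begin(), x.end(), y.begin(), y.begin(),
                   [a](float xi, float yi) { return a * xi + yi; });
}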

@SakurasouBusters: You surely know that some discrete GPUs have no monitor connector at all and are made purely for number-crunching?