GPU

Hi all,
Does anyone know if MIMD is possible on a GPU? Also, do GPUs support double precision operations yet?

Thanks.
Rat.
A Google search for "MIMD GPU" suggests it is something being researched, but not yet publicly available.
closed account (1yR4jE8b)
Most GPUs support double precision operations, but from my experience you're looking at a performance hit of about 50% compared to single precision.
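For anyone who wants to check the gap on their own card, here's a minimal CUDA sketch (kernel and variable names are made up for illustration; keep in mind a simple kernel like this also moves twice the data in double, so it measures more than raw FP throughput):

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void axpb_float(float* x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] = 1.000001f * x[i] + 0.5f;
    }

    __global__ void axpb_double(double* x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] = 1.000001 * x[i] + 0.5;
    }

    int main() {
        const int n = 1 << 22;                  // small enough for older cards
        float*  xf; cudaMalloc(&xf, n * sizeof(float));
        double* xd; cudaMalloc(&xd, n * sizeof(double));
        cudaMemset(xf, 0, n * sizeof(float));
        cudaMemset(xd, 0, n * sizeof(double));

        cudaEvent_t start, stop;
        cudaEventCreate(&start); cudaEventCreate(&stop);
        float ms_f = 0, ms_d = 0;
        int blocks = (n + 255) / 256;

        // Time the float kernel, then the double kernel, with CUDA events.
        cudaEventRecord(start);
        axpb_float<<<blocks, 256>>>(xf, n);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        cudaEventElapsedTime(&ms_f, start, stop);

        cudaEventRecord(start);
        axpb_double<<<blocks, 256>>>(xd, n);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        cudaEventElapsedTime(&ms_d, start, stop);

        printf("float: %.3f ms, double: %.3f ms\n", ms_f, ms_d);
        cudaFree(xf); cudaFree(xd);
        return 0;
    }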
They'd be faster than on a CPU, though, right?
closed account (1yR4jE8b)
Probably
GPUs are designed for SIMD. I'm not sure what the benefits of performing MIMD operations on the GPU would be. Isn't that what the CPU is for?
@ Gaminic: What do you mean you're not sure what the benefits would be? You mean you don't want to add four more processors and another GB or two of RAM to your application's processing power without spending more money? GDDR is DDR. A GPU is just another type of CPU; the instruction sets are different, I'll give you that, but that shouldn't really matter to anyone unless they're programming in Assembly. These days, discrete graphics cards are pretty much computers inside your computer. My question to you is: how can you NOT want to tinker with that?

@ darlestfright: Care to share more about your experience with this? I was in the same boat as roberts in that I thought this was still theoretical. I wasn't aware of any libraries that allow you to off-load a thread onto the graphics card yet. This is something that interests me.
@Computergeek01:

GPUs are basically specialized (weak) CPUs. What would be the point of making them more generally useful? They're specialized for the 'repetitive tasks' that (apparently) are very common in graphics work. Making them better at 'general tasks' just turns them back into CPUs.
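To make the SIMD point concrete, here's a hypothetical CUDA fragment (the kernel name is invented): threads in a warp execute in lockstep, so data-dependent branching like this gets serialized, which is exactly why MIMD-style control flow is a poor fit for the hardware.

    __global__ void divergent(int* out, const int* in, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        // Threads in the same warp that take different branches here run
        // one path at a time with the other threads masked off, so the
        // cost of BOTH branches is paid: the SIMD penalty for MIMD-style
        // control flow.
        if (in[i] % 2 == 0)
            out[i] = in[i] * 2;
        else
            out[i] = in[i] + 1;
    }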
@ Gaminic: I don't suggest changing anything about the way the GPU operates; I simply want the ability to experiment with the hardware that I own. One example where a graphics card might be better than a CPU is hashing through a rainbow table (for anyone who is wondering, I'm a Sys Admin; security audits are part of my job).
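As a sketch of why that workload suits a GPU, here's a toy CUDA kernel (all names are hypothetical, and it uses a simple 32-bit FNV-1a rather than a real cryptographic hash): every thread runs the same branch-free loop on its own candidate, which is exactly the SIMD shape GPUs like.

    // Toy sketch, not a real audit tool: each thread hashes one
    // fixed-length candidate key with 32-bit FNV-1a.
    __global__ void fnv1a_batch(const char* keys, int key_len,
                                unsigned int* hashes, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        const char* key = keys + i * key_len;  // candidates packed back-to-back
        unsigned int h = 2166136261u;          // FNV-1a offset basis
        for (int j = 0; j < key_len; ++j) {
            h ^= (unsigned char)key[j];
            h *= 16777619u;                    // FNV-1a prime
        }
        hashes[i] = h;
    }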
Ah, I misunderstood. I thought the issue was "Can we design a GPU that excels at MIMD?".
closed account (1yR4jE8b)
Care to share more about your experience with this? I was in the same boat as roberts in that I thought this was still theoretical. I wasn't aware of any libraries that allow you to off-load a thread onto the graphics card yet. This is something that interests me.


My experience is programming with CUDA and OpenCL, mostly in the pure number-crunching sense: doing molecular dynamics simulations on the GPU instead of traditional CPU-based implementations (threading, OpenMP, server clusters, etc...). Of course, this is all SIMD stuff; I know pretty much nothing about MIMD.
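For anyone curious what that pattern looks like, a stripped-down CUDA sketch (one thread per particle, everyone running the same integration step; the constant-gravity "force" and all names are placeholders, real MD force kernels are much bigger):

    // Minimal sketch of the one-thread-per-particle pattern used in GPU
    // molecular dynamics codes. The constant-gravity force is a stand-in;
    // real codes compute pairwise forces (Lennard-Jones, electrostatics).
    __global__ void euler_step(float3* pos, float3* vel, float dt, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        vel[i].y -= 9.81f * dt;      // placeholder force: constant gravity
        pos[i].x += vel[i].x * dt;   // same instructions for every particle:
        pos[i].y += vel[i].y * dt;   // the SIMD shape GPUs are built for
        pos[i].z += vel[i].z * dt;
    }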