Okay, developer news now... from NVIDIA
-
Sweet! I've always wanted to use my video card to do massively parallel computation! :rolleyes:
CodeProject: "I mean where else would you rather be pissed off?" - Jeremy Falcon
wasn't one of the Axis Of Evil™ countries trying to buy PlayStations or SNES for just that reason?
-
Jeffry J. Brickley wrote:
studied up on your threads yet? You just got 128 parallel stream processors with a free C compiler to address them.
Are they IEEE 754 compliant, or is that still just an ATI feature?
-- Rules of thumb should not be taken for the whole hand.
dan neely wrote:
Are they IEEE754 compliant
yes.
Each GeForce 8800 GPU stream processor is fully generalized, fully decoupled, scalar (see Scalar Processor Design Improves GPU Efficiency on this page), can dual-issue a MAD and a MUL, and supports IEEE 754 floating point precision. The stream processors are a critical component of NVIDIA GigaThread technology, where thousands of threads can be in flight within a GeForce 8800 GPU at any given instant. GigaThread technology keeps SPs fully utilized by scheduling and dispatching various types of threads (pixel, vertex, geometry, physics, etc.) for execution.
and actually ATI is not "fully" IEEE 754 compliant; there are a few inaccuracies of numerical representation that limit them to saying "equivalent" to IEEE 754, plus half a dozen other buzzwords to show that you can do the math, but you might lose a bit on rare occasions.
_________________________ Asu no koto o ieba, tenjo de nezumi ga warau. Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
-
Now that the NDA is lifted, a lot of new stuff can be discussed. One of those buzzwords that probably didn't make much sense is "CUDA-enabled stream processors". One hundred and twenty-eight CUDA processors. Nice buzzword, rolls off the tongue about as gracefully as me on a dance floor... but hey... what is it?
What is CUDA technology? GPU computing with CUDA technology is an innovative combination of computing features in next generation NVIDIA GPUs that are accessed through a standard ‘C’ language. Where previous generation GPUs were based on “streaming shader programs”, CUDA programmers use ‘C’ to create programs called threads that are similar to multi-threading programs on traditional CPUs. In contrast to multi-core CPUs, where only a few threads execute at the same time, NVIDIA GPUs featuring CUDA technology process thousands of threads simultaneously enabling a higher capacity of information flow.
Studied up on your threads yet? You just got 128 parallel stream processors with a free C compiler to address them. http://developer.nvidia.com/object/cuda.html[^] Not a real big surprise, per se; previous computation systems could use "C-like" languages, and the Brook C compiler from Stanford http://graphics.stanford.edu/projects/brookgpu/[^] has been available for a while. The difference might be considered a more uniform approach: the GPU is designed for scalar parallel tasks and the compiler is designed for the GPU. Kind of like Intel compilers using "known" processing efficiencies of the Intel chips and forming your code to run exceedingly well on an Intel CPU. Thus the CUDA environment means you get an nVidia-branded supercomputer on your desk to play with, but it doesn't play well with ATI. :rolleyes: no big surprise there either. :-D Hope you all have brushed up on your massively parallel thread skills! :-D
_________________________ Asu no koto o ieba, tenjo de nezumi ga warau. Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
So my question is, will there be support for other compilers?
Jeremy Falcon A multithreaded, OpenGL-enabled application.[^]
-
wasn't one of the Axis Of Evil™ countries trying to buy PlayStations or SNES for just that reason?
LOL - would YOU be worried if you knew the enemy plane was powered by a SNES?
Christian Graus - Microsoft MVP - C++ Metal Musings - Rex and my new metal blog
-
Jeffry J. Brickley wrote:
hope you all have brushed up on your massively parallel thread skills!
Ah, if you've seen one thread, you've seen them all. :-D
Marc
People are just notoriously impossible. --DavidCrow
There's NO excuse for not commenting your code. -- John Simmons / outlaw programmer
People who say that they will refactor their code later to make it "good" don't understand refactoring, nor the art and craft of programming. -- Josh Smith
-
Jeffry J. Brickley wrote:
One hundred and twenty eight CUDA processors. Nice buzzword, rolls off the tongue about as gracefully as me on a dancefloor... but hey.... what is it?
My guess would be that these 128 stream processors are just another name for unified shaders, like those on ATI's Xenos in the Xbox 360. But then, Xenos only has 48 unified shaders; 128 is a lot more!
Jeffry J. Brickley wrote:
CUDA programmers use ‘C’ to create programs called threads that are similar to multi-threading programs on traditional CPUs.
I wonder if it's a refined version of Cg or a new C-like language?
-
Bah - they want an Erlang[^] implementation that's aware of all those parallel processors - its runtime can cope with tens of thousands of processes on a single core, and the language is designed to make concurrent programming less error-prone (no shared state, asynchronous message passing, things like that).
-
Stuart Dootson wrote:
Bah - they want an Erlang[^] implementation that's aware of all those parallel processors
And they have it; they simply assigned it a new buzzword... GigaThread™ Technology
_________________________ Asu no koto o ieba, tenjo de nezumi ga warau. Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
-
Well, I have heard about similar attempts (http://www.gpgpu.org/[^]) to use the GPU for computational purposes, like solving big sparse matrices, etc. But they used roundabout mechanisms to achieve them. Since it's now being "officially" supported, using the GPU as an additional processor would be a lot easier.
But, considering that a GPU is so powerful, I have one basic question. Some time ago I developed an OpenGL-based application which, of course, did frequent screen refreshes. This hogged the CPU, causing other applications to run poorly. I had to implement some "hand-made" optimizations, which compromised the ultimate output but nevertheless served my purpose. My question, therefore, is: isn't it possible to somehow explicitly offload all that graphics and computation handling to the GPU instead of the CPU? I guess there must be some way, but it could be that they are vendor-specific. Anyone with similar experience?