okay developer news now.... from nVidia

The Lounge
Tags: question, html, asp-net, com, graphics
13 Posts, 11 Posters

  • In reply to David Stone

    Sweet! I've always wanted to use my video card to do massively parallel computation! :rolleyes:


    CodeProject: "I mean where else would you rather be pissed off?" - Jeremy Falcon

    Chris Losinger wrote (#4):

    wasn't one of the Axis Of Evil™ countries trying to buy PlayStations or SNES for just that reason?

    image processing | blogging

    • In reply to Dan Neely

      Jeffry J. Brickley wrote:

      studied up on your threads yet? You just got 128 parallel stream processors with a free C compiler to address them.

      Are they IEEE754 compliant or is that still just an ATI feature?

      -- Rules of thumb should not be taken for the whole hand.

      El Corazon wrote (#5):

      dan neely wrote:

      Are they IEEE754 compliant

      Yes. Each GeForce 8800 GPU stream processor is fully generalized, fully decoupled, scalar (see "Scalar Processor Design Improves GPU Efficiency" on this page), can dual-issue a MAD and a MUL, and supports IEEE 754 floating point precision. The stream processors are a critical component of NVIDIA GigaThread technology, where thousands of threads can be in flight within a GeForce 8800 GPU at any given instant. GigaThread technology keeps the SPs fully utilized by scheduling and dispatching various types of threads (pixel, vertex, geometry, physics, etc.) for execution.

      And actually, ATI is not "fully" IEEE 754 compliant: there are a few inaccuracies of numerical representation that limit them to saying "equivalent" to IEEE 754, plus half a dozen other buzzwords to show that you can do the math, but you might lose a bit on rare occasions.
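
      For anyone who wants to poke at that themselves, here is a minimal sketch (illustrative only, not NVIDIA sample code; the file name, array sizes, and launch shape are made up) that compares single-precision results from the card against the CPU. It uses the CUDA device intrinsics __fmul_rn and __fadd_rn, which do IEEE 754 round-to-nearest multiply and add and keep the compiler from fusing them into a single MAD, then counts any element where the device result differs bit-for-bit from the same two rounded operations done on the host.

// ieee_check.cu -- hypothetical example, not from the post above.
// Compares single-precision multiply-add results from the GPU against the CPU.
// Build (assuming the CUDA toolkit is installed): nvcc ieee_check.cu -o ieee_check
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void mulAdd(const float* a, const float* b, const float* c, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
    {
        // __fmul_rn / __fadd_rn are IEEE 754 round-to-nearest single-precision
        // operations; using them prevents fusion into a single MAD instruction.
        out[i] = __fadd_rn(__fmul_rn(a[i], b[i]), c[i]);
    }
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float* a   = (float*)malloc(bytes);
    float* b   = (float*)malloc(bytes);
    float* c   = (float*)malloc(bytes);
    float* gpu = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f / (i + 1); b[i] = (float)i; c[i] = 0.5f; }

    float *da, *db, *dc, *dout;
    cudaMalloc((void**)&da, bytes);  cudaMalloc((void**)&db, bytes);
    cudaMalloc((void**)&dc, bytes);  cudaMalloc((void**)&dout, bytes);
    cudaMemcpy(da, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, b, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dc, c, bytes, cudaMemcpyHostToDevice);

    mulAdd<<<(n + 255) / 256, 256>>>(da, db, dc, dout, n);
    cudaMemcpy(gpu, dout, bytes, cudaMemcpyDeviceToHost);

    // CPU reference: the same multiply then add, each rounded to float.
    int mismatches = 0;
    for (int i = 0; i < n; ++i)
    {
        float ref = a[i] * b[i];
        ref += c[i];
        if (gpu[i] != ref) ++mismatches;
    }
    printf("elements differing from CPU single precision: %d\n", mismatches);

    cudaFree(da); cudaFree(db); cudaFree(dc); cudaFree(dout);
    free(a); free(b); free(c); free(gpu);
    return 0;
}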

      _________________________ Asu no koto o ieba, tenjo de nezumi ga warau. Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)

      • In reply to El Corazon

        Now that the NDA is lifted, a lot of new stuff can be discussed. One of those buzzwords that probably didn't make much sense is "CUDA enabled stream processors". One hundred and twenty eight CUDA processors. Nice buzzword, rolls off the tongue about as gracefully as me on a dancefloor... but hey.... what is it?

        What is CUDA technology? GPU computing with CUDA technology is an innovative combination of computing features in next generation NVIDIA GPUs that are accessed through a standard ‘C’ language. Where previous generation GPUs were based on “streaming shader programs”, CUDA programmers use ‘C’ to create programs called threads that are similar to multi-threading programs on traditional CPUs. In contrast to multi-core CPUs, where only a few threads execute at the same time, NVIDIA GPUs featuring CUDA technology process thousands of threads simultaneously enabling a higher capacity of information flow.

        studied up on your threads yet? You just got 128 parallel stream processors with a free C compiler to address them. http://developer.nvidia.com/object/cuda.html[^]

        Not a real big surprise, per se: previous computation systems could use "C like" languages, and the Brook C compiler from Stanford http://graphics.stanford.edu/projects/brookgpu/[^] has been available for a while. The difference might be considered a more uniform approach: the GPU is designed for scalar parallel tasks and the compiler is designed for the GPU. Kind of like Intel compilers using "known" processing efficiencies of the Intel chips and forming your code to run exceedingly well on an Intel CPU.

        Thus the CUDA environment means you get an nVidia branded supercomputer on your desk to play with, but it doesn't play well with ATI. :rolleyes: no big surprise there either. :-D hope you all have brushed up on your massively parallel thread skills! :-D
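
        To make the "programs called threads" idea concrete, here is a minimal sketch of what such a program can look like (names, sizes, and values are invented for illustration; only the __global__ qualifier, the <<<blocks, threads>>> launch syntax, and the cudaMalloc/cudaMemcpy/cudaFree runtime calls are taken as given). Each output element is handled by its own lightweight thread, which is where the "thousands of threads in flight" pitch comes from.

// saxpy.cu -- illustrative sketch only; compile with: nvcc saxpy.cu -o saxpy
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// An ordinary C function body; the __global__ qualifier marks it as code that
// runs on the GPU, once per thread.
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                 // about a million elements...
    const size_t bytes = n * sizeof(float);

    float* x = (float*)malloc(bytes);
    float* y = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc((void**)&dx, bytes);
    cudaMalloc((void**)&dy, bytes);
    cudaMemcpy(dx, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y, bytes, cudaMemcpyHostToDevice);

    // ...and one GPU thread per element: 4096 blocks of 256 threads each.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);

    cudaMemcpy(y, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expected 5.0)\n", y[0]);

    cudaFree(dx); cudaFree(dy);
    free(x); free(y);
    return 0;
}

        Apart from the kernel qualifier and the launch syntax this is plain C; the hardware scheduler, not the programmer, decides how the million or so logical threads get time-sliced across the 128 stream processors.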

        _________________________ Asu no koto o ieba, tenjo de nezumi ga warau. Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)

        Jeremy Falcon wrote (#6):

        So my question is, will there be support for other compilers?

        Jeremy Falcon A multithreaded, OpenGL-enabled application.[^]

        • In reply to Chris Losinger

          wasn't one of the Axis Of Evil™ countries trying to buy PlayStations or SNES for just that reason?

          image processing | blogging

          Christian Graus wrote (#7):

          LOL - would YOU be worried if you knew the enemy plane was powered by a SNES?

          Christian Graus - Microsoft MVP - C++ Metal Musings - Rex and my new metal blog

          • In reply to El Corazon


            Marc Clifton wrote (#8):

            Jeffry J. Brickley wrote:

            hope you all have brushed up on your massively parallel thread skills!

            Ah, if you've seen one thread, you've seen them all. :-D Marc

            Thyme In The Country

            People are just notoriously impossible. --DavidCrow
            There's NO excuse for not commenting your code. -- John Simmons / outlaw programmer
            People who say that they will refactor their code later to make it "good" don't understand refactoring, nor the art and craft of programming. -- Josh Smith

            • In reply to Christian Graus

              LOL - would YOU be worried if you knew the enemy plane was powered by a SNES?

              Christian Graus - Microsoft MVP - C++ Metal Musings - Rex and my new metal blog

              Rick York wrote (#9):

              Only if I was the one flying it. :) I guess it wouldn't be an enemy plane then though. :doh:

              • In reply to El Corazon


                Link2006 wrote (#10):

                Jeffry J. Brickley wrote:

                One hundred and twenty eight CUDA processors. Nice buzzword, rolls off the tongue about as gracefully as me on a dancefloor... but hey.... what is it?

                My guess would be that these 128 stream processors are just another name for unified shaders, like those on the ATI Xenos in the Xbox 360. But then, Xenos only has 48 unified shaders; 128 is a lot more!

                Jeffry J. Brickley wrote:

                CUDA programmers use ‘C’ to create programs called threads that are similar to multi-threading programs on traditional CPUs.

                I wonder if it's a refined version of Cg or a new C-like language?

                • In reply to El Corazon


                  Stuart Dootson wrote (#11):

                  Bah - they want an Erlang[^] implementation that's aware of all those parallel processors - its runtime can cope with tens of thousands of processes on a single core, and the language is designed to make concurrent programming less error prone (no shared state, asynchronous message passing, things like that).

                  • In reply to Stuart Dootson


                    El Corazon wrote (#12):

                    Stuart Dootson wrote:

                    Bah - they want an Erlang[^] implementation that's aware of all those parallel processors

                    And they have it; they simply assigned it a new buzzword... GigaThread™ Technology.

                    _________________________ Asu no koto o ieba, tenjo de nezumi ga warau. Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)

                    • In reply to El Corazon


                      Krishnan V wrote (#13):

                      Well, I have heard about similar attempts (http://www.gpgpu.org/[^]) to use the GPU for computational purposes, like solving big sparse matrices, etc., but they used roundabout mechanisms to achieve it. Since it's now being "officially" supported, using the GPU as an additional processor would be a lot easier.

                      But, considering that a GPU is so powerful, I have one basic question. Some time ago I developed an OpenGL-based application which of course did frequent screen refreshes, and this hogged the CPU, causing other applications to run poorly. I had to implement some "hand-made" optimizations, which compromised the final output but nevertheless served my purpose. My question, therefore, is: isn't it possible to somehow explicitly transfer all of that graphics and computation handling to the GPU instead of the CPU? I guess there must be some way, but it could be that such mechanisms are vendor-specific. Anyone with similar experience?
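
                      With OpenGL the drawing itself already runs on the video card, so heavy CPU use in a render loop usually comes from how often the application redraws rather than from the rendering. For the computation side of the question, the sketch below (illustrative only; the kernel, names, and sizes are all made up) shows the general shape of handing a job to the card and getting the CPU back while it runs: a CUDA kernel launch is asynchronous, so control returns to the CPU immediately, and the program blocks with cudaDeviceSynchronize() only when the result is actually needed.

// offload.cu -- illustrative sketch only (kernel and workload invented).
// Shows the general pattern: queue work on the GPU, keep the CPU free,
// and synchronize only when the result is needed. Build with: nvcc offload.cu
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// A made-up "heavy" per-element computation standing in for real work.
__global__ void heavyKernel(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
    {
        float v = data[i];
        for (int k = 0; k < 1000; ++k)      // burn some GPU cycles
            v = v * 1.0000001f + 0.5f;
        data[i] = v;
    }
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float* host = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) host[i] = (float)i;

    float* dev;
    cudaMalloc((void**)&dev, bytes);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);

    // The launch is asynchronous: it queues the work and returns right away.
    heavyKernel<<<(n + 255) / 256, 256>>>(dev, n);

    // The CPU is free here to service the UI, other applications, etc.
    printf("kernel queued; CPU is free to do other work...\n");

    // Block only when the result is actually required.
    cudaDeviceSynchronize();
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);
    printf("done, host[1] = %f\n", host[1]);

    cudaFree(dev);
    free(host);
    return 0;
}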
