Code Project
Multi-threaded computing across multiple processors demoed — promises big gains in AI performance and efficiency

The Insider News
Kent Sharkey (#1):

    Tom's Hardware[^]:

    Simultaneous and Heterogeneous Multithreading (SHMT) may be the solution that can harness the power of a device's CPU, GPU, and AI accelerator all at once, according to a research paper from the University of California, Riverside.

    Multi all the things!


obermd (#2):

I read the article, and it glosses over a fundamental assumption: like all multithreading systems, you'll only benefit if your problem lends itself to this specific solution.


Daniel Pfeffer (#3):

See also [Amdahl's law](https://en.wikipedia.org/wiki/Amdahl%27s_law)
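Amdahl's law bounds the overall speedup by the fraction of the workload that stays serial, which is exactly the caveat above. A minimal sketch (illustrative function name, not from any library):

```python
# Amdahl's law: if a fraction p of the work is sped up by a factor s,
# the remaining serial fraction (1 - p) caps the overall speedup.
def amdahl_speedup(p: float, s: float) -> float:
    """p: parallelizable fraction in [0, 1]; s: speedup of that fraction."""
    return 1.0 / ((1.0 - p) + p / s)

# Half the work sped up 4x yields only 1.6x overall.
print(amdahl_speedup(0.5, 4.0))   # 1.6

# Even an effectively infinite accelerator cannot beat 10x
# when 10% of the work remains serial.
print(amdahl_speedup(0.9, 1e9))
```

So no matter how many units SHMT keeps busy, the serial remainder of the problem sets the ceiling.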

        Freedom is the freedom to say that two plus two make four. If that is granted, all else follows. -- 6079 Smith W.


trønderen (#4):

I'm not getting it. That is: I am not getting what is new about this. This is what we have done since spooling ('Simultaneous Peripheral Operations On-Line') and double buffering were invented in the 1960s. (Or was it as far back as the late 1950s?) We have let DMA devices and screen cards offload the main CPU for decades. Mainframes have had all sorts of 'backend' processors, running tasks in parallel with a bunch of other backends, intelligent I/O devices, and whatnot.

Even my first PC was not so primitive that it ran like the leftmost alternative in the illustration in the article; it did disk I/O and screen handling independent of the CPU. Long before that, I worked on mainframes with frontends (they were referred to as 'channel units') where 1536 users could simultaneously edit their source code without disturbing the CPU; the compiler ran on the CPU, though. It was said that each of the three channel units was more complex than the CPU.

It sounds more like these guys are working on automating the balancing of loads across the available units, a task we to some degree still do by hand crafting, even today. It is far from the first attempt at automating it; one of the better known ones is Wikipedia: Linda[^]. The Linda model is not based on a central scheduler but is distributed among all processing units, which pick tasks from a list called 'tuple space' that works like a database relation: the tuple attributes indicate processor requirements, so each processor selects an entry using a predicate expressing its own capabilities.

Maybe this new project makes some significant and genuinely new contributions, but I fail to see it from the article. If it is just a new, centralized scheduler assigning fine-grained tasks to the unit capable of running them, I am not impressed.
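The tuple-space idea described above can be sketched in a few lines. This is a hypothetical minimal model for illustration only (the names `TupleSpace`, `out`, and `inp` echo Linda's operations but this is not the real Linda API): workers withdraw only the tasks whose requirements match their own capabilities, with no central scheduler deciding for them.

```python
import threading

# Minimal Linda-style tuple space sketch: tasks carry a processor
# requirement, and each processing unit selects work matching its
# own capabilities. Illustrative only, not the actual Linda API.
class TupleSpace:
    def __init__(self):
        self._lock = threading.Lock()
        self._tuples = []  # list of (requirement, task) pairs

    def out(self, requires, task):
        """Deposit a task tagged with the unit capability it needs."""
        with self._lock:
            self._tuples.append((requires, task))

    def inp(self, capabilities):
        """Withdraw the first task this unit is capable of running."""
        with self._lock:
            for i, (req, task) in enumerate(self._tuples):
                if req in capabilities:
                    del self._tuples[i]
                    return task
        return None

space = TupleSpace()
space.out("gpu", "matrix multiply")
space.out("cpu", "parse input")

# A CPU-only unit skips the GPU task and takes the one it can run.
print(space.inp({"cpu"}))   # parse input
print(space.inp({"cpu"}))   # None
```

Because matching is done by the consuming unit, load balancing falls out of the pull model itself rather than from a centralized scheduler.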

          Religious freedom is the freedom to say that two plus two make five.
