I'm not getting it. That is: I am not getting what is new about this. This is what we have done since spooling ('Simultaneous Peripheral Operations On-Line') and double buffering were invented in the 1960s. (Or was it as far back as the late 1950s?) We have let DMA devices and graphics cards offload the main CPU for decades. Mainframes have had all sorts of 'backend' processors, running tasks in parallel with a bunch of other backends, intelligent I/O devices and whatnot. Even my first PC was not so primitive that it ran like the leftmost alternative in the illustration in the article; it did disk I/O and screen handling independently of the CPU. Long before that, I worked on mainframes with frontends (referred to as 'channel units') where 1536 users could simultaneously edit their source code without disturbing the CPU; the compiler ran on the CPU, though. It was said that each of the three channel units was more complex than the CPU itself.

It sounds more like these guys are working on automating the balancing of loads across the available units, a task we still do to some degree by hand crafting, even today. It is far from the first attempt at automating it; one of the better known ones is Wikipedia: Linda[^]. The Linda model is not based on a central scheduler; scheduling is distributed among all the processing units, which pick tasks from a list called a 'tuple space', something like a database relation: the tuple attributes indicate processor requirements, so each processor selects an entry using a predicate expressing its own capabilities.

Maybe this new project makes some significant and genuinely new contributions, but I fail to see it from the article. If it is just a new, centralized scheduler assigning fine-grained tasks to the units capable of running them, I am not impressed.
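To make the Linda idea concrete: here is a minimal sketch (names and tuple layout are my own, not from any real Linda implementation) of a tuple space where workers pull tasks by matching a predicate over their own capabilities, with no central scheduler deciding the mapping:

```python
import threading

class TupleSpace:
    """Toy Linda-style tuple space: tasks are posted as tuples, and each
    worker withdraws only tuples matching its own capability predicate."""

    def __init__(self):
        self._tuples = []
        self._lock = threading.Lock()

    def out(self, tup):
        """Post a tuple into the space (Linda's 'out' operation)."""
        with self._lock:
            self._tuples.append(tup)

    def inp(self, predicate):
        """Non-blocking take (Linda's 'inp'): remove and return the first
        tuple matching the caller's predicate, or None if nothing matches."""
        with self._lock:
            for i, tup in enumerate(self._tuples):
                if predicate(tup):
                    return self._tuples.pop(i)
            return None

# Hypothetical usage: a task tuple is (required_unit, payload). Each worker
# expresses its own capabilities as a predicate, so the load balancing is
# distributed among the workers rather than done by one scheduler.
space = TupleSpace()
space.out(("gpu", "matrix multiply kernel"))
space.out(("cpu", "parse configuration"))

cpu_task = space.inp(lambda t: t[0] == "cpu")   # a CPU-only worker
gpu_task = space.inp(lambda t: t[0] == "gpu")   # a GPU-capable worker
print(cpu_task)
print(gpu_task)
```

The point of the model is the last four lines: nobody assigns tasks to workers; workers select work they are able to run.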
Religious freedom is the freedom to say that two plus two make five.