DirectML through Docker Windows
-
I am trying to get DirectML to work with my CPAI instance in Docker Desktop. I do not have an NVIDIA graphics card on the server; I just want to use the Intel GPU for processing. Docker is using the WSL 2 backend. Which image am I supposed to be using: codeproject/ai-server:gpu or codeproject/ai-server? Additionally, do I have to pass the "--gpus all" flag when running? Are there directions that I am missing for this type of install? I have been at this for a while now, but no matter what combination of installation methods I use, I cannot get the .NET module to switch to DirectML; it just stays on CPU. Any help would be greatly appreciated.
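For reference, the kind of invocation in question might look like the sketch below. This is not taken from the CodeProject.AI documentation: the image choice and device mapping are assumptions. The `:gpu` tag and `--gpus all` target NVIDIA/CUDA (the flag requires the NVIDIA container runtime), so neither applies to an Intel GPU; under the WSL 2 backend the host GPU is instead exposed to Linux as the paravirtualized `/dev/dxg` device, with the driver libraries mounted under `/usr/lib/wsl`.

```shell
# Sketch only: assumes the plain (non-CUDA) image is the right starting
# point for an Intel GPU, and that the WSL 2 backend exposes the GPU as
# the /dev/dxg device with driver libraries under /usr/lib/wsl.
docker run -d -p 32168:32168 \
  --device /dev/dxg \
  -v /usr/lib/wsl:/usr/lib/wsl \
  -e LD_LIBRARY_PATH=/usr/lib/wsl/lib \
  --name codeproject-ai \
  codeproject/ai-server
```

Even with the device passed through, whether the .NET module's DirectML backend can actually initialize inside a Linux container is a separate question, since DirectML is a Windows component surfaced to WSL through that device.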
-
This is the wrong place to ask this question. Please post your question here: CodeProject.AI Discussions
Graeme
"I fear not the man who has practiced ten thousand kicks one time, but I fear the man that has practiced one kick ten thousand times!" - Bruce Lee