Does Tensorflow View All Cpus Of One Machine As One Device?
From the experiments I ran, it seems like TensorFlow automatically uses all CPUs on one machine. Furthermore, it seems like TensorFlow refers to all CPUs as /cpu:0. Am I right that TensorFlow views all CPUs of one machine as one device?
Solution 1:
By default, all CPUs available to the process are aggregated under the /cpu:0 device.
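You can confirm this aggregation yourself. A minimal sketch using the TF 2 `tf.config` API (the original question may predate this API, but the behavior it describes is the same): even on a multi-core machine, TensorFlow reports a single physical CPU device, while its thread pools still use all cores.

```python
import tensorflow as tf

# All host cores are aggregated under one device: /physical_device:CPU:0.
physical_cpus = tf.config.list_physical_devices('CPU')
print(physical_cpus)

# By default there is likewise a single logical CPU device, /device:CPU:0.
logical_cpus = tf.config.list_logical_devices('CPU')
print(logical_cpus)
```

Ops placed with `tf.device('/cpu:0')` therefore run on this one device, which internally parallelizes across cores via the intra-op and inter-op thread pools.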
There's an answer by mrry here showing how to create logical devices like /cpu:1 or /cpu:2.
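In current TensorFlow, the same effect can be achieved with `tf.config.set_logical_device_configuration` (a sketch; this must run before the runtime is initialized, i.e. before any ops execute):

```python
import tensorflow as tf

# Split the single physical CPU device into two logical devices,
# /device:CPU:0 and /device:CPU:1. Must be called before TF runtime init.
physical_cpu = tf.config.list_physical_devices('CPU')[0]
tf.config.set_logical_device_configuration(
    physical_cpu,
    [tf.config.LogicalDeviceConfiguration(),
     tf.config.LogicalDeviceConfiguration()])

print(tf.config.list_logical_devices('CPU'))  # two logical CPU devices

# Ops can now be placed explicitly on the second logical device.
with tf.device('/cpu:1'):
    x = tf.constant([1.0, 2.0]) * 2.0
print(x.numpy())
```

Note that these logical devices still share the same underlying cores; this split is useful for testing multi-device code paths, not for core isolation.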
There doesn't seem to be working functionality in TensorFlow to pin logical devices to specific physical cores, or to make use of NUMA nodes.
A possible work-around is to use distributed TensorFlow with multiple processes on one machine, and use taskset on Linux to pin specific processes to specific cores.
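The pinning part can be sketched as follows (`worker.py` is a hypothetical distributed-TF worker script, not something from the original answer; the demonstration command at the end shows the mechanism with a process that merely reports its CPU affinity):

```shell
# Pin each hypothetical distributed-TF worker process to its own core set:
#   taskset -c 0-3 python worker.py --job_name=worker --task_index=0 &
#   taskset -c 4-7 python worker.py --job_name=worker --task_index=1 &
#
# Demonstration of the mechanism: a process pinned to core 0 sees only core 0.
taskset -c 0 python3 -c "import os; print(sorted(os.sched_getaffinity(0)))"
```

Each worker then behaves like a separate "device" from the cluster's point of view, while the OS enforces which physical cores it may run on.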