Posts from June, 2016

NVIDIA vs AMD for tensorflow

As mentioned in the z440 post, the workstation comes with an NVIDIA Quadro K5200. I've been a happy user of AMD hardware since the Radeon HD 4850 (upgraded to a 5870 and later an R9 390). Unfortunately, TensorFlow only supports CUDA - possibly due to missing OpenCL support in Eigen. Is it worth switching just for that? I ran a few experiments:

CPU (8-core Xeon E5-1680 v3):

    real 8m24.991s
    user 104m13.048s
    sys  0m38.772s
    All CPUs at ~80% usage

GPU (NVIDIA Quadro K5200):

    real 2m12.990s
    user 2m47.136s
    sys  0m30.500s
    All CPUs at ~20% usage

I think it's worth it - about a 4x improvement from using the GPU over a high-end 8-core Xeon. And this GPU is two generations old; a GTX 1080 or newer will probably give an even bigger benefit. So for now, I'll stick with NVIDIA.
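To sanity-check that TensorFlow is actually placing work on the GPU (and to get a rough CPU-vs-GPU number without rerunning a full job), something like the sketch below works. It's written against the TF 1.x-era graph API that matches this timeframe; the matrix size and run count are arbitrary choices, not anything from the benchmark above.

    import time
    import tensorflow as tf  # TF 1.x-era graph API

    def time_matmul(device_name, n=4096, runs=5):
        """Time a large matmul pinned to the given device."""
        with tf.Graph().as_default():
            with tf.device(device_name):
                a = tf.random_normal([n, n])
                b = tf.random_normal([n, n])
                c = tf.matmul(a, b)
            # log_device_placement prints where each op actually runs
            config = tf.ConfigProto(log_device_placement=True)
            with tf.Session(config=config) as sess:
                sess.run(c)  # warm-up, excluded from timing
                start = time.time()
                for _ in range(runs):
                    sess.run(c)
                return (time.time() - start) / runs

    print("CPU:", time_matmul("/cpu:0"))
    print("GPU:", time_matmul("/gpu:0"))  # needs a CUDA-capable GPU

The device placement log shows whether the matmul actually landed on /gpu:0; if it errors out or falls back to the CPU, CUDA isn't being picked up.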

HP z440

After years of self-built computers, I decided to get a workstation at home. I got an HP z440 - the 2015 single-socket HP workstation. Within their workstation line it's the entry level; the z640 and z840 are dual-socket and allow for more expansion and more PCIe cards.

It's plenty fast; the E5-1680 v3 CPU has 8 cores at 3.2GHz, up to 3.8GHz with turbo. You can get faster per-core performance with a high-clocked i7, but 8 cores are great when encoding video or compiling code.

I think ECC memory makes sense. Memory errors are not very frequent, but given the small premium, it's well worth it. Even Google, who started with as-cheap-as-possible servers in their datacenters, now uses ECC memory in their servers:

    The conclusion we draw is that error correcting codes are crucial for reducing the large number of memory errors to a manageable number of uncorrectable errors.

Only 2 out of 8 slots are filled, which looks weird - after all, it comes with 32GB. I don't think I...
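Coming back to the ECC point: on Linux, the EDAC subsystem exposes the corrected and uncorrected error counters that ECC makes visible, which is how you can actually see whether the memory is catching anything. A small sketch that reads them, assuming the EDAC driver for this memory controller is loaded and the usual /sys/devices/system/edac layout:

    import glob
    import os

    # Corrected (ce) and uncorrected (ue) error counts per memory controller,
    # as exposed by the Linux EDAC subsystem under /sys/devices/system/edac.
    for mc in sorted(glob.glob("/sys/devices/system/edac/mc/mc*")):
        counts = {}
        for name in ("ce_count", "ue_count"):
            try:
                with open(os.path.join(mc, name)) as f:
                    counts[name] = int(f.read().strip())
            except OSError:
                counts[name] = None  # counter not exposed on this kernel/driver
        print(os.path.basename(mc), counts)

A steadily climbing ce_count means ECC is quietly correcting errors; ue_count going above zero is the case you really bought ECC for.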