
A curious bandwidth result


I am testing the STREAM benchmark on a NUMA system with two Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz sockets. The compiler is GCC, and I use the OpenMP version with 12 threads. I know that a NUMA system allocates pages according to first touch, so I allocate the three arrays (a/b/c) aligned to 2MB, choose a suitable N, and let the threads run on both nodes. As a result, every thread accesses local data (I did verify the location of all pages with migrate_pages), and I measure 20GB/s for Triad.

I then wanted to see whether the bandwidth gets worse when every thread accesses remote data, so I arranged the initialization such that the data lands on the remote memory node, and all threads compute entirely on remote data. The result is not as bad as expected: 19GB/s for Triad. So I thought that remote access might not hurt bandwidth much. But when I run under numactl (numactl -m 0 -N 1), so that all memory is allocated on node 0 while all threads run on node 1, I only get 5GB/s. I expected roughly 10GB/s, since I can still use half of the memory devices. Why is the result so poor?

I used VTune to profile the remote-access case with threads running on both nodes (node 0 and node 1), and found that, except for the first iteration, the remaining iterations do not use much QPI bandwidth (generally 2~3GB/s). For the case of remote access with all threads running on a single node (node 1), however, QPI seems to be used aggressively (generally 3~5GB/s). I don't know what tricky things are going on in this test.

