GTX570 SC (running on 266.xx drivers, against instructions)
running 121*2^4207281-1, time per iteration: 1.303 ms,
currently at 40% completion.
Same number running on one CPU core (1 thread): 2.437 ms.
i7-920 @ 3.75 GHz.
The 570 is using about 33% of one CPU core.
Do you have the final results? (timings)
EDIT:
From the iteration times you posted I see the GPU client is about 1.9x faster than the CPU client, but the GPU also uses, as you stated, 33% of one core of your 4-core machine.
Given that the TDP of the GTX570 SC is 214 W and of the CPU is 130 W at stock speed (130*3.75/2.66 when overclocked, assuming consumption scales linearly with clock), and that the GPU uses 33% of one core, I calculate that the GPU client is about 2.7x less energy efficient than the CPU client running on all 4 cores.
For the GPU to have the same energy efficiency as the CPU, it would need a time per iteration of 0.476 ms at the current number size, meaning the GPU client would have to be at least 5.1x faster than the CPU client.
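Carlos's TDP-based estimate can be reconstructed as a short calculation. This is a sketch only: the 214 W / 130 W TDPs, the 33% core usage, and linear power scaling with clock are the assumptions stated in the thread, and different rounding choices make the break-even figure land near, rather than exactly on, his 0.476 ms / 5.1x.

```python
# Sketch of the TDP-based efficiency estimate (all inputs from the thread).
GPU_TDP_W = 214.0                    # GTX 570 SC TDP
CPU_TDP_W = 130.0 * 3.75 / 2.66      # i7-920 TDP scaled linearly for the overclock
PER_CORE_W = CPU_TDP_W / 4           # power attributed to one of the 4 cores

GPU_MS_PER_ITER = 1.303              # measured GPU time per iteration
CPU_MS_PER_ITER = 2.437              # measured single-core CPU time per iteration

gpu_power = GPU_TDP_W + 0.33 * PER_CORE_W    # GPU client also burns 33% of a core
energy_gpu = gpu_power * GPU_MS_PER_ITER     # mJ per iteration, GPU
energy_cpu = PER_CORE_W * CPU_MS_PER_ITER    # mJ per iteration, one CPU core

efficiency_ratio = energy_gpu / energy_cpu   # how much less efficient the GPU is
break_even_ms = energy_cpu / gpu_power       # GPU time/iter needed to match the CPU
speedup_needed = CPU_MS_PER_ITER / break_even_ms

print(round(efficiency_ratio, 1), round(break_even_ms, 2), round(speedup_needed, 1))
```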
Carlos
____________
I agree.
But remember I was running the older drivers, so it is not a true test.
LLRCUDA is in its earliest form. Remember that about a year ago people were saying that LLR would never run on a GPU. We are still on the earliest code.
My Kill-A-Watt meter says 442 watts running the GTX and 4 cores at once.
By the way, the final timings were:
EVGA GTX570 SC: 5522 seconds
CPU (one core): 10494 seconds
CPU running 4 cores: 14792 seconds average
There should be a better result running an SoB unit, which I will try to do.
Cheers
Pilgrim,
With your new results I get the following:
Although the GPU is 2.68x faster than one core of the CPU, it is 1.91x less energy efficient than the CPU with all 4 cores running.
I used the standard TDP for both CPU and GPU. Could you run a test? First see how many watts the machine consumes with only the GPU crunching, then with the GPU off and all 4 cores on. That way the calculations will be more realistic.
Carlos
____________
OK.
I was away for a few days, so I will get you the info...
Sorry for the delay, Carlos.
i7-920 o/c'd to 3.75 GHz:
running only 4 CPU cores (4 threads): 290 watts, 15,000 seconds/unit
running only the GTX570 SC: 360 watts, 5,700 seconds/unit
running all 5 of the above: 460 watts
Cheers
> sorry for the delay Carlos....
> i7/920 o/c'd to 3.75Ghz
> running only 4 CPU's (4 threads): 290 watts, 15,000 seconds/unit
> running only GTX570 SC: 360 watts, 5,700 seconds/unit
> running all 5 above: 460 watts
> Cheers
My result with your measurements: although the GPU is 2.63x faster than one core of the CPU, it is 1.89x less energy efficient than the CPU.
Excel worksheet:
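The same comparison, redone from Pilgrim's measured wattages rather than TDPs, can be sketched as follows (a sketch; it assumes the 4-core run's energy splits evenly across its 4 simultaneous units):

```python
# Reproducing the measured-power comparison from Pilgrim's figures.
CPU_W, CPU_S, CPU_UNITS = 290, 15_000, 4   # 4 cores crunching 4 units in parallel
GPU_W, GPU_S = 360, 5_700                  # GPU-only run, one unit

cpu_wh_per_unit = CPU_W * CPU_S / 3600 / CPU_UNITS   # ~302 Wh per candidate
gpu_wh_per_unit = GPU_W * GPU_S / 3600               # ~570 Wh per candidate

speed_vs_one_core = CPU_S / GPU_S                    # GPU vs one CPU core
efficiency_ratio = gpu_wh_per_unit / cpu_wh_per_unit # GPU vs 4-core CPU

print(round(speed_vs_one_core, 2), round(efficiency_ratio, 2))
```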
____________
BiBi (Volunteer tester; joined: 6 Mar 10; posts: 151; ID: 56425; credit: 34,290,031; RAC: 0):
@Pilgrim: Do you also have a measurement in watts for an idle system, only showing the login/desktop?
I think it is good to take the energy efficiency of a GPU into account.
The Kill-A-Watt says 200 watts in idle mode.
BiBi:
I cannot agree with the calculations made for the GPU's energy efficiency.
Most GPUs are used as add-on devices. Not many crunchers let their CPU idle while the GPU is running; because of this, the idle power should be assigned to the CPU.
Doing the calculation this way (using a base power share of 0% for the GPU), the GPU is more energy efficient: it only uses 269 Wh per candidate.
@Pilgrim: Do you have time measurements for both GPU and CPU candidates with both processors running? (Needed to calculate the CPU usage of the GPU.)
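BiBi's 269 Wh figure comes from charging the GPU only for the marginal watts it adds on top of an already-running CPU; a minimal sketch using Pilgrim's measurements:

```python
# Marginal-power attribution: the GPU is charged only the extra watts it adds.
ALL_FIVE_W = 460      # measured: 4 CPU cores + GPU crunching
CPU_ONLY_W = 290      # measured: 4 CPU cores, GPU not crunching
GPU_S = 5_700         # seconds per GPU candidate

gpu_marginal_w = ALL_FIVE_W - CPU_ONLY_W             # watts attributed to the GPU
gpu_wh_per_candidate = gpu_marginal_w * GPU_S / 3600 # ~269 Wh

print(gpu_marginal_w, round(gpu_wh_per_candidate))
```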
The GPU always runs a unit in the same amount of time regardless of the load on the CPU.
I am not sure what you mean by "both processors", because I have an i7-920 but I am not using HT.
And to add to the excitement: while Folding@home uses less than 10% of a CPU core, LLRCUDA uses around 60-70% of a core.
The GPU always finishes in around 5,700 seconds.
The CPUs (4) are now taking over 17,000 seconds for 4 cores/units. Of course, divide by four and it seems the GPU is energy inefficient, BUT I could never get below 12,000 seconds/unit running just one core/unit (without the GPU), so it seems to me that even though the GPU is a power hog it still beats one CPU core in a race, even considering the power usage of the GPU. What am I missing?
I think the GPU might compare better on REALLY big numbers, like 14 million digits.
Let me know what else you need.
Cheers
P.S. The numbers in the previous post are with all four cores AND the GPU running, just to clear that up.
BiBi:
It would be nice to have the CPU times both with and without the GPU running.
With these we can calculate the real CPU usage of the GPU app.
I hope they make the GPU app faster; that will also make it more efficient regarding power consumption. I think Carlos made a good point.
GPU not running:
CPU running 4 cores: 17192 seconds average
CPU utilization: 50%
==================================
GPU running:
EVGA GTX570 SC: 5722 seconds
CPU running 4 cores: 17192 seconds average
CPU utilization: 57%
> The GPU always finishes around 5700 seconds.
Only the GPU, CPU off, right?
> The CPU's (4) are now taking over 17,000 seconds
> for 4 cores/units. Of course, divide by four and it seems/is
> the GPU is energy inefficient BUT I could never get
> below 12000 seconds/unit running just one Core/unit
> (without GPU) so it seems to my brain dead mind
> that even though the GPU is a power hog it still beats
> One CPU core in a race, even considering the power usage
> of the GPU. What am I missing?
If you divide the time by four, the consumption is also divided by four, if each core consumes the same (linear).
What matters here is which one crunches more candidates with less energy; that is what is at stake, OK?
I am comparing specific energy; it's a ratio, and the CPU is still better than the GPU. It gets worse if the CPU has more cores.
> I think the GPU might have a better comparison on
> REALLY big numbers, like 14 million digits?
We need to test to see.
For the GPU to be both faster and more energy efficient, it has to test the current number in less than 3,400 seconds (for a CPU time of 17,000 seconds), even though the former also uses a percentage of the CPU. We need to be more accurate about the percentage of CPU used by the GPU. One thing I am sure of: my first calculations with the TDPs of both CPU and GPU are not far from the true energy measurements made by Pilgrim.
If you guys want, I can share my Excel spreadsheet so we can all tune the calculations.
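The 3,400-second break-even figure can be reproduced from the measured wattages (a sketch: it compares the 360 W GPU-only draw against the 290 W four-core draw split evenly over four units):

```python
# Break-even GPU time: the 360 W GPU must finish one candidate using no more
# energy than the 290 W CPU spends per unit when doing 4 units in 17,000 s.
CPU_W, CPU_S, CPU_UNITS = 290, 17_000, 4
GPU_W = 360

cpu_joules_per_unit = CPU_W * CPU_S / CPU_UNITS   # energy per CPU candidate
break_even_s = cpu_joules_per_unit / GPU_W        # ~3,400 seconds

print(round(break_even_s))
```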
____________
BiBi:
@Pilgrim:
The processor you are using has 8 threads; do you have a cooling problem or some other reason why it is not running all 8? (That could reduce the power consumption per task.)
The CPU utilization for the GPU is indeed 0%, because it does not slow the other threads (thanks to HT ;) ).
@Carlos: the calculations are quite easy; the CPU is using 290 W and the GPU 170 W (based on the reasoning that the GPU always runs together with the CPU):
     P(W)  t(s)   E(Wh)  E(Wh/task)
CPU  290   17192  1385   346
GPU  170   5722   270    270
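The table follows from E(Wh) = P * t / 3600, with the CPU's energy split over its four simultaneous tasks; a minimal check of the numbers (using BiBi's 170 W marginal-power assumption for the GPU):

```python
# Recomputing BiBi's energy table from the measured times and attributed watts.
def wh(p_watts, t_seconds):
    """Energy in watt-hours for a constant draw over t seconds."""
    return p_watts * t_seconds / 3600

cpu_wh = wh(290, 17192)          # total energy for the 4-core run
gpu_wh = wh(170, 5722)           # GPU charged only its marginal 170 W
cpu_wh_per_task = cpu_wh / 4     # 4 tasks finish together
gpu_wh_per_task = gpu_wh         # GPU runs one task at a time

print(round(cpu_wh), round(gpu_wh), round(cpu_wh_per_task))
```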
Yes, my computers are in a warm environment and, even though they are dust free, have 6-8 fans each, and are air cooled with the Noctua 14, RealTemp runs at about 75 degrees with four cores. 8 threads pushes the temperature into the mid 80s unless I cut back on the O/C.
I will start running 6 threads soon; I find I lose only about 20% throughput versus 8 threads. The law of diminishing returns with the HT feature.
> @Carlos: the calculations are quite easy; the CPU is using 290 W and the GPU 170 W (based on the reasoning that GPU always runs together with CPU)
>      P(W)  t(s)   E(Wh)  E(Wh/task)
> CPU  290   17192  1385   346
> GPU  170   5722   270    270
In this post:
http://www.primegrid.com/forum_thread.php?id=3235&nowrap=true#35089
Pilgrim said it was 360 watts with only the GPU running.
Your calculations are wrong:
     P(W)  t(s)   E(Wh)  E(Wh/task)
CPU  290   17192  1385   346
GPU  360   5722   572    572
____________
BiBi:
Why?
> Your calculations are wrong:
>      P(W)  t(s)   E(Wh)  E(Wh/task)
> CPU  290   17192  1385   346
> GPU  360   5722   572    572
I do not think my calculations are wrong; they are quite simple, and you are also using my calculation, just with a new input parameter (dropping my assumption).
I would advise Pilgrim to keep using the CPU while running his GPU. In that situation, with the assumption that all the power needed to get the system running is attributed to the GPU, the CPU is highly energy efficient (the 100 W is the 460 W total minus the 360 W attributed to the GPU):
     P(W)  t(s)   E(Wh)  E(Wh/task)
CPU  100   17192  478    119.5
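The entire disagreement reduces to how the measured watts are attributed between CPU and GPU; one sketch comparing the three attributions used in this thread (460 W total, 290 W CPU-only, 360 W GPU-only, all from Pilgrim's Kill-A-Watt readings):

```python
# Same formula, three different attributions of the measured wattage.
def wh_per_task(p_watts, t_seconds, tasks=1):
    """Watt-hours per task for a constant draw split over simultaneous tasks."""
    return p_watts * t_seconds / 3600 / tasks

scenarios = {
    # BiBi: GPU charged only its marginal 170 W (460 - 290).
    "BiBi, marginal GPU":  (wh_per_task(170, 5722), wh_per_task(290, 17192, 4)),
    # Carlos: GPU charged the full 360 W measured with only the GPU crunching.
    "Carlos, full GPU":    (wh_per_task(360, 5722), wh_per_task(290, 17192, 4)),
    # BiBi's reply: CPU charged only the 100 W left over (460 - 360).
    "BiBi, leftover CPU":  (wh_per_task(360, 5722), wh_per_task(100, 17192, 4)),
}
for name, (gpu, cpu) in scenarios.items():
    print(f"{name}: GPU {gpu:.0f} Wh/task, CPU {cpu:.0f} Wh/task")
```

Which device looks "efficient" flips entirely with the attribution, which is why the thread never converges without agreeing on one.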