PrimeGrid

Message boards : Problems and Help : Running dual 1080ti and 7920X

bigsinky (Project donor)
Joined: 11 Jun 11
Posts: 7
ID: 101863
Credit: 265,579,579
RAC: 4
Message 134333 - Posted: 29 Oct 2019 | 13:47:56 UTC
Last modified: 29 Oct 2019 | 14:18:44 UTC

Could someone who is knowledgeable with app_config.xml have a look and tell me if it looks OK? I used to run PPS Sieve only for the credits, but now I want to find primes, so I have started using both 1080 Ti cards to run all the CUDA applications as well. I'm just wondering if the processing times for the Genefer WUs are about right: just under 3 hours for GFN-20 and 10 hours for GFN-21. Both GPUs run at 100% load at about 50°C, and the CPUs (using mt) run fine at 100% load at 65°C. This is my app_config.xml file; sorry it's pretty long.

TIA

bigsinky

<app_config>

<app>
<name>pps_sr2sieve</name>
<gpu_versions>
<gpu_usage>0.25</gpu_usage>
<cpu_usage>0.25</cpu_usage>
</gpu_versions>
</app>

<app>
<name>llr321</name>
<fraction_done_exact>1</fraction_done_exact>
<max_concurrent>1</max_concurrent>
</app>
<app_version>
<app_name>llr321</app_name>
<plan_class>mt</plan_class>
<cmdline>-t 4</cmdline>
<avg_ncpus>4</avg_ncpus>
<max_ncpus>4</max_ncpus>
</app_version>

<app>
<name>llrCUL</name>
<fraction_done_exact>1</fraction_done_exact>
<max_concurrent>1</max_concurrent>
</app>
<app_version>
<app_name>llrCUL</app_name>
<plan_class>mt</plan_class>
<cmdline>-t 4</cmdline>
<avg_ncpus>4</avg_ncpus>
<max_ncpus>4</max_ncpus>
</app_version>

<app>
<name>llrESP</name>
<fraction_done_exact>1</fraction_done_exact>
<max_concurrent>1</max_concurrent>
</app>
<app_version>
<app_name>llrESP</app_name>
<plan_class>mt</plan_class>
<cmdline>-t 4</cmdline>
<avg_ncpus>4</avg_ncpus>
<max_ncpus>4</max_ncpus>
</app_version>

<app>
<name>llrGCW</name>
<fraction_done_exact>1</fraction_done_exact>
<max_concurrent>1</max_concurrent>
</app>
<app_version>
<app_name>llrGCW</app_name>
<plan_class>mt</plan_class>
<cmdline>-t 4</cmdline>
<avg_ncpus>4</avg_ncpus>
<max_ncpus>4</max_ncpus>
</app_version>

<app>
<name>llrPSP</name>
<fraction_done_exact>1</fraction_done_exact>
<max_concurrent>1</max_concurrent>
</app>
<app_version>
<app_name>llrPSP</app_name>
<plan_class>mt</plan_class>
<cmdline>-t 4</cmdline>
<avg_ncpus>4</avg_ncpus>
<max_ncpus>4</max_ncpus>
</app_version>

<app>
<name>llrPPS</name>
<max_concurrent>1</max_concurrent>
<fraction_done_exact>1</fraction_done_exact>
</app>
<app_version>
<app_name>llrPPS</app_name>
<plan_class>mt</plan_class>
<cmdline>-t 4</cmdline>
<avg_ncpus>4</avg_ncpus>
<max_ncpus>4</max_ncpus>
</app_version>

<app>
<name>llrPPSE</name>
<fraction_done_exact>1</fraction_done_exact>
<max_concurrent>2</max_concurrent>
</app>
<app_version>
<app_name>llrPPSE</app_name>
<plan_class>mt</plan_class>
<cmdline>-t 2</cmdline>
<avg_ncpus>2</avg_ncpus>
<max_ncpus>2</max_ncpus>
</app_version>

<app>
<name>llrMEGA</name>
<fraction_done_exact>1</fraction_done_exact>
<max_concurrent>2</max_concurrent>
</app>
<app_version>
<app_name>llrMEGA</app_name>
<plan_class>mt</plan_class>
<cmdline>-t 2</cmdline>
<avg_ncpus>2</avg_ncpus>
<max_ncpus>2</max_ncpus>
</app_version>

<app>
<name>llrSOB</name>
<fraction_done_exact>1</fraction_done_exact>
<max_concurrent>1</max_concurrent>
</app>
<app_version>
<app_name>llrSOB</app_name>
<plan_class>mt</plan_class>
<cmdline>-t 4</cmdline>
<avg_ncpus>4</avg_ncpus>
<max_ncpus>4</max_ncpus>
</app_version>

<app>
<name>llrSR5</name>
<fraction_done_exact>1</fraction_done_exact>
<max_concurrent>1</max_concurrent>
</app>
<app_version>
<app_name>llrSR5</app_name>
<plan_class>mt</plan_class>
<cmdline>-t 4</cmdline>
<avg_ncpus>4</avg_ncpus>
<max_ncpus>4</max_ncpus>
</app_version>

<app>
<name>llrTPS</name>
<fraction_done_exact>1</fraction_done_exact>
<max_concurrent>2</max_concurrent>
</app>
<app_version>
<app_name>llrTPS</app_name>
<plan_class>mt</plan_class>
<cmdline>-t 2</cmdline>
<avg_ncpus>2</avg_ncpus>
<max_ncpus>2</max_ncpus>
</app_version>

<app>
<name>llrTRP</name>
<fraction_done_exact>1</fraction_done_exact>
<max_concurrent>1</max_concurrent>
</app>
<app_version>
<app_name>llrTRP</app_name>
<plan_class>mt</plan_class>
<cmdline>-t 4</cmdline>
<avg_ncpus>4</avg_ncpus>
<max_ncpus>4</max_ncpus>
</app_version>

<app>
<name>llrWOO</name>
<fraction_done_exact>1</fraction_done_exact>
<max_concurrent>1</max_concurrent>
</app>
<app_version>
<app_name>llrWOO</app_name>
<plan_class>mt</plan_class>
<cmdline>-t 4</cmdline>
<avg_ncpus>4</avg_ncpus>
<max_ncpus>4</max_ncpus>
</app_version>

<app>
<name>ap26</name>
</app>
<app_version>
<app_name>ap26</app_name>
<plan_class>OCL_cuda_AP27</plan_class>
<cmdline>-compute</cmdline>
</app_version>


<app>
<name>genefer_extreme</name>
</app>
<app_version>
<app_name>genefer_extreme</app_name>
<plan_class>OCLcudaGFNEXTREME</plan_class>
<cmdline>-compute</cmdline>
</app_version>

<app>
<name>genefer_wr</name>
</app>
<app_version>
<app_name>genefer_wr</app_name>
<plan_class>OCLcudaGFNWR</plan_class>
<cmdline>-compute</cmdline>
</app_version>


<app>
<name>genefer15</name>
</app>
<app_version>
<app_name>genefer15</app_name>
<plan_class>OCLcudaGFN15</plan_class>
<cmdline>-compute</cmdline>
</app_version>

<app>
<name>genefer16</name>
</app>
<app_version>
<app_name>genefer16</app_name>
<plan_class>OCLcudaGFN16</plan_class>
<cmdline>-compute</cmdline>
</app_version>

<app>
<name>genefer17low</name>
</app>
<app_version>
<app_name>genefer17low</app_name>
<plan_class>OCLcudaGFN17LOW</plan_class>
<cmdline>-compute</cmdline>
</app_version>


<app>
<name>genefer17mega</name>
</app>
<app_version>
<app_name>genefer17mega</app_name>
<plan_class>OCLcudaGFN17MEGA</plan_class>
<cmdline>-compute</cmdline>
</app_version>

<app>
<name>genefer18</name>
</app>
<app_version>
<app_name>genefer18</app_name>
<plan_class>OCLcudaGFN18</plan_class>
<cmdline>-compute</cmdline>
</app_version>

<app>
<name>genefer19</name>
</app>
<app_version>
<app_name>genefer19</app_name>
<plan_class>OCLcudaGFN19</plan_class>
<cmdline>-compute</cmdline>
</app_version>

</app_config>
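
A side note on the GPU entries: the ap26 and Genefer blocks only override the plan class and command line, so BOINC's defaults decide how many tasks share a card and how much CPU each one reserves. To pin that explicitly, the same <gpu_versions> element used for pps_sr2sieve can be added per app. A minimal sketch, with illustrative (untuned) values:

<app>
<name>genefer18</name>
<gpu_versions>
<!-- illustrative values: run one task per GPU and reserve one CPU core to feed it -->
<gpu_usage>1.0</gpu_usage>
<cpu_usage>1.0</cpu_usage>
</gpu_versions>
</app>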

lashrasch
Joined: 30 Nov 07
Posts: 31
ID: 15484
Credit: 194,105,497
RAC: 19,475
Message 139107 - Posted: 24 Mar 2020 | 12:14:55 UTC - in response to Message 134333.

I don't really know what you are asking for here, but 100% load and 50°C?

What kind of cooling do you have on your 1080Ti?

I'm also running two 1080Ti, but I'm throttling them so they don't go above 80°C...

mikey
Joined: 17 Mar 09
Posts: 1241
ID: 37043
Credit: 515,585,034
RAC: 3,519
Message 139117 - Posted: 24 Mar 2020 | 22:36:31 UTC - in response to Message 134333.


You can now set the multi-threading options for all the LLR tasks in your website preferences, so that would trim your file a bit.
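
As a rough sketch, assuming the website multi-threading preferences take over the -t / avg_ncpus part, the file could shrink to little more than the sieve entry plus whatever max_concurrent caps you still want locally:

<app_config>
<app>
<name>pps_sr2sieve</name>
<gpu_versions>
<gpu_usage>0.25</gpu_usage>
<cpu_usage>0.25</cpu_usage>
</gpu_versions>
</app>
<!-- keep a block like this only where you still want a local per-app cap -->
<app>
<name>llrSOB</name>
<max_concurrent>1</max_concurrent>
<fraction_done_exact>1</fraction_done_exact>
</app>
</app_config>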

If you want to find primes, set your resource share to zero so you don't cache any workunits at all, or only very few. That way, WUs waiting for you to crunch them are not sitting on your PC but on the PG server, not counting against you. Nothing can fix the randomness of who gets WUs when, but having a cache of two or more days definitely counts against you: if my PC asks for a WU two hours before your PC asks for the same kind, that puts you two hours behind. BUT if your PC is faster than mine, you could still be first.
