|
Why are we sieving? As far as I know this has already been sieved up to 5200 trillion? Also, I've done some work on this; I've done 2000-3000 tests since the original project stopped. |
|
|
John Honorary cruncher
Joined: 21 Feb 06 Posts: 2875 ID: 2449 Credit: 2,681,934 RAC: 0
|
Why are we sieving? As far as I know this has already been sieved up to 5200 trillion?
The primary reason is that this is PrimeGrid's first venture into the Riesel problem. Prior to this, we did not have a sieve file. This one was started from scratch for the 64 k's remaining for n<50M. We have a complete history with this file and have a record of all factors to our current sieve depth.
We are aware that there exists a sieve file from a previous effort. However, that project ended quite abruptly and we are not privy to the final bounds reached in the sieve file. Also, access to the file was never granted to PrimeGrid, and the file only went to n=20M.
Also, I've done some work on this; I've done 2000-3000 tests since the original project stopped.
For individuals who have completed previous work on the Riesel problem, we'll gladly accept the following:
Factor files: While this will have minimal impact on the current sieving effort, it will immediately help out with primality testing by reducing the number of candidates to test. All factors will be verified before being accepted.
Residue files: This will directly impact the double check effort by allowing PrimeGrid to send out only one test to match residues. Of course, if residues do not match, another test will be sent out and we're no worse off than if we didn't have the residue in the first place.
Please submit to sieve at primegrid dot com.
Thank you in advance for this consideration. Heck, there may even be some cobblestones in exchange. ;)
____________
|
|
|
Ken_g6 Volunteer developer
Joined: 4 Jul 06 Posts: 915 ID: 3110 Credit: 183,164,814 RAC: 0
|
In the interest of broadening the sieve discussion, here's the MersenneForum.org Riesel Sieve discussion thread I just happened upon.
____________
|
|
|
|
I'm sending in riesel_564.dat dated 3/24/08. And we're going to see if we can find any stray residues lying about. =)
____________
|
|
|
I can probably get most of the residues. Will let you know ASAP. |
|
|
|
Do you think it is wise to sieve such a large range? Sieving to 20M would be more than enough for a long time. Hundreds of thousands of tests won't be done, because once a prime is found that k is removed. In my opinion such a large sieve is extremely inefficient. |
|
|
geoff Volunteer developer
Joined: 3 Aug 07 Posts: 99 ID: 10427 Credit: 343,437 RAC: 0
|
Remember that sieving is not a linear process:
It doesn't take 50M/20M=2.5 times longer to sieve 0-50M than 0-20M, it takes only sqrt(50M/20M)=1.6 times longer.
Eliminating one sequence from a sieve with 66 sequences by finding a prime doesn't reduce the sieve time by 1/66= 1.5%, it only reduces it by 1-sqrt(65/66)= 0.76%.
Also, a 0-50M sieve could (eventually) be combined with the existing PSP/SoB sieve over the same range, which would lead to extra efficiencies for both projects.
I can't say that a 0-50M sieve is the best way to go because I haven't worked out all the trade-offs involved, and the calculations above are just simplifications anyway, but it is not immediately obvious to me that a 0-50M sieve is less efficient in the long run than sieving 0-20M first followed by a separate 20-50M sieve later. |
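geoff's square-root rule is easy to check numerically. A quick Python sketch of his two estimates, assuming (as his post does) that fixed-k sieve time grows with the square root of the candidate count:

```python
from math import sqrt

# geoff's model: fixed-k sieve time ~ sqrt(number of candidates)
print(f"sieving 0-50M vs 0-20M: {sqrt(50 / 20):.2f}x longer")  # ~1.58x
print(f"dropping 1 of 66 k's saves {1 - sqrt(65 / 66):.2%}")   # ~0.76%
print(f"(the naive linear estimate would be {1 / 66:.2%})")    # ~1.52%
```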
|
|
|
I think Primegrid's decision to start sieving from scratch is wise, but I think that gd_barnes' post over on the Mersenne forum presents an excellent argument for only sieving to 20M at this time...
http://www.mersenneforum.org/showthread.php?t=10686#76
Enough k's will be eliminated below 20M that sieving all 64 k's to 50M now is a waste of time and resources and by the time we get around to needing to sieve above 20M (many years from now), computer speeds will have increased enough and enough k's will have been eliminated that sieving this higher range will be much more efficient than it is now.
In any case, it's nice to see this project revived. I'll be doing my (small) part for the "parade" next week :) |
|
|
|
I'm inclined to agree.
I'd rather sieve 20M to 2P than 50M to 1P.
As an aside: if there was a list of the previously sieved numbers (the old dat)
that you don't want to use because you don't have the actual factors (fair enough),
is there some way to get the list of 'removed' tests and then quickly determine what each factor was?
I assume it would have to be below some maximum bound (5200 trillion, was it?).
Or is it quicker to keep sieving?
|
|
|
Ken_g6 Volunteer developer
Joined: 4 Jul 06 Posts: 915 ID: 3110 Credit: 183,164,814 RAC: 0
|
Enough k's will be eliminated below 20M that sieving all 64 k's to 50M now is a waste of time and resources and by the time we get around to needing to sieve above 20M (many years from now), computer speeds will have increased enough and enough k's will have been eliminated that sieving this higher range will be much more efficient than it is now.
You know, I don't think that's quite right. It's true that sieving will be faster on future computers; but presumably so will LLRing. Proportionally they should be about the same. So if we expect to search most of these ranges to 50M, there's no reason not to do the sieving now.
In post 67 of that thread, there's a graph speculating that 40 or so K's will be left by 20M. Sieving only 40 K's instead of 64, going by Geoff's estimate, would take only 20% less CPU time. So it seems like 50M might be worth it now after all.
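Under the same square-root model from geoff's post, that figure checks out:

```python
from math import sqrt

# Sieving 40 k's instead of 64, using geoff's sqrt model
print(f"{1 - sqrt(40 / 64):.0%} less CPU time")  # ~21%, i.e. only about 20% less
```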
____________
|
|
|
|
I've just sent you links to files containing the RS residues, as well as the remaining k/n pairs without factors. Hope these help. |
|
|
|
- exit code -1073741819 (0xc0000005)
http://www.primegrid.com/result.php?resultid=160270916
http://www.primegrid.com/result.php?resultid=160276471
- exit code -529697949 (0xe06d7363)
http://www.primegrid.com/result.php?resultid=160280502
http://www.primegrid.com/result.php?resultid=160280500
http://www.primegrid.com/result.php?resultid=160278269
http://www.primegrid.com/result.php?resultid=161107474
http://www.primegrid.com/result.php?resultid=160281031
http://www.primegrid.com/result.php?resultid=159962720
http://www.primegrid.com/result.php?resultid=160275404
http://www.primegrid.com/result.php?resultid=160275798
http://www.primegrid.com/result.php?resultid=160280677
- exit code -148 (0xffffff6c)
http://www.primegrid.com/result.php?resultid=159962480
-197 (0xffffffffffffff3b)
http://www.primegrid.com/result.php?resultid=159975983
Any idea what is causing these errors?
____________
Ukraine Distributed Computing
|
|
|
|
You can also see the sieve reservation status @ http://dc.rieselsieve.com/sieve.php?func=archive
Regards
C.
____________
|
|
|
|
I just took a look at my dual core which is currently sieving for TRP, and it appears to take something like 10K sec/factor or more. Based on the fact that my computer can do an n=1M test in about 1 hour, there shouldn't really be a problem starting testing from n=1 to n=1M, and preferably also removing those from the sieve file, since they will have matching residues very soon; there really is no efficiency gained by keeping the already-tested LLR range in the sieve file.
So how does it look with a transition to doing some LLR testing on TRP, at least for the machines much better suited for this? Also, since we are sieving at only 10% of the depth that RS managed to reach, and it already takes my dual core at least 10K sec/factor, I dread to think how far the RS project could have LLR tested before further sieving was actually necessary :(
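(A rough back-of-the-envelope check of the comparison above, in Python. The 10K sec/factor and 1-hour n=1M test are the figures given above; the assumptions that LLR time grows roughly like n² and that each factor saves a test plus its double check are simplifications added here.)

```python
# Figures from above: ~10,000 s per factor sieved; ~3,600 s per LLR test at n=1M.
SEC_PER_FACTOR = 10_000
SEC_PER_TEST_AT_1M = 3_600

def llr_seconds(n):
    """Rough model: LLR test time grows about quadratically with n."""
    return SEC_PER_TEST_AT_1M * (n / 1e6) ** 2

for n in (1e6, 2e6, 5e6):
    saved = 2 * llr_seconds(n)  # first-pass test + double check saved per factor
    print(f"n = {n/1e6:.0f}M: a factor saves ~{saved:,.0f} s "
          f"vs ~{SEC_PER_FACTOR:,} s to find it")
```

At n=1M a factor costs more to find than the two tests it saves, which is exactly the argument for just LLR-testing the low range directly.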
To sum up:
Do you have a plan for LLR testing?
Will manual LLR testing and/or sieving be possible for offline computers?
Do you intend to remove the LLR ranges already tested from the sieve file?
Now, last but not least, thanks for finally taking over this conjecture and getting it going again. In case you haven't noticed, people are starting to guess whether or not you are going to take over, or are planning to take over, the R5 and S5 conjectures, so can anyone official make a statement on PG's ideas/feelings about this guessing?
Regards
KEP |
|
|
John Honorary cruncher
Joined: 21 Feb 06 Posts: 2875 ID: 2449 Credit: 2,681,934 RAC: 0
|
Do you have a plan for LLR testing?
Yes
Will manual LLR testing and/or sieving be possible for offline computers?
Not at this time. However, there's plenty of other manual work available in the PSA.
Do you intend to remove the LLR ranges already tested from the sieve file?
Not at this time.
As for SR5, we are unaware of any pending collaboration at this time. However, informally, Lennart hosted a PRPNet port for them. In the near future, we'll reserve another range from SR5 and open it for PG users to test. It's a project that would make a good fit here at PrimeGrid. :)
____________
|
|
|
|
OK, thanks John for your replies. I guess I could just get my quad behind some internet lines then :)
It would be nice to see some serious work being done on the SR5 conjecture, with a fully updated software package, so I'll be looking forward to seeing what you can come up with on that matter in the future :D
KEP |
|
|
|
New WUs take 6 hours to complete but give the same 280 points
as old WUs that take only 3 hours to complete.
New WUs should be worth more points.
____________
Ukraine Distributed Computing
|
|
|
Ken_g6 Volunteer developer
Joined: 4 Jul 06 Posts: 915 ID: 3110 Credit: 183,164,814 RAC: 0
|
Have you seen any of these new WUs actually take 6 hours, or is that just the projection? Estimates were changed recently, but that doesn't affect actual runtime.
____________
|
|
|
John Honorary cruncher
Joined: 21 Feb 06 Posts: 2875 ID: 2449 Credit: 2,681,934 RAC: 0
|
New WUs take 6 hours to complete but give the same 280 points
as old WUs that take only 3 hours to complete.
The WUs have been the same size range since the beginning, so I don't know why you are experiencing different run times.
____________
|
|
|
Scott Brown Volunteer moderator Project administrator Volunteer tester Project scientist
Joined: 17 Oct 05 Posts: 2165 ID: 1178 Credit: 8,777,295,508 RAC: 0
|
New WUs take 6 hours to complete but give the same 280 points
as old WUs that take only 3 hours to complete.
The WUs have been the same size range since the beginning, so I don't know why you are experiencing different run times.
Perhaps the LLR units are being mistaken for the sieves? On one's account page they are easy to mix up, and the longer ones do run about twice as long as the sieves.
____________
141941*2^4299438-1 is prime!
|
|
|
|
Are we going to get a
http://www.primegrid.com/stats_trp_sieve.php
page?
The TRP LLR has its page, and all the other sieves and LLR projects have something. |
|
|
John Honorary cruncher
Joined: 21 Feb 06 Posts: 2875 ID: 2449 Credit: 2,681,934 RAC: 0
|
TRP Sieve Update
A major milestone was achieved today. p=1P was surpassed!!! As soon as 400K factors have been found (which should be in about a week), a new sieve file will be released. This will bring the current sieve file to within 3% of the previous effort. However, the last 3% will take quite some time to complete as factors will be much harder to come by the deeper the sieve.
Remember, this is an n<50M file for 64 k's which makes 1P even more remarkable. Thank you to everyone who answered the call for help and contributed towards this amazing milestone. The "High Priority" classification will now be removed.
Now let's "round out" this achievement by finding a few TRP primes. :)
____________
|
|
|
John Honorary cruncher
Joined: 21 Feb 06 Posts: 2875 ID: 2449 Credit: 2,681,934 RAC: 0
|
TRP Sieve Update
There has been a delay in creating the new TRP sieve file. Originally, we were waiting for 400K factors. That now has been increased to 500K factors.
____________
|
|
|
John Honorary cruncher
Joined: 21 Feb 06 Posts: 2875 ID: 2449 Credit: 2,681,934 RAC: 0
|
TRP Sieve Update
500K factors has been reached. When the current buffer empties, new work will go out along with a new sieve file.
____________
|
|
|
|
How far has the file been sieved now? Or is there somewhere I can see how far? |
|
|
|
~ 1.7P
Lennart
____________
|
|
|
|
Have you considered increasing the value for "total tasks" for the TRP sieve WUs? I suspect quite a lot are stuck like this one: http://www.primegrid.com/workunit.php?wuid=118514803 |
|
|
|
How large is the sieve file now after the challenge, and why has no Riesel prime been found yet?
____________
|
|
|
John Honorary cruncher
Joined: 21 Feb 06 Posts: 2875 ID: 2449 Credit: 2,681,934 RAC: 0
|
How large is the sieve file now after the challenge, and why has no Riesel prime been found yet?
We haven't created a new sieve file yet. As for no primes so far, they must be hiding very well. ;) Our completed search so far is about 3.0M to 3.2M so it's still quite reasonable not to have a prime yet.
EDIT: The two lowest weight k's have been taken to 5M. Currently k=2293 is being accelerated to 5M. The remaining k's continue to advance.
____________
|
|
|
John Honorary cruncher
Joined: 21 Feb 06 Posts: 2875 ID: 2449 Credit: 2,681,934 RAC: 0
|
A new sieve file has been prepared. It should be released in the next few days. Here are some stats for n<50M:
Previous TRP_20100604 file
64 sequences and 9,669,856 candidates
New TRP_20101125 file
368,327 candidates removed by factors since 20100604
169,773 candidates removed by prime
63 sequences and 9,131,756 candidates remain
____________
|
|
|
|
Only 170,000?
At first glance that almost paradoxically seems not worthwhile...
If we had spent all the LLR time on that k on sieving, perhaps we would have removed more candidates.
But then I went 9M / 63 = 145,000,
and so that prime removed more candidates than average. :)
PS: Yeah, I realized that finding the prime removes it from all future sieve files too, and is in fact the goal of the entire project.
I just thought it was odd.
|
|
|
pschoefer Volunteer developer Volunteer tester
Joined: 20 Sep 05 Posts: 662 ID: 845 Credit: 2,220,370,221 RAC: 0
|
At first glance that almost paradoxically seems not worthwhile...
If we had spent all the LLR time on that k on sieving, perhaps we would have removed more candidates.
At first, I had the opposite feeling, since I guess there's more sieving work done than LLR work. Sieving for half a year just removed 370k factors; with LLRing all the time we might have found two more primes. :D
____________
|
|
|
|
What's the status now after the challenge?
Future plans to sieve on GPU? |
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13513 ID: 53948 Credit: 237,712,514 RAC: 0
|
What's the status now after the challenge?
Exactly the same as before the challenge.
Future plans to sieve on GPU?
Not that I'm aware of.
____________
My lucky number is 75898524288+1 |
|
|
|
Could someone please clarify the link between the current sieving we're doing (for TRP) and the LLR work we're doing now and will be doing in the near future (for TRP)? In this case 'near' can be interpreted liberally, let's say <=2 years.
And while we're at it, a general status update about the sieving would be greatly appreciated as the last one was in December 2010. :)
____________
PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)
|
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13513 ID: 53948 Credit: 237,712,514 RAC: 0
|
Could someone please clarify the link between the current sieving we're doing (for TRP) and the LLR work we're doing now and will be doing in the near future (for TRP)? In this case 'near' can be interpreted liberally, let's say <=2 years.
And while we're at it, a general status update about the sieving would be greatly appreciated as the last one was in December 2010. :)
BIG disclaimer: I may not know what I'm talking about, so take all of this with a big grain of salt. At worst, this will prompt someone smarter to come in and say, "No, that's wrong."
I'm not sure I understand your question, other than the request for general status (for which I don't know the answer.) The sieve work being done now applies to the entire LLR range for the foreseeable future. (The only thing that changes is that when a Riesel prime is found, we no longer need to sieve for that K anymore.)
Looking at the range status page, you can see that we're actively sieving to a depth of 19P to 21P. That has no relationship to the N being searched by the current TRP LLR work, however. The sieve covers the entire range that LLR will ever work on. Looking at the database, it looks like we're sieving for N<50 million. TRP LLR is currently in the 5 million range, except for two K which have been LLR'd to N=10 million. Considering that the WU length increases dramatically with rising N, the 50M limit on the current sieve will last a long, long time. (SoB currently is at N=20 million, and look how long those WUs are.) The depth of the sieve (the 21P number) is a measure of how much sieving has been done, and doesn't really have any relationship to the numbers being crunched with LLR.
When you do a sieve, it's good practice to set the range to be much larger than you ever expect to LLR. It often doesn't change the speed of the sieve to do this, and it means you'll never need to re-sieve. For example, with GFN, we sieve to b=100 million, even though GeneferCUDA can't even go up to 1 million, and even genefer80 can only get to 20 million or so.
The point at which you stop a sieve is, essentially, when the rate of factors being found is low enough such that it would be faster to LLR all the remaining candidates rather than do any more sieving.
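(That stopping rule is simple enough to write down. A minimal sketch, with made-up illustrative numbers rather than actual project figures:)

```python
def keep_sieving(sec_per_factor, sec_per_llr_test, double_check=True):
    """Sieve only while removing a candidate is cheaper than just testing it."""
    sec_saved_per_factor = sec_per_llr_test * (2 if double_check else 1)
    return sec_per_factor < sec_saved_per_factor

# Illustrative numbers only (seconds):
print(keep_sieving(55_000, 2_265_000))     # True: keep sieving
print(keep_sieving(5_000_000, 2_265_000))  # False: time to switch to LLR
```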
The TRP Sieve is a little different from some other sieves because it only needs to sieve a discrete (and ever shrinking) set of K values.
I'm not sure if that answers your question or not.
____________
My lucky number is 75898524288+1 |
|
|
|
Could someone please clarify the link between the current sieving we're doing (for TRP) and the LLR work we're doing now and will be doing in the near future (for TRP)? In this case 'near' can be interpreted liberally, let's say <=2 years.
And while we're at it, a general status update about the sieving would be greatly appreciated as the last one was in December 2010. :)
BIG disclaimer: I may not know what I'm talking about, so take all of this with a big grain of salt. At worst, this will prompt someone smarter to come in and say, "No, that's wrong."
I don't know what I'm talking about either, that's why I'm asking questions :P
I'm not sure I understand your question, other than the request for general status (for which I don't know the answer.) The sieve work being done now applies to the entire LLR range for the foreseeable future. (The only thing that changes is that when a Riesel prime is found, we no longer need to sieve for that K anymore.)
Looking at the range status page, you can see that we're actively sieving to a depth of 19P to 21P. That has no relationship to the N being searched by the current TRP LLR work, however. The sieve covers the entire range that LLR will ever work on. Looking at the database, it looks like we're sieving for N<50 million. TRP LLR is currently in the 5 million range, except for two K which have been LLR'd to N=10 million. Considering that the WU length increases dramatically with rising N, the 50M limit on the current sieve will last a long, long time. (SoB currently is at N=20 million, and look how long those WUs are.) The depth of the sieve (the 21P number) is a measure of how much sieving has been done, and doesn't really have any relationship to the numbers being crunched with LLR.
Ok, thanks for clearing that bit up. Also realised there's a complete sieving subforum available and found some explanations there of what the heck is going on. The conclusion I've drawn from the sieve information thread is that the 19P number basically means we have checked whether any number up to 19P is a factor of a candidate anywhere in the entire LLR range. Or did I misunderstand that?
When you do a sieve, it's good practice to set the range to be much larger than you ever expect to LLR. It often doesn't change the speed of the sieve to do this, and it means you'll never need to re-sieve. For example, with GFN, we sieve to b=100 million, even though GeneferCUDA can't even go up to 1 million, and even genefer80 can only get to 20 million or so.
The point at which you stop a sieve is, essentially, when the rate of factors being found is low enough such that it would be faster to LLR all the remaining candidates rather than do any more sieving.
Yes, this is what I was wondering about too, because if my guestimation is right (and I've been well known to be dead wrong!) that would be somewhere right about now. My guestimation is based of course on some assumptions, and I would love to be told that I'm wrong and have it explained why that is :)
This is what I was thinking:
Assumption 1: average times for TRP Sieve and TRP LLR on settings page are ballpark correct.
Assumption 2: my factor finding percentage is close to project wide percentage.
Assumption 3: removing 1 factor equals removing 1 LLR task.
Average time for a sieve: 4 hours 10 minutes
Average time for a LLR: 17 hours 23 minutes
Factor finding stats: I found 237 factors in 2117 tasks, or 11.2% of tasks give a factor, i.e. every 8.93 tasks yield a factor
This would mean every 8.93 sieve units would remove 1 LLR task, right? With a bit of rounding here and there you could say 9 (sieves) * 4 hours (per sieve) = 36 hours to remove one task with a sieve. Taking into account that an LLR task needs to be double checked, removing one task by LLR would take about 35 hours of computing time.
But of course as you have also pointed out LLR task size steadily increases with increasing the N. So that makes me wonder how task duration increase differs between the sieve and llr. Under the assumption my previous logic is sound we are currently close to an equilibrium, which means that the time required to eliminate one task by LLR would need to grow quicker than the time it takes to sieve one out for the sieve to be of use... Do you know if that is the case?
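(The break-even arithmetic above, spelled out with assumptions 1-3 and the timings as given:)

```python
# Break-even estimate using assumptions 1-3 and the timings above.
sieve_task_h = 4 + 10 / 60       # 4 h 10 min per sieve task
llr_task_h = 17 + 23 / 60        # 17 h 23 min per LLR task
tasks_per_factor = 2117 / 237    # ~8.93 sieve tasks per factor

remove_by_sieve_h = tasks_per_factor * sieve_task_h  # ~37 h
remove_by_llr_h = 2 * llr_task_h                     # ~35 h (test + double check)
print(f"sieve: {remove_by_sieve_h:.1f} h per candidate removed")
print(f"LLR:   {remove_by_llr_h:.1f} h per candidate removed")
```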
The TRP Sieve is a little different from some other sieves because it only needs to sieve a discrete (and ever shrinking) set of K values.
I'm not sure if that answers your question or not.
Your answer combined with the info on sieving did answer the question, so thanks :)
____________
PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)
|
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13513 ID: 53948 Credit: 237,712,514 RAC: 0
|
Everything you said sounds reasonable and correct. I'm not sure who, if anyone, is keeping tabs on the optimal sieving range for TRP.
____________
My lucky number is 75898524288+1 |
|
|
|
I know I am doing a comparison that might be a little off, but let's look at the sieves in general.
When PSP was stopped it was getting less than .0017 factors per task.
321 was stopped when getting less than .024.
So percentage-wise this sieve seems to be doing very well. Again, we are sieving at a lot larger level than we are on LLR. If it takes 4+ hours to LLR at 10M, what will 19M take? Look at some of the projects with higher values and the times they take.
A good test would be to run a few test LLRs in the 19M range and then see where your speeds fit with the sieve. |
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13513 ID: 53948 Credit: 237,712,514 RAC: 0
|
When PSP was stopped it was getting less than .0017 factors per task.
321 was stopped when getting less than .024.
That's a good analysis, except that with sieves, the WUs are of arbitrary length. With LLR, the size of the WU is dictated by the number being analyzed; you can't decide to crunch half of a number. With sieves, the amount of sieving that's done is decided by us and can be set to anything. We could make each WU take 5 seconds or 5 years.
Therefore, a number such as "X factors per task" is completely meaningless. What would be meaningful is "X factors per hour".
In fact, the sieve WUs are not the same size amongst the different sieves, and they have changed over time. Your statistics do indeed tell you "X factors per task", but it's a completely meaningless number since the size of the tasks (especially for the PPS sieve) have varied over time. In the case of PPS, I'm pretty sure the tasks are currently at least 10 times (or more) larger than they used to be due to the advent of GPU sieving.
____________
My lucky number is 75898524288+1 |
|
|
|
In fact, the sieve WUs are not the same size amongst the different sieves, and they have changed over time. Your statistics do indeed tell you "X factors per task", but it's a completely meaningless number since the size of the tasks (especially for the PPS sieve) have varied over time. In the case of PPS, I'm pretty sure the tasks are currently at least 10 times (or more) larger than they used to be due to the advent of GPU sieving.
Not to mention the fact that PPS Sieve no longer uses a sieve file since we switched from sr2sieve to ppsieve, resulting in many factors being found multiple times by different WUs i.e. the reported factor rate is artificially high.
I will speak with some folk from the old Riesel Sieve project and see if we can figure out a good target depth for the sieve.
Cheers
- Iain
____________
Twitter: IainBethune
Proud member of team "Aggie The Pew". Go Aggie!
3073428256125*2^1290000-1 is Prime! |
|
|
|
OK guys, here is a "worst case scenario" of the optimal sieve depth:
In the sieve range from p=18P to p=19P it took
632,012,000 seconds to find 11,433 factors, which gives a removal rate of 55,279.629 seconds per factored candidate.
A test at nMax for:
k=2293 at n=49,999,799 takes 1,819,492.68561 sec/LLR test and
k=502573 at n=49,999,979 takes 2,710,048.861779 sec/LLR test,
which gives a total average testing time (including the doublecheck of each WU) of:
4,529,541.547389 sec/candidate n.
This means that your optimal sieve depth is:
(4,529,541.547389/55,279.629)*19P = 1,556.835509 Peta
or half that if you only sieve for optimal first-pass primality testing.
If you apply the general advice for calculating optimal sieve depth used at Conjectures'R'US (CRUS), then your optimal sieve depth is:
1,089.784856 Peta, or 70% of the previously calculated sieve depth. The rule of sieving only to 70% is generally used to compensate for k's not having to be tested all the way to nMax.
Hope this was useful to someone.
Take care
KEP |
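KEP's extrapolation, spelled out in Python (his timings; the key assumption, standard for sieves, is that the cost per factor grows roughly linearly with the sieve depth p):

```python
# KEP's optimal-depth estimate: cost per factor ~ p, so stop at
# p_opt = p_now * (LLR cost per candidate / current cost per factor).
sec_per_factor = 632_012_000 / 11_433                    # ~55,280 s at p = 19P
avg_test_sec = (1_819_492.68561 + 2_710_048.861779) / 2  # mean LLR test at nMax
llr_cost = 2 * avg_test_sec                              # first pass + double check

p_opt = 19 * llr_cost / sec_per_factor
print(f"optimal depth ~{p_opt:,.0f}P, "
      f"or ~{0.7 * p_opt:,.0f}P with the CRUS 70% rule")
```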
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13513 ID: 53948 Credit: 237,712,514 RAC: 0
|
Thanks, Kep.
Bottom line is we're currently many years away from optimal sieve depth. I'll change the project life-expectancy post accordingly.
____________
My lucky number is 75898524288+1 |
|
|
|
Hope this was usefull to anyone.
Yes it was, and we clearly have plenty more work to do! Sounds like a nice project for a GPU sieve :)
____________
Twitter: IainBethune
Proud member of team "Aggie The Pew". Go Aggie!
3073428256125*2^1290000-1 is Prime! |
|
|
|
Yes it was, and we clearly have plenty more work to do! Sounds like a nice project for a GPU sieve :)
Indeed; given the 100-year lifespan of this project, a GPU is an interesting option to explore. I came across this Java GPU compiler: https://github.com/pcpratts/rootbeer1
Although I'm sure native CUDA code is better than ported Java, there are many more people who know Java. So it might be interesting for someone who does know Java to check that out.
____________
PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)
|
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13513 ID: 53948 Credit: 237,712,514 RAC: 0
|
Way off topic... but the native language for both GPU platforms is C/C++, and there's no shortage of people who know those languages. It's not the language that's challenging. You need to design your algorithm and code very differently for a parallel processor. That's the hard part.
A Java/CUDA translator is kind of like a Spanish/English dictionary for a Spaniard going to a U.S. medical school. That's the (trivially) easy part, especially since Java and C++ are very similar languages. A better analogy might be an English(US)/English(GB) dictionary. :)
____________
My lucky number is 75898524288+1 |
|
|
|
Thanks, Kep.
You're welcome :)
Now could I ask you to please distribute a new sieve file where k=162941 and k=252191 are removed? I seem to keep getting an outdated sieve file containing 57 k's instead of the current 55 k's. The sieve file is named like this: "TRP_20110626.sieveinput"
Even though I know how to remove the 2 k's myself (using srfile) and replace the sieve file, I'm not sure that that is generally accepted, even though it would remove ~280,000 candidates and give a little overall speed increase :)
Take care
KEP |
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13513 ID: 53948 Credit: 237,712,514 RAC: 0
|
Not my area, but I'll make sure TPTB know about it.
____________
My lucky number is 75898524288+1 |
|
|
|
Way off topic... but the native language for both GPU platforms is C/C++, and there's no shortage of people who know those languages. It's not the language that's challenging. You need to design your algorithm and code very differently for a parallel processor. That's the hard part.
A Java/CUDA translator is kind of like a Spanish/English dictionary for a Spaniard going to a U.S. medical school. That's the (trivially) easy part, especially since Java and C++ are very similar languages. A better analogy might be an English(US)/English(GB) dictionary. :)
Ah okay. Thanks for explaining. This once again is a nice example of (as the Dutch say) me hearing the clock ring but not knowing where the clapper is hanging.
____________
PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)
|
|
|
|
Way off topic... but the native language for both GPU platforms is C/C++, and there's no shortage of people who know those languages. It's not the language that's challenging. You need to design your algorithm and code very differently for a parallel processor. That's the hard part.
A Java/CUDA translator is kind of like a Spanish/English dictionary for a Spaniard going to a U.S. medical school. That's the (trivially) easy part, especially since Java and C++ are very similar languages. A better analogy might be an English(US)/English(GB) dictionary. :)
Ah okay. Thanks for explaining. This once again is a nice example of (as the Dutch say) me hearing the clock ring but not knowing where the clapper is hanging.
It would be nice however if someone did develop a GPU app for this, even if it's not the most efficient version.
____________
@AggieThePew
|
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13513 ID: 53948 Credit: 237,712,514 RAC: 0
|
It would be nice however if someone did develop a GPU app for this...
I don't think anyone would disagree with that. :)
...even if it's not the most efficient version.
The devil is in the details. We're used to some awesome successes with GPU apps that absolutely blow their CPU equivalents away.
Those are the successes.
You don't see the failures because they aren't worth using. An inefficient GPU app generally uses an entire GPU PLUS an entire CPU core and runs SLOWER than the CPU version of the app.
Writing GPU apps is not trivial. The methodology is very different from that of a typical traditional computer program, and only some problems are well suited to being solved by a massively parallel processor. Some problems can't be solved efficiently in this manner.
That being said, I suspect the TRP sieve IS one of those problems that can be solved efficiently in parallel, but that's merely because this seems to be the case for other sieves. I've never looked at the code for the TRP sieve, and am not familiar with the algorithm.
My point, however, is that a GPU is not the magic bullet solution to every computing problem. There are many GPU projects that never made it into production.
____________
My lucky number is 75898524288+1 |
|
|
|
My point, however, is that a GPU is not the magic bullet solution to every computing problem. There are many GPU projects that never made it into production.
LOL you just burst the bubble :)
____________
@AggieThePew
|
|
|
|
Thanks, Kep.
You're welcome :)
Now could I ask you to please distribute a new sieve file where k=162941 and k=252191 are removed? I seem to keep getting an outdated sieve file containing 57 k's instead of the current 55 k's. The sieve file is named like this: "TRP_20110626.sieveinput"
Even though I know how to remove the 2 k's myself (using srfile) and replace the sieve file, I'm not sure that that is generally accepted, even though it would remove ~280,000 candidates and give a little overall speed increase :)
Take care
KEP
Almost two months on and my system still has the outdated input file. Can someone PLEASE stop us from wasting CPU time on sieving that doesn't need to be done?
____________
PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)
|
|
|
|
Thanks, Kep.
You're welcome :)
Now could I ask you to please distribute a new sieve file where k=162941 and k=252191 are removed? I seem to keep getting an outdated sieve file containing 57 k's instead of the current 55 k's. The sieve file is named like this: "TRP_20110626.sieveinput"
Even though I know how to remove the 2 k's myself (using srfile) and replace the sieve file, I'm not sure that that is generally accepted, even though it would remove ~280,000 candidates and give a little overall speed increase :)
Take care
KEP
Almost two months on and my system still has the outdated input file. Can someone PLEASE stop us from wasting CPU time on sieving that doesn't need to be done?
Sorry! :)
I am right now uploading a new sieve file with all factors removed and the last 2 prime k's found removed.
The sieve file range is to n=50M.
How long it lasts depends on how many primes we find before 50M.
Let's say that we sieve for 25M of LLR time, and you can work out some times.
Don't forget the DC :)
Or you can run a SoB LLR and see how many factors you can find in the same time.
Lennart
PS: Don't expect any big change in time. |
|
|
|
Errr, should the filename and size still be the same?
____________
PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)
|
|
|
|
Errr, should the filename and size still be the same?
No. TRP_20121003........................
Lennart
EDIT: I forgot to say that all 5000 WUs in the buffer need to be done with the old file first. |
|
|
|
EDIT: I forgot to say that all 5000 WUs in the buffer need to be done with the old file first.
Okay, I'm in. Just threw two i7's at this at least until the challenge, and then will return afterwards. It's wicked hot here and they could use a break from LLR anyway :-) It was my "pet project" for a long time, so why not. Still a long way to go to ruby though... Unscored 2-day mini-challenge, anyone?
--Gary |
|
|
|
Well, I ran about 160+ TRP Sieve tasks over the last 2+ days; partially shut down during that time. Not sure if anyone else joined in, but I hope we made a dent in the buffer for the new sieve file. It's hard to tell because the WU's seem to get constantly replenished to the 5000 count (according to the count on the home page).
Good luck to everyone on the upcoming challenge.
--Gary |
|
|
|
What is the 'n' being sieved in TRP-Sieve right now?
I've been seeing roughly one sieve factor a day found by me lately. I am crunching around 30 tasks a day; 30 tasks on my laptop take around 7 days on a core. That is roughly the time it takes to crunch a current PSP task, which is at around 14 million 'n'.
Are we sieving numbers that high, so that the sieve is worth it?
Thank you in advance! |
|
|
|
The sieve is working to eliminate any value of n<50,000,000 for all remaining k's. As Michael said a while back in this thread - the depth of the sieve (26-27P) has no relationship with the numbers being tested with LLR.
As an example, the last factor you found could be for n around 7M, in which case we've been "unlucky" as it may have been quicker just to LLR. On the other hand you may have found a factor for n around 49M, so your 7 days of sieving could have saved many weeks of primality testing that candidate with LLR.
KEP provided a useful analysis of the optimal sieve depth last year proving it is worthwhile to keep sieving!
OK guys, here is a "worst case scenario" of the optimal sieve depth:
In the sieve range from p=18P to p=19P it took
632,012,000 seconds to find 11,433 factors, which gives a removal rate of 55,279.629 seconds per factored candidate.
A test at nMax for:
k=2293 at n=49,999,799 takes 1,819,492.68561 sec/LLR test and
k=502573 at n=49,999,979 takes 2,710,048.861779 sec/LLR test,
which gives a total average testing time (including the doublecheck of each WU) of:
4,529,541.547389 sec/candidate n.
This means that your optimal sieve depth is:
(4,529,541.547389/55,279.629)*19P = 1,556.835509 Peta
or half that if you only sieve for optimal first-pass primality testing.
If you apply the general advice for calculating optimal sieve depth used at Conjectures'R'US (CRUS), then your optimal sieve depth is:
1,089.784856 Peta, or 70% of the previously calculated sieve depth. The rule of sieving only to 70% is generally used to compensate for k's not having to be tested all the way to nMax.
Hope this was useful to someone.
Take care
KEP
|
|
|
|
Thank you rob.
I wish I knew how the sieving process works. |
|
|
|
There is some info on how sieving works on the PG Wiki
http://primegrid.wikia.com/wiki/Fixed-K_sieve |
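The core trick, in a nutshell: p divides k*2^n - 1 exactly when 2^n ≡ k^(-1) (mod p), so a fixed-k sieve solves one discrete logarithm per (k, p) pair instead of trial-dividing every candidate. A minimal illustrative sketch of that idea (nothing like the optimized sr2sieve code; the function names are made up here, and `pow(x, -1, p)` needs Python 3.8+):

```python
from math import isqrt

def discrete_log2(target, p):
    """Smallest n with 2^n = target (mod p), by baby-step giant-step, else None."""
    m = isqrt(p) + 1
    baby = {}
    x = 1
    for j in range(m):        # baby steps: store 2^j mod p
        baby.setdefault(x, j)
        x = x * 2 % p
    giant = pow(2, -m, p)     # 2^(-m) mod p
    y = target % p
    for i in range(m):        # giant steps: walk target * 2^(-i*m) mod p
        if y in baby:
            return i * m + baby[y]
        y = y * giant % p
    return None

def smallest_n(k, p):
    """Smallest n >= 0 such that p divides k*2^n - 1 (odd prime p not dividing k)."""
    return discrete_log2(pow(k, -1, p), p)

print(smallest_n(5, 3))     # 1: 5*2^1 - 1 = 9 = 3*3
print(smallest_n(2293, 7))  # 1: 2293*2^1 - 1 = 4585 = 7*655
```

Once the smallest n is known, every later hit for that p recurs with period equal to the multiplicative order of 2 mod p, so one discrete log eliminates candidates across the whole n range.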
|
|
|
i7-2600K @ 4500MHz:
4-core: perf=4*9350MIPS, power=82.6W, time=5270s, ppd=546.10x64=~35K (HT off)
8-core: perf=8*5520MIPS, power=96.4W, time=8940s, ppd=546.10x76=~42K (HT on) |
|
|