Q.: What do I have to keep in mind when overclocking my GPU for the GFN project?
A.: There is no general answer that will apply to all users/systems/configurations, but here are some general guidelines:
1. As a rule of thumb, try to keep the card under 80°C. It has been observed that once the 80° range is exceeded, there might be stability issues.
2. If you are running on a laptop, don't overclock the GPU. It is generally not a good idea to run the most computationally intensive tasks on a laptop, anyway.
3. Go up in small increments when overclocking. For the most part, the shader and core clocks are linked for GFN work (the exception is for GTX 2xx cards and related Quadro/Tesla cards built on these chips). When you find a clock that is unstable, back off a couple of overclock steps and you should be fine (assuming #1 above isn't an issue).
4. If you are getting errors, lower the clocks on the GPU and consider running the GPU at stock clocks for GFN. (A "maxErr exceeded" error typically indicates an overclocking-related problem.)
5. The GFN project appears to be particularly sensitive to memory overclocks. These don't gain you greatly in overall speed increases anyway, so I wouldn't recommend playing with them at all. Indeed, if you are not overheating and haven't done much to the shader clocks but are still getting stability issues, you might consider downclocking the memory (similarly to the workaround for the GTX 550 Ti cards).
6. Remember that overclocking necessarily uses more power. This in turn puts more stress on your power supply, thereby increasing overall heat. This leads to several possible issues, including:
A) overstressing the power supply, resulting in a shortened PS life span, possible shorts in other parts of your system such as the motherboard, and non-GPU instability;
B) hotter-running GPUs and CPUs (I have reduced heat in some systems just by installing a more powerful/efficient PS); and
C) problems with GPUs that do not have extra power connectors (i.e., the PCIe slot is limited to 75 W; overclocking some cards without external power can exceed this limit, and running at the limit can produce instability in some systems).
7. That GT530 you bought is never going to be a GTX 560 Ti (or fill in whatever card comparison you like). That is, you are not going to make a mid-range card out of an entry-level card nor are you going to make a top-end one starting with a mid-range. You may find that you can take a card from a particular series and overclock it successfully to perform at or near the stock clocked card from the next higher series (e.g., I have my wife's superclocked EVGA GTX 550 Ti performing about as well as a stock clocked GTX 460), but you are not going to be able to do better than that 99% of the time (*note: there may be rare exceptions). If you really want a top-end card, buy one...you aren't going to overclock your way there with something else.
Q.: How do I tell the BOINC client to run GeneferCUDA on the correct GPU on a multi-GPU system?
A.: If you are running BOINC client 6.13.x or higher, you can use the <exclude_gpu> construct in cc_config.xml to tell the client not to use a specific GPU for geneferCUDA. It tells the client: don't use the given GPU for the given project.
- If <device_num> is not specified, all GPUs of the given type are excluded.
- <type> is required if your computer has more than one type of GPU; otherwise it can be omitted.
- <app> specifies the short name of an application (i.e., the <name> element within the <app> element in client_state.xml). If specified, only tasks for that app are excluded.
- You may include multiple <exclude_gpu> elements. (New in BOINC 6.13.)
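As a sketch, a cc_config.xml using this construct might look like the following. The project URL and the app short name "genefer" are placeholders here; take the actual URL from your client and the actual short name from client_state.xml.

```xml
<cc_config>
  <options>
    <exclude_gpu>
      <!-- project URL as it appears in your BOINC client (placeholder) -->
      <url>http://www.primegrid.com/</url>
      <!-- GPU to exclude; device numbers are 0-based -->
      <device_num>1</device_num>
      <!-- only required if the machine has more than one type of GPU -->
      <type>nvidia</type>
      <!-- short app name from client_state.xml (placeholder) -->
      <app>genefer</app>
    </exclude_gpu>
  </options>
</cc_config>
```

After editing cc_config.xml, re-read the config file (or restart the client) for the exclusion to take effect.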
Q.: Does a positive hit with Genefer prove that the tested number is prime?
A.: The test performed by Genefer is a PRP test, so a positive hit with Genefer doesn't prove that the tested number is prime; it only verifies that it is a probable prime (PRP). That means a deterministic primality test still has to be performed on the number to prove it prime. However, for the large GFN tasks, the chance that a reported PRP turns out to be a pseudoprime rather than a prime is quite low.
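As a toy illustration of the idea (Genefer itself uses fast transform arithmetic, not Python's pow), a Fermat PRP test simply checks whether a^(N-1) ≡ 1 (mod N) for some base a. Pseudoprimes such as 91 show why a passing result is evidence, not proof:

```python
def fermat_prp(n: int, base: int = 3) -> bool:
    """Fermat probable-prime test: n passes if base^(n-1) == 1 (mod n).
    A toy stand-in for Genefer's transform-based test -- a pass means
    "probable prime", not "proven prime"."""
    return pow(base, n - 1, n) == 1

# GFN numbers have the form b^(2^n) + 1:
print(fermat_prp(2**16 + 1))  # 65537, a genuine prime -> True
print(fermat_prp(5**4 + 1))   # 626 = 2 * 313, composite -> False
print(fermat_prp(91))         # 91 = 7 * 13, a base-3 pseudoprime -> True!
```

The last line is the point: 91 passes the base-3 test despite being composite, which is why a deterministic test is still needed after a PRP hit.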
Q.: Why does my GPU task running Genefer appear to slow down?
A.: The cause of this could be a screensaver, which forces the GPU to render screensaver graphics rather than crunching the task. Also make sure that the computer does not go to sleep when you don't use it.
Q.: How can I tell if my GPU is double precision?
A.: Every CUDA-enabled card has a "Compute Capability" (or CC) level. All cards with CC 1.3 or higher support double precision. The CC of a specific card is listed in the specifications for that card.
Q.: Why is my GFN WU progressing so slowly?
A.: GFN tasks run very slowly on some cards. For example, the 600-series Nvidia cards are really bad at the double-precision arithmetic that Genefer performs. The 600 series is much better at single-precision arithmetic and would be more suited to a project like PPS Sieve.
Q.: When using standard preferences (not app_info), how do I avoid getting GFN tasks for the CPU and only get GPU tasks instead?
A.: You can try the following:
1. On the prefs page, uncheck "Use NVIDIA GPU" and "Use ATI GPU", and select only the LLR subprojects you want to run.
2. Open BOINC and get a couple of WUs. It may start to say "no selected work available". If so, do manual updates until you get what you want (it may take a few minutes).
3. After getting LLR work, go back to the prefs page, select GFN GPU, and check "Use NVIDIA GPU" (leave only "Use ATI GPU" unchecked).
4. Update BOINC (you should get only GPU tasks this time).
After that, BOINC only requests the selected work.
Q.: How can I prevent computation errors with GeneferCUDA?
A.: Please see this post.