Mr.Pernod writes: If you happen to know which processor(s) they use as a reference, I can try to locate one and compare numbers, no problem.
It is a hypothetical machine; I do not think one like that exists. I think they simply run a reference unit on some machine(s) and then recalculate its CPU time value accordingly. The value is then passed with every WU, but the real value can vary, depending on the character of the unit. It is really valid only for the reference WU.
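A rough sketch of what such a recalculation might look like: the reference WU's runtime on the hypothetical reference machine is scaled by the ratio of machine speeds. All names and numbers here are hypothetical illustrations, not the actual BOINC scheduler logic.

```python
# Hypothetical sketch: scale a reference WU's CPU time from a reference
# machine's throughput to a concrete host's throughput. The real scheduler
# logic is more involved; this only shows the basic proportionality.

def estimate_cpu_time(ref_cpu_seconds, ref_flops, host_flops):
    """Scale the reference WU's runtime by the ratio of machine speeds."""
    return ref_cpu_seconds * ref_flops / host_flops

# Reference unit took 36000 s on a (hypothetical) 1 GFLOPS reference host;
# a 2 GFLOPS host would then be expected to finish in about 18000 s.
est = estimate_cpu_time(ref_cpu_seconds=36000.0,
                        ref_flops=1e9,
                        host_flops=2e9)
print(est)
```

Since the scaling is linear in the reference runtime, the estimate is only as good as the assumption that every WU behaves like the reference one, which is exactly the caveat above.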
Mr.Pernod writes: A note about the NNW with BoincManager 5.2.13 and core client 5.3.11: BoincManager does not display a message in the message tab when toggling the function, but it does work.
What is NNW? No messages at all with that manager? Well, I advise installing it over a matching BOINC version; that may fix the problem. The BOINC dev team changed the RPC ports lately, so mismatched versions of the manager and the core may indeed fail to communicate.
Mr.Pernod writes: The No New Work / Allow New Work button. This function gives no feedback with this particular combination, but the actual functionality is intact.
Yes, I saw the official version had some changes in the messaging. Some messages were removed, but I have no idea why. This might well be one of them. Maybe it was just a mistake, but I preferred to keep in sync with all the official modifications.
I didn't stop the client, just ran the benchmark a few times.
I just restarted BOINC on the machine, but the benchmarks keep going up and down like crazy.
On the single-CPU Athlon XPs it gave around 2 GFlops / 5.9 GIops the first time round and remained close to those numbers when rerunning them;
the Xeons also reported good scores; only the dual Athlon MP gives me these weird results.
I have seen fluctuating benchmarks before, both with standard and optimized BOINC clients, but never this far apart.
Ah, so it was you who started the benchmarks manually! I thought that was the problem you spoke about, and that the client ran the benchmarks unattended in a loop.
Greatly varying benchmarks are very common with all versions I have seen (not only mine). This was also one of the reasons for introducing the calibration. With calibration, the benchmarks are irrelevant, and you should avoid running them too often; each benchmark run then requires additional time and WUs to adjust the calibration.
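To see why fluctuating benchmarks matter at all, here is the classic benchmark-based claimed-credit formula (100 cobblestones per day of work on a host measuring 1 GFLOPS Whetstone and 1 GIPS Dhrystone). The sketch below shows how a 3x swing in the benchmark reading swings the claim by the same factor, which is what calibration is meant to compensate for; the numbers are illustrative only.

```python
# Classic benchmark-based claimed credit: 100 cobblestones per day
# on a host benchmarking 1 GFLOPS / 1 GIPS.

def claimed_credit(cpu_seconds, fpops_per_sec, iops_per_sec):
    avg_gops = (fpops_per_sec + iops_per_sec) / 2 / 1e9
    return cpu_seconds / 86400 * avg_gops * 100

# Same 10000 s of CPU time, two benchmark runs on the same host:
low = claimed_credit(10000, 1.0e9, 3.0e9)   # benchmarks at the low end
high = claimed_credit(10000, 3.0e9, 9.0e9)  # a 3x higher reading
print(low, high)  # the claim triples with the benchmark reading
```

With calibration, the correction factor absorbs this scaling, so the raw benchmark numbers stop mattering, as the post above says.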
I just looked up some old benchmark scores from the stock 5.2.13 and an optimized 5.3.2, but those were pretty consistent over several runs.
A fluctuation this big will require recalibration every 5 days, when the automated benchmarks are run.
Dorsai posted a strange thing concerning "suspend network activity" in 5.3.11.tx31 in the thread over on the SETI boards.
Dorsai writes: Re the network thing, very odd, but Boinc just popped up a message, "16/01/2006 18:43:11||Suspending network activity - user request"... Odd... I turn it off, it turns back on, then a few mins later turns itself back off... Odd... :-/
I have not been able to reproduce this behaviour, but this is what happens when I suspend network activity through the menu in BOINCManager and then select a project and click the Update button.
I can't reproduce Dorsai's exact issue.
When I suspend network activity, it stays suspended while a project is trying to upload, but when I force an update with the Update button, network activity is resumed.
There must be some serious changes in 5.3.11 compared to the recommended 5.2.13.
OK, the dual Athlon MP went crazy.
In addition to the extreme variations in benchmarks, it also started reporting extreme benchmark values to the SIMAP project.
As I have already explained several times, the benchmarks are completely irrelevant and you do not need to worry about them. Also, when I run 5 subsequent benchmarks with the same client (of any version and any author), I usually get 3-5-fold differences between the extreme results. And I am definitely not alone; you can see it reported many times on the S@H board. Still, as I said: it is completely irrelevant, and you can quietly ignore it.
As for the network connectivity - it is a known bug in the official version, but I assumed it was fixed in the official 5.3.11. Apparently not yet completely.
trux, I am having a serious problem with SIMAP here.
Look at this host and its results.
The benchmark values in the host properties matched the values in client_state.xml last night, but seem to increase with every result returned.
CPU type: AuthenticAMD AMD Athlon(tm) XP 2800+
Number of CPUs: 1
Measured floating point speed: 2072.87 million ops/sec
Measured integer speed: 6004.86 million ops/sec
As I already mentioned, it is quite possible that it does not work well at some projects. I know nothing about SIMAP, but I can imagine this happening if the project reports the estimated value too high (a theoretical maximum), and the client, at practically all WUs, stops the unit before completing it, having found the final result or another stop condition prematurely. If no unit runs to completion, the client has no chance to estimate the full length, and starts treating the short aborted units as full ones, claiming full credit for them. In such a case, only calibrating with a reference unit of a known value would help, but since I have such a unit just for S@H, I am afraid it would not be easy. I might create some empirical table with additional coefficients for such improperly behaving projects, but that may be rather difficult too, especially because I do not participate in many projects and do not know the behaviour of all of them.
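A toy illustration of the failure mode described above, with entirely made-up numbers: if nearly every unit stops early and the client mistakes those short runs for full-length ones, the claim per unit is inflated by the ratio of the estimated runtime to the actual one.

```python
# Hypothetical numbers: the server-side estimate says a unit is 8 hours
# of work, but in practice units hit a stop condition after about 1 hour.

full_estimate_s = 28800.0        # server-side estimate: 8 hours
full_unit_credit = 80.0          # credit a genuine 8-hour unit would earn
runs_s = [3600.0] * 10           # actual CPU time: units stop after ~1 h

# Mistaking each early-stopped unit for a full one claims full credit:
claimed = full_unit_credit * len(runs_s)

# A fair claim would scale by the fraction of the estimate actually run:
fair = sum(t / full_estimate_s * full_unit_credit for t in runs_s)

print(claimed, fair)  # 800.0 vs 100.0 - an 8x over-claim
```

The over-claim factor is exactly the estimate-to-actual runtime ratio, which is why an overstated server-side estimate is enough to trigger it.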
I'll probably add a table with some limiting values for individual projects later, but I may need help from a larger number of users participating in all those projects to collect the necessary statistical data.
The second machine is bottoming out at about 50% claimed compared to average granted; I'm pulling that one from SIMAP as soon as I get home.
The same machines are doing fine on LHC@home, even though that project has wildly varying result runtimes (from 10 to 25,000 seconds).
I posted this issue on the SIMAP forums, asking if they were running anything special server-side.
On the server side we are running the standard daemons from the BOINC distribution, except for the validator.
For now the conclusion is that the estimated runtimes registered server-side are way off (8 hours as opposed to 1 hour actual).
The project has corrected this, and I will attempt another test tomorrow evening, when results with the new estimated runtimes should be available.