Enigma Cuda 1.08 runtime 7 minutes - now 1.09 only a few seconds??


San-Fernando-Valley

Joined: 16 Jul 17
Posts: 3
Credit: 12,501,649
RAC: 3
Message 5892 - Posted: 18 Jul 2017, 4:50:40 UTC

Something has changed enormously between those two versions mentioned in the title.

Enigma cuda 1.08 (cuda_fermi) WUs used to take around 7 to 8 MINUTES, with credits over 2000.
Now with 1.09, since yesterday, they run a maximum of 16 SECONDS and give around 90 credits.

I am STOPPING GPU/NVIDIA crunching until the "problem" is explained to me.
"Not everything that can be counted counts, and not everything that counts can be counted"
- Albert Einstein (1879 - 1955)
ID: 5892
Profile TJM
Project administrator
Project developer
Project scientist
Joined: 25 Aug 07
Posts: 843
Credit: 69,856,335
RAC: 386,206
Message 5897 - Posted: 18 Jul 2017, 9:47:02 UTC - in response to Message 5892.  
Last modified: 18 Jul 2017, 9:47:30 UTC

The shorter workunits are from a different batch and there is nothing wrong with them, other than the fact that they're too short for fast GPUs.
There are a couple of different workunit lengths, as this was the first release and I was testing various options.
M4 Project homepage
M4 Project wiki
ID: 5897
San-Fernando-Valley

Joined: 16 Jul 17
Posts: 3
Credit: 12,501,649
RAC: 3
Message 5898 - Posted: 18 Jul 2017, 13:38:45 UTC

OK - thanks for your quick response!

So, when will you be finished testing?

I would like to start crunching "normal" WUs.

The short ones are "killing" my speed index.
I was up to over 34,000.000 and, after unknowingly running the short WUs, I am now under 2,800.000!?

Have a nice day.
ID: 5898
San-Fernando-Valley

Joined: 16 Jul 17
Posts: 3
Credit: 12,501,649
RAC: 3
Message 5899 - Posted: 18 Jul 2017, 13:48:19 UTC

... back up to 29,000.000 and over after I started to crunch (shorties) again ...

It's like magic ...
ID: 5899
Profile TJM
Project administrator
Project developer
Project scientist
Joined: 25 Aug 07
Posts: 843
Credit: 69,856,335
RAC: 386,206
Message 5900 - Posted: 18 Jul 2017, 14:06:54 UTC - in response to Message 5899.  

These WUs are real ones, just shorter.
Also, the workunit length is not constant; it fluctuates a bit, and the runtime depends on the initial machine settings.
ID: 5900
JugNut

Joined: 16 Mar 13
Posts: 24
Credit: 104,441,241
RAC: 554,644
Message 5901 - Posted: 18 Jul 2017, 15:48:33 UTC - in response to Message 5900.  
Last modified: 18 Jul 2017, 15:50:17 UTC

Yeah, besides the poorer credit for the G3s vs. the G4s, the main thing that annoys me is having the G4s and G3s mixed together. IMHO the G4s run better by themselves, whereas the G3s run more efficiently 2 or 3 at a time. Since both have the same plan class and both use the same app, I can't configure my app_config to treat them differently, i.e. run G3s 3 at a time and G4s 1 at a time. Perhaps they could come in batches, one after the other, or be alternated daily? Although I can easily imagine that may not be practical.
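For illustration, this is roughly why app_config can't separate the two batch types: the client matches settings only on app name and plan class. A hedged sketch (the app name below is an assumption based on this thread, not verified against the project):

```xml
<!-- app_config.xml sketch: one <app_version> entry matches BOTH G3 and G4
     workunits, because they share the same app name and plan class. -->
<app_config>
  <app_version>
    <app_name>enigma</app_name>          <!-- assumed app name -->
    <plan_class>cuda_fermi</plan_class>  <!-- plan class mentioned in this thread -->
    <avg_ncpus>1</avg_ncpus>
    <ngpus>0.33</ngpus>  <!-- "3 at a time" would apply to every WU, G3 and G4 alike -->
  </app_version>
</app_config>
```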

The long G4s give a decent amount more credit than the G3s do. Maybe try giving 120 credits instead of the 90 given now for the smallies, and maybe 1050 instead of 900 for the larger G3s. That may not seem like much, but when you're doing thousands of WUs a day it quickly adds up (or down, as it is now).
This would at least bring the G4s and G3s closer in line with each other credit-wise, and stop people cherry-picking and moving off when they see the smallies start to flow through. Some people dislike the smallies anyway, so giving them even more reason to dislike them by granting less credit is probably not a good idea. Of course, this is your show and this is just my opinion.
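Quick arithmetic shows how those per-WU deltas compound over a day; the credit numbers are JugNut's proposal, while the daily WU count is a made-up illustrative figure:

```python
# Sketch: how a small per-WU credit change compounds over a day of crunching.
# Credit values are from the proposal above; the throughput is hypothetical.
current = {"short_g4": 90, "long_g3": 900}
proposed = {"short_g4": 120, "long_g3": 1050}

wus_per_day = 3000  # hypothetical throughput for a fast GPU host

for kind in current:
    delta = proposed[kind] - current[kind]
    print(f"{kind}: +{delta} cr/WU -> +{delta * wus_per_day:,} cr/day")
```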

Cheers
ID: 5901
europe64

Joined: 9 Dec 15
Posts: 2
Credit: 26,791,613
RAC: 20,198
Message 5902 - Posted: 18 Jul 2017, 16:18:21 UTC

Hi,
my opinion is that credit should be granted in proportion to computing power.
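For reference, BOINC's canonical credit unit (the "cobblestone") is already defined in terms of computing power: 200 credits per day of work on a 1 GFLOPS machine. A minimal sketch of that pure-FLOPs model:

```python
# Sketch of BOINC's cobblestone definition: 1 day at 1 GFLOPS = 200 credits,
# so credit scales with total floating-point work, regardless of WU length.
SECONDS_PER_DAY = 86400
CREDITS_PER_GFLOPS_DAY = 200

def flops_to_credit(total_flops):
    """Credit for a given amount of floating-point work."""
    gflops_days = total_flops / (1e9 * SECONDS_PER_DAY)
    return gflops_days * CREDITS_PER_GFLOPS_DAY

# A 16-second WU on a GPU sustaining 1 TFLOPS ~ 16e12 FLOPs of work:
print(flops_to_credit(16e12))  # ~37 credits under this pure-FLOPs model
```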
ID: 5902
Profile TJM
Project administrator
Project developer
Project scientist
Joined: 25 Aug 07
Posts: 843
Credit: 69,856,335
RAC: 386,206
Message 5905 - Posted: 18 Jul 2017, 20:08:59 UTC - in response to Message 5902.  
Last modified: 18 Jul 2017, 20:15:30 UTC

With all the new batches of work I'll be aiming for a workunit length somewhere around the current g4* workunits, or slightly longer.
I can't make them too long, as the app's performance scales down quickly with GPU speed, and the processing time would be painfully slow on mid-range GPUs.
The BOINC server has a mechanism that allows sending longer workunits to faster hosts; at some point I'll look into it.

In the meantime, I have disabled the g3_alqfi87_2 and g3_alqfi87_3 and replaced them with the same batches but with workunits 10x and 20x longer.
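The scheduler mechanism mentioned above is, as far as I know, BOINC's "job size matching" feature. A hedged sketch of how a project enables it server-side (flag name taken from my reading of the BOINC server docs; treat it as an assumption):

```xml
<!-- Sketch: enabling job-size matching in the project's config.xml.
     Workunits must then be generated with a size class so the scheduler
     can route the larger classes to faster hosts. -->
<config>
  <job_size_matching/>
</config>
```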
ID: 5905





Copyright © 2017 TJM