New M4 batch - U-534 P1030680

Message boards : News : New M4 batch - U-534 P1030680

Profile TJM
Project administrator
Project developer
Project scientist
Joined: 25 Aug 07
Posts: 843
Credit: 68,798,435
RAC: 376,131
Message 2424 - Posted: 18 Feb 2013, 13:41:18 UTC

The work generator will be restarted today, running a new M4 batch on the VROL NMKA naval message.
At first there will be only a short test batch to check the server backend; after the tests, the server will resume in auto mode with lots of work.

It's also possible that the server will go down for a couple of hours in the next few days, as the system hard drive needs to be replaced.



M4 Project homepage
M4 Project wiki
ID: 2424
Profile Peciak
Joined: 27 Aug 09
Posts: 9
Credit: 68,434,620
RAC: 365,303
Message 2425 - Posted: 19 Feb 2013, 14:09:31 UTC

With great joy, the whole Polish National Team crew welcomes the project back among the "living".
We're starting to crunch.
ID: 2425
zombie67 [MM]
Joined: 2 Sep 07
Posts: 25
Credit: 10,393,748
RAC: 292
Message 2428 - Posted: 20 Feb 2013, 17:55:57 UTC

Yes!
Dublin, CA
Team SETI.USA
ID: 2428
zombie67 [MM]
Joined: 2 Sep 07
Posts: 25
Credit: 10,393,748
RAC: 292
Message 2437 - Posted: 26 Feb 2013, 4:49:45 UTC

Based on the current run rate, what is the projected duration of this batch? Just a rough estimate would be fine.

Thanks!
Dublin, CA
Team SETI.USA
ID: 2437
Aurel

Joined: 26 Sep 12
Posts: 17
Credit: 864,286
RAC: 61,698
Message 2438 - Posted: 26 Feb 2013, 10:19:55 UTC

So, I see we have a lot of new workunits. More than 91 million WUs have to be computed now, but why? A few days ago it was "only" 22 million WUs.
ID: 2438
Profile TJM
Project administrator
Project developer
Project scientist
Joined: 25 Aug 07
Posts: 843
Credit: 68,798,435
RAC: 376,131
Message 2439 - Posted: 26 Feb 2013, 11:02:47 UTC - in response to Message 2438.  

There is no way to guess when the batch will end. I hope we won't have to go through all the workunits.

The # of workunits changed because at first I added only 1/4 of the machine settings to the queue. There was a glitch which made adding certain settings impossible.

M4 Project homepage
M4 Project wiki
ID: 2439
Aurel

Joined: 26 Sep 12
Posts: 17
Credit: 864,286
RAC: 61,698
Message 2448 - Posted: 27 Feb 2013, 18:50:21 UTC - in response to Message 2439.  

There is only one way to be ready: compute, compute and compute. ;)
ID: 2448
Aurel

Joined: 26 Sep 12
Posts: 17
Credit: 864,286
RAC: 61,698
Message 2451 - Posted: 2 Mar 2013, 14:54:28 UTC

I see that m4_vroln72_3 is being run too.
On the server status page we only see the m4_vroln72_1 tasks. Will the status for m4_vroln72_3 be shown as well?
ID: 2451
Profile TJM
Project administrator
Project developer
Project scientist
Joined: 25 Aug 07
Posts: 843
Credit: 68,798,435
RAC: 376,131
Message 2465 - Posted: 8 Mar 2013, 0:07:41 UTC - in response to Message 2451.  

Server status will be fixed soon. Extracting the data currently puts real stress on the server, so I have to think about possible solution(s); probably the workunit info will be cached for 6-12 hours, with additional info added.


A bit of progress info:

m4_vroln72_1 ('naval' dictionaries): 20 restarts on average, minimum 4. This batch suffered a bit from a server bug; the workunit distribution was very chaotic at first and some of the combinations went as high as 1400+ restarts.

m4_vroln72_3 ('u534' dictionaries): 11 restarts on average, minimum 5.


M4 Project homepage
M4 Project wiki
ID: 2465
Profile TJM
Project administrator
Project developer
Project scientist
Joined: 25 Aug 07
Posts: 843
Credit: 68,798,435
RAC: 376,131
Message 2467 - Posted: 10 Mar 2013, 21:26:17 UTC - in response to Message 2465.  
Last modified: 10 Mar 2013, 21:28:06 UTC

Stats from today:
m4_vroln72_3 - 16.5 avg restarts, minimum 10
m4_vroln72_1 - 20.6, minimum 5.

I tweaked the fetcher code a bit for even smoother workunit distribution; it now slightly boosts the priority of blocks which have 0 results in progress.
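A toy version of that ordering rule (the data layout and names here are assumptions for illustration; the real fetcher code is not public):

```python
# Fetcher-tweak sketch: among candidate blocks, serve blocks with zero
# results in progress first, then fall back to the usual priority order.
def fetch_order(blocks):
    # blocks: list of (block_id, priority, n_results_in_progress)
    return sorted(blocks, key=lambda b: (b[2] != 0, -b[1]))

print(fetch_order([("a", 5, 2), ("b", 3, 0), ("c", 9, 1)]))
# "b" comes first (nothing in flight), then "c" and "a" by priority
```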

Btw, yesterday I upgraded the BOINC server code to the latest revision due to possible security bugs. Badges are not displayed because my old code is incompatible; this will be fixed soon.
M4 Project homepage
M4 Project wiki
ID: 2467
Profile TJM
Project administrator
Project developer
Project scientist
Joined: 25 Aug 07
Posts: 843
Credit: 68,798,435
RAC: 376,131
Message 2472 - Posted: 13 Mar 2013, 10:20:15 UTC - in response to Message 2467.  

I added stats info to the server_status, now for both batches.
Please don't be scared by the huge number of workunits listed there; that's only because I set the target # of results to 2000. This does not mean that all the workunits have to be processed.



M4 Project homepage
M4 Project wiki
ID: 2472
Profile TJM
Project administrator
Project developer
Project scientist
Joined: 25 Aug 07
Posts: 843
Credit: 68,798,435
RAC: 376,131
Message 2496 - Posted: 22 Mar 2013, 10:11:07 UTC - in response to Message 2472.  

Both batches are near 30 restarts (with some blocks lagging behind, as usual).

Based on my findings from test runs, I've added a server option that temporarily boosts the priority of a group of workunits every time a result scoring near the current top score (~0.95+) is received. The boost only triggers once per machine setup, so duplicate results won't fire it again.

That's because very often a partial decrypt is sitting somewhere around the top results, and it's very hard to notice until there's a plaintext to compare it to.




M4 Project homepage
M4 Project wiki
ID: 2496
TBREAKER
Joined: 26 Sep 11
Posts: 29
Credit: 0
RAC: 0
Message 2499 - Posted: 4 Apr 2013, 4:39:09 UTC
Last modified: 4 Apr 2013, 5:01:09 UTC

Cryptanalysis is hard work. Don't be impatient! Maybe some people have no idea about the complexity of the work:

An example for a single CPU (Enigma M4 hillclimbing):

4 · 336 · 26 · 26 · 26 · 26 (positions) · 26 · 26 (rings) · TIME (maybe 50 ms) = 20,759,140,147 s = 658.2 years!

Now you can divide this time by the participating CPUs.

Success? No guarantee...

The hillclimbing algorithm has to test several thousand plugs at every single ring/position.

In comparison, brute force would need 150,738,274,937,250 plug tests at every single ring/position!!! --> Not feasible...
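The arithmetic above is easy to reproduce; note that the 50 ms per hillclimb is the post's assumed figure, not a measurement:

```python
# Rough single-CPU runtime for an exhaustive M4 hillclimb, using the
# figures from the post (50 ms per hillclimb is an assumption).
wheel_orders = 4 * 336      # reflector/thin-wheel and wheel-order combinations
positions = 26 ** 4         # rotor start positions
rings = 26 ** 2             # ring settings
time_per_key = 0.05         # seconds per hillclimb (assumed)

total_seconds = wheel_orders * positions * rings * time_per_key
years = total_seconds / (365 * 24 * 3600)
print(f"{total_seconds:.0f} s = {years:.1f} years")  # ~658 years on one CPU
```

Dividing by the number of participating CPUs gives the project-wide estimate.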

@TJM: Can you tell us how long your software needs for a single hillclimb at one position/ring? Maybe you can calculate the average from several runs... In other words: how long does the Enigma@Home project need for the complete keyspace?

All the best

Michael
-=> Breaking German Navy Ciphers - The U534 Enigma messages <=-
ID: 2499
Profile TJM
Project administrator
Project developer
Project scientist
Joined: 25 Aug 07
Posts: 843
Credit: 68,798,435
RAC: 376,131
Message 2500 - Posted: 4 Apr 2013, 7:05:01 UTC - in response to Message 2499.  
Last modified: 6 Apr 2013, 10:56:32 UTC

1000 passes over a single key for a 72-letter ciphertext take around 2 seconds (Q9450 @ default clock), or 1.4 s when running the optimized app (gcc 4.3.5, -march=core2 -mtune=core2).

A decent quad-core machine running the compiler-optimized app can surely do at least 1 full walk over the keyspace per year when using 4 cores 24/7. Modern i7-based CPUs are even faster; I'd say even 2 walks per year would be possible on top models.

That's a lot of time, especially considering that short texts usually require lots of restarts. I'm currently looking into possible solutions for a CUDA-accelerated app.


On average, when running a 72-letter text, the project does a full restart (a walk through the entire M4 keyspace) in less than 24 hours.
This is split between two separate batches with different dictionaries assigned: the first set uses Stefan Krah's naval dictionary, which was used in the M4 project. The second runs a set based on the decoded U-534 messages.

If you'd like to take a look at the server output data, let me know.
The server updates lots of info in realtime as results are returned - this includes the current keyrange distribution, the full result list sorted by score, the current work queue and some additional stats/diagnostic info.
Unfortunately, due to the massive number of workunits I can't make the 'live' scripts public, as it would surely kill the server.

EDIT: visualisation of project speed:


M4 Project homepage
M4 Project wiki
ID: 2500
TBREAKER
Joined: 26 Sep 11
Posts: 29
Credit: 0
RAC: 0
Message 2501 - Posted: 4 Apr 2013, 17:39:02 UTC

Thank you very much for the information.

2 walks per year seems very fast for a single machine. I'm very impressed by the speed of the project (24 h). I still have trouble understanding what a "pass" is. Does 1000 passes mean 1000 plug tests? Or maybe 1000 restarts of the plug algorithm?

Nvidia's CUDA is very interesting, but it is hard to parallelize software that was originally written for a single-CPU system.

All the best

Michael
-=> Breaking German Navy Ciphers - The U534 Enigma messages <=-
ID: 2501
Profile TJM
Project administrator
Project developer
Project scientist
Joined: 25 Aug 07
Posts: 843
Credit: 68,798,435
RAC: 376,131
Message 2502 - Posted: 4 Apr 2013, 18:19:57 UTC - in response to Message 2501.  

A single "restart" is a walk over a subset of the machine keyrange, doing a single hillclimb on each of the possible wheel settings.

All the current workunits do one pass only; however, it's possible to assign "n" passes to a workunit. The app then iterates through all wheel settings and, upon reaching the end of the given key range, restarts from the beginning, decreases "n" by 1 and does the next pass; this is repeated until n = 0.
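As a sketch (with made-up function names, not the actual app code), the pass logic reads roughly like this:

```python
# Hypothetical sketch of the multi-pass loop described above: walk the
# assigned key range, hillclimb each wheel setting, then wrap around and
# repeat until the requested number of passes is used up.
def run_workunit(key_range, n_passes, hillclimb):
    best_score = None
    while n_passes > 0:
        for key in key_range:                # one full pass over the range
            score = hillclimb(key)
            if best_score is None or score > best_score:
                best_score = score
        n_passes -= 1                        # restart from the beginning
    return best_score
```

With n_passes = 1 this reduces to what the current workunits do.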


M4 Project homepage
M4 Project wiki
ID: 2502
Profile TJM
Project administrator
Project developer
Project scientist
Joined: 25 Aug 07
Posts: 843
Credit: 68,798,435
RAC: 376,131
Message 2503 - Posted: 4 Apr 2013, 21:19:50 UTC - in response to Message 2502.  
Last modified: 4 Apr 2013, 21:21:49 UTC

One more thing: because everything here runs in asynchronous mode and I have little to no control over workunits which are already in progress (in the 'sent' state), I have also added the second unbroken text to the queue.

This is because BOINC has no reliable mechanism for cancelling work that has already been sent.
I guess Enigma@Home is the only project (or one of very few; another might be the distributed.net wrapper) where the solution may be found at any time, after which the rest of the workunits from the batch are no longer needed.

The work generator stops every time the top score changes, and the server waits for a decision on what to do next. In the worst case it will either run dry for a while (until I remove the stop flag) or send out some workunits which are no longer needed (usually a few hundred).

The real problem sits in the 'in progress' workunits, because even if I kill them on the server, that does not guarantee they'll be killed on the clients.
A client has to contact the server to notice that the workunit state has changed, and most of the time there is a massive number of workunits in progress.
For example, at this moment there are nearly 110k workunits on the clients, which translates to roughly 2.5 walks over the M4 key range.

Running two texts in parallel surely slows things down by a factor of 2 (from a single text's point of view), but if one of the texts is broken, it'll save some CPU power (50% fewer workunits to abort).
M4 Project homepage
M4 Project wiki
ID: 2503
Profile TJM
Project administrator
Project developer
Project scientist
Joined: 25 Aug 07
Posts: 843
Credit: 68,798,435
RAC: 376,131
Message 2511 - Posted: 12 Apr 2013, 19:32:39 UTC - in response to Message 2503.  

The current status for VROLN is 37+39 full keyspace walks.

During the short maintenance today (I was testing a new UPS and its software - and eventually managed to shut down the server accidentally) I took a snapshot of the top results; if anyone would like to take a look, they are here:

http://www.enigmaathome.net/static/3278xxyvui/vroln721.txt
http://www.enigmaathome.net/static/3279x7v5d3/vroln723.txt
M4 Project homepage
M4 Project wiki
ID: 2511
TBREAKER
Joined: 26 Sep 11
Posts: 29
Credit: 0
RAC: 0
Message 2512 - Posted: 12 Apr 2013, 20:37:11 UTC

Thank you for sharing the top results!

I had a look at them too...

All the best

Michael
-=> Breaking German Navy Ciphers - The U534 Enigma messages <=-
ID: 2512
Profile TJM
Project administrator
Project developer
Project scientist
Joined: 25 Aug 07
Posts: 843
Credit: 68,798,435
RAC: 376,131
Message 2513 - Posted: 13 Apr 2013, 12:25:48 UTC - in response to Message 2512.  
Last modified: 13 Apr 2013, 12:45:56 UTC

I noticed a very serious problem that affects at least some texts.
For example, P1030655 is unbreakable when using the "naval" dictionary.

On a single key it breaks after 180 retries (worst case) with a score of 1.47M.
However, when running with an unknown key it will never break, because 1.47M is lower than the average output score for a 72-letter text, which is around 1.6M. Even if the result is found, it gets overwritten by garbles with higher scores. The highest-scoring random outputs are around 1.8M.

The same text is broken after just 3 retries when using the "u534" dictionary. The top score is around 1.85M, with an average score around 1.2M.

This shows that a good trigram dictionary is critical when attacking short texts, and it's not just a single case -> https://docs.google.com/spreadsheet/ccc?key=0AhS-kPmFI4OxdGVwd3VSZHk3cUx4SkZoR2FFMzRWS1E#gid=0
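The masking effect is easy to demonstrate with toy numbers taken from the post (a 1.47M true decrypt against garbles scoring up to ~1.8M):

```python
# Toy illustration of the masking problem: when random garbles outscore the
# true decrypt, a top-N list sorted by trigram score never surfaces the
# solution. Labels and values here are illustrative only.
results = [("garble", 1_800_000), ("garble", 1_650_000),
           ("true decrypt", 1_470_000), ("garble", 1_600_000)]

top3 = sorted(results, key=lambda r: r[1], reverse=True)[:3]
found = any(label == "true decrypt" for label, _ in top3)
print(found)  # False - the real break hides below higher-scoring garbles
```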


TBREAKER - do you have any bigram/trigram (or trigram-only) dictionary that could be used?
M4 Project homepage
M4 Project wiki
ID: 2513




Copyright © 2017 TJM