
POSTMORTEM

"4096 bit ~ 96 hours is what he said.

IDK why, but when he took the challenge, he posted that it'd take 36 hours"

So @cbsa wrote. And @nitwhiz asked: "but the statement was that op's i3 did it in 11 hours. So there must be a result already, which can be verified?"

I added time because I was in the middle of a port involving ArbFloat so I could get arbitrary precision. I had a crude Desmos graph doing projections on what I'd already factored, to get an idea of how long larger bit lengths would take.

@p100sch speculated about the walked-back time and about me overstating the rig's capabilities. In reality I spent a lot of time trying to get it 'just so'.
Worse, because I had to resort to "Decimal" in Python (and am currently experimenting with the same in Julia), both of which are immutable types, the GC was taking > 25% of the CPU time.

Performance-wise, the numbers I cited in the actual thread, as of this time:
the largest product factored had 32-bit factors, 1855526741 * 2163967087, and took 1116.111 s in Python.
The Julia build used a slightly different method and managed to factor a product of 27-bit primes, 103147223 * 88789957, in 20.9 s,
but this wasn't typical.

What surprised me was the variability. One bit length could take 100 s, or even a couple thousand seconds, while a product that was 1-2 bits longer could return a result in under a minute, sometimes in seconds.

This started cropping up, ironically, right after I posted the thread. What's a man to do?

So I started trying a bunch of things, some of which worked. Shameless as I am, I accepted the challenge. Things weren't perfect, but it was going well enough. At that point I hadn't slept in ~30 hours, so when I thought I had it, I let it run and went to bed. 5 AM comes, I check the program. Still calculating, and way overshot. Fuuuuuuccc...
So here we are now, and it's safe to say the world's not gonna burn if I explain it, seeing as it doesn't work, or at least only works some of the time.

Other people, much smarter than me, mentioned it may be a means of finding more secure pairs, and maybe so; I'm not familiar enough to know.

For everyone that followed, commented, those who contributed, even the doubters who kept a sanity check on this, without whom this would have been an even bigger embarrassment, and the people with their pins and tactical dots: thanks.

So here it is.
A few assumptions first.
Assuming p = the product,
a = some prime,
b = another prime,
and r = a/b (where a is smaller than b)
w = 1/sqrt(p)
(I also experimented with w = 1/sqrt(p)*2, but I kept overshooting by a very small margin)

x = a/p
y = b/p

1. For every two numbers there is a ratio (r) that you can search for among the decimals, starting at 1.0 and counting down. You can use it to recover the original factors, e.g. p*r = n, then p/n = m (assuming the product has only two factors), instead of having to do a sieve.

2. You don't need the first number you find to be the precise value of a factor (we're doing floating point math); a large subset of decimal values will naturally 'fall' into the value of a (or b) plus some fractional component, which is lost. Some of you will object, "But if that's wrong, your result will be wrong!", but hear me out.

3. You round the first factor 'found', and from there you take the result and do p/a to get b. If 'a' is actually a factor of p, then mod(b, 1) == 0, and then naturally a*b SHOULD equal p.
If not, you throw out both numbers, rinse and repeat. (Toy example below.)
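
A toy example of the round-and-check step, with primes small enough to eyeball (101 and 103; the 0.0097 is just some candidate ratio in the right neighborhood):

p = 10403                          # 101 * 103
a = round(0.0097 * p)              # the candidate ratio 'falls into' a = 101
b = p / a                          # 10403 / 101 = 103.0
print(b % 1 == 0 and a * b == p)   # True, so 101 and 103 are the factors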

Now I knew this could be faster. I realized the finer the representation, the less important the fractional digits further right in the number were; it was just a matter of how much precision I could AFFORD to lose and still get an accurate result for r*p = a.

Fast forward through a lot of experimentation. I was hitting a lot of worst-case time complexities, where the most significant digits had a bunch of zeroes in front of them, so starting at 1.0 was a no-go in many situations. I started looking and realized I didn't NEED the ratio of a/b, I just needed the ratio of a to p.

Intuitively it made sense, but starting at 1.0 was blowing up the calculation time, and this made it so much worse.

I realized I could start at r = 1/sqrt(p) instead, because of certain properties: 1. the fractional result would ALWAYS be close to one of the factors' ratios n/p, and 2. it looked guaranteed that r = 1/sqrt(p) would ALWAYS be less than at least one of the primes' ratios, putting a bound on the worst case. (Since p = a*b with a <= b, we have a <= sqrt(p) <= b, and so a/p <= 1/sqrt(p) <= b/p.)

The final result, in executable pseudocode (Python, lol), looks something like the above variables plus:

while w <= 1:                      # w only ever increases, so bound it above
    n = round(w * p)               # candidate factor from the current ratio
    if (p / n) % 1 == 0:           # does the candidate divide p exactly?
        x = n
        y = p / n
        if x * y == p:
            print("factors found!")
            print(x)
            print(y)
            break
    w = w + i                      # i = the increment (see comments below)
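
Fully spelled out, with the setup the snippet assumes (toy 16-bit primes here; the precision and step size are placeholders I'm still tuning, not magic values):

from decimal import Decimal, getcontext

getcontext().prec = 40                # arbitrary precision via Decimal
p = Decimal(40093) * Decimal(40763)   # product of two known primes, for testing
w = 1 / p.sqrt()                      # starting ratio, 1/sqrt(p)
i = Decimal(10) ** -10                # step coarse enough for the toy to finish fast

while w <= 1:
    x = round(w * p)
    y = p / x
    if y % 1 == 0 and x * y == p:
        print("factors found!", x, y)
        break
    w += i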

Still working on it, but if anyone sees obvious problems I'd LOVE to hear about them.

Comments
  • 1
    Thanks again to everyone that commented, and participated.

    Also obligatory: First! (because I'm a dick)

    The lesson here is that it's important to learn about Big O notation, how to read it, and how to use it.
  • 14
    Well shit, I was a spy from the NSA and was about to send a black jet near your location
  • 3
    Also, if there are obvious errors, let me know. Can't do anything about it at the moment because the code posted is more than 5 minutes old and I have to go sand a bathroom.

    I hope you all enjoyed the sensationalism. I know I did.
  • 3
    @StopWastingTime

    *Ducks under coffee table*!

    1960s nuke safety training defeats your drone like paper beats rock!
  • 2
    It was a fun ride nonetheless :D
  • 1
    If anyone's aware of prior research on this approach I'd love to read it.

    Also, what's the time complexity for this, assuming it worked?

    If I were factoring an n-bit product, how long would it take?

    64 bit product: ???

    128 bit product: ???

    256 bit product: ???

    etc.
  • 1
    And of course devRant killed my Python formatting.
  • 5
    Isn't floating-point precision a huge problem here? How do you work around that? Also, there are infinite numbers between 0 and 1 that you can't order to walk through; if you e.g. start at 1.0, you'll never reach 0.9, as there are infinite possibilities, which in turn means you'll never reach e.g. 0.89999999 or 0.1, etc. How do you deal with that?
  • 4
    A bit late to the party, but it was still a fun ride! Great job man! This is the kind of fun experiment that leads us to knowledge and discovery, regardless of the final outcome.
  • 1
    @Wack

    Ah, Zeno's paradox, I love it!

    If you think about it, I think what's happening is we're sidestepping the problem.

    If every n can be defined by some infinite range of infinite-precision values u-v, then by extension the sought-after value (I think) should represent that range; ergo the fractional component of the first calculated prime should be discardable. It should then follow that if it is the TRUE factor we're looking for, the derived second factor MUST 1. result in a*b = p *exactly*, and 2. satisfy b mod 1 == exactly 0.
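
    Concretely (toy numbers, not from my runs): for p = 101 * 103 = 10403, every w in the window [100.5/p, 101.5/p) collapses into a = 101 when rounded, so the infinitely many candidates Zeno would worry about all map to the same integer:

    p = 10403
    for w in (0.009662, 0.009709, 0.009755):   # all inside [100.5/p, 101.5/p)
        print(round(w * p))                    # 101 every time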
  • 4
    @Wisecrack so if I understand it correctly, you're applying a kind of heuristic to find it? Is that any better than a randomized algorithm? Why not have an RNG generate values between 0 and 1 and then check them? That operation scales pretty well in a multithreaded (or better, a distributed) system...

    The problem I still see lies in machine precision. How do you get around that? By just using arbitrary-precision numeric classes?
  • 3
    @Wack

    > How do you get around that? By just using arbitrary-precision numeric classes?

    Yup. We're literally fudging the precision, because the precision doesn't matter *past* a certain fineness, as long as 1. we can get the exact product back out, and 2. the mod 1 check on the second factor comes out to exactly 0.

    Hence why I wrote that the fractional component "falls into" the value of the first factor.
  • 1
    @Wack

    It can be calculated in parallel as you said, and there's a straightforward way of doing that, but there's also a not-so-straightforward way that's MUCH faster.

    Basically in a multithreaded implementation we 'jiggle' 1. the precision, and 2. the precision of the increment we use.

    So for example,

    d=digits.

    getcontext().prec = d

    #(still experimenting with d vs. d+1)

    #and

    i = Decimal(10) ** -(getcontext().prec - n)  # (not float 0.1**k, which loses precision and underflows)

    n can be any value from 1 to N, but lots of numbers fail on values larger than 2.

    I've found that almost ALL of the products I've encountered have some value of n where, if n is sufficient (for some value of 'sufficient'), the result goes from taking 100s or 1000s of seconds to taking under a minute, regardless of bit length (so far as I've seen).

    Moreover, this number tends to be rather forgiving.

    So I'm still experimenting.
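
    In sketch form, the parallel version would look something like this (untested; the digit count, the offsets, and the step bound are made-up placeholders):

    from concurrent.futures import ProcessPoolExecutor
    from decimal import Decimal, localcontext

    def scan(p_int, n, digits=40, max_steps=10**6):
        # one worker: the same scan as the main loop, with its own 'jiggled' n
        with localcontext() as ctx:
            ctx.prec = digits
            p = Decimal(p_int)
            w = 1 / p.sqrt()
            i = Decimal(10) ** -(digits - n)   # coarser increment for larger n
            for _ in range(max_steps):
                x = round(w * p)
                y = p / x
                if y % 1 == 0 and x * y == p:
                    return int(x), int(y)
                w += i
        return None

    if __name__ == "__main__":
        with ProcessPoolExecutor() as ex:
            # each worker gets a different n; first non-None result wins
            for result in ex.map(scan, [10403] * 4, [28, 29, 30, 31]):
                if result:
                    print(result)
                    break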
  • 2
    @magicMirror

    Is that like a tactical dot but tastier?
  • 2
    📌
  • 2
    @Wisecrack Oh yes it is. 😎

    Just an idea - assuming the number of primes under a known integer N is finite, can't you create a rainbow table of said primes and do a search using it before applying your approach?
  • 1
    @magicMirror for "just" two numbers that'll work fine. Think of a doubly linked list, with one pointer walking from the head and one from the tail...
  • 2
    @magicMirror and @Wack

    I don't know enough about the problem to say whether that would work. I'm not a mathematician or cryptographer.

    This is probably out of my own ignorance, but at the risk of being wrong, I want to answer 'yes', only because it would be cool if you guys are correct.
  • 1
    @magicMirror I don't know about you, but I don't know any way to calculate primes efficiently enough that this would run fast for large-value numbers.
  • 0
    @Gophyr

    I'm not even testing primality. Just looking for factors, on the presumption that what is being factored is the product of two primes.

    For everything else - things with more than two factors, or products that aren't the product of two primes - I don't know.
  • 0
    I was thinking: what if I use prec-2 for i (the increment), calculate the pseudo-primes in question without the initial modulus tests, calculate the new product (a 'pseudo-product'), and then determine whether w is too high or too low based on whether the pseudo-product is above or below p? Once I know, I can increase the precision to refine w. Essentially we start out at VERY low precision, just enough to get in the 'neighborhood' or 'ballpark' of the correct answer, and increase precision as we get close. That way, even as the bit length grows, the time it takes to solve should stay low. (Sketch below.)
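
    Transcribed into code, the shape of that idea (completely untested; whether the steering is even sound is exactly the open question):

    from decimal import Decimal, localcontext

    def ballpark(p_int, lo_prec=4, hi_prec=40):
        # bisect w, steering by whether the pseudo-product over/undershoots p,
        # raising precision as the bracket tightens
        p = Decimal(p_int)
        lo, hi = Decimal(0), Decimal(1)
        for prec in range(lo_prec, hi_prec + 1):
            with localcontext() as ctx:
                ctx.prec = prec
                w = (lo + hi) / 2
                x = round(w * p)          # first pseudo-prime
                if x <= 0:
                    lo = w
                    continue
                y = round(p / x)          # second pseudo-prime, no modulus test yet
                if x * y == p:
                    return x, y           # exact hit, no further refinement needed
                if x * y < p:
                    lo = w                # pseudo-product too low -> raise w
                else:
                    hi = w                # too high -> lower w
        return None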
  • 2
    Hey, now you can rant about these results!!
  • 0
    I was thinking what I would do is define two new variables, 'u' and 'v', equal to x/w and y/w, where x and y are unknown, and then find the value of u or v instead, because it's much closer to 1.0, and so I can get away with less precision.

    Still working on it though.
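
    If I'm reading my own definitions right, u = x/w = a/sqrt(p) and v = y/w = b/sqrt(p), so for balanced primes they hug 1.0 from either side; a quick sanity check on toy numbers:

    from decimal import Decimal, getcontext

    getcontext().prec = 20
    p = Decimal(101 * 103)
    s = p.sqrt()
    print(101 / s, 103 / s)   # ~0.99024 and ~1.00985, both near 1.0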
  • 1
    @Wisecrack for the sake of humanity I hope you never get it working :O
  • 1
    📌
  • 1
    Man, you got me with this... I can't stop trying new ways to do it, and I even found some that look promising, but they are completely different approaches from yours. I got my algorithm down to a very, very basic problem which can easily be sped up, multithreaded, and offloaded to my GPU, sooo... well, let's see how this goes.
    I don't expect too much though
  • 0
    @magicMirror

    I think Ramanujan has a formula to calculate the number of primes between 1 and N but I'm not good enough at math to tell you more.
  • 1
    @EaZyCode

    I'm dying. Made my fucking day bro.

    Thanks for the comment!
  • 1
    To calculate the primes between 1 and N, just use a prime sieve in a lazily evaluated language; you can't get much better than that...
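
    E.g., the eager Python version of that sieve (a lazy/streaming variant is the same idea with generators):

    def primes_up_to(n):
        # Sieve of Eratosthenes: cross off multiples, keep what survives
        sieve = bytearray([1]) * (n + 1)
        sieve[0:2] = b"\x00\x00"
        for i in range(2, int(n ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
        return [i for i, alive in enumerate(sieve) if alive]

    print(len(primes_up_to(100)))   # 25 primes up to 100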
  • 1
    @Wisecrack well right now I've got it down to 1.44 seconds for a 45-bit number, single-threaded on my CPU. No idea if that's good or horrible... 100 ms for 35 bits though.
    The primes are 2 orders of magnitude apart (in decimal) in both cases.
    But compared to what I had in mind this morning, that's a massive improvement
  • 0
    @yung-stallman

    Alright now you're just fucking with me.
  • 1
    What an emotional rollercoaster. Be still, my heart.
  • 1
    @jdebs

    Better to be on a roller-coaster than not on one.