Comments
-
@M1sf3t Thanks, you're invited to the contest btw.
@Fast-Nop
@Root
@SparkyTD
@Demolishun
@netikras
@arcsector
@Ranchu
@Benedikt
@jdebs
@yung-stallman
@RedPolygon
@endor
@SortOfTested
@12bitfloat
@rutee07
@Lor-inc
@jeeper
@iamai
@sweetnothings
@PrivateGER
And anyone else I might have missed. -
@PrivateGER yes. It's not the bottleneck it was back then. Why?
edit: Challenge accepted. Factoring now. -
@Wisecrack Nothing in particular; when I was factoring products of primes for an RSA crypto challenge, I found C# and BigInteger to be much, much faster.
Might just have been a bad implementation. -
@PrivateGER
I went to take a bath while I waited.
Your factors are 3881828143*4017275099
total time: 545.7995755s
Any other takers? -
@PrivateGER I haven't tried anything besides 40 bits per prime (80 bit product). I woke up in the morning and it was still running.
I'm fairly certain it's an optimization issue, same way 64bit products were, same way 48 bit products were, etc.
Every time it hit a wall, I found a way of going around it.
I had to offer money because no one would post a number, people still think I'm playing them for suckers or something.
Yeah I'm a shitposter, but I never post shit without at least a little honest effort.
Anyone else would have folded from the amount of cringe, failure, and embarrassment and given up by now.
Not me. I've discovered 200-300 ways *not* to factor large numbers and one that *does*. -
I've failed so many times I've now *automated* my failure. So instead of having to build the shit pile before digging through it, now I go directly to digging for little golden nuggets of corn in the shit pile, no extra effort necessary.
My job is just to sift through it, and find the graph traversals that lead to algorithms that work. -
@M1sf3t here https://devrant.com/rants/2365197/...
@PrivateGER I'll have to try it some time. As it is I'm comfortable in Python, and I don't want to mess up the feng shui of my code, seeing as I barely understand some of the optimizations.
As I wrote in the other thread, a lot of the algorithm (Qsolver) comes out of another algorithm (Libra) that generated it. And I've been drunk and up for days at a time repeatedly while I was coding it, so I'm not entirely sure about a lot of things.
The optimizations make it work, but I really don't know why it works on a theoretical level. -
endor: @Kashmir to be fair, Python does allow for much quicker and easier implementation of an idea, generally speaking.
Although using something like numpy would definitely speed things up, since it's mostly C under the hood. -
@Kashmir I considered that, but writing in a more optimized language with static typing doesn't work. It's premature optimization when I'm still learning and changing the algorithm. And every time I have to think about low-level abstractions it takes me out of the "what does this mean" or "what are the implications of xyz" loop that I'm in when looking at the auto-barf from my graph algorithm and trying to draw connections or find relationships, which I just realized is really just a poorly done riff on Floyd-Warshall, but I'm going off on a tangent.
Otherwise I would have written it in something like Julia.
But yes, in terms of raw speed, it'd probably be an order of magnitude faster in C or something similar. But that's big whoop when a speedup like that is negated simply by increasing the key size by a few bits, which has been the problem with all my past attempts. -
@M1sf3t don't feel bad. I barely passed high school algebra, not because I couldn't understand, but because I was busy fighting constantly.
And legit, it was almost always other people starting shit. Such is life! -
@Kashmir When I can factor an 80 bit product in a reasonable amount of time and know it's repeatable, I'll write an article on the details.
I don't want a repeat of last time, exuberance and over-optimistic announcements.
@endor glad to have another! -
@sweetnothings I legit run an i3, average amount of ram (8gb), no gpu, not even one of those 'accelerated' 'fancy' built-in ones.
-
@sweetnothings also make sure you don't have any off-by-one errors. Had a guy send me a number and it turned out he got one digit wrong lol. Ended up factoring all night and it was still running in the morning.
-
endor: @Wisecrack LOL I'm a moron, I forgot to write down the two primes and I clicked 'Generate' a bunch of times to play with the thing.
Disregard the previous entry, here's a new one (that I actually wrote down BEFORE posting):
769,817,739,894,370,931
Though it would be quite fun if you found the two original primes for my previous entry anyway :D -
@endor I can and will. Challenge accepted on the first set.
Yeah that's part of the terms because there's always gonna be the one person who either messed up a digit or just up and made up a number. And then "woops! I guess your algorithm doesn't work!"
So the factors are proof when the deadline passes that the original number was in fact factorable.
So I got to protect myself! :P -
Sweetnothings and endor got me good with some long running ones, so I'm running 'em through the older version of the algorithm that isn't optimized. The optimized version is nice and fast but it misses on some numbers I think, whereas the unoptimized version almost never does.
For anyone still waiting I'm basically trying to do things two at a time. But I will get to you. -
Ah, misunderstood you the first time. Well, here you go with a 64 bit *product*
949834333893892129 -
@endor For your first challenge where you forgot the factors, here they are:
14506597292809713019 = 3595425151 * 4034737669
Solved within fractions of a second. ^^ -
I'd like to congratulate sweetnothings.
Your numbers have failed to be factored.
Please post your factors and if possible, the most convenient method of payment. -
@Wisecrack I have a black belt in Google-Fu and found this one: https://alpertron.com.ar/ECM.HTM/
-
@sweetnothings if you're sure, then it shall be done.
Thanks for supporting the site. -
Endor I still have about half an hour on your number.
This is who is left so far.
Kashmir: 1,348,701,095,762,442,677
guitargirl15: 792,006,053,520,142,529
netikras: 11,097,858,618,493,805,783
redpolygon: 949834333893892129
fastnop: 11999998919000014049
1139999713300017673 -
@Wisecrack Btw, with the first challenge from @PrivateGER (15594371537471311157 = 3881828143 * 4017275099) that you solved within 545 seconds, I get the same result after just 45 seconds by brute force, i.e. trying out every odd number up to sqrt(N).
And that's on a 10-year-old Phenom II CPU, of course using C and a 64-bit compile. I didn't even bother to do it multi-threaded, but that would be trivial with some pthread hackery. -
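For reference, the brute-force baseline described above (trial division by every odd number up to sqrt(N)) fits in a few lines. A minimal single-threaded Python sketch, purely to illustrate the idea; Fast-Nop's actual code is C, and pure Python will be far slower on a real 64-bit semiprime:

import math

# Trial division: handle 2, then try every odd candidate up to sqrt(n).
# Returns a factor pair, or None if n has no factor <= sqrt(n) (i.e. n is prime).
def trial_division(n):
    if n % 2 == 0:
        return 2, n // 2
    limit = math.isqrt(n)
    d = 3
    while d <= limit:
        if n % d == 0:
            return d, n // d
        d += 2
    return None

print(trial_division(881660893))   # (17659, 49927), the small example used later in the thread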
@Fast-Nop lol you just like to rain on the parade don't you?
But seriously it's not the point. The point is not to bruteforce anything. The point is to test the algorithm and pass or fail hilariously.
A timeline of events and attempts goes something like:
24 bits (6 months ago): failed often, terribly slow, large numbers are impractical
32 bits (4 months ago): almost always failed, took hours, anything larger was impossible.
36-40 bits: attempts here semi-worked, took 15-20 minutes on average, everything larger never appeared to succeed
40-48 bits: (last month) worked predictably enough, took 5-10 minutes.
48-56 bits (last two weeks): various algos successfully returning results.
56-64 bits (current): usually succeeds, time is typically predictable though there are a few outliers.
I have a few approaches for this particular algorithm in order to make 64 bits reliable and fast and a few ideas about how to extend it to 72 bits over the next couple of weeks. -
@Wisecrack It's just that the algorithm has to be better than brute force, or else it is rather pointless. That's why the brute force baseline is useful.
That is, unless you are using some extremely slow language of course. -
@Fast-Nop of course.
The way it's gone so far, with each additional progression, the previous highest numbers that I tried became fast and reliable, and that's my thinking.
This algorithm started out not being able to factor 32 bit products without shitting the bed.
Then I inched it over and over up to 48 bits.
And then 56 bits. And now almost 64 bits, optimizing and making new additions as I went.
Two weeks down the road 64 bit will be as fast and reliable as 32 bits was.
Who knows, maybe in another two weeks after that, or a month, 128 bit will be what I'm working on. -
I'd like to congratulate @endor for winning $5.
Post your factors to confirm and your preferred payment method.
Starting Kashmir's number next: 1,348,701,095,762,442,677 -
@sweetnothings it's only for those invited and people who posted in the original thread before the contest was posted so I'm not too worried.
-
@Wisecrack Ok, new benchmark for "fast": If I try numbers 2, 3 and 5 manually, and then only divide by numbers that are not divisible by 2, 3 or 5, then I get from 45 seconds down to 23 seconds. Only taking numbers not divisible by 2 or 3 is 28 seconds.
Since this is easy to do in parallel, the multi-thread speedup would scale linearly. -
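The 2/3/5 trick described here is a small factorization wheel: once 2, 3 and 5 are handled, only candidates coprime to 30 remain, and the gaps between them repeat with period 30. A rough Python sketch of the same stride pattern (Fast-Nop's implementation is C; this is only to show the idea):

import math

WHEEL_GAPS = [4, 2, 4, 2, 4, 6, 2, 6]   # gaps between 7, 11, 13, 17, 19, 23, 29, 31, 37, ...

def wheel_trial_division(n):
    for small in (2, 3, 5):
        if n % small == 0:
            return small, n // small
    limit = math.isqrt(n)
    d, i = 7, 0
    while d <= limit:
        if n % d == 0:
            return d, n // d
        d += WHEEL_GAPS[i]
        i = (i + 1) % 8
    return None

This tests 8 of every 30 integers instead of 15 of every 30 (odd-only), which matches the roughly 2x speedup reported above.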
@Fast-Nop holy shit, you're right! QUICK, patent it.
Also I'm badly nostalgic now for the Phenom. Thanks a lot. It was in my second computer. My first was an old Windows machine in the 90s. Played an old wireframe game, something like Battlezone I think. Played it way too much.
@AvyChanna Just for the lulz, I made my brute-forcer multi-threaded. Your number takes 3.26 seconds at 6 threads (single thread: 12.8 seconds). ^^
-
AvyChanna: Modulus = 9228188001849153749
n = 64 bit
p,q = 32 bit each
Common factoring tests (single thread on i5-7200U@2.5GHz, no GPU, python3+gmpy2, 1 iteration only)
===================
Pollard p-1(bounds=100,1000) = 0.44 sec
Williams p+1(single stage variant) = 0.30 sec
Pollard-Rho-Brent = 0.02 sec
Optimized Brute Force = 1.9 sec
Multiple Polynomial Quadratic Sieve = 0.14 sec
ECM(Lenstra) = 0.075 sec
Fermat = Don't even try
Also took some time to read your original thread. If you really could factor 1024 bit keys, this is BIG (even if it is a special-purpose algo instead of a general one).
Also, can you prove your algo doesn't provide false positives? (wrong factors for some composites) -
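For context on the Pollard p-1 entry in AvyChanna's list above: it finds a factor p quickly whenever p-1 is built only from small primes, which is exactly the weakness Fast-Nop's prime selection later guards against. A minimal single-stage Python sketch (the bound is arbitrary, and this is not AvyChanna's gmpy2 code):

import math

def pollard_p_minus_1(n, bound=10000, a=2):
    # Compute a^(k!) mod n for growing k; once (p-1) divides k!, gcd(a^(k!) - 1, n) exposes p.
    x = a
    for k in range(2, bound + 1):
        x = pow(x, k, n)
        g = math.gcd(x - 1, n)
        if 1 < g < n:
            return g, n // g
    return None   # fails when neither p-1 nor q-1 is smooth up to the bound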
AvyChanna: @Wisecrack Another thing that has come to my notice is that the asecuritysite.com generator is faulty.
For input=32, it gave me:
p=788,963,267
q=2,431,558,777
n=1,918,410,556,604,444,459
which are 30,32,61 bit respectively.(not 32,32,64 as expected)
smallest 32 bit prime = 2,147,483,659
largest 32 bit prime = 4,294,967,291 -
endor: @AvyChanna I'm guessing it doesn't mean *strictly* 32-bit primes, but rather *up to* 32-bit primes?
-
endor: @Wisecrack oh dang! Was that with my first entry or my second one? Because I didn't write down the factors on the first one, that's why I told you to ignore it 🤣
-
endor: Although @Fast-Nop's link finds two primes that look just about right...
Tell you what: if you fail to factor my second entry too, I'll claim the prize. If not, you can keep your money. -
AvyChanna: @AlmondSauce He claims that he has a new algorithm to factor numbers and we are the testers
-
@AvyChanna But with the best will in the world, if it takes around 30-60 minutes on modern hardware to find the prime factors of a number formed by multiplying two 32-bit primes together, it's surely orders of magnitude slower than algorithms like Pollard Rho?
As a quick test, I've dug out a quick 'n' dirty Java implementation of Pollard Rho I had lying around, and with a bunch of random products I've just tried, I'm consistently getting runtimes of less than half a second. Heck, if I up the ante and multiply two 50-bit primes together, it can factor that in less than a minute.
This is on rather ancient hardware as well.
What am I missing here? -
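Pollard's rho, which AlmondSauce mentions, is short enough to sketch. This is a plain Python version with Floyd cycle detection (not his Java code, and not Brent's variant); the expected cost is around n^(1/4) divisions, which is why 64-bit semiprimes fall in well under a second:

import math, random

def pollard_rho(n):
    # Assumes n is an odd composite; retries with a new constant if the cycle collapses (d == n).
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        f = lambda x: (x * x + c) % n
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = f(x)
            y = f(f(y))
            d = math.gcd(abs(x - y), n)
        if d != n:
            return d

n = 15594371537471311157          # the pair PrivateGER posted earlier in the thread
p = pollard_rho(n)
print(p, n // p)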
AvyChanna: @AlmondSauce Yes, absolutely correct. But this is not Pollard rho. No primality tests, no sieving. A completely new algorithm from scratch. It can (supposedly) factor (some) 1024-bit keys in minutes, and as a POC he is betting $5 on 32-bit keys. So far we have at least one 64-bit key which was not factored. Of course I can use better algorithms to factor a 200-digit modulus in under a minute, but where is the fun in that?
-
@AlmondSauce Even for 64 bit products, the algo is, so far, much slower than just brute-forcing.
I got my multi-threaded factorisation of 15594371537471311157 (the one with the lost factors ^^) now down to 3.85 seconds at 6 worker threads, after adding an early exit mechanism for the other worker threads if one worker thread has found a factor.
Some upper limit is the largest 64 bit prime 18446744073709551557 which takes 5 seconds to be determined as prime.
As long as the new algorithm performs worse than brute force, I could only see value in it if it scaled better than sqrt(N).
Btw., my code is here: https://pastebin.com/iM1xbhCs -
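Fast-Nop's C source is in the pastebin above; purely to illustrate the split-plus-early-exit idea, here is a rough Python equivalent using processes (threads would be serialized by CPython's GIL for this kind of CPU-bound loop). A sketch, not his code:

import math
import multiprocessing as mp

WORKERS = 6

def worker(n, start, step, found, out):
    # Each worker scans odd candidates d = start, start+step, ... up to sqrt(n),
    # polling the shared flag occasionally so it can stop once someone else has won.
    limit = math.isqrt(n)
    d, checked = start, 0
    while d <= limit:
        if n % d == 0:
            out.put((d, n // d))
            found.set()
            return
        d += step
        checked += 1
        if checked % 100000 == 0 and found.is_set():
            return

def parallel_trial_division(n):   # assumes n is odd
    found, out = mp.Event(), mp.Queue()
    procs = [mp.Process(target=worker, args=(n, 3 + 2 * i, 2 * WORKERS, found, out))
             for i in range(WORKERS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return out.get() if found.is_set() else None

if __name__ == "__main__":
    print(parallel_trial_division(881660893))   # (17659, 49927)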
@Wisecrack I don't know if you invited me because you thought that I didn't believe you. I just think that what you are working on is wildly interesting and I just wanted to see the code or know the method behind it. I also think that if there was a git repo the whole crypto/cipher community would appreciate it.
I just wanted to clear the air between us. -
@AvyChanna @Fast-Nop I guess I don't understand the worth of what we're doing here. If it's an exciting new algorithm and it really needs to be tested on these numbers, heck, just generate a bunch automatically and keep throwing them at it?
Sorry to be "that guy", I'm just a bit baffled. It feels similar to someone saying "hey, I've got a new algorithm for finding prime numbers, I want to try prime numbers under 100 first, give me a number and I'll tell you if it's prime in less than an hour! You get $5 if I'm wrong!"
It's just all a bit... weird. Now if we were throwing 1024 bit RSA keys at this or something and getting the results back in a few hours, *that* would be bloody exciting. But I've yet to see any evidence that's the case. -
@AlmondSauce Dunno, I just wanted to get a prime number pair with some additional nasty properties for factorisation, so I wrote a checker for that.
While I was at it, it evolved into factorisation, and then I wanted to do it multi-threaded, and the speed-up is linear, and it's just some casual coding. -
jdebs: @AlmondSauce Does it matter what the "worth" is? The way I see it, OP is just having fun by working on a problem with their own solution and sharing with devrant. The $5 incentive is to get people to play along. That kind of thing happens in other research fields all the time.
@Wisecrack I'll refrain from posting for now since you look like you have plenty of participants! -
I had to sleep at the time. I'm back on it.
@AvyChanna "Could you prove it doesn't provide false positives?" This is a really excellent question. For all products? Probably not. That's outside my skill level (high school math) at a glance.
I checked, you're right. I set Kashmir's number to factor while I slept and it returned in 2757.4851032s:
p = 1348701095762442677 (837425339 * 1610532943)
but I didn't post it, so congratulations Kashmir, you won $5.
Really excellent feedback, thanks.
@endor I think if you have Python installed you can do len(bin(p)[2:]) to get the bit length, but I don't know if that's accurate up to 64 bits. I think Python numbers use 53 bits for the integer and the rest for the exponent and mantissa, if I'm not mistaken?
It was with your first one, endor. I'm actually really grateful, because it was among the first and caught me by surprise. -
@AlmondSauce because offering money was the only way to get anyone to participate lol.
It is orders of magnitude slower than rho apparently, but I'm not concerned. It was the same way with lower bit lengths weeks and months ago. The improvements I've planned out at each stage have pushed the point where the algorithm becomes intractable to further and further bit lengths. And now I'm testing it in the 32 bit key space.
Which isn't big for those other methods or algorithms, including bruteforce, but again, I'm not using additional threads, an optimized or statically typed language, or even a beefy cpu.
I'm aware of the slowness. The intent is to get critical feedback and just to get people to test it. Fast-Nop and AvyChanna have provided good insights. Another user that taught me a lot about logarithms is Lor-inc.
@Fast-Nop, stop, I can only get so hard!
"I could only see value in it if it scaled better than sqrt(N)."
Well, that's an okay measure. We'll see. -
@yung-stallman I invited you just because we talked at one point and I like to have some skeptics present, and that's who you struck me as.
-
endor: @Wisecrack only floating point numbers use the mantissa-exponent format (with 64-bit double precision using 52 bits for the mantissa, 11 for the exponent, and 1 for the sign).
Integers don't use that format, and iirc python in particular does some weird blackmagic fuckery that allows you to work with integers much bigger than the 64-bit limit and hides everything under the hood (but I don't know what it does specifically).
My first product was precisely 64 bits, while my second entry is only 60 bits. -
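On the bit-length side question: Python ints are arbitrary precision (they never pass through the 53-bit float mantissa), and they carry their own bit_length() method, which AvyChanna also points out below. A quick demonstration:

import sys

p = 769817739894370931            # endor's second entry
print(p.bit_length())             # 60 -- exact for integers of any size
print(len(bin(p)) - 2)            # 60 -- same thing, minus the '0b' prefix
print(sys.float_info.mant_dig)    # 53 -- this limit only applies to floats
print((2**200 + 1).bit_length())  # 201 -- no float conversion involved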
@AvyChanna the first bits can be zeroes, even though you have 32 positions. So I guess @endor is correct
-
@netikras Well I do, and I want to have fun, too, that's why I wrote the brute-force checker.
When implementing the early abort for workers once one of them has found a factor, I also learnt that pthread_kill looks so nice here - and is dead wrong. -
@Fast-Nop you know you write some of the cleanest C I've ever seen? It looks like fucking Mr. Clean and his shiny head decided to become a programmer.
What exactly was the issue with pthread_kill for those of us unfamiliar with C? -
@Wisecrack Ohhh thanks! :-) Yeah otherwise, C code becomes unmaintainable quickly.
pthread_kill doesn't actually kill, it rather sends a signal, but that can be a termination signal. If that isn't handled in a signal handler, then the whole process can kill itself.
So you'd have to install a signal handler for each thread, but then you have a race condition because a thread might get killed before it can install the handler, so that would require some synchronisation logic.
On top of that, the handler would basically just set some flag that the thread would check regularly anyway. -
@Fast-Nop I would have just done it in Go and shot my foot with a BB gun instead of a shotgun.
How do you solve the race condition? -
@Wisecrack I just don't use signals, only that flag that the signal handler would set in each thread anyway, and set that from the main thread for all workers to read.
While such a volatile variable alone usually isn't good for synchronisation, it works here because the actual synchronisation after ending happens with the pthread_join which waits until the threads have really finished. -
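The flag-plus-join pattern described here maps almost one-to-one onto Python's threading module. A tiny sketch of just the structure (the hit() test is a made-up stand-in for "found a factor", and the GIL means this shows the pattern, not the speed):

import threading

stop = threading.Event()    # the shared abort flag
results = []

def hit(worker_id, i):      # hypothetical stand-in for "this candidate divides n"
    return worker_id == 3 and i == 1_234_567

def worker(worker_id):
    for i in range(10_000_000):
        if i % 100_000 == 0 and stop.is_set():   # poll the flag at a cheap cancellation point
            return
        if hit(worker_id, i):
            results.append((worker_id, i))
            stop.set()                           # tell the other workers to wind down
            return

threads = [threading.Thread(target=worker, args=(w,)) for w in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()                # as in the pthread version, join is the real synchronisation point
print(results)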
AvyChanna: @Wisecrack @endor You can directly use num.bit_length() in python
@Fast-Nop Last time I wrote C, pthread_cancel was my goto for early exit -
@AvyChanna Yeah that works also, but the cleanest way then is that threads should have cancellation points, i.e. basically checking whether they would want to be cancelled, and that's both simpler and faster with an abort flag.
The nice thing is that pthread_cancel can go to a defined thread while a global abort flag doesn't, but here it's about cancelling all remaining workers anyway. -
I took a day to try and improve the algorithm and have some leads but as for now Fast-Nop's got me beat.
Rather than factor half the numbers and then spend five hours each on the other half only to have them fail to return I've decided to concede defeat for now.
In lieu of continuing I'll be taking a vote for a charity or organization to donate $25 to. The most suggested one gets the donation.
Thank you to all who participated.
Edit: A brief post-mortem: I think I'm encountering floating point errors at high bit lengths due to some unhandled intermediary conversions between decimal and Python's built-in math operations like floor().
I also want to take some time to explore further relationships between variables and series that appear in the algorithm to see where the bottleneck is.
Thanks again to everyone for their input. -
@Wisecrack My products n=p*q are chosen carefully so that:
1) p-q > 2*n^0.25 (preventing Fermat factorisation), and
2) both p-1 and q-1 also contain large prime factors (preventing Pollard's p-1 algorithm).
These conditions are described in https://en.wikipedia.org/wiki/... section "Faulty Keys". That's also why I wrote my own primality checker in the first place so that I could use strong prime factors, relative to 64 bit product length.
I think the $25 should be donated to devRant. -
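A hedged sketch of checking those two conditions for a candidate pair, using sympy here as an assumption (Fast-Nop's own checker is separate C code, and the smoothness bound below is arbitrary):

from sympy import factorint, integer_nthroot

def is_strong_pair(p, q, min_large_factor=10**6):
    # (1) |p - q| > 2 * n^(1/4): resists Fermat factorisation.
    # (2) p-1 and q-1 each have a prime factor above the bound: resists Pollard's p-1.
    n = p * q
    fourth_root, _ = integer_nthroot(n, 4)
    cond1 = abs(p - q) > 2 * fourth_root
    cond2 = all(max(factorint(x - 1)) > min_large_factor for x in (p, q))
    return cond1 and cond2

print(is_strong_pair(3881828143, 4017275099))   # PrivateGER's pair from earlier in the thread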
@Fast-Nop It's actually $30 now. There goes my hennessy money for the week!
I'm gonna post my god-awful source code, so don't shoot me after looking at it, you were warned.
The basic idea I explained elsewhere. The optimization is that for every product of two primes, in addition to there being a relationship to the unit+mantissa of √p, there are variables Q, q, m, and n that, when iterated, produce a series that contains the factors of p to some approximation.
And if you know m or n, retrieving the factors is trivial.
Q is defined as
(√p/(sublimate(√p)+n))*√p
and
q = √p/((sublimate(√p))+n)
And r = Q/(q-m), where if n and m are 'correct' the series produced should always contain one of the factors of p.
A good starting value for n, which I found regularly speeds up the process is
(√p/((√p**(√p^(1/ln(p))))/√p))-sublimate(√p)
Which admittedly is a little hard to follow, but makes more sense in the source which I'll post here in a bit. -
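For anyone trying to follow the formulas, here is a literal Decimal transcription of Q, q and r as written above. This is only a reading of the post, not Wisecrack's code: sublimate() lives in his posted source (the pastebin a few comments below) and is not defined here, so a fractional-part placeholder is passed in just so the snippet runs, and walking m is one guess at what "iterating the series" means.

from decimal import Decimal, getcontext
getcontext().prec = 80            # plenty of digits for 64-bit products

def frac_part(x):                 # placeholder ONLY -- the real sublimate() is in the linked source
    return x - int(x)

def series_search(p, n, m, sublimate=frac_part, steps=100000):
    w = Decimal(p).sqrt()                 # √p, p being the product as in the post
    q = w / (sublimate(w) + n)            # q = √p / (sublimate(√p) + n)
    Q = q * w                             # Q = (√p / (sublimate(√p) + n)) * √p, so Q/q = √p
    for k in range(steps):
        denom = q - (m + k)               # r = Q / (q - m), with m walked to generate the series
        if denom == 0:
            continue
        cand = int(Q / denom)
        if cand > 1 and p % cand == 0:
            return cand, p // cand
    return None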
Now the whole thing is more sensitive to changes in the value of n than m, but n is 'promiscuous' in that there are many values of n that work, which isn't the case for m.
Interestingly, if you already know the factors, and you look for 'pseudofactors' (numbers close to a factor for some definition of 'close'), they almost always appear within a range *very close* to a power of ten in difference. I.e. a value in the sequence will be -997 off from the actual factor, or -9934, etc. In fact I never saw a pseudofactor that didn't, which is something I'm exploring.
Here's the code. Be warned, it might as well be written in pidgin.
https://pastebin.com/bK7Dw73G
The series search is actually faster than looking for the relationship of Q and q itself, so that's at the bottom.
I'm focused on fast ways to find n and m, because if you have them or can closely approximate them then the factor bit length can be arbitrarily large and you'll still get a fast answer. -
I recommend
n: -2
m: 0
cutoff: -1
and type 'n' for whether you want to search for pseudofactors.
Or simply avoid the source like the plague. -
@Wisecrack Well, I don't understand the code, but that's because I don't understand the math behind it either. Guess I'm better at making dumb things really fast. ^^
-
@Fast-Nop I was actually hoping you (or pretty much anyone) would have some sort of insight into maybe how to derive n or m because you grasp the factorization problem better.
Some things from wolfram and other second glances:
Suppose we start with p=881660893 (17659*49927)
Running it through with the default input,
the final values when the factors are found, are n=40, m=473 (t is just to count total iterations).
(n*p*(m+q))/Q = 1997015.8515~
1997015.8515/b = 39.9987
(our final n less a rounding issue)
likewise
(a*p*(m+q))/Q = 881632573.0692~
p-881632573.0692 = 29692.77509
Suggesting that a good start for finding Q is to simply do (p-√p). Sometimes I use w as a variable for √p. In any case..
for finding m
(Q/a)-q = m = 473.0374916946183846~
and for our final value, what was m?
473 in our example number. -
But interestingly, there are *many* values of n
that lead to a or b, suggesting there are OTHER formulas, and undiscovered variables, for every product describing its factors.
Some others based on our current example:
(b*Q)/(m+q) = 881660892.99999999 (or p)
And again p=881660893
and
b*Q = p*(m+q)
while
q = (a*m)/(-(a-√(a*b)))
Q = (a*m*w)/(w-a)
b = ((p*(m+q))/Q)
Also, again, Q/q = √p, and Q>q.
It's sort of like if the factors of p, when divided, gave the root of p, instead of some smaller ratio.
Or how the average x of the factors a and b of a number is a value such that x + y = b and x - y = a for some y. -
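Those relations are at least internally consistent: taking w = √(a*b), q = a*m/(√(a*b) - a) and Q = a*m*w/(w - a) as given, both b*Q = p*(m+q) and Q/q = √p follow algebraically. A quick symbolic check (sympy used here purely as a verification tool, not as part of the algorithm):

import sympy as sp

a, b, m = sp.symbols('a b m', positive=True)
p = a * b
w = sp.sqrt(p)                            # w = √p, as in the comments above
q = (a * m) / (-(a - sp.sqrt(a * b)))     # q = (a*m)/(-(a-√(a*b))), as posted
Q = (a * m * w) / (w - a)                 # Q = (a*m*w)/(w-a), as posted

print(sp.simplify(b * Q - p * (m + q)))   # 0, i.e. b*Q = p*(m+q)
print(sp.simplify(Q / q - w))             # 0, i.e. Q/q = √p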
I use extra parentheses because I don't want my rampant alcoholism and vicious dyslexia fucking up the values when I otherwise accidentally invert one or two operations or slip up on the order.
-
Also people keep telling me I can't use floats with modular arithmetic but I'm too ignorant to care about those rules.
Treating it more like division with a remainder because I haven't decided if it has any utility as an idea.
I try all sorts of things that others wouldn't because I don't know any better. -
@Fast-Nop No I'm just embarrassingly bad at commenting my code, which is the real reason I don't like sharing.
But it's fun to pretend.
Edit:
Also, it boggles my mind how a *probabilistic* approach almost entirely eliminates many potential factors (wrong answers). Or did I misunderstand the article you posted?
Why on god's green earth does that work the way it does? It *shouldn't* work. -
@Wisecrack It seems that choosing the prime factors this way defeats certain factorisation approaches. So I just chose them this way in case your algo might have something to do with those algos, even if by accident.
-
AvyChanna: I looked at your code and oh boy, it was a nightmare. Maybe Python is not your strong suit. Now, let's see what we've got.
Let n = p*q
Without loss of generality, let p < q.
(If p==q, then n=p**2, factors are trivial)
By definition, p < sqrt(n) < q
Now, if you could come up with a nice rational approximation of sqrt(n), we can estimate p or q.
In your terminology, sqrt(n) = Q/q' -
AvyChanna: Now, try to estimate Q/(q'±m) for some m, such that it divides our modulus `n`. For +m, it is the value of p; for -m, it is the value of q.
Correct me if I am wrong. Also, please explain: what are pseudo-factor and cutoff? And why are these specific values of n and m selected (any theory behind it, or just experimentation)?
Your algo reminds me of continued fractions. Maybe you can estimate Q/q' using convergents of continued_fraction(sqrt(p))?
Example implementation here - https://pastebin.com/WYx0FHrr
AFAIK, this won't solve the factorization problem (don't let me stop you, at least we learnt some nice things) -
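AvyChanna's pastebin isn't reproduced here, but the convergent idea is easy to sketch: the continued-fraction expansion of √n yields a sequence of rational approximations h/k (the "Q/q'" above), via the standard recurrence for quadratic irrationals. A rough Python version:

import math

def sqrt_convergents(n, count=8):
    # Continued fraction of sqrt(n) for non-square n; yields convergents (h, k) with h/k -> sqrt(n).
    a0 = math.isqrt(n)
    m, d, a = 0, 1, a0
    h_prev, h = 1, a0        # numerators
    k_prev, k = 0, 1         # denominators
    out = [(h, k)]
    for _ in range(count - 1):
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
        out.append((h, k))
    return out

for h, k in sqrt_convergents(881660893):
    print(h, k, h / k)       # approaches sqrt(881660893) ≈ 29692.775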
@AvyChanna Well the sqrt(n) part is trivial, that's a library function. Since the precision is finite, it can only return rational numbers. That gives us a very good Q and q'.
But that just shuffles the problem around unless the effort for finding m is less than O(sqrt(n)), which is the brute force baseline. -
AvyChanna: @Fast-Nop Values of Q and q determine the possible range of m. Small values yield a small possible range for m, but may not give any factor. I was thinking, what if we could lower the precision just enough to reduce the possible values of m but still have a solution.
Yes, it may still be slower than brute force. -
@AvyChanna
p > q
√p > q
Q > √p
Q > q
Are you saying it could work with q > p as long as it still results in √p?
To clarify p is the product.
n is just used as an ordinary variable in this instance.
Maybe I'm misunderstanding.
"values of Q and q determine range of m"
I can't believe I didn't see that.
If that's true then the problem reduces to finding n.
A pseudofactor is any number 'close' to an actual factor of p.
Cutoff is just a sanity check because again, the range of n contains multiple working values.
I'm unfamiliar with convergents btw.
"Values of Q and q determine possible range of m. "
On the contrary, if you take any product that does happen to successfully return, and then run it through multiple times at an increasing value of n, you'll notice as n increases the final value of m decreases (generally).
Edit: I just looked at your code for convergents approximating √p. Clever! I have some work to do. -
Actually I'm wondering if phi has anything to do with it.
Going with the current example, and it's probably a coincidence but..
if we set N = (n*p*(m+q)/Q)/b
(assuming we knew those variables)
then N = 39.99871515540~
If we follow a series..
√p/((√p/a)+0..n)
we see that
√p/((√p/a)+40) = 712.3738041~
√p/((√p/a)+41) = 695.683~
and of course what does q in this instance equal?
q = 694.181162393514~
And for the *actual* value of n?
√p/((sublimate(w))+(N-1)) = 710.79877~
Which isn't that far off of √p/((√p/a)+40)
But here's where it gets interesting.
If n = 40
and instead N = (n*p*(m+q)/Q)/b = 39.99871515540
then
(w/((w/a)+N)) - (w/Decimal((sublimate(w))+(n-1))) = 1.618847506653766925
See the value? What's that? Why, it's phi of course. Or close to it..
1.618847506653766925~/phi = 1.000502781715049~
So that's another avenue to explore and might be, as I said, entirely coincidental. -
I got one half of the formula, then the fucking determinant didn't hold for other products. -_-
For a lot of products (60ish%) it cut the time to calculate in half for the lower of the two factors. But so what. That's like climbing Mount Everest to get to the moon. Useless, useless, useless.
Found some identities too, but it doesn't matter.
At this rate it's hopeless.
What am I even doing anymore.
CONTEST - Win big $$$ straight from Wisecrack!
For all those who participated in my original "cracking prime factorization" thread (and several I decided to add just because), I'm offering a whopping $5 to anyone who posts a 64 bit *product* of two primes which I can't factor. Partly this is a thank you for putting up with me.
FIVE WHOLE DOLLARS! In 1909 money that's $124! Imagine how many horse and buggy rides you could buy with that back then! Or blowjobs!
Probably not a lot!
But still.
So the contest rules are simple:
Go to
https://asecuritysite.com/encryptio...
Enter 32 for the number of bits per prime, and generate a 64 bit product.
Post it here to enter the contest.
Products must be 64 bits, and the result of just *two* prime numbers. Smaller or larger bit lengths for products won't be accepted at this time.
I'm expecting a few entries on this. Entries will generally be processed in the order of submission, but I reserve the right to waive this rule.
After an entry is accepted, I'll post "challenge accepted. Factoring now."
And from that point on I have no more than 5 hours to factor the number, (but results usually arrive in 30-60 minutes).
If I fail to factor your product in the specified time (from the moment I indicate I've begun factoring), congratulations, you just won $5.
Payment will be made via venmo or other method at my discretion.
One entry per user. Participants from the original thread only, as well as those explicitly mentioned.
Limitations: Factoring shall be done
1. without *any* table lookup of primes or equivalent measures,
2. without using anything greater than an i3,
3. without the aid of a GPU,
4. without multithreading,
5. without the use of more than one machine.
FINALLY:
To claim your prize, post the original factors of your product here, after the deadline has passed.
And then I'll arrange payment of the prize.
You MUST post the factors of your product after the deadline, to confirm your product and claim your prize.
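Since the asecuritysite generator turned out to produce off-size primes (see AvyChanna's comment above), here is one hedged way to generate a valid entry locally, using sympy's randprime. This is just a convenience sketch, not part of the contest rules:

from sympy import randprime

def contest_entry():
    while True:
        p = int(randprime(2**31, 2**32))   # genuinely 32-bit primes
        q = int(randprime(2**31, 2**32))
        n = p * q
        if n.bit_length() == 64:           # the product of two 32-bit primes can be 63 or 64 bits
            return p, q, n

p, q, n = contest_entry()
print("keep these to claim the prize later:", p, q)
print("post this:", n)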