Comments

Wisecrack: @scor Underrated comment.
If any officials actually saw it, they dismissed it as crackpotism and didn't actually check the math, because then they would have seen what I saw and I'd be in a hole or dead by now.
scor: Yerp, @Wisecrack.
Be it officials, unofficials, underworlders, or wannabes..
Please get some trusted advice, at least.
Wisecrack: @scor No one listens to my craziness anymore, not the fun kind, nor the unfun kind.
The only thing to do is complete the work.
vane: There's still going to be a big problem with big p and q if you want to use this for decryption without knowing the private components.
I was wondering something else. There's a big buzz now about AI that basically guesses the next word.
What I have in mind is: if I train a transformer on p, q, e, n with billions of samples, do I get the correct p and q back if I specify n and e?
It would be funny if a 65-billion-parameter model could break encryption in a couple of seconds.
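For what it's worth, the experiment vane describes would start from a large set of (p, q, e, n) samples. A minimal, toy-scale generator might look like the sketch below — all names here are mine, and real RSA moduli are thousands of bits, far beyond these toy primes:

```python
import random

def is_prime(n):
    """Trial-division primality test; fine for toy-sized candidates."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def rsa_sample(rng, lo=1000, hi=10000):
    """One (p, q, e, n) example with small primes; e is the common
    public exponent 65537."""
    def rand_prime():
        while True:
            c = rng.randrange(lo, hi)
            if is_prime(c):
                return c
    p, q = rand_prime(), rand_prime()
    return p, q, 65537, p * q

def make_training_set(count, seed=0):
    """The 'billions of samples' idea, at toy scale."""
    rng = random.Random(seed)
    return [rsa_sample(rng) for _ in range(count)]
```

A model would then be trained to map the digits of (n, e) to the digits of (p, q); whether anything learned at this scale generalizes to cryptographic sizes is exactly the open question.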
Wisecrack: @vane "There is still gonna be a big problem with big p and q if you want to use it for decryption without knowing private components."
At least for now I can reliably put a tight estimate on the exponent of the 2's factor of each magnitude of N's potential factors p and q.
It's a nontrivial improvement in the search space. The ruleset is pretty big though which is the reason for the automation. 
Wisecrack: @vane Speaking of p, q, e, and n:
I had some earlier success about six months ago training a network on the internal variables (the private components of the test data) from the p/x algebra.
I couldn't go further than that because, while I understand the basics of a lot of ML, I don't understand the implementation and practice all that well.
It was pretty promising, but without a mentor I had to abandon that route; it was 'unknown unknowns' territory.
The matrix decomposition route, based on what I've seen, is firmly in the 'known unknowns' category of understanding.
I'm seeing real results and not shitposting out of ignorance or fun so much anymore. 
Wisecrack: @vane
It actually kinda scares me.
Like I spent my whole life, not least the last couple of years making posts here and elsewhere, thinking: "OK, this is kinda stupid and I'm gonna get roasted for posting it and have my ignorance put on display, but eh, that's kinda fun anyway, and I'll learn something from someone more experienced and educated, and I don't know much anyway." Sort of self-put-downs. But then I look at my complete lack of education, and see prior posts where I was able to make reasonable estimates of the Dedekind numbers, and I look at that and say, "actually, maybe I'm not totally retarded."
And then I hit the occasional moment where I'm like, "but wait a minute, what if this is real?"
Wisecrack: @vane
And then I stumble on a new derivation of something like the silver ratio, or decomposition factorization, and I'm left floored, because my whole worldview is that if you're not educated, like college-educated, then finding these sorts of things doesn't happen. The world isn't Good Will Hunting, reality doesn't work that way, and I'm definitely not as intelligent as him.
Like, wtf is algebra? And why can't we do xyz? What rule prevents it? And if that rule weren't valid, at least under some circumstances or some special constraint, how would that be useful? And then I realize someone already thought of that; it's what we have mathematical rings for, where the normal rules are modified or don't apply. And it's a whole subset of mathematics. And here I am telling you something you probably already know.
But I'm not supposed to know that. I didn't go to college. I work for fucking Pizza Hut.
The entire world is a riddle, a grab bag of contradictions being continuously reconciled.
I didn't leave, I just got busy working 60-hour weeks in between studying.
I found a new method I call matrix decomposition (no relation to the established technique of the same name).
The premise is that you break a semiprime down into its component numbers and magnitudes; take 697, for example. It becomes 600, 90, and 7.
Then you break each of those down into its prime factorization (with exponents).
So you get something like
>>> decon(697)
offset: 3, exp: [[Decimal('2'), Decimal('3')], [Decimal('3'), Decimal('1')], [Decimal('5'), Decimal('2')]]
offset: 2, exp: [[Decimal('2'), Decimal('1')], [Decimal('3'), Decimal('2')], [Decimal('5'), Decimal('1')]]
offset: 1, exp: [[Decimal('7'), Decimal('1')]]
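For anyone who wants to play along, here's a sketch of what that decon step might look like in plain Python — the name and the Decimal-heavy output format just mirror the printout above; this is my reconstruction, not Wisecrack's actual code:

```python
from decimal import Decimal

def prime_factorization(n):
    """Return [[prime, exponent], ...] for n >= 2 by trial division."""
    factors = []
    p = 2
    while p * p <= n:
        if n % p == 0:
            exp = 0
            while n % p == 0:
                n //= p
                exp += 1
            factors.append([Decimal(p), Decimal(exp)])
        p += 1
    if n > 1:
        factors.append([Decimal(n), Decimal(1)])
    return factors

def decon(n):
    """Split n into magnitude components (697 -> 600, 90, 7) and
    factor each one, printing the result per offset."""
    digits = str(n)
    width = len(digits)
    results = []
    for i, d in enumerate(digits):
        offset = width - i              # offset 3 for the 600 in 697, etc.
        component = int(d) * 10 ** (offset - 1)
        if component == 0:
            continue                    # zero digits contribute nothing
        results.append((offset, prime_factorization(component)))
    for offset, exp in results:
        print(f"offset: {offset}, exp: {exp}")
    return results
```

Running `decon(697)` reproduces the three offset lines above: 600 = 2^3 · 3 · 5^2, 90 = 2 · 3^2 · 5, and 7 = 7^1.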
And it turns out that in larger numbers there are distinct patterns that act as maps at each offset (or magnitude) of the product, mapping to the respective magnitudes and digits of the factors.
For example I can pretty reliably predict from a product, where the '8's are in its factors.
Apparently there's a whole host of rules like this.
So what I've done is gone and started writing an interpreter with some pseudoassembly I defined. This has been ongoing for maybe a month, and I've had very little time to work on it in between shifts at my job (which I'm about to be late for if I don't start getting ready, lol).
Anyway, long and short of it, the plan is to generate a large data set of primes and their products, then write a rules engine to generate sets of rules in my custom assembly language, and then fitness-test and validate them, winnowing what doesn't work.
The end product should be a function that lets me map from the digits of a product to all the digits of its factors.
It technically already works: I've printed out a ton of products and eyeballed patterns to derive custom rules by hand, it's just not the complete set yet. And instead of spending months or years doing that, I'm just gonna finish the system so it derives them for me automatically. The rules I've found so far have tested out successfully every time, and whether or not the engine rediscovers those will be the test case for whether the broader system is viable, but everything looks legit.
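I can't show the pseudoassembly here, but the generate-and-winnow loop can be sketched with a stand-in rule shape — a toy "if digit i of the product is d, predict the smaller factor's last digit is t" triple. The rule representation and the fitness threshold are placeholders of mine, not the real ruleset:

```python
import random
from itertools import product as iproduct

def small_primes(limit):
    """Primes below limit, by trial division."""
    primes = []
    for n in range(2, limit):
        if all(n % p for p in primes if p * p <= n):
            primes.append(n)
    return primes

def make_dataset(limit=200):
    """Semiprimes n = p*q with their factors: the 'large data set'."""
    ps = small_primes(limit)
    return [(p * q, p, q) for p, q in iproduct(ps, ps) if p <= q]

def rule_accuracy(rule, data):
    """Fitness of one candidate rule (i, d, t): among products whose
    digit i is d, how often is the smaller factor's last digit t?"""
    i, d, t = rule
    hits = total = 0
    for n, p, q in data:
        s = str(n)
        if i < len(s) and int(s[i]) == d:
            total += 1
            hits += int(str(p)[-1]) == t
    return hits / total if total else 0.0

def winnow(data, n_rules=500, threshold=0.9, seed=1):
    """Generate random candidate rules and keep only those whose
    fitness on the dataset clears the threshold."""
    rng = random.Random(seed)
    survivors = []
    for _ in range(n_rules):
        rule = (rng.randrange(4), rng.randrange(10), rng.randrange(10))
        if rule_accuracy(rule, data) >= threshold:
            survivors.append(rule)
    return survivors
```

In the real system the candidates would be short programs in the custom assembly rather than triples, and the survivors would be validated against a held-out set of semiprimes before being trusted.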
I wouldn't have pursued this except that when I realized the production of semiprimes *must* be non-Eulerian (long story), it occurred to me that there must be rich internal representations mapping products to factors that we were simply missing.
I'll go into more details in a later post, maybe not today, because I'm working till close tonight (won't be back till 3 am), but after 4 1/2 years the work is bearing fruit.
Also, its good to see you all again. I fucking missed you guys.
rant
where i've been
work
math