POSTMORTEM
"4096 bit ~ 96 hours is what he said.
IDK why, but when he took the challenge, he posted that it'd take 36 hours"
As @cbsa wrote. And @nitwhiz wrote: "but the statement was that op's i3 did it in 11 hours. So there must be a result already, which can be verified?"
I added time because I was in the middle of a port involving ArbFloat so I could get arbitrary precision. I had a crude Desmos graph doing projections on what I'd already factored, in order to get an idea of how long it'd take to do larger bit lengths.
@p100sch speculated that the time was walked back and the rig's capabilities overstated. Instead, I'd spent a lot of time trying to get it 'just-so'.
Worse, because I had to resort to "Decimal" in Python (and am currently experimenting with the same in Julia), both of which are immutable types, the GC was taking >25% of the CPU time.
Performance-wise, the numbers I cited in the actual thread, as of this time:
The largest product factored had 32-bit factors, 1855526741 * 2163967087, and took 1116.111s in Python.
The Julia build used a slightly different method, and managed to factor a product of 27-bit factors, 103147223 * 88789957, in 20.9s, but this wasn't typical.
What surprised me was the variability. One bit length could take hundreds or even a couple thousand seconds, while a product that was 1-2 bits longer could return a result in under a minute, sometimes in seconds.
This started cropping up, ironically, right after I posted the thread. What's a man to do?
So I started trying a bunch of things, some of which worked. Shameless as I am, I accepted the challenge. Things weren't perfect, but it was going well enough. At that point I hadn't slept in ~30 hours, so when I thought I had it, I let it run and went to bed. 5 AM comes, I check the program. Still calculating, and way overshot. Fuuuuuuccc...
So here we are now, and it's safe to say the world's not gonna burn if I explain it, seeing as it doesn't work, or at least only works some of the time.
Other people, much smarter than me, mentioned it may be a means of finding more secure pairs, and maybe so; I'm not familiar enough to know.
For everyone who followed and commented, those who contributed, even the doubters who kept a sanity check on this, without whom this would have been an even bigger embarrassment, and the people with their pins and tactical dots: thanks.
So here it is.
A few assumptions first.
Assuming p = the product,
a = some prime,
b = another prime,
and r = a/b (where a is smaller than b)
w = 1/sqrt(p)
(also experimented with w = 1/sqrt(p)*2, but I kept overshooting by a very small margin)
x = a/p
y = b/p
1. For every two numbers, there is a ratio (r) that you can search for among the decimals, starting at 1.0 and counting down. You can use this to find the original factors, e.g. p*r = n, p/n = m (assuming the product has only two factors), instead of having to do a sieve.
2. You don't need the first number you find to be the precise value of a factor (we're doing floating point math); a large subset of decimal values will naturally 'fall' into the value of a (or b) plus some fractional part, which is lost to rounding. Some of you will object, "But if that's wrong, your result will be wrong!" but hear me out.
3. You round the first factor 'found', and from there you take the result and do p/a to get b. If 'a' is actually a factor of p, then mod(b, 1) == 0, and then naturally a*b SHOULD equal p.
If not, you throw out both numbers, rinse and repeat.
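That check, as a minimal sketch (the function name and the integer divmod are my choices, not from the thread):

def check_candidate(p, a_approx):
    # Round the floating-point guess to the nearest integer and test it.
    # divmod is the integer equivalent of checking mod(b, 1) == 0 on p / a.
    a = round(a_approx)
    if a <= 1:
        return None
    b, rem = divmod(p, a)
    if rem == 0 and a * b == p:
        return (a, b)
    return None

# e.g. check_candidate(35, 5.2) rounds 5.2 to 5 and returns (5, 7)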
Now I knew this could be faster. I realized the finer the representation, the less important the fractional digits further right in the number were; it was just a matter of how much precision I could AFFORD to lose and still get an accurate result for r*p = a.
Fast forward through a lot of experimentation. I was hitting a lot of worst-case time complexities, where the most significant digits had a bunch of zeroes in front of them, so starting at 1.0 was a no-go in many situations. Then I looked again and realized I didn't NEED the ratio of a/b, I just needed the ratio of a to p.
Intuitively it made sense, but starting at 1.0 was blowing up the calculation time, and this made it so much worse.
I realized that if I could start at r = 1/sqrt(p) instead, then because of certain properties, this r would ALWAYS be 1. close to one of the factors' fractional ratios n/p, and 2. seemingly guaranteed to be less than at least one of the primes' ratios, putting a bound on the worst case.
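Why the square-root start is safe (my gloss, not from the original thread): if p = a*b with a <= b, then a*a <= a*b = p, so

    a <= sqrt(p) <= b

and dividing through by p gives

    a/p <= 1/sqrt(p) <= b/p

so w = 1/sqrt(p) always lands between the two factor ratios, which is what bounds the search.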
The final result in executable pseudocode (python lol) looks something like this:
from decimal import Decimal, getcontext

getcontext().prec = 50                      # arbitrary precision; plain floats run out of digits

p = Decimal(103147223) * Decimal(88789957)  # the 27-bit example from above
w = 1 / p.sqrt()                            # starting ratio, w = 1/sqrt(p)
i = 1 / p                                   # step; anything bigger than 1/p can skip right past a factor

while w <= 1:                               # b/p < 1, so this bounds the search
    x = round(w * p)                        # candidate for a
    if x > 1 and p % x == 0:                # same test as (p / round(w*p)) % 1 == 0
        y = p // x                          # b = p / a, exactly
        if x * y == p:
            print("factors found!")
            print(x)
            print(y)
            break
    w += i
Still working on it, but if anyone sees obvious problems I'd LOVE to hear about it.
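One note on the runtime (my own observation, not from the thread): with step size i, the loop above does roughly (b/p - 1/sqrt(p)) / i iterations before it reaches the larger factor's ratio, which works out to about b - sqrt(p) checks when i = 1/p. That gap depends on how far apart the two primes are, not just on bit length, which would explain the wild variability: balanced semiprimes return almost immediately, lopsided ones crawl.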
!dev but definitely rant
Here's a fucking thought:
How is holding women to different standards at events and (non-physical) competitions (hackathons especially, somehow) NOT widely considered sexist? I don't even mean towards men - yes, of course it is.
But also towards women: by preferring their results in some competitions in order to "support them", you implicitly reduce them to small children in need of praise. You imply that you expect them to perform worse. With "women-first" PR bullshit, you do exactly what you claim to be against. Fuck you.
Why can't we just hold everyone to the same fucking standards? Women can be just as good in tech as men, when interested. I would even make the point that these different standards hold women back from trying to get into any tech-related career.
math be like:
"Addition (often signified by the plus symbol "+") is one of the four basic operations of arithmetic; the others are subtraction, multiplication and division. The addition of two whole numbers is the total amount of those values combined. For example, in the adjacent picture, there is a combination of three apples and two apples together, making a total of five apples. This observation is equivalent to the mathematical expression "3 + 2 = 5" i.e., "3 add 2 is equal to 5".
Besides counting items, addition can also be defined on other types of numbers, such as integers, real numbers and complex numbers. This is part of arithmetic, a branch of mathematics. In algebra, another area of mathematics, addition can be performed on abstract objects such as vectors and matrices.
Addition has several important properties. It is commutative, meaning that order does not matter, and it is associative, meaning that when one adds more than two numbers, the order in which addition is performed does not matter (see Summation). Repeated addition of 1 is the same as counting; addition of 0 does not change a number. Addition also obeys predictable rules concerning related operations such as subtraction and multiplication.
Performing addition is one of the simplest numerical tasks. Addition of very small numbers is accessible to toddlers; the most basic task, 1 + 1, can be performed by infants as young as five months and even some members of other animal species. In primary education, students are taught to add numbers in the decimal system, starting with single digits and progressively tackling more difficult problems. Mechanical aids range from the ancient abacus to the modern computer, where research on the most efficient implementations of addition continues to this day."
And you think, like... easy. But then you turn the page:
So while exploring some new ideas, I decided to figure out if I could use variables in the known set to determine the bounds of variables in the unknown set.
The variables in question are algebraic identities derived from the semiprimes, so you already know where this is going.
The existing known set is 1194 identities.
And there are, if I recall, roughly two dozen unknowns.
Many knowns have the unknowns as their factors. The d4 product set for example is composed of variables d4a, d4u, d4z, d4z9, d4z4, d4alpha, d4theta, d4omega, etc.
The component variables themselves are unknown, just their products are known. Anyway.
What I've found interesting: if you know the minimum of some of these subsets, for example that d4z is the smallest of the d4's for some semiprime, then you know the upper bound of both of its component variables, d4 and z.
Unless of course either of them is < 1.
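(The reasoning, spelled out as my gloss: if d4 * z equals some known value v and both components are >= 1, then each component is at most v, since d4 = v/z <= v whenever z >= 1, and vice versa. A component below 1 breaks that, hence the caveat.)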
So the order of these variables, based on value, changes depending on the properties of the semiprime, which I won't get into. Most of the time the order change is minor, but some variables can vary a lot between semiprimes, rapidly shifting their rank in the known set. This makes it hard to do anything with them.
And what I found myself asking, over and over again, was whether there was a way to lock them down. Think of it like a giant switchboard, where flipping one switch lights up N other lights, apparently at random. But flipping some other switch completely alters how that first switch works and which lights it seemingly interacts with. And you have a board of them that's 1194^2 in total. So what do you do?
I'd had a similar notion a while back, where I would measure relative value in the known set, among a bunch of variables, assign a letter if the conditions were present, and generate a string, called a "haplotype."
It was haphazard, and I wrote a lot of code to do filtering, sorting, and set manipulation to find sets of elements in common, unique elements, etc. But the 'type' strings, a jumble of random letters, were only useful, say, forty percent of the time. For example, if a semiprime had a particular type starting with a certain series of letters, then ~40% of the time a certain known variable was guaranteed to be above a certain variable from the unknown set.
It was a lost cause, it seemed.
But I returned to the idea recently and revamped the entire notion.
Instead, I approached it from a more complete angle.
I'd take two known variables, J and K; one would be called the indicator, and the other the 'target'.
Two other variables would be the 'component' variable (an element taken from the unknown set) and the 'constraint' variable (from either the known or unknown set).
The idea was that relationships between the KNOWN variables (an indicator and a target) could indicate the rank relationship between the unknown component variable and the constraint variable.
You'd think this wouldn't work either, but my intuition was that there were so many seemingly 'random' rank changes in the known set for any two semiprimes that 1. no two semiprimes ever shared the same order for every variable, and 2. the order of the known variables had to be leaking information about the relationships of the unknown variables.
It turns out my intuition was correct.
Imagine you are picking a lock, and by knowing the order and position of the first two pins, you are able to deduce the relative position of two pins further back that you can't reach because of the locks security features. It doesn't let you unlock the lock directly, but by knowing this, if you can get past the lock's security features, you have a chance of using information about the third pin to get a better, if incomplete, understanding about the boundary position of the last pin.
I would initiate a big scoring list, one for each known element or identity. And then I would check it in tandem like so:
if component > constraint and indicator > target:
    scores[j] += 1   # j indexes the indicator's entry in the scoring list
This is a simplification, but the idea was to score ALL such combinations of relationships: whether the indicator was greater than the target at the same time as a component was greater than a constraint, or the opposite.
This worked out to four if checks and four separate score lists.
And by subtracting one score list from another, I could check for variables that were a bad fit: they'd have equal probability of scoring either way, greater than the target for one semiprime and lesser for another.
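A hedged sketch of that tally (all the names here are my stand-ins, not the actual identities from the known set):

from collections import defaultdict

# Four separate score lists, one per relationship combination,
# keyed by which indicator/target pair j is being scored.
score_gt_gt = defaultdict(int)  # component > constraint, indicator > target
score_gt_lt = defaultdict(int)  # component > constraint, indicator < target
score_lt_gt = defaultdict(int)  # component < constraint, indicator > target
score_lt_lt = defaultdict(int)  # component < constraint, indicator < target

def tally(j, component, constraint, indicator, target):
    # One semiprime's contribution: four if checks, four score lists.
    if component > constraint and indicator > target:
        score_gt_gt[j] += 1
    if component > constraint and indicator < target:
        score_gt_lt[j] += 1
    if component < constraint and indicator > target:
        score_lt_gt[j] += 1
    if component < constraint and indicator < target:
        score_lt_lt[j] += 1

def fit(j):
    # Near zero means the indicator flips sides between semiprimes
    # (a bad fit); a large magnitude means it correlates strongly.
    return score_gt_gt[j] - score_gt_lt[j]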
So for any given relationship, greater or lesser, between any unknown variable and constraint variable, I could find an indicator variable and target variable whose relationship strongly correlated to the unknown's.
I feel so lost every time I think about the future. How are you all moving forward?
- What should I be doing? I used to like computer science when it was taught with lots of simplification and abstraction (at the school level). Now I know there are 100+ research areas/work areas/branches in it, and I am average in all of them.
I like most of them more or less, and wouldn't mind giving away years of my life working on/learning them. But for what, and why?
-- Money? Every profile turns into a decent salary after a certain time. This means I can ride any boat I want.
-- Passion/interest? Now what exactly is this? As I said, everything feels doable, given enough time to get the hang of it.
-- Fame? It's rare that developers, testers or other individuals in computer science ever get solo credit. Most of the time it goes to the CEOs, the researchers or the company itself. So I guess, for most people, getting famous is equal to burning your neighbors by flaunting your cash.
-- Happy life? Meh, this point is affected by a lot of other factors. I'll come back to it later.
- Every day in my feed there are people showing 6-, 7-, sometimes even 8-figure salaries. Other people get inspired by those, but I feel very weird about them.
I never see myself earning those amounts, idk why. Why would someone give me that huge amount of money?
How do you find yourself deserving of that big-ass money? At what point do you hit that realisation? Here is a small story:
I did an Android dev course around 2.5 years ago. There was a guy there a year older than me. He was very bad at this, I tell you; most of the time I was explaining the concepts to him after class. Last year he graduated and took a job. We both used to expect a decent salary, say x (with me having a little ego, expecting certainly more than him, say x+20%), but he took a job for half that number, say x/2.
After 1 raise and 1 job change in 1.5 years, he has now achieved a package greater than x. I, on the other hand, still in college and with a lot of bad internship experiences, now feel that I won't be getting even x/3 at the start, no matter what.
- There is also this thing about people moving into management and other non-tech roles once they start growing in this field. Why? What did they realize? I am sure not every one of them hit the realization that tech is not what they want to do (which I can't understand anyway). Maybe it's the money and/or the happy-life expectations?
I have started to feel dumb for not being able to think of innovative new ideas and for being an average mind :/
And about the happy life: so far there's not much happiness for me, and I am confused.
I am grateful for the usual things I have (healthy middle-class parents, working body, roof, food, etc.), unhappy about the things I don't have and see others with (more money, material assets, confidence, siblings, social life, love life, etc.), and that's it.
From what I've understood in 21 years on this earth, everyone is running to achieve that list of their desires, trying to move each item from todo to done like a Trello task. If you can't, then keep fighting to achieve it, or grudgingly accept the fact that you couldn't and be happy about it.
So is that it? Are those your happy-life goals?