Search - "decimal"
A new mathematical constant was discovered recently: Bruce's constant.
I took some code from the paper and adapted it in Python.
from math import log, e

def bruce(n):
    J = log(n, 1.333333333333333) / log(n, 2)
    K = log(n, 1.333333333333333) / log(n, 3)
    return ((J + K) - e) + 1
It gives e every time for ((J+K)-bruce)+1, regardless of the value of n.
bruce can always be approximated with the decimal 4.5, telling you how close n can be used to approximate e (usually to two digits).
Bruce's constant is equal to 4.5099806905005
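(Quick check of my own, not from the paper: since log(n, base) is ln(n)/ln(base), n cancels out of J and K entirely, so the value really is independent of n.)

    # my own sanity check: bruce(n) is the same for any n > 0, n != 1
    for n in (2, 10, 96, 1234567):
        print(bruce(n))   # ~4.5099806905005 every time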
It is named after that famous mathematician, Bruce Lee.
You'll start with four limbs and end up with two in a wheelchair!
I want to call out the absolute retard at Oracle who decided that file modes should be read from the envvar called UMASK and be in decimal by default.
They probably never cared about file permissions.
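For anyone who hasn't been bitten by this, here's a quick illustration of my own (not Oracle's actual code) of why parsing a mode string as decimal hurts:

    # my own illustration: "022" read as decimal vs octal
    value = "022"
    as_octal = int(value, 8)     # 18 == 0o22 -> new files come out rw-r--r--
    as_decimal = int(value, 10)  # 22 == 0o26 -> "other" silently loses read too
    print(oct(as_octal), oct(as_decimal))   # 0o22 0o26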
Up all damn night making the script work.
Wrote a non-sieve prime generator.
The thing kept outputting one or two numbers that weren't prime, related to something called Carmichael numbers.
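(Side note, my illustration and not the script itself: Carmichael numbers are composites that pass a plain Fermat check for every base coprime to them, which is exactly how they sneak past naive primality tests.)

    # illustration only, not the pastebin code: Carmichael numbers fool a Fermat-style check
    def fermat_check(n, base=2):
        return pow(base, n - 1, n) == 1   # True suggests "probably prime"

    for n in (561, 1105, 1729):           # all composite: 3*11*17, 5*13*17, 7*13*19
        print(n, fermat_check(n))         # each prints True anyway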
In any case I got it to work; god damn was it a slog though.
Generates next and previous primes pretty reliably regardless of the size of the number
(I haven't gone over 31 bits because I haven't had a chance to implement decimal for this).
Don't know if the sieve is the only reliable way to do it. This seems to do it without a hitch, and doesn't seem to use a lot of memory. I don't have to constantly return to a lookup table of small factors or their multiples either.
Technically it generates the primes out of the integers, and not the other way around.
It takes 0.01-0.02 seconds per prime up to around the 100 million mark, and then it gets into the 0.15-1 second range per generation.
At around primes of a couple billion, it's averaging about 1 second per bit to calculate 1. whether the number is prime or not, and 2. what the next or last immediate prime is. Although I'm sure there's some optimization or improvement to be had here.
Seems reliable but obviously I don't have the resources to check it beyond the first 20k primes I confirmed.
From what I can see it didn't drop any primes, and it didn't include any errant non-primes.
Code's here:
https://pastebin.com/raw/57j3mHsN
Your go-tos should be nextPrime(), lastPrime(), isPrime, genPrimes(up to but not including some N), and genNPrimes(), which generates x amount of primes for you.
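If you want something to compare outputs against, sympy has equivalents of most of these (my addition, nothing to do with the pastebin):

    # cross-checking the same operations with sympy
    from sympy import isprime, nextprime, prevprime, primerange
    print(isprime(104729))                 # True -- 104729 is the 10000th prime
    print(nextprime(100), prevprime(100))  # 101 97
    print(list(primerange(2, 20)))         # [2, 3, 5, 7, 11, 13, 17, 19]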
The speed limit definitely seems to top out at 1 second per bit for a prime once the code is in the billions, but I don't know if that's the ceiling, again, because decimal still needs to be implemented.
I think the core method, in calcY (terrible name, I know), could probably be optimized in some clever way if it's given an adjacent prime and the parameters that were used. There's probably some pattern I'm not seeing, but eh.
I'm also wondering if I can't use those fancy aberrations, Carmichael numbers or whatever the hell they are, to calculate some sort of offset, and by doing so figure out a given prime's index.
And all my brain says is "sleep"
But family wants me to hang out, and I have to go talk a manager at Home Depot into an interview, because wanting to program for a living and actually getting someone to give you the time of day are two different things.
About 4 years ago I worked in IT at a company that used a Windows desktop app with SQL Server 2008 (yep, that old) to manage their sales. The app was written in WPF, and it was good because it was customizable with reports.
One day the boss wanted to keep some extra data in the customer invoice, so they contacted the app developers to add it. They did, but in their own way: instead of modifying the app itself (even though it would have been a useful feature for the app and the other companies that use it), they just repurposed unused fields of the invoice to hold the data. One of the fields the boss was interested in was the currency rate, and I later verified in the DB that this rate was saved as a string.
The boss wasn't interested in reports yet; he just wanted to test it first and let time show what he would actually need in the reports, so at the end of the year they would contact the devs again to talk about them.
So the end of that year comes and the boss contacts the devs to talk about reports on the invoices using the currency rate. Until then the rate had just been printed on the invoice, nothing more; that's what the boss wanted and that's what the devs did. But when asked to do the reports, they said they couldn't, because the data was saved as a string in the DB o_O
Well, that was one of the most stupid excuses I've ever heard...
So I started digging into it and found out why... and the reason is that they were just lazy. In the end I did it myself, and it took some work. The main problem was that the rate was saved like this: 1,01. Here we use a comma as the decimal separator, but in SQL you must use the dot (.) as the decimal separator, like 1.01. There was also a problem with exact numbers: for example, if the rate was exactly 1, the field should contain just 1, but it was saved as 1,00. So it wasn't only replacing all the commas with dots, it was also deleting all the ,00 endings. With all that done I built the reports for my boss and everyone was happy.
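Roughly the kind of normalization it came down to, sketched here in Python just for illustration (the actual fix lived in the SQL for the reports):

    # illustration of the conversion idea, not the real report SQL
    from decimal import Decimal

    def parse_rate(raw):
        # "1,01" -> 1.01, "1,00" -> 1, "1" -> 1
        return Decimal(raw.replace(",", ".")).normalize()

    for raw in ("1,01", "1,00", "1"):
        print(raw, "->", parse_rate(raw))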
Some programmers just want to do easy things...
So I made a couple slight modifications to the formula in the previous post and got some pretty cool results.
The original post is here:
https://devrant.com/rants/5632235/...
The default transformation from p to the new product (call it p2) leads to *very* large products (even for products of the first 100 primes).
Take for example
a = 6229, b = 10477, p = a*b = 65261233
While the new product the formula generates has a factor tree that contains our factor (a), the product is huge.
How huge?
6489397687944607231601420206388875594346703505936926682969449167115933666916914363806993605...
So huge I put the whole number in a pastebin here:
https://pastebin.com/1bC5kqGH
Now, that number DOES contain our example factor 6229. I demonstrated that in the prior post.
But first, it's huge, 2972 digits long, and second, many of its factors are huge too.
Right from the get-go I had a hunch, and did (p2 mod p), and the result was surprisingly small, much closer to the original product. Then, just to see what would happen, I subtracted this result from the original product.
The modification looks like this:
(p-(((abs(((((p)-(9**i)-9)+1))-((((9**i)-(p)-9)-2)))-p+1)-p)%p))
The result is '49856916'
That's within the ballpark of our original product.
And then I factored it.
1, 2, 3, 4, 6, 12, 23, 29, 46, 58, 69, 87, 92, 116, 138, 174, 276, 348, 667, 1334, 2001, 2668, 4002, 6229, 8004, 12458, 18687, 24916, 37374, 74748, 143267, 180641, 286534, 361282, 429801, 541923, 573068, 722564, 859602, 1083846, 1719204, 2167692, 4154743, 8309486, 12464229, 16618972, 24928458, 49856916
Well damn. It's not a-smooth or b-smooth (where 'smoothness' is defined as 'all factors are beneath some number n'), but this is far more approachable than just factoring the original product.
It still requires a value of i equal to
i = floor(a/2)
But the results are actually factorable now if this works for other products.
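For anyone who wants to reproduce the worked example above, here it is in one place. The sympy divisors call is my addition for checking the factor list; the rest is just the formula as written, with i = floor(a/2):

    # reproduces the example above; sympy.divisors is only there to inspect the factors
    from sympy import divisors

    a, b = 6229, 10477
    p = a * b    # 65261233
    i = a // 2   # i = floor(a/2)

    result = (p-(((abs(((((p)-(9**i)-9)+1))-((((9**i)-(p)-9)-2)))-p+1)-p)%p))

    print(result)                  # 49856916 in the post
    print(a in divisors(result))   # the example factor 6229 should show up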
I rewrote the script and tested on a couple million products and added decimal support, and I'm happy to report it works.
Script is posted here if you want to test it yourself:
https://pastebin.com/RNu1iiQ8
What I'll do next is probably add some basic factorization of trivial primes
(say the first 100), and then figure out the average number of factors in each derived product.
I'm also still working on getting to values of i < a/2, but only having sporadic success.
It also means *very* large numbers (either a subset of them or universally) with *lots* of factors may be reducible to unique products with just two non-trivial factors, but that's a big question mark for now.
@scor if you want to take a look.
What's your idea of a perfect number literal?
Mine is:
default decimal, prefix for binary, octal, hex
Integers in natural notation with strictly positive exponent
All bases allow natural notation, where the exponent is always a decimal number and represents the power of the base (0b101e3 = 0b101000)
Floats in all bases allow a combination of natural notation and dot notation
Underscores allowed anywhere except the beginning and end for easier reading
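For comparison, and purely as an aside of mine, this is roughly where Python's literals already land against that wishlist:

    # what Python already covers: prefixes, underscores, decimal-only exponents
    print(0b1011, 0o17, 0x1F)       # binary / octal / hex prefixes; bare digits are decimal
    print(1_000_000, 0b1010_1010)   # underscores between digits for readability
    print(1e3)                      # exponent notation, but only for decimal floats;
                                    # something like 0b101e3 (== 0b101000 above) isn't valid Python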