Search - "ieee-754"
What grinds my gears:
IEEE-754
This, to me, seems retarded.
Take the value 0.931 for example.
It's represented in binary (as an IEEE-754 single-precision float) as
00111111011011100101011000000100
See those last three bits? They cause it to
come out in decimal like so:
0.93099999~
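You can check this yourself; here's a minimal sketch in Python (assuming only the standard struct and decimal modules) that rounds 0.931 to a float32 and prints both the bit pattern and the exact stored value:

import struct
from decimal import Decimal

x = 0.931
packed = struct.pack('>f', x)                    # round 0.931 to the nearest single-precision float
bits = struct.unpack('>I', packed)[0]            # reinterpret the same 4 bytes as an unsigned int
print(f'{bits:032b}')                            # 00111111011011100101011000000100
print(Decimal(struct.unpack('>f', packed)[0]))   # 0.9309999942779541015625 -- the exact stored value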
That value, because banker's rounding is now standard, actually works out to 0.930, since banker's rounding rounds ties to the nearest even digit. Makes sense? No. Did anyone ask for it? No (well, maybe the banks). Was it even necessary? Fuck no. But did we get it anyway?
Yes.
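For context, here's a tiny sketch of what round-half-to-even (banker's rounding) does. Python's built-in round() follows the same rule, so it works as an illustration:

print(round(0.5))   # 0 -- a tie rounds to the nearest even integer
print(round(1.5))   # 2
print(round(2.5))   # 2 -- not 3; the tie again goes to the even neighbour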
And worse, that's not even the most accurate way to represent our value of 0.931, owing to how fucked up rounding now is, because everything has to be pure shit these days.
A better representation would be
00111101101111101010101100110111 <- good
00111111011011100101011000000100 <- shit
The new representation works out to
0.093100004
or 0.093100003898143768310546875 when represented internally.
What's this mean? You don't lose accuracy to rounding anymore.
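If you want to compare the two bit patterns directly, here's a minimal sketch (Python, struct again) that decodes each 32-bit string as an IEEE-754 single and prints what it actually stores:

import struct

def decode(bit_string):
    # interpret a 32-character bit string as an IEEE-754 single-precision float
    return struct.unpack('>f', int(bit_string, 2).to_bytes(4, 'big'))[0]

print(decode('00111111011011100101011000000100'))   # ~0.9309999942779541
print(decode('00111101101111101010101100110111'))   # ~0.09310000389814377

Note that the second pattern decodes to a value near 0.0931, a factor of ten below 0.931, which the exact value quoted above already shows.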
Am I mistaken, or is IEEE-754 shit?
-
FUCK IEEE 754.
I always thought JavaScript's problem with floating-point numbers was just a good anecdote, one that couldn't have serious consequences in real-life programming.
Until I got stuck for half an hour on a bug just because (2.8 - 0.8) % 2 doesn't come out to 0! FUCK, why don't we switch to a decent encoding for numbers? Fuck them and fuck all programming languages like this.
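The behaviour is easy to reproduce outside JavaScript too; here's a quick Python sketch (doubles behave identically here), plus the usual workaround of switching to a decimal type when you really need exact base-10 arithmetic:

from decimal import Decimal

print(2.8 - 0.8)                               # 1.9999999999999998, not 2.0
print((2.8 - 0.8) % 2)                         # 1.9999999999999998, not 0
print((Decimal('2.8') - Decimal('0.8')) % 2)   # 0.0 -- exact decimal arithmetic

-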
I'm at uni learning about floating-point numbers and IEEE 754, and it's so different from what I learnt at A-Level. It seems that storing numbers as two's-complement floating point would be more efficient than IEEE 754, since IEEE 754 uses sign and magnitude. So why do we use IEEE 754?
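For what it's worth, here's a small sketch (Python, struct) that pulls a single-precision float apart into its sign, biased-exponent and mantissa fields, and shows one commonly cited upside of the sign-and-magnitude layout: for non-negative floats the raw bit patterns sort in the same order as the values themselves.

import struct

def float32_fields(x):
    # split a float32 into its sign bit, 8-bit biased exponent, and 23-bit mantissa
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

print(float32_fields(0.931))    # (0, 126, 7230980)
print(float32_fields(-0.931))   # (1, 126, 7230980) -- only the sign bit changes

def raw_bits(x):
    return struct.unpack('>I', struct.pack('>f', x))[0]

print(raw_bits(0.931) < raw_bits(0.932))   # True -- positive floats compare like unsigned ints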