monkeyboy (1260 · 54d): Software development is messed up because software developers are human.
C0D4 (47772 · 54d): No language is perfect, because if it were, devs would stop creating new ones.
Same with frameworks, libraries, and any other level you want to talk about.
yellow-dog (46168 · 54d): The difference between most of your examples and JS frameworks is that JS has n frameworks that solve the exact same problem the exact same way, with slightly different APIs.
The same exists in C# and others.
JS might have a few more, but just pick one you like, and if you don't find one, ... just build your own perfect one :D
netikras (19807 · 53d): Yeah, I agree with you that all languages have a lot of frameworks each. But I don't think 0.1 + 0.2 is something a framework miscalculates. It's the language itself that acts unintuitively. And the primary purpose of any programming language is to be as easily understandable [readable/writable] as possible. Read: intuitive. Languages and frameworks are not invented for machines. They are created for people. Machines can easily work in binary. People can't, hence the need for human-readable/writable languages.
So when all your life you are taught that 0.1 + 0.2 is 0.3 and some language claims it's not, it's either broken or poorly designed.
hitko (1037 · 53d): @netikras Also, the point isn't floating-point maths itself; the point is that floating-point maths makes perfect sense once you understand that a number with finitely many digits in one numerical system isn't guaranteed a finite representation in another. Complaining about such quirks more often than not just shows a lack of basic knowledge and an inability to understand the reasoning behind it.
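To illustrate that point (a minimal sketch, not part of the thread; C# is used to match the follow-up below, and the printed output assumes .NET Core 3.0+, which formats doubles with full round-trip precision):

```csharp
using System;

class FloatQuirk
{
    static void Main()
    {
        // 0.1 and 0.2 are repeating fractions in binary, so each is stored
        // as the nearest representable double; the two rounding errors add
        // up to a value slightly above 0.3.
        double sum = 0.1 + 0.2;

        Console.WriteLine(sum == 0.3); // False
        Console.WriteLine(sum);        // 0.30000000000000004 on .NET Core 3.0+
                                       // (.NET Framework rounded output to 15
                                       // significant digits, so it showed 0.3)

        // The standard workaround: compare within a tolerance, not exactly.
        Console.WriteLine(Math.Abs(sum - 0.3) < 1e-9); // True
    }
}
```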
Voxera (7029 · 52d): @hitko good point.
While in C# that specific example prints the right answer, a more complex calculation can produce the same error in C#.
The problem there is not the language but the IEEE 754 specification of float and double, which prioritizes speed over correctness.
In C#, using decimal is the best way to avoid it, but performance is worse, which does affect high-volume calculations like science or AI.
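A short sketch of that trade-off (illustrative only, not from the thread; the timing loop is unscientific and the actual gap depends on hardware and runtime, but decimal arithmetic runs in software rather than on the FPU, so it is typically much slower than double):

```csharp
using System;
using System.Diagnostics;

class DecimalVsDouble
{
    static void Main()
    {
        // decimal is a base-10 floating type, so 0.1m, 0.2m and 0.3m are
        // all represented exactly and the comparison holds.
        decimal d = 0.1m + 0.2m;
        Console.WriteLine(d == 0.3m); // True

        // Rough timing of the performance cost Voxera mentions.
        const int n = 10_000_000;

        var sw = Stopwatch.StartNew();
        double dbl = 0;
        for (int i = 0; i < n; i++) dbl += 0.1;
        sw.Stop();
        Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        decimal dec = 0;
        for (int i = 0; i < n; i++) dec += 0.1m;
        sw.Stop();
        Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms");
    }
}
```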