Details
-
About: Have been programming since 1980.
-
Skills: C#, TypeScript, React, JavaScript, SQL (PHP, Turbo Pascal, Visual Basic, GW-BASIC, Bash, C, ...)
-
Location: Sweden
Joined devRant on 6/8/2016
-
Some further numbers for thought
It's estimated that about 700 million people got covid and 7 million of them died from it; that is 1%.
Around 7 billion people received at least one dose of a covid vaccine. While I did not find any global numbers, for the EU they list around 11 000 cases where complications around the time of vaccination caused death, but it is not clear whether the complications are directly related to the vaccine.
But if we assume they are and extrapolate from that, we are looking at maybe a few hundred thousand deaths.
That would mean less than a 0.05% risk of dying from the vaccine.
So 1% risk vs 0.05% risk.
I would go for the second every time. -
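The arithmetic above is easy to sanity-check in a couple of lines. The 200 000 figure below is an assumption standing in for the comment's "few hundred thousand" extrapolation, not a measured statistic.

```shell
# Rough risk comparison using the figures above.
# 7 million deaths out of ~700 million infections:
covid_risk=$(awk 'BEGIN { printf "%.2f", 7000000 / 700000000 * 100 }')
# Assumed ~200 000 vaccine-linked deaths out of ~7 billion vaccinated
# (an extrapolation, not a verified number):
vaccine_risk=$(awk 'BEGIN { printf "%.4f", 200000 / 7000000000 * 100 }')
echo "covid: ${covid_risk}%  vaccine: ${vaccine_risk}%"
```

Even with that generous assumption, the vaccine figure lands well under the 0.05% bound quoted above.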
@kanyewest If you had kept up on the subject you would have known that yes, the new mRNA vaccines CAN be tailored to a new virus in a matter of weeks. The underlying breakthrough is not actually new; they have been working on the technology for over 15 years, but since classical vaccines were good enough, no one wanted to spend the millions it would cost to actually start manufacturing with the new tech.
Then came covid, a new virus for which we had no working vaccine to manufacture, and the scientists behind the mRNA tech managed to prove that their tech could produce a working vaccine in just 6 weeks. It was still not approved, so it went through months of testing, and they had to build the manufacturing facilities for mass production before they could start vaccinating people, which is why it took 8 months before it became readily available.
Next time a completely new virus shows up they might start distributing a vaccine within just a few months, before it gains global spread, and most of the world might not even have to vaccinate since it gets stopped in time.
If you are going to criticize science you really should make sure to read up on both sides of the story, because your ignorance shows that you only consume the news that feeds your misconceptions. -
@kanyewest no, they were not vaccinated, since the vaccines had not yet started to be distributed.
-
@kanyewest I had three friends die from covid.
And so far, despite the massive vaccination effort, not a single one has died from the vaccine.
So I am all for vaccines; they save far more lives than they harm. -
Measles can cause permanent disabilities in children even if they survive. Vaccines prevent that.
Before vaccines, measles was one of the most common causes of disabilities.
Refusing vaccines is child cruelty. -
@pandasama in other words, your higher-ups have already promised a deadline to their higher-ups and do not dare admit they were wrong :/
-
@netikras if the higher-ups set the expectations, there is not much you can do; lowering expectations in that case is not an option.
Sounds very much like unrealistic expectations and not enough resources. -
New goal. Beat that time :D
-
@kobenz well, it does sort of have coding …
-
@c3r38r170 exactly, you lend it because you expect something in return, even if that is just continued friendship.
On the other hand, a friend who only stays a friend because they can borrow money is not a friend but an employee ;) -
A mainframe, as already mentioned, is optimized for I/O and storage, but it also contains lots of specialized chips for encryption, zip/unzip and other common processing tasks.
They are also designed around redundancy and uptime to make sure things keep running.
You can swap out almost anything at runtime, even hardware like CPU and memory, by shifting running apps to other cores.
This is also why they are still in use in many critical implementations.
You can do the same in the cloud or with your own clusters, BUT you would need to build it into your applications.
With a mainframe the application just keeps running. -
@alemantrix I never interviewed at Google, but I did at Amazon, and my take is that they actually care less about your language than about how you approach the problem. At least with Amazon, how you reason about the problem matters, since they were there in the room asking questions about why I did things or whether I had thought about everything.
I did whiteboard coding, not really writing real code but just the concepts of how to solve the task.
At least in the real interview.
The first step was an online coding test, similar to Codewars, where you submitted something that had to compile and solve a problem, but you had 3 or 4 languages to choose from.
I think their reasoning is that if you can solve problems, they can always teach you a new language, but teaching someone how to think right is way harder ;) -
@atheist well, it is only ever going to be as fast as the compiler can make it, just like C++ or Rust, and C is designed to be platform agnostic, meaning there have to be some tradeoffs for more generic constructs.
The compiler will attempt to use the most optimized instructions but it can only do so much.
The closer to assembly you are the more you risk that your code limits what rewrites the compiler can make.
That is one of the ideas behind Rust, after safety: using more high-level constructs to express intent allows the compiler more freedom to use faster constructs.
And the bigger the code base and the more developers involved, the more benefit you will get from better compilers.
Sure, the very best dev can beat the compiler, but realistically most companies will not have devs that outperform the compiler anyway. -
@jeeper I know
-
@Lensflare are you referring to Golang or coding in general ;)
-
@max19931 this is, as others have said, already happening: multiple cores, specialized cores, new algorithms using said cores. Going back to C++ for speed is not the only option; if it were, why C++ and not straight to assembly or machine code?
The answer is development cost and complexity.
More modern languages use clever language constructs to express the intent and then allow the compiler to optimize and use specialized chipsets or parallelism.
Yes, you can do that manually, but it adds complexity, which adds cost, and most software does not motivate that added cost, especially since few algorithms need faster single-thread performance; we instead do more different things or process more data that can be done in parallel.
So adding cores works. -
@hjk101 Ada is probably used within the US government and gets special treatment ;)
-
@netikras I imagined your comment in a different light :D
You accidentally kick the cat off the bed and it decides to retaliate :P -
@Lensflare My guess is that Kotlin is not big enough to make the list
-
@atheist Yes, there are ample tools in modern C++ for safer code, BUT you can just as easily use unsafe constructs if you do not know the difference.
With for example C#, you have to explicitly say you want to use unsafe code.
As for speed, 98% of all code does not need to focus that much on speed, and in many cases where you do need it, you can still write those parts in whatever language you need and the rest in a safer one. -
Until the LLMs themselves manage to convince me they are in fact sentient, I will keep sorting them into the same nonsentient category as phone sellers, politicians and rocks :P
-
@b2plane That's the biggest problem with any “investment”: unless you can afford to wait for profit, it is just gambling.
In that sense it is the same with stocks: you never invest money you cannot afford to lose.
And the difference between crypto and stocks is that companies have products and report a lot; if you spend time going through that, you can make a very good guess whether the company is solid and will grow or not.
The closest thing in stocks to crypto is short selling, where you sell or buy stock now for later delivery and hope that by the time the transaction goes through, the real price will have changed to your advantage. But that is also a good way to quickly lose money. -
@b2plane investing in crypto is in many regards similar to investing in lottery tickets, you gamble.
-
Never, ever change production unless you have time to verify before the weekend.
With the exception of when something is already broken. -
Too many.
There was a webshop I wanted to shop from that restricted password length to 8 (yes, eight) characters.
The signup form allowed more characters, and the login form always responded with "bad login" for any password longer than 8, so figuring out why login failed took two separate contacts with customer service.
That's next-level stupidity: not only a bad password policy but a broken implementation that makes it even worse to use -
@MammaNeedHummus Most merges are single commits, and if no one else messed with the same file, a plain merge is enough.
But whenever a merge has more than one commit, I think a quick rebase not only makes sure any merge conflicts are handled locally in the branch and not during the actual merge, reducing the risk of an error leaking into some other branch.
It also makes sure that if we do need to pull the merge, we do not have all the commits in one place.
It's rare, yes, but rebasing is also usually very fast.
The only time it takes more time is when there are merge conflicts, and that's also where it is most useful, so it's an easy guard I at least feel is worth the time.
Squashing is not enforced but rather up to the dev, but if they do squash, I always ask them to keep any commits with relevant messages.
That also helps in case someone else gets a merge conflict.
Especially as the blame feature makes it very clear why a certain line was changed and when.
But as long as the team finds a solution they feel works, I think that is fine. -
@MammaNeedHummus I rarely squash commits; I do rebase so all commits are in order.
Squash is mainly used to remove nonsensical commits, like fixes that are found before merging and serve no purpose in the main log, or when you need to pause a branch for some reason and then resume. -
I prefer rebase before merge since, in the case of a merge conflict, it is easier to solve, and if you change logic, getting changes intermixed can make it harder to follow.
Most times it does not matter, no, but on the occasions it does, having all commits fresh at the end helps.
If you merge often the problem is much reduced, and the same goes for a small team. But with 10 devs that could end up poking around in the same file, it can be a mess (we are refactoring to reduce the risk of multiple changes to the same file, but in the process the risk is higher until we are done)
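The rebase-before-merge flow described above can be sketched end to end in a throwaway repo. Branch and file names here are made up for the demo.

```shell
# Self-contained demo: feature commits are rebased onto main so any
# conflicts would be resolved locally, then main fast-forwards cleanly.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
echo base > base.txt && git add base.txt && git commit -qm "base"
git switch -qc feature
echo work > feature.txt && git add feature.txt && git commit -qm "feature work"
git switch -q main
echo other > other.txt && git add other.txt && git commit -qm "other work on main"
git switch -q feature
git rebase -q main                # replay feature commits on top of main
git switch -q main
git merge --ff-only -q feature    # linear history, commits stay in order
git log --oneline                 # feature commit now sits on top
```

Because the feature branch was rebased first, `--ff-only` succeeds and no merge commit is created; any conflict would have surfaced during the rebase, inside the branch.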
@jestdotty that is easily solved: just go overboard with explanations; if they only wanted to get you to do the work, they will find someone else to pester after a while ;)
-
Do you have any resources created?
Also, there should be a page listing what is causing the charges.