Comments
-
Refactoring for 5% performance increase? Fuck, no!
I love refactoring and I'd do it for the sake of refactoring/better code, not for a theoretical tiny bit of better performance. -
@Lensflare it's for my playground/pet-project anyway, it exists to be re-written whenever I find something interesting lol
-
I don't trust it. The VM is highly optimized at this point, whereas native compilation might SUCK.
Would need to see some real-life performance stats and not just cherry-picked benchmarks before I even begin to think about it. -
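For what it's worth, a rough way to get such stats is to time the same workload in a normal publish and an AOT publish of the same program and compare; a minimal sketch, with a placeholder loop standing in for real application code:

    using System;
    using System.Diagnostics;

    class PerfProbe
    {
        // Build and publish this twice -- once as a regular JIT deployment and once
        // with Native AOT -- then compare the numbers on a realistic workload.
        static void Main()
        {
            var sw = Stopwatch.StartNew();
            long checksum = 0;
            for (int i = 0; i < 10_000_000; i++)
            {
                checksum += i % 7; // placeholder workload; substitute real code paths
            }
            sw.Stop();
            Console.WriteLine($"Workload took {sw.ElapsedMilliseconds} ms (checksum {checksum})");
        }
    }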
I know you're joking and exaggerating, but if someone on my team said they wanted to refactor the codebase for a theoretical 5% COMPILATION perf boost (presumably this has zero effect on the user), I would've forced them to measure how much time they actually spend waiting for compilation 🤣
Besides - if this is a new feature that may be baked into dotnet core - is it really worth doing any refactoring to have it 1.5 years before everyone else? -
The biggest pro of this IMHO is reduced application size, especially when combined with trimming, which only matters if you are under very strict storage requirements.
-
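For context, enabling Native AOT (which implies trimming) is mostly a publish-time setting; a minimal sketch, where the project file contents and the runtime identifier are just examples:

    <!-- in the .csproj -->
    <PropertyGroup>
      <PublishAot>true</PublishAot>                         <!-- compile to a self-contained native binary at publish time -->
      <InvariantGlobalization>true</InvariantGlobalization> <!-- optional: drops ICU data for a smaller output -->
    </PropertyGroup>

    # publish for a concrete target, e.g.:
    dotnet publish -c Release -r linux-x64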
@tosensei @AlgoRythm My assumption was that if we're getting rid of the VM and using direct memory access (pre-compiled for a specific instruction set, nothing on-the-fly), we'd be relying much less on GC (would GC even exist then..?), with fewer virtual <--> real transitions, and it'd give a massive performance boost, but the documentation https://learn.microsoft.com/en-in/... doesn't mention much about it
Doesn't it make it similar to "deployment with runtime assemblies included"? Like it ain't MASSIVE then -
@jiraTicket ofc not compilation lol my time isn't THAT precious yet :p
I assumed runtime benefits because we'd be removing the VM overhead entirely, and if you're deploying server-side (so you know the machine's instruction set), having native, non-VM-based backend code seems like it'd heavily reduce system load and improve response times -
@azuredivay Well, looking at it, I'm not so sure it really gets rid of the VM.
.Net code execution is pretty complicated these days. A little-known fact is that C# code gets compiled down to native under certain conditions when running on the client machine. It looks like AoT (ahead of time) compilation does the same thing, but at compile time and not runtime.
Certainly what DOESN'T happen is that it gets compiled to a native program the way C or C++ code is. It still seems to include the runtime and all associated bits, but AoT-compiled as much as possible to reduce startup time and other costs (and the whole .NET ecosystem is compiled into that single binary) -
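One small way to observe that split at runtime (assuming .NET 7 or later with an AOT publish; the class name is made up): a Native AOT build reports that dynamic code generation isn't available, while a JIT-based deployment says it is.

    using System;
    using System.Runtime.CompilerServices;

    class JitOrAot
    {
        static void Main()
        {
            // Under the normal CoreCLR JIT both of these print True;
            // under a Native AOT publish they print False, since no JIT exists
            // to generate machine code at runtime.
            Console.WriteLine($"Dynamic code supported: {RuntimeFeature.IsDynamicCodeSupported}");
            Console.WriteLine($"Dynamic code compiled:  {RuntimeFeature.IsDynamicCodeCompiled}");
        }
    }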
@azuredivay your assumption is wrong, garbage collection will happen exactly the same.
even if you compile to native asm instead of bytecode, you still got to keep track of when you stop using stuff and clean it up.
the only way to not have garbage collection built in is to make sure you collect your garbage yourself. -
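A small illustration of that point, assuming an ordinary console app (names are made up): allocations in an AOT-compiled binary still go through the same GC, and the usual GC counters keep working.

    using System;

    class GcStillThere
    {
        static void Main()
        {
            long before = GC.GetTotalAllocatedBytes();

            // Every one of these arrays is tracked and eventually reclaimed by the GC,
            // whether the binary was JIT-compiled or published with Native AOT.
            for (int i = 0; i < 1_000; i++)
            {
                var buffer = new byte[1024];
                GC.KeepAlive(buffer);
            }

            long after = GC.GetTotalAllocatedBytes();
            Console.WriteLine($"Allocated ~{after - before} bytes; gen-0 collections so far: {GC.CollectionCount(0)}");
        }
    }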
@tosensei There is another way, which apparently no language except for Swift uses: ARC.
-
@Lensflare ARC is, in the end, just a specific _way_ of garbage collection, with its own strengths and weaknesses.
-
@tosensei you framed GC as something that you can't avoid at runtime. In this regard, ARC is not GC.
-
@Lensflare ARC is still a method of GC that runs at runtime (because everything that runs is happening at runtime).
it's just that the cleanup happens instantly when a resource is no longer needed, instead of periodically in the background, or when memory pressure gets too high.
but it's still garbage collection. -
@tosensei It's essentially malloc/free compiled into your code, automatically.
You could argue about whether to consider it runtime or not, but if you say that it's runtime, you have to conclude that manual memory management in C and C++ is runtime as well.
And that invalidates your argument. -
@Lensflare not really, because the thing is:
MANUAL memory management is something YOU do.
garbage collection, no matter which flavor, is something the toolset you're working with does FOR you.
it's the difference between "things happening in your code" and "things happening automatically _outside_ of your code".
the question is "do YOU, as the programmer, have to worry about it". -
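To make that distinction concrete in C# terms (names and sizes are illustrative): unmanaged memory you allocate and free yourself versus a managed object whose reclamation is entirely the GC's business.

    using System;
    using System.Runtime.InteropServices;

    class ManualVsCollected
    {
        static void Main()
        {
            // Manual: you allocate, you free; nothing happens outside your code.
            IntPtr raw = Marshal.AllocHGlobal(1024);
            try
            {
                // ... use the unmanaged buffer ...
            }
            finally
            {
                Marshal.FreeHGlobal(raw); // forget this line and you have a leak
            }

            // Automatic: no statement in your code ever releases this array;
            // the runtime's GC decides when (and whether) to reclaim it.
            var managed = new byte[1024];
            GC.KeepAlive(managed);
        }
    }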
@tosensei I agree with that, but your original claim was (as I understood it) that GC is opposed to manual, explicitly written code AND involves the runtime (runtime being the part that I disagree with).
I get this from what @AlgoRythm said:
> Certainly what DOESN'T happen is that it gets compiled to a native program the way C or C++ code is. It still seems to include the runtime and all associated bits, but AoT-compiled as much as possible to reduce startup time and other costs (and the whole .NET ecosystem is compiled into that single binary)
And from your immediate disagreement and the reasoning why. -
@tosensei as @Lensflare said, a coordinated/automated malloc-free-ish behaviour directly in real memory is what I thought they're going with
Because just having the runtime readily available alongside the DLLs has always been supported with their "portable deployment"
Microsoft adds limitations on dynamic/runtime changes/imports in Native AOT, so maybe that's the direction they're aiming for and it's just not done yet
If they do manage that, it'd essentially be devoid of GC, or if you want to be strict with words, real-time GC?
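For a sense of what those "dynamic/runtime" limitations look like in practice, a hedged sketch of patterns Native AOT restricts; the assembly path and type name below are placeholders, not anything from this thread:

    using System;
    using System.Reflection;

    class DynamicPatterns
    {
        static void Main()
        {
            // Loading an assembly at runtime works on the JIT-based runtime,
            // but a Native AOT binary has no JIT to compile code it first sees at runtime.
            Assembly plugin = Assembly.LoadFrom("SomePlugin.dll"); // placeholder path

            // String-based reflection can also fail under AOT/trimming if the type
            // was never statically referenced and got trimmed away.
            Type t = Type.GetType("SomePlugin.PluginEntryPoint"); // placeholder type name
            object instance = t == null ? null : Activator.CreateInstance(t);
            Console.WriteLine(instance?.GetType().FullName ?? "type not found");
        }
    }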
Has anyone here used .NET AOT/Native?
Since it isn't CLR/VM-dependent, it should theoretically be more performant, even if it comes with a few limitations
my "Even if 5% perf increase, it's worth a code-refactor" senses are tingling 💀 someone give me a push to take the plunge
random
.net
native