Re: 64-bit memory consumption
- From: Eric Grange <egrangeNO@xxxxxxxxxxxxxxx>
- Date: Wed, 17 May 2006 16:37:14 +0200
> A nice side effect of future developments is that with more cores, the
> scanning of gen 2 (i.e. the big bit) can be done in parallel. As long as
> CPU cores and memory keep pace with each other, this shouldn't really
Alas, threaded garbage collection has many issues, one being that it may never complete. Last time I checked, MS actually recommended using the single-threaded garbage collector on servers (the one that locks all .NET threads for the duration of the GC).
The main issue with threading and GC is the compaction phase, in which blocks are moved and references updated. For that to work, none of the blocks may be referenced by an active thread (i.e. in practice it only works while your threads are sleeping).
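To illustrate the constraint the compactor is under, here is a minimal C# sketch using GCHandle pinning, which is the explicit way of telling the GC a block may not be moved (the same restriction the compactor faces for any block a running thread might be holding a reference to):

using System;
using System.Runtime.InteropServices;

class PinningDemo
{
    static void Main()
    {
        byte[] buffer = new byte[4096];

        // Pinning tells the GC this block may not be moved during compaction,
        // e.g. because native code holds a raw pointer into it.
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            IntPtr raw = handle.AddrOfPinnedObject();
            Console.WriteLine("Buffer pinned at 0x{0:X}", raw.ToInt64());
            // While pinned, a compacting collection has to work around the block,
            // fragmenting the heap instead of tidying it.
        }
        finally
        {
            handle.Free(); // unpin; the GC is free to move the buffer again
        }
    }
}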
> Speaking for my own development on .NET, I have found it to be extremely
> fast - although apps requires 2-3x more memory than in the past.
> Of course, my application domain was application servers, so I didn't need
> SSE etc. Time spent in GC never exceeded 2% or so.
The time spent in GC isn't limited to the GC cycles themselves: you also take indirect hits in the form of poor cache locality and cache pollution in the code that uses GC'ed memory.
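A small sketch of where that indirect cost comes from (type names are mine, just for illustration): an array of value types is one contiguous block, while an array of reference types is a block of pointers to separately allocated objects that end up scattered around the heap after a few collections.

struct PointValue { public double X, Y; }   // value type: stored inline in the array
class PointRef { public double X, Y; }      // reference type: each element is a separate heap object

class LocalityDemo
{
    static double SumValues(PointValue[] pts)
    {
        double sum = 0;
        // One contiguous block: sequential reads, cache-friendly.
        for (int i = 0; i < pts.Length; i++)
            sum += pts[i].X + pts[i].Y;
        return sum;
    }

    static double SumRefs(PointRef[] pts)
    {
        double sum = 0;
        // Every element is a pointer chase; once the objects have been moved
        // around by collections, the same loop touches far more cache lines.
        for (int i = 0; i < pts.Length; i++)
            sum += pts[i].X + pts[i].Y;
        return sum;
    }

    static void Main()
    {
        PointValue[] values = new PointValue[1000000];
        PointRef[] refs = new PointRef[1000000];
        for (int i = 0; i < refs.Length; i++) refs[i] = new PointRef();
        System.Console.WriteLine(SumValues(values) + SumRefs(refs));
    }
}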
> 1) Supply .NET valuetypes that the CLR has intrinsic knowledge about.
> Operators on these valuetypes can be internal calls, so the JIT can
> exercise extra information on them.
This, in practice, is the only solution.
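A rough sketch of what (1) would look like (Float4 is not an existing CLR type, just an illustration of the kind of value type the JIT would need intrinsic knowledge of):

// Hypothetical: Float4 is not a real CLR type, only what option (1) implies.
public struct Float4
{
    public float X, Y, Z, W;

    public Float4(float x, float y, float z, float w)
    {
        X = x; Y = y; Z = z; W = w;
    }

    // With intrinsic JIT knowledge, this could compile down to a single
    // SSE addps instead of four scalar additions and a copy.
    public static Float4 operator +(Float4 a, Float4 b)
    {
        return new Float4(a.X + b.X, a.Y + b.Y, a.Z + b.Z, a.W + b.W);
    }
}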
> 2) Give the JIT extensive knowledge of certain idioms produced by the C#
C/C++ compilers have tried that for a while, and in practice vectorizing could only be achieved by writing code that "fit" the optimizer; there are many instances where things get too muddied to be re-vectorized without risking side effects.
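A typical case, sketched in C# for consistency even though the point is about C/C++ auto-vectorizers: the first loop has independent iterations and "fits", the second carries a dependency from one iteration to the next and can't simply be spread across SIMD lanes.

// Independent iterations: a[i] depends only on b[i] and c[i],
// so the loop maps naturally onto SIMD lanes.
static void Add(float[] a, float[] b, float[] c)
{
    for (int i = 0; i < a.Length; i++)
        a[i] = b[i] + c[i];
}

// Loop-carried dependency: a[i] depends on a[i - 1] computed in the
// previous iteration, so the iterations can't just run in parallel lanes.
static void PrefixSum(float[] a)
{
    for (int i = 1; i < a.Length; i++)
        a[i] += a[i - 1];
}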
3) A "SIMD sublanguage" in MSIL, perhaps designed along the lines of
Yep, merged with 1.
> I definitely expect register usage to improve - MS are hardly going to
I'm not holding my breath :)
The "peephole" optimizations in .NET are directly related to common C#
idioms and the IL the current MS C# compiler emits for those idioms. So,
whether one is implementing an algorithm in C# or implementing a code
generator for MSIL, the code pattern to stick to is the "natural" one
for C#. I think this is only to be expected.
Well, it doesn't really have to be expected: 'old' compiler technology like Delphi's doesn't suffer from such tight peepholes, even though its compilation speed is very similar to the JITter's (it's faster than C#->IL for sure, and subjectively I would say it's also faster going from .pas to x86 opcodes than from IL to x86 opcodes).
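For a concrete example of how narrow those peepholes are: array bounds-check elimination is keyed to the exact loop shape the C# compiler emits, so two semantically identical loops can end up with different code (which shapes are recognized is of course version-dependent).

static int SumCanonical(int[] data)
{
    int sum = 0;
    // The "natural" C# idiom: the JIT recognizes i < data.Length as the bound
    // and can drop the per-element range check.
    for (int i = 0; i < data.Length; i++)
        sum += data[i];
    return sum;
}

static int SumHoisted(int[] data)
{
    int sum = 0;
    int len = data.Length;
    // Semantically identical, but hoisting the length into a local may fall
    // outside the recognized pattern and keep the bounds checks in place.
    for (int i = 0; i < len; i++)
        sum += data[i];
    return sum;
}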
> The size of Gen0 is directly proportional to the CPU cache size, so
> again, as CPU caches get bigger, this won't make a huge difference.
Aye, but I write software for today and next year, for current and last year's CPUs, as most people do.
Desktop CPUs with significantly larger caches aren't there yet, and on a server your cache is shared by multiple applications; you can't appropriate it all for your own application's purposes without affecting the rest.
In a few years, the hardware might have caught up enough to hide the current .NET shortcomings, but who will still be using the current .NET in a few years?
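If you want to see how hard Gen0 is being hit on today's hardware, a quick sketch using GC.CollectionCount (available as of .NET 2.0):

using System;

class GcStats
{
    static void Main()
    {
        int gen0Before = GC.CollectionCount(0);
        int gen2Before = GC.CollectionCount(2);

        // Allocation-heavy workload: lots of short-lived garbage churning Gen0.
        for (int i = 0; i < 1000000; i++)
        {
            byte[] tmp = new byte[128];
            tmp[0] = (byte)i;
        }

        Console.WriteLine("Gen0 collections: {0}", GC.CollectionCount(0) - gen0Before);
        Console.WriteLine("Gen2 collections: {0}", GC.CollectionCount(2) - gen2Before);
    }
}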
> Speaking from personal experience, GC has never been a problem on
> multi-CPU machines in a server environment, under heavy throughput for
> many hours (ASP.NET performs appdomain recycling, so days doesn't
ASP.NET does *not* count as a real GC-based application; it's more like a scripting language's GC, since all its GC memory gets cleaned up wholesale on a regular basis (and the IIS core itself isn't using it for its own needs).
As for .NET-based services, I've had one case where a .NET service ended up locking up a server we were running on despite having no leaks to speak of: the server went through an extended period of CPU load, so the GC didn't kick in until it was too late and swapping hell had begun. The server was physically stopped after a few hours, and the .NET service got moved to its own machine.
I've never had any trouble with .NET used for scripting-level tasks (as in ASP), and IMO that's all the current GC is really capable of handling reliably: tasks that run for a limited amount of time, and whose memory allocations are guaranteed to be cleaned up as a whole (rather than relying on the GC to do it).