Home » Community » U++ community news and announcements » Core 2019
Re: Core 2019 [message #51934 is a reply to message #51926] Fri, 21 June 2019 19:58
Novo (Messages: 890, Registered: December 2006, Experienced Contributor)
mirek wrote on Fri, 21 June 2019 03:23
Anyway, peak and final memory profiles would be nice to know... :)

(Although one problem is that only the calling thread's memory is in the profile.)

I've attached a profile ...


Regards,
Novo
Re: Core 2019 [message #51935 is a reply to message #51934] Fri, 21 June 2019 20:28
Novo
My version of libc:

$ /lib/x86_64-linux-gnu/libc.so.6
GNU C Library (Ubuntu GLIBC 2.29-0ubuntu2) stable release version 2.29.


Regards,
Novo
Re: Core 2019 [message #51936 is a reply to message #51935] Sat, 22 June 2019 02:51
Novo
U++ allocator, RSS over time, collected with "top -d 0.5".

  • Attachment: upp.png
    (Size: 26.27KB, Downloaded 192 times)


Regards,
Novo
Re: Core 2019 [message #51937 is a reply to message #51936] Sat, 22 June 2019 02:53
Novo
glibc, RSS, collected the same way.

  • Attachment: glibc.png
    (Size: 42.89KB, Downloaded 187 times)


Regards,
Novo
Re: Core 2019 [message #51938 is a reply to message #51937] Sat, 22 June 2019 10:23
mirek (Messages: 12051, Registered: November 2005, Ultimate Member)
Well, from what I see, the real difference is that U++ "keeps" the address space...

Maybe you can strace both allocators and grep for mmap / munmap?

Also, profile after the CoWork would be nice to know.

Mirek

[Updated on: Sat, 22 June 2019 10:57]


Re: Core 2019 [message #51939 is a reply to message #51938] Sat, 22 June 2019 17:49
Novo
mirek wrote on Sat, 22 June 2019 04:23
Well, from what I see, the real difference is that U++ "keeps" the address space...

Maybe you can strace both allocators and grep for mmap / munmap?

Also, profile after the CoWork would be nice to know.

Mirek


Attached.
mmap and munmap alone do not tell much - glibc calls brk a lot as well.
Profile after the CoWork was posted in this message.


Regards,
Novo
Re: Core 2019 [message #51940 is a reply to message #51939] Sat, 22 June 2019 22:22
mirek
Novo wrote on Sat, 22 June 2019 17:49

Profile after the CoWork was posted in this message.


I believe that is just peak profile, but I might be looking at it wrong.

The strace logs seem a bit weird - I do not see enough allocations (either way) to account for 4 GB.
Re: Core 2019 [message #51941 is a reply to message #51940] Sat, 22 June 2019 23:44
Novo
mirek wrote on Sat, 22 June 2019 16:22

I believe that is just peak profile, but I might be looking at it wrong.

What is "Profile after the CoWork"? I only know about MemoryUsedKb(), MemoryUsedKbMax(), PeakMemoryProfile() ...
In this log-file all three of them are called after CoWork.

mirek wrote on Sat, 22 June 2019 16:22

strace logs seem weird a bit - I do not see enough allocations (either way) for 4GB.

My bad. I didn't follow the threads ...
New logs are attached.
  • Attachment: 02.zip
    (Size: 419.52KB, Downloaded 11 times)


Regards,
Novo
Re: Core 2019 [message #51942 is a reply to message #51941] Sun, 23 June 2019 08:21
mirek
Novo wrote on Sat, 22 June 2019 23:44
mirek wrote on Sat, 22 June 2019 16:22

I believe that is just peak profile, but I might be looking at it wrong.

What is "Profile after the CoWork"? I only know about MemoryUsedKb(), MemoryUsedKbMax(), PeakMemoryProfile() ...
In this log-file all three of them are called after CoWork.


"MemoryProfile()" :)

The difference is that PeakMemoryProfile() returns a snapshot taken at the point where the maximum amount of memory is allocated, while MemoryProfile() returns the current status.

Mirek
Re: Core 2019 [message #51943 is a reply to message #51941] Sun, 23 June 2019 09:55
mirek
Novo wrote on Sat, 22 June 2019 23:44

My bad. I didn't follow the threads ...
New logs are attached.


Well, glibc definitely unmaps regions before mapping them back again, so the original hypothesis holds.

Now the interesting question is: "are we doing something wrong?" Perhaps we do... Maybe a 224 MB chunk is too big, and it is true that it will waste swap space...

Mirek
Re: Core 2019 [message #51944 is a reply to message #51943] Sun, 23 June 2019 10:19
mirek
It would probably be worth experimenting with the HPAGE constant... It can be any number > 16.

Mirek

Re: Core 2019 [message #51946 is a reply to message #51942] Sun, 23 June 2019 21:26
Novo
mirek wrote on Sun, 23 June 2019 02:21

"MemoryProfile()" :)

Attached.


Regards,
Novo
Re: Core 2019 [message #51947 is a reply to message #51944] Mon, 24 June 2019 06:13
Novo
mirek wrote on Sun, 23 June 2019 04:19
It would probably be worth experimenting with the HPAGE constant... It can be any number > 16.

I'm afraid I cannot afford to spend more time on U++'s allocator.
I'm switching to glibc's for the time being.
It would be great to see jemalloc and tcmalloc integrated into U++, because they are supposed to be better at avoiding fragmentation.


Regards,
Novo
Re: Core 2019 [message #51949 is a reply to message #51946] Mon, 24 June 2019 09:22
mirek
Novo wrote on Sun, 23 June 2019 21:26
mirek wrote on Sun, 23 June 2019 02:21

"MemoryProfile()" :)

Attached.


Huge block count 4, total size 1951 KB
Huge fragments count 424, total size 4810724 KB

This basically means that all the memory really is freed, as expected. The fragments list also reveals that fragmentation is not really too bad (a lot of big blocks).

Based on this, I do not see any defect or deficiency, except the chosen one (keeping the memory).

Mirek
Re: Core 2019 [message #51950 is a reply to message #51949] Mon, 24 June 2019 17:36
Novo
mirek wrote on Mon, 24 June 2019 03:22

Based on this, I do not see any defect or deficiency, except the choosen one (keep the memory).

Well, I do not think that taking 4.6 GB from the system when the app is allocating ~400 MB is acceptable.
The default behavior of glibc's allocator is optimized for huge enterprise-level apps, but with some manual tweaking it performs fine with small apps. jemalloc would perform even better, I believe.


Regards,
Novo
Re: Core 2019 [message #51951 is a reply to message #51950] Mon, 24 June 2019 18:42
mirek
Novo wrote on Mon, 24 June 2019 17:36
mirek wrote on Mon, 24 June 2019 03:22

Based on this, I do not see any defect or deficiency, except the choosen one (keep the memory).

Well, I do not think that taking 4.6 GB from the system when the app is allocating ~400 MB is acceptable.
The default behavior of glibc's allocator is optimized for huge enterprise-level apps, but with some manual tweaking it performs fine with small apps. jemalloc would perform even better, I believe.


Well, the app actually IS allocating ~4.5 GB at the peak - that is what all the indicators show - or have I got that wrong?

So it is really a question of priorities: do we want to unmap that memory, or keep it for future use, since mmap/munmap are quite expensive calls? This was the question I asked, and at the time the answer was: "we want to keep it". Now I am not so sure... :)

Looks like we need MemoryOptions and/or MemoryShrink...

Anyway, back to the drawing board...

Mirek
Re: Core 2019 [message #51952 is a reply to message #51951] Mon, 24 June 2019 19:36
Novo
mirek wrote on Mon, 24 June 2019 12:42

Well, app actually IS allocating ~4.5 GB at the peak, that is what all indicators show - or have I got that wrong?

top with default options gives me 4.6 GB, 4.5 GB, or 4.4 GB - it depends on the run.


Regards,
Novo
Re: Core 2019 [message #51953 is a reply to message #51952] Mon, 24 June 2019 20:15
mirek
Novo wrote on Mon, 24 June 2019 19:36
mirek wrote on Mon, 24 June 2019 12:42

Well, app actually IS allocating ~4.5 GB at the peak, that is what all indicators show - or have I got that wrong?

top with default options gives me 4.6 GB, 4.5 GB, or 4.4 GB - it depends on the run.


Of course - it is MT, and the allocation/deallocation order is not fixed...
Re: Core 2019 [message #51956 is a reply to message #51947] Wed, 26 June 2019 08:52
mirek
Novo wrote on Mon, 24 June 2019 06:13
mirek wrote on Sun, 23 June 2019 04:19
It would probably be worth experimenting with the HPAGE constant... It can be any number > 16.

I'm afraid I cannot afford to spend more time on U++'s allocator.
I'm switching to glibc's for the time being.


Redesign finished. Address space is now returned to the OS when possible.

There is now MemorySetOptions to fine-tune the behaviour. There are 4 parameters; low values should in general result in a slower allocator with less memory consumption...

Mirek
Re: Core 2019 [message #51963 is a reply to message #51956] Thu, 27 June 2019 09:44
mirek
In addition to USEMALLOC, there is now a HEAPOVERRIDE flag that kicks out all Memory* definitions, allowing you to replace the whole allocator with something other than malloc.