Running Out Of Memory #65
I am trying to run this on a computer with limited resources and I keep getting crashes because I run out of memory. I saw the arg called max_cells_for_memory_only, which, from what I understand, will use a temp file instead of memory if the number of cells is bigger than that value. I set that value to 0 so it will always use temp files. When I run something I see it get into the expected if statements for the temp-file code, but when I watch the memory usage of the process it still goes way up, to almost 1 GB in my example.
Is there something else I have to do to disable in-memory processing?
Karl,
Remind me how you’re running TouchTerrain. Are you using a standalone .py file or a notebook? Or are you running your own server? I ask because when running a notebook on Colab, the (free) runtime instance has 12.68 GB of memory.
Re setting max_cells_for_memory_only to 0: this is supposed to help with low-memory hardware, but it doesn’t affect a whole bunch of things that still need to be held in memory. For example, it has to construct a class called grid (see grid_tesselate.py) to store the geometry of the model. Once the grid is constructed, the next step is to create STL or OBJ files from it, which I think should go straight to disk, but it’s possible that there’s some memory overhead required.
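For illustration, here is a minimal sketch of the kind of switch a setting like max_cells_for_memory_only implies, i.e. buffering small models in RAM and spooling large ones to a temporary file. The helper name and threshold logic are assumptions for this example, not the actual TouchTerrain code.

```python
# Sketch of the general pattern only; open_geometry_buffer is a hypothetical
# helper, not part of TouchTerrain.
import io
import tempfile

def open_geometry_buffer(num_cells, max_cells_for_memory_only):
    # Setting max_cells_for_memory_only to 0 would force the temp-file path here,
    # but anything built as Python objects (e.g. the grid) still lives in RAM.
    if num_cells > max_cells_for_memory_only:
        return tempfile.NamedTemporaryFile(mode="w+b", suffix=".tmp")
    return io.BytesIO()  # small model: keep the buffer entirely in memory
```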
I see that there’s a block just above the def for the grid class that sets up profiling, and grid itself has a @profile decorator (currently commented out, of course). IIRC I used that profiler to figure out which function/method was the bottleneck, but it could be reconfigured to instead give you the current memory footprint (I think). But even if profiling here or elsewhere points to some memory hogging, I honestly don’t know how to “fix” it.
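As an illustration of repurposing that decorator for memory rather than time, here is a minimal sketch using the memory_profiler package; the function build_grid and its body are placeholders, not the real grid code.

```python
# Line-by-line memory report via memory_profiler (pip install memory-profiler).
# build_grid is a stand-in for the real grid construction, not TouchTerrain code.
from memory_profiler import profile

@profile  # prints per-line memory usage to stdout when the function returns
def build_grid(n_cells):
    cells = [{"elev": float(i)} for i in range(n_cells)]  # fake per-cell data
    return cells

if __name__ == "__main__":
    build_grid(1_000_000)
```

Running the script normally then prints how much memory each line adds, which would show where the footprint grows.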
Now, granted, I haven’t done a thorough analysis of which parts use a lot of memory, but I do remember that running a server instance on a virtual PC with 2 GB was a struggle. It could usually handle one big job, but it would die with two or more jobs, so we’re now running it with 4 GB.
I know none of this is a direct solution to your problem, but if you give me more details about what you’re trying to do, I’m happy to dive back into profiling to see if there’s something I could potentially rewrite/optimize to save memory.
Cheers
Chris
Thanks for the response. I've been looking over the code and experimenting with this over the last few days. I'm running this directly from Python on a computer that has limited resources. I found that the memory issue is with the construction of the grid (which contains the memory allocated for all cells, quads, and vertices).
My idea to fix this was to combine multiple steps. Currently, the code constructs the grid, then gets all the triangles from the grid, then starts writing the STL. I saw no reason these steps can't be combined so that the STL gets written (in fixed-size chunks) as the grid is constructed. I hacked together a version that does that and it seems to be working well: I was able to create a large STL (100 MB) on a computer that has 512 MB of memory.
My code is nowhere near PR-worthy, but I just wanted to let you know that this route is possible and gives a major memory reduction.
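As a rough sketch of the streaming approach described above (an illustration only; the generator, function names, and default chunk size are made up, not the code from the actual hack or PR): triangles are packed into a small buffer and flushed to the binary STL file every chunk_size triangles, so only one chunk ever sits in memory.

```python
# Illustrative sketch: stream binary STL triangles to disk in fixed-size chunks
# instead of building the whole grid first. Names are hypothetical, not TouchTerrain's.
import struct

def write_stl_streaming(filename, triangle_source, chunk_size=100_000):
    """triangle_source yields (normal, v1, v2, v3), each a sequence of 3 floats."""
    count = 0
    buffer = bytearray()
    with open(filename, "wb") as f:
        f.write(b"\0" * 80)                  # 80-byte binary STL header
        f.write(struct.pack("<I", 0))        # placeholder triangle count
        for normal, v1, v2, v3 in triangle_source:
            # 12 little-endian floats + 2-byte attribute = 50 bytes per triangle
            buffer += struct.pack("<12fH", *normal, *v1, *v2, *v3, 0)
            count += 1
            if count % chunk_size == 0:      # flush the chunk and free the buffer
                f.write(buffer)
                buffer = bytearray()
        f.write(buffer)                      # flush whatever is left
        f.seek(80)
        f.write(struct.pack("<I", count))    # patch in the real triangle count

def fake_triangles(n):
    """Stand-in for walking the grid: yields n flat triangles one at a time."""
    for i in range(n):
        yield ((0.0, 0.0, 1.0),
               (float(i), 0.0, 0.0), (float(i) + 1.0, 0.0, 0.0), (float(i), 1.0, 0.0))

write_stl_streaming("demo.stl", fake_triangles(250_000))
```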
Karl,
This is great! I guess I never looked at the process that way, and I’d definitely like to implement it in my version. Do you want to tinker with it some more, or just send me your grid_tesselate.py “hack” (assuming that’s the only file you needed to modify?) and I’ll take it from there?
Cheers
Chris
I messed around with it a bit more and got it working for STLa and STLb, but not OBJ quite yet. I cleaned it up slightly and put it up as a PR (#66) so you can easily see the diff.
Karl,
I’ve modified your draft code and also got it working with OBJ files, plus did a good amount of cleanup and modernization. Could I ask you to run some tests? It should still have the same performance as yours. BTW, there’s a (currently hardcoded) caching value that defines after how many memory-cached cells the cache is written to disk (chunk_size in write_buffer_to_file()); it’s currently 100,000 triangles. I have no good metrics on how big a memory footprint that creates, but if you wanted, I could make it user-controllable.
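For a rough sense of scale (my own back-of-the-envelope estimate, not a figure from the thread): in binary STL each triangle takes 50 bytes, so a 100,000-triangle chunk is about 5 MB of raw output data; the Python objects that generate it will typically take several times more.

```python
# Rough estimate only (an assumption, not a measurement of TouchTerrain itself).
BYTES_PER_TRIANGLE = 12 * 4 + 2   # 12 floats + 2-byte attribute per binary STL triangle
chunk_size = 100_000              # the hardcoded flush threshold mentioned above
print(chunk_size * BYTES_PER_TRIANGLE / 1e6, "MB of raw STL per flushed chunk")  # 5.0
```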
Cheers
Chris