Python not catching MemoryError
Note that because of the underlying memory management architecture (C’s malloc() function), the interpreter may not always be able to completely recover from this situation; it nevertheless raises an exception so that a stack traceback can be printed, in case a run-away program was the cause.
(See the docs)
Usually, you can catch MemoryError nevertheless. Without knowing exactly what happens when a MemoryError gets thrown, I'd guess that you might not be able to catch it when shit really hits the fan and there's no memory left even to handle the exception.
Also, since you may not be able to really recover from it (see above), it probably wouldn't make much sense to catch it. You should really avoid running out of memory in the first place, e.g. by limiting the amount of memory your program uses, for instance by only allowing a list to have a limited size.
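As a quick sanity check (a minimal sketch, not code from the question; the sizes are illustrative, and the 10**18-byte request is simply assumed to exceed what typical 64-bit machines can allocate), a MemoryError raised by a single oversized allocation is normally catchable with an ordinary try/except:

```python
def try_allocate(nbytes):
    """Attempt one allocation; report whether it succeeded."""
    try:
        bytearray(nbytes)  # CPython raises MemoryError if the allocation fails
        return True
    except MemoryError:
        return False

print(try_allocate(1024))    # a 1 KiB request succeeds
print(try_allocate(10**18))  # an absurdly large request fails on typical 64-bit machines
```

If this prints True then False on your machine, plain allocation failures are being caught fine, and the uncatchable case (if any) is something more exotic.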
Comments
-
Eponymous over 1 year
I've wrapped some code that can run out of memory with a try/except block. However, though a MemoryError is generated, it is not caught.
I have the following code:
while True:
    try:
        self.create_indexed_vocab( vocab )
        self.reset_weights()
        break
    except MemoryError:
        # Stuff to reduce size of vocabulary
        self.vocab, self.index2word = None, None
        self.syn0, self.syn1 = None, None
        self.min_count += 1
        logger.info( ...format string here... )
I get the following Traceback:
  File "./make_model_tagged_wmt11.py", line 39, in <module>
    model.build_vocab(sentences)
  File "/root/CustomCompiledSoftware/gensim/gensim/models/word2vec.py", line 236, in build_vocab
    self.reset_weights()
  File "/root/CustomCompiledSoftware/gensim/gensim/models/word2vec.py", line 347, in reset_weights
    self.syn0 += (random.rand(len(self.vocab), self.layer1_size) - 0.5) / self.layer1_size
  File "mtrand.pyx", line 1044, in mtrand.RandomState.rand (numpy/random/mtrand/mtrand.c:6523)
  File "mtrand.pyx", line 760, in mtrand.RandomState.random_sample (numpy/random/mtrand/mtrand.c:5713)
  File "mtrand.pyx", line 137, in mtrand.cont0_array (numpy/random/mtrand/mtrand.c:1300)
MemoryError
I'm running Python 2.7.3 under Ubuntu 12.04
The self.syn0 line in reset_weights is exactly the line I am expecting to raise the exception (it allocates a big array). The puzzling thing is that I can't catch the MemoryError and do things that will make the array size smaller. Are there special circumstances that result in the MemoryError being unable to be caught?
-
idanshmu over 10 years Are you sure those are the lines that throw the exception? Try changing the except line to a bare except: and print something there, just to make sure those are the right lines.
-
MAJ over 10 years @DannyElly it seems as though the stack trace shows a call to reset_weights(), which is the last line before the except. I expect that reset_weights is being called.
-
MAJ over 10 years @Eponymous is the argument vocab in your call to create_indexed_vocab supposed to be self.vocab? And, to @DannyElly's point, perhaps there's a call to reset_weights hidden in the call to create_indexed_vocab? That likely wouldn't matter here, since it's reporting a MemoryError... -
idanshmu over 10 years @Eponymous I would still change the except line and print something there. Oftentimes we are positive we're debugging the right block of code when in fact the problem is in a different one; that can be very frustrating. When I encounter such bugs I just add as many prints as I can, to make sure that I'm working on the right problem.
-
idanshmu over 10 years @Eponymous are you sure the lines in the try block throw an exception? Did you have prints before and after the suspected line (self.reset_weights()) and see only the before print? This is the best I can help. I know I'm suggesting trivial and obvious pointers, but we are all human and sometimes we miss obvious things. Good luck to you!
-
-
Eponymous over 10 years That hint in the docs is why I wrote this question. I want to know whether uncatchable exceptions actually happen (and whether that might be happening in my specific case), or whether my Python-fu was too low and there was a language nuance I was missing.
-
Eponymous over 10 years RAM is a shared resource, so avoiding running out of memory entirely is impossible. Even if the resource module were available on MS-Windows, so that there was a built-in way to check memory, another process could eat up some of the memory you want between the time you check available memory and the time you allocate it. So the memory allocation cycle for a program that wants to use as much RAM as possible needs to be: while allocation fails: adjust requirements; allocate. In Python, the only way I know to detect a failed allocation is a MemoryError.
-
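That allocate-retry cycle can be sketched as follows (a minimal illustration, not code from the thread; allocate_with_backoff, the halving policy, and the byte sizes are all hypothetical choices made here):

```python
def allocate_with_backoff(nbytes, min_bytes=1024):
    """Try to allocate nbytes; on MemoryError, halve the request and retry.

    Re-raises MemoryError once the request drops below min_bytes, i.e. when
    "adjust requirements" can no longer produce an acceptable allocation.
    """
    while nbytes >= min_bytes:
        try:
            return bytearray(nbytes)
        except MemoryError:
            nbytes //= 2  # adjust requirements, then try again
    raise MemoryError("could not allocate even %d bytes" % min_bytes)

buf = allocate_with_backoff(1024)  # a small request succeeds on the first try
print(len(buf))
```

This only reacts to failures after the fact, which sidesteps the check-then-allocate race described above: no other process can invalidate the decision, because the decision is the allocation itself.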
Eponymous over 10 yearsI'm giving up, so I'll give you the accepted answer. (Yours being the only answer.)
-
endolith over 9 years "by e.g. only allowing a list to have a limited size" But what size? Using large blocks of memory can speed up computations. How do we find the right balance in Python, considering memory differs from machine to machine and from minute to minute?
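One possible heuristic for picking that size (not from this thread, and Unix-only: it queries physical RAM via os.sysconf, which is not available on Windows; the 25% fraction is an arbitrary choice) is to size working buffers as a fraction of the machine's total memory:

```python
import os

def suggested_block_bytes(fraction=0.25):
    """Heuristic: size working buffers as a fraction of physical RAM.

    Unix-only; uses os.sysconf to read the page size and page count.
    """
    page_size = os.sysconf("SC_PAGE_SIZE")
    num_pages = os.sysconf("SC_PHYS_PAGES")
    total_ram = page_size * num_pages
    return int(total_ram * fraction)

print(suggested_block_bytes())  # e.g. a quarter of this machine's RAM, in bytes
```

This only addresses machine-to-machine variation; the minute-to-minute kind still needs the catch-MemoryError-and-shrink loop discussed above.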