Re: 70% [* SPAM *] Re: multiprocessing.Queue blocks when sending large object
- From: boB
- Date: 5 Dec 2011 02:57:01 -0600
On Mon, 5 Dec 2011 09:02:08 +0100, DPalao <dpalao.python@xxxxxxxxx> wrote:
On Tuesday, 29 November 2011, DPalao wrote:
I'm trying to use multiprocessing to parallelize some code. There are a number
of tasks (usually 12) that can be run independently. Each task produces a
numpy array, and at the end those arrays must be combined.
I implemented this using Queues (multiprocessing.Queue): one for input and
another for output.
But the code blocks, and it must be related to the size of the item I put
on the Queue: if I put a small array, the code works well; if the array is
realistically large (in my case it can vary from 160kB to 1MB), the code
blocks, apparently forever.
I have tried putting a None sentinel at the end for each process, but it
didn't work. Before I change the implementation,
is there a way to work around this problem with multiprocessing.Queue?
Should I post the code (or a sketchy version of it)?
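Since the original code was not posted, here is a minimal sketch (hypothetical names) of the pattern described above: workers put their results on an output queue, and the parent join()s them before draining the queue. A bytes object stands in for the NumPy result array, to keep the sketch dependency-free. With small items this happens to work; with items of hundreds of kB and up, join() can block forever, because a child process cannot exit until its queued data has been flushed through the underlying pipe, and the pipe buffer fills up for large items.

```python
import multiprocessing as mp

def worker(out_q, size):
    # Each worker produces one result; a bytes object of `size` bytes
    # stands in for the NumPy array from the original post.
    out_q.put(b"x" * size)

def run(nprocs, size):
    out_q = mp.Queue()
    procs = [mp.Process(target=worker, args=(out_q, size)) for _ in range(nprocs)]
    for p in procs:
        p.start()
    for p in procs:
        # With a large `size` this join blocks forever: the child's
        # feeder thread is still trying to push the item through the
        # pipe, so the child never exits while the queue is undrained.
        p.join()
    return [out_q.get() for _ in procs]

if __name__ == "__main__":
    # Small items: works. Raise `size` to ~1_000_000 to reproduce the hang.
    results = run(2, 10)
    print(len(results))
```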
Just for reference: the other day I found the explanation by "ryles" in
their mail of 27 Aug 2009, titled "Re: Q: multiprocessing.Queue size
limitations or bug...". It is very illuminating.
After reading that, I rearranged the program so that the main process does
not need to know when the others have finished: I replaced the process join
call with queue get calls, until a None (one per process) is returned.
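The rearrangement described above can be sketched as follows (hypothetical names; a bytes object again stands in for the NumPy result array): each worker puts its result followed by a None sentinel, and the parent get()s from the queue until it has seen one None per process. Because the parent drains the queue before joining, the pipe never fills up, so large items go through fine.

```python
import multiprocessing as mp

def worker(out_q, size):
    out_q.put(b"x" * size)   # stand-in for the large NumPy result array
    out_q.put(None)          # sentinel: this worker is finished

def combine(nprocs=4, size=1_000_000):
    out_q = mp.Queue()
    procs = [mp.Process(target=worker, args=(out_q, size)) for _ in range(nprocs)]
    for p in procs:
        p.start()
    results, done = [], 0
    while done < nprocs:     # drain until one None per process is seen
        item = out_q.get()
        if item is None:
            done += 1
        else:
            results.append(item)
    for p in procs:
        p.join()             # now safe: the queue has been drained
    return results

if __name__ == "__main__":
    print(len(combine()))    # one ~1 MB result per worker
```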
Why do people add characters like [* SPAM *] to their subject
lines? Is it supposed to do something? I figured that since
programmers hang out here, maybe one of you would know.