Re: NumPy arrays that use memory allocated from other libraries or tools



On Sep 10, 6:39 am, Travis Oliphant <oliphant.tra...@xxxxxxxx> wrote:

> I wanted to point anybody interested to a blog post that describes a
> useful pattern for having a NumPy array that points to the memory
> created by a different memory manager than the standard one used by
> NumPy.


Here is something similar I have found useful:

There will be a new module in the standard library called
'multiprocessing' (cf. the pyprocessing package in the cheese shop). It
allows you to create multiple processes (as opposed to threads) for
concurrency on SMPs (cf. the dreaded GIL).
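
A minimal, hypothetical sketch of what that looks like (the 'work'
function and its argument are my own illustration, not from the
original post):

try:
    import processing                  # the cheese shop package
except ImportError:
    import multiprocessing as processing

def work(n):
    # runs in a separate process, so the parent's GIL is not a bottleneck
    print('squared in a child process: %d' % (n * n))

if __name__ == '__main__':
    p = processing.Process(target=work, args=(42,))
    p.start()
    p.join()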

The 'multiprocessing' module lets us put ctypes objects in shared
memory segments (processing.Array and processing.Value). It has its
own malloc, so there is no 4k (one page) lower limit on object size.
Here is how we can make a NumPy ndarray view the shared memory
referenced by these objects:

try:
    import processing
except ImportError:
    import multiprocessing as processing

import numpy, ctypes

# map ctypes scalar types to NumPy dtypes; note that c_wchar, c_long
# and c_ulong are platform dependent (the sizes below assume a 32-bit
# platform with a 16-bit wchar_t)
_ctypes_to_numpy = {
    ctypes.c_char   : numpy.int8,
    ctypes.c_wchar  : numpy.int16,
    ctypes.c_byte   : numpy.int8,
    ctypes.c_ubyte  : numpy.uint8,
    ctypes.c_short  : numpy.int16,
    ctypes.c_ushort : numpy.uint16,
    ctypes.c_int    : numpy.int32,
    ctypes.c_uint   : numpy.uint32,
    ctypes.c_long   : numpy.int32,
    ctypes.c_ulong  : numpy.uint32,
    ctypes.c_float  : numpy.float32,
    ctypes.c_double : numpy.float64
}

def shmem_as_ndarray(array_or_value):
    """ view processing.Array or processing.Value as ndarray """
    obj = array_or_value._obj       # the wrapped ctypes object
    buf = obj._wrapper.getView()    # buffer over the shared memory block
    try:
        # processing.Value: the ctypes type itself is a scalar type
        t = _ctypes_to_numpy[type(obj)]
        return numpy.frombuffer(buf, dtype=t, count=1)
    except KeyError:
        # processing.Array: look the dtype up from the element type
        t = _ctypes_to_numpy[obj._type_]
        return numpy.frombuffer(buf, dtype=t)

With this simple tool we can make processes created by multiprocessing
work with ndarrays that reference the same shared memory segment. I'm
doing some scalability testing on this. It looks promising :)
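
For example, a hypothetical round trip could look like this (the
'fill' helper is my own illustration, and it assumes the internal
attributes used by shmem_as_ndarray behave as described above): the
parent allocates the shared segment, a child process writes into it
through an ndarray view, and the parent sees the result:

def fill(shared_arr):
    # view the shared segment as an ndarray and write into it
    a = shmem_as_ndarray(shared_arr)
    a[:] = numpy.arange(len(a))

if __name__ == '__main__':
    shared_arr = processing.Array(ctypes.c_double, 10)
    p = processing.Process(target=fill, args=(shared_arr,))
    p.start()
    p.join()
    # the parent's view reflects the child's writes
    print(shmem_as_ndarray(shared_arr))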
