I have this code:
import numpy as np
from multiprocessing import Array

def writer_entrypoint(shared_data: Array):
    while True:
        ind, new_value = get_data()  # takes up to 100 milliseconds
        with shared_data.get_lock():
            shared_data[ind] = new_value
def reader_entrypoint(shared_data: Array):
    while True:
        with shared_data.get_lock():
            data = np.frombuffer(shared_data.get_obj()).copy()
        # Process the data... but faster than the writer produces it
Both functions are process entry points. There are multiple readers and a single writer. The readers are much faster than the writer, so the writer spends most of its time waiting for the lock (it is starved).
As a result, the readers keep reading stale data while the writer cannot get in to update it.
I want the writer to have higher priority when updating the data. That is, if a reader currently holds the lock, the writer waits for it; but when that reader releases the lock and both the writer and several readers are waiting, the writer should acquire it first.
How can I implement such a mechanism?
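One way to get this behaviour with a single writer is a "turnstile": a second multiprocessing.Lock that every reader must briefly pass through before touching the data lock, while the writer holds it for the whole duration of an update. Readers that arrive during a write then queue behind the turnstile instead of racing the writer for the data lock. This is a minimal sketch, not a drop-in answer: the name turnstile is mine, get_data is replaced by a stand-in producer, and note that multiprocessing.Lock does not guarantee FIFO wake-up order, so this gives the writer strong preference in practice rather than a hard scheduling guarantee.

import multiprocessing as mp
import random
import time

import numpy as np

def get_data():
    # Stand-in for the real producer; takes up to 100 ms
    time.sleep(random.uniform(0, 0.1))
    return random.randrange(100), random.random()

def writer_entrypoint(shared_data, turnstile):
    while True:
        ind, new_value = get_data()
        # Close the gate: readers arriving now block at the turnstile,
        # so the writer only waits for the one reader (at most) that
        # already holds the data lock.
        with turnstile:
            with shared_data.get_lock():
                shared_data[ind] = new_value

def reader_entrypoint(shared_data, turnstile):
    while True:
        # Pass through the turnstile; if the writer holds it, wait here
        with turnstile:
            pass
        with shared_data.get_lock():
            data = np.frombuffer(shared_data.get_obj()).copy()
        # Process the copy outside the lock

if __name__ == "__main__":
    shared_data = mp.Array('d', 100)  # 'd' matches np.frombuffer's default float64
    turnstile = mp.Lock()
    workers = [mp.Process(target=writer_entrypoint, args=(shared_data, turnstile))]
    workers += [mp.Process(target=reader_entrypoint, args=(shared_data, turnstile))
                for _ in range(4)]
    for w in workers:
        w.start()

The key point is that readers no longer contend directly with the writer on the data lock: each reader holds the turnstile only for an instant, while the writer holds it across the whole update, so any reader arriving mid-write lines up behind it. If you need multiple readers to read concurrently rather than one at a time, you would combine this with a reader count (a classic writer-preference readers-writer lock), but for a cheap copy-under-lock read like the one above, the single data lock is usually enough.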
Source: https://stackoverflow.com/questions/71067784/priority-lock-for-multiprocessing-shared-memory