
Need for Mutex while using Shared Memory (C++) for hardware applications


I have a sensor that writes data to shared memory in a thread at n Hz (say 10 Hz = 10 times per second). A separate thread reads this data and uses it to compute some result. The frequency of the reader thread is different: it can be slower, e.g. 8 times per second, or faster, e.g. 15 times per second, depending on what is being calculated. The reader thread just reads the data from shared memory. It does not modify the data (it only processes it to get some result) and does not write anything to shared memory. The whole thing works very neatly. I do not care about synchronization, since the reader just reads whatever is in shared memory when it needs to (it polls for the data). If the contents of the shared memory change between two reads, the reader uses the new data. If the contents do not change between two reads (because the reader is much faster than the writer), the reader just uses whatever data is already there.
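
To make the setup concrete, here is a stripped-down sketch. The names, the plain global double, and the simulated sensor values are all illustrative; the real code talks to hardware and uses an actual shared-memory segment, not a global:

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

// Illustrative stand-in for the shared data (the real code uses a shared-memory segment).
double g_distance_cm = 0.0;
// Stop flag; atomic only so this demo exits cleanly, not part of the question.
std::atomic<bool> g_running{true};

void writer()   // sensor thread, ~10 Hz
{
    double simulated = 10.0;
    while (g_running) {
        simulated += 0.1;            // stand-in for a real sensor reading
        g_distance_cm = simulated;   // unsynchronized write
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}

void reader()   // processing thread, polls at its own rate (~8 Hz here)
{
    while (g_running) {
        double d = g_distance_cm;    // unsynchronized read of whatever is there right now
        std::printf("latest distance: %.1f cm\n", d);
        std::this_thread::sleep_for(std::chrono::milliseconds(125));
    }
}

int main()
{
    std::thread w(writer), r(reader);
    std::this_thread::sleep_for(std::chrono::seconds(1));
    g_running = false;
    w.join();
    r.join();
}
```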

Now my colleague is telling me to synchronize access to shared memory using a mutex, but I disagree. My reason is that if I use a mutex to control access, the frequency at which the writer can write to shared memory will be somewhat reduced (the writer has to wait whenever the reader thread holds the mutex). In the future we will have more reader threads, and I am afraid the frequency at which the writer thread can write to shared memory will drop further, since there will be two more threads competing for the mutex.
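
For reference, the mutex-protected version my colleague has in mind would look roughly like this (again simplified, with illustrative names):

```cpp
#include <mutex>

// Every access goes through one mutex, so the writer blocks whenever a
// reader holds the lock (and readers also serialize against each other).
std::mutex g_mutex;
double     g_distance_cm = 0.0;

void write_distance(double value)
{
    std::lock_guard<std::mutex> lock(g_mutex);
    g_distance_cm = value;   // writer waits here if a reader holds the lock
}

double read_distance()
{
    std::lock_guard<std::mutex> lock(g_mutex);
    return g_distance_cm;
}

int main()
{
    write_distance(10.0);
    double d = read_distance();
    (void)d;
}
```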

I know about race conditions etc., but I feel that the numerous examples given on SO as well as on other sites consider scenarios different from mine: for example, two threads read and update a bank balance, one thread is slower or faster in reading, and the balance ends up erroneous, e.g. $2000 instead of $1000. In my case, however, the "bank balance" (the data to be shared) is generated by a sensor. Any change in the value has a physical cause, and the shared value will never jump by such a large amount.
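
The kind of race those examples describe is, roughly, a read-modify-write like this:

```cpp
#include <thread>

// Classic bank-balance race: two threads each read the balance, add to it,
// and write it back, so one update can be lost and the final balance is wrong.
long balance = 0;

void deposit(long amount)
{
    long tmp = balance;   // read
    tmp += amount;        // modify
    balance = tmp;        // write back, possibly overwriting the other thread's deposit
}

int main()
{
    std::thread a(deposit, 1000L);
    std::thread b(deposit, 1000L);
    a.join();
    b.join();
    // balance should be 2000 here, but can end up as 1000
}
```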

More details: the sensor is a distance-measuring sensor. It measures the distance 10 times per second. Say the distance at t = 1.0 s was 10 cm and was written to memory. The reader reads the shared memory, which says 10 cm. If the real distance happens to change while the reader is reading or processing the data, it will be 10.1 cm or so, since the distance never jumps by a large amount. On the next poll, the reader will then read the distance of 10.1 cm (assuming the object is then stationary). In this way, my writer thread can write as fast as possible without waiting for a mutex to be unlocked.

Is my reasoning flawed? The only problem I can imagine is my writer and reader threads attempting to access the memory at exactly the same time. But then, the scheduler is supposed to switch between instructions, right? That is, it's just pseudo-parallel processing, correct? That would mean the two threads can never actually access the memory at the same time, correct?

