Editing Shared Resources: "Optimistic Locking"
The idea is simple: Only try to lock the resource at the moment you need to write data, and don’t write if the resource has changed in the meantime.
This type of lock becomes riskier as more workers are added and as the time needed to compute changes increases. Ideally, the part where the lock is held lasts only a few milliseconds.
The procedure is simple. First, get the data from Redis. Next, do some processing. Lastly, check whether the data changed, and either write the new data or abandon the changes.
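As a rough sketch, this check-and-set flow maps onto Redis’s WATCH/MULTI/EXEC commands. The example below uses redis-py; the `update_session` helper, the `session:<id>` key naming, and the `process` callback are illustrative assumptions, not taken from the original.

```python
import json

import redis

r = redis.Redis()


def update_session(session_id, process):
    """Sketch of optimistic locking: GET, process, then write only if
    the data has not changed in the meantime (abandon otherwise)."""
    key = f"session:{session_id}"              # hypothetical key scheme
    with r.pipeline() as pipe:
        pipe.watch(key)                        # 1. watch the key and get the data
        raw = pipe.get(key)
        session = json.loads(raw) if raw else {}

        session = process(session)             # 2. do some processing

        try:
            pipe.multi()                       # 3. check-and-write: EXEC fails
            pipe.set(key, json.dumps(session)) #    if the watched key changed
            pipe.execute()
            return True
        except redis.WatchError:
            return False                       # abandon the changes
```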
But there’s an obvious issue with this: not all applications were built for simple “rollbacks”. Complex codebases, and distributed ones especially, will often send events while processing data, and in general changes can’t always be rolled back easily. It’s easy to imagine how this becomes more likely to return a 500 when either (1) the time between the two “GET SESSION” commands increases, or (2) the number of workers accessing the resource increases. So to remove this risk, we need a solution where a lock prevents other workers from working on outdated data in the first place.
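To make the failure mode concrete, here is a hypothetical handler built on the `update_session` sketch above: it publishes an event mid-processing, so when the final write is abandoned the event has already gone out, and the request falls back to a 500. All names here are assumptions for illustration.

```python
def handle_request(session_id):
    """Hypothetical handler: the event published during processing cannot
    be rolled back if the final write is abandoned."""
    def process(session):
        session["counter"] = session.get("counter", 0) + 1
        # Side effect mid-processing: other services see this event even if
        # the write below is later abandoned.
        r.publish("session-events", json.dumps({"id": session_id}))
        return session

    if update_session(session_id, process):
        return 200, "OK"
    # The data changed between our GET and the write: the changes were
    # abandoned, but the event above was already sent.
    return 500, "session changed concurrently"
```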