Managing replicated data in a service that uses the g.ReadLock/g.WriteLock API (easy)
This is intended to be a project similar to the g.OrderedSend+g.Flush one. Design an experiment that lets you measure the cost of accessing data in a group with and without locks. You will need to understand how the locking API works in Isis2; once you do, you can design an application that replicates data and performs reads and updates with tunable probability (e.g., 95% reads, 50% reads, etc.). Measure the added cost associated with using locking.
Graph these costs as a function of the size of the group and of the rate of operations the group is asked to perform.
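
A minimal harness for this experiment might look like the C# sketch below. Only the names g.ReadLock and g.WriteLock come from this page; the delegate placeholders, the lock name "X", and the helper names (RunWorkload, DoRead, DoUpdate) are illustrative assumptions to be replaced with real Isis2 calls once you have studied the locking API.

```csharp
// Sketch of a lock-cost experiment harness.
// ASSUMPTION: the delegate bodies below are placeholders; substitute real
// Isis2 calls (g.ReadLock/g.WriteLock, g.OrderedSend, etc.) after studying
// the locking API -- the exact signatures are not shown on this page.
using System;
using System.Diagnostics;

class LockCostHarness
{
    static readonly Random Rng = new Random();

    // Placeholders standing in for the Isis2 primitives named on this page.
    static readonly Action<string> ReadLock  = name => { /* g.ReadLock(name)  */ };
    static readonly Action<string> WriteLock = name => { /* g.WriteLock(name) */ };
    static readonly Action<string> Unlock    = name => { /* release the lock  */ };
    static readonly Action<string> DoRead    = name => { /* read the replica  */ };
    static readonly Action<string> DoUpdate  = name => { /* propagate update  */ };

    // Runs nOps operations, a fraction readRatio of which are reads, and
    // returns the mean latency per operation in milliseconds.
    static double RunWorkload(int nOps, double readRatio, bool useLocks)
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < nOps; i++)
        {
            bool isRead = Rng.NextDouble() < readRatio;
            if (useLocks) { if (isRead) ReadLock("X"); else WriteLock("X"); }
            if (isRead) DoRead("X"); else DoUpdate("X");
            if (useLocks) Unlock("X");
        }
        sw.Stop();
        return sw.Elapsed.TotalMilliseconds / nOps;
    }

    static void Main()
    {
        foreach (double ratio in new[] { 0.95, 0.50 })
        {
            double baseline  = RunWorkload(10000, ratio, useLocks: false);
            double withLocks = RunWorkload(10000, ratio, useLocks: true);
            Console.WriteLine($"reads={ratio:P0}: {baseline:F4} ms/op unlocked, " +
                              $"{withLocks:F4} ms/op locked, overhead {withLocks - baseline:F4} ms/op");
        }
    }
}
```

Running this once per group size and per offered operation rate yields the data points for the graphs described above.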

We say that locks have a "granularity of coverage" when one lock may cover multiple objects. For example, suppose a system has 100 objects named X00, X01, etc., but when locking them you simply lock "X". Here the single lock has a coverage of 100 objects. In contrast, a lock on each individual object would have a very fine granularity: a coverage of 1. Coarse-grained locks let you request fewer locks, but if you plan to update any X object you would need a write lock on "X", so more blocking (delays) would occur than if you locked only the specific X object you plan to update. Thus there is a tradeoff. Can you find a way to measure this effect and graph it?
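
One way to make coverage tunable in such an experiment is to fold the 100 fine-grained object names onto a smaller set of lock names. The helper below is purely illustrative (the name LockNameFor and the coverage parameter are not part of Isis2); a sketch in C#:

```csharp
using System;

static class LockGranularity
{
    // Maps an object name such as "X37" onto the lock name that covers it.
    // coverage = 1   -> lock "X37" (finest granularity, 100 distinct locks)
    // coverage = 10  -> lock "X3"  (ten locks, each covering ten objects)
    // coverage = 100 -> lock "X"   (one coarse lock covering everything)
    public static string LockNameFor(string objectName, int coverage)
    {
        if (coverage >= 100) return "X";
        int index = int.Parse(objectName.Substring(1));
        return "X" + (index / coverage);
    }

    static void Main()
    {
        foreach (int coverage in new[] { 1, 10, 100 })
            Console.WriteLine($"X37 at coverage {coverage} -> lock {LockNameFor("X37", coverage)}");
    }
}
```

Rerunning the workload at each coverage setting, with several group members contending for the locks, would give the blocking-delay-versus-coverage curve this question asks for.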
