mutex.md

File metadata and controls

123 lines (91 loc) · 3.84 KB

Mutex -> sync.Mutex

Mutex is short for mutual exclusion. A mutex keeps track of which goroutine has access to a variable at any given time, so that only one goroutine can hold the lock at once. In Go it is provided by sync.Mutex.

Read & Write Lock

  • Read lock: multiple goroutines can hold the read lock at the same time, and write operations are blocked while any reader holds it

  • Write lock: only one goroutine may hold the write lock at a time; all other write and read operations are blocked until it is released

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	wg := &sync.WaitGroup{}
	mut := &sync.Mutex{}

	var result []float64 // shared resource

	wg.Add(1)
	go func() {
		defer wg.Done()
		mut.Lock()
		fmt.Println("worker 1")
		result = append(result, 50.50)
		mut.Unlock()
	}()

	wg.Add(1)
	go func() {
		defer wg.Done()
		mut.Lock()
		fmt.Println("worker 2")
		result = append(result, 78.50)
		mut.Unlock()
	}()

	wg.Wait()
	fmt.Println(result)
}
```

Semaphore

The semaphore is the concept that allows In-N-Out to take, say, 4 orders in a restaurant concurrently (actually in parallel), while everyone else sits and waits.

Let's compare a mutex to a semaphore:

  • A mutex is concerned with ensuring that a single thread at a time has exclusive access to a piece of code
  • A semaphore is concerned with ensuring that at most N threads can access that code at the same time

In other words, a semaphore is a more generalized version of a mutex: a mutex behaves like a semaphore with N = 1.

What's the point of giving exclusive access to N threads?

The point is that in this scenario you are purposefully constraining access to a resource therefore protecting that resource from overuse.

Mutex: constrains access to a single thread, to guard a critical section of code.

Semaphore: constrains access to at most N threads, to control/limit concurrent access to a shared resource.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"runtime"
	"sync"
)

type Post struct {
	UserID int64  `json:"userId"`
	ID     int64  `json:"id"`
	Title  string `json:"title"`
	Body   string `json:"body"`
}

func semaphoreExample() {
	var p Post
	wg := &sync.WaitGroup{}
	wg.Add(100)

	// Buffered channel as a semaphore: at most 10 requests in flight.
	// struct{} is used because an empty struct occupies no storage.
	sem := make(chan struct{}, 10)
	mut := &sync.Mutex{} // guards p against a data race

	for i := 1; i <= 100; i++ {
		fmt.Println(runtime.NumGoroutine())
		sem <- struct{}{} // acquire a slot; blocks when 10 are taken

		go func(i int) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot

			res, err := http.Get(fmt.Sprintf("https://jsonplaceholder.typicode.com/posts/%d", i))
			if err != nil {
				log.Fatal(err)
			}
			defer res.Body.Close()

			mut.Lock()
			err = json.NewDecoder(res.Body).Decode(&p) // p is shared, so lock around it
			if err != nil {
				log.Fatal(err)
			}
			fmt.Println(p.ID, p.Title)
			mut.Unlock()
		}(i)
	}

	wg.Wait()
}
```

GODEBUG=gctrace=1 go run main.go — print garbage-collector traces while the program runs.

time go run -race main.go — run with the race detector enabled and time the run.

When using a semaphore how do you figure out the value of N for how many threads to limit?

Unfortunately there is no hard and fast rule; the final value of N depends on many factors. A good place to start is benchmarking: actually hit your shared resource and see where it starts to fall over in terms of performance and latency.
