Multiple goroutines listening on one channel


Solution 1

Yes, it's complicated, but a couple of rules of thumb should make things feel much more straightforward:

  • Prefer passing channels to go-routines as formal arguments rather than accessing channels in global scope. You get more compiler checking this way, and better modularity too.
  • Avoid both reading and writing on the same channel in a particular go-routine (including the 'main' one); otherwise, deadlock is a much greater risk.

Here's an alternative version of your program, applying these two guidelines. It demonstrates many writers and one reader on a channel:

c := make(chan string)

for i := 1; i <= 5; i++ {
    go func(i int, co chan<- string) {
        for j := 1; j <= 5; j++ {
            co <- fmt.Sprintf("hi from %d.%d", i, j)
        }
    }(i, c)
}

for i := 1; i <= 25; i++ {
    fmt.Println(<-c)
}

http://play.golang.org/p/quQn7xePLw

It creates five go-routines writing to a single channel, each one writing five times. The main go-routine reads all twenty-five messages. You may notice that the order they appear in is often not sequential, i.e. the concurrency is evident.

This example demonstrates a feature of Go channels: it is possible to have multiple writers sharing one channel; Go will interleave the messages automatically.

The same applies for one writer and multiple readers on one channel, as seen in the second example here:

c := make(chan int)
var w sync.WaitGroup
w.Add(5)

for i := 1; i <= 5; i++ {
    go func(i int, ci <-chan int) {
        j := 1
        for v := range ci {
            time.Sleep(time.Millisecond)
            fmt.Printf("%d.%d got %d\n", i, j, v)
            j += 1
        }
        w.Done()
    }(i, c)
}

for i := 1; i <= 25; i++ {
    c <- i
}
close(c)
w.Wait()

This second example includes a wait imposed on the main goroutine, which would otherwise exit promptly and cause the other five goroutines to be terminated early (thanks to olov for this correction).

In both examples, no buffering was needed. It is generally a good principle to view buffering as a performance enhancer only. If your program does not deadlock without buffers, it won't deadlock with buffers either (but the converse is not always true). So, as another rule of thumb, start without buffering then add it later as needed.

Solution 2

Late reply, but I hope this helps others looking for patterns like long polling, a "global" button, or broadcasting to everyone.

Effective Go explains the issue:

Receivers always block until there is data to receive.

That means you cannot have more than one goroutine listening on one channel and expect ALL of the goroutines to receive the same value.

Run this Code Example.

package main

import "fmt"

func main() {
    c := make(chan int)

    for i := 1; i <= 5; i++ {
        go func(i int) {
            for v := range c {
                fmt.Printf("count %d from goroutine #%d\n", v, i)
            }
        }(i)
    }

    for i := 1; i <= 25; i++ {
        c <- i
    }

    close(c)
}

You will not see "count 1" more than once even though there are five goroutines listening on the channel. A channel is not a broadcast mechanism: each value sent is received by exactly one goroutine. The other goroutines wait in line; once a value has been received it is gone from the channel, so the next goroutine in line gets the next value.

Solution 3

I've studied the existing solutions and created a simple broadcast library: https://github.com/grafov/bcast.

    group := bcast.NewGroup() // create the broadcast group
    go bcast.Broadcasting(0)  // the group accepts messages and broadcasts them to all members

    member := group.Join()      // join a member from another goroutine
    member.Send("test message") // send messages of any type to the group

    member1 := group.Join() // join another member from yet another goroutine
    val := member1.Recv()   // and, for example, listen for messages

Solution 4

It is complicated.

Also, see what happens with GOMAXPROCS = NumCPU+1. For example,

package main

import (
    "fmt"
    "runtime"
)

func main() {
    runtime.GOMAXPROCS(runtime.NumCPU() + 1)
    fmt.Print(runtime.GOMAXPROCS(0))
    c := make(chan string)
    for i := 0; i < 5; i++ {
        go func(i int) {
            msg := <-c
            c <- fmt.Sprintf("%s, hi from %d", msg, i)
        }(i)
    }
    c <- ", original"
    fmt.Println(<-c)
}

Output:

5, original, hi from 4

And, see what happens with buffered channels. For example,

package main

import "fmt"

func main() {
    c := make(chan string, 5+1)
    for i := 0; i < 5; i++ {
        go func(i int) {
            msg := <-c
            c <- fmt.Sprintf("%s, hi from %d", msg, i)
        }(i)
    }
    c <- "original"
    fmt.Println(<-c)
}

Output:

original

You should be able to explain these cases too.

Solution 5

Yes, it is possible for multiple goroutines to listen on one channel. The key point is the message itself; you can define a message like this:

package main

import (
    "fmt"
    "sync"
)

type obj struct {
    msg string
    receiver int
}

func main() {
    ch := make(chan *obj) // buffered or unbuffered both work
    var wg sync.WaitGroup
    receiver := 25 // specify the receiver count

    sender := func() {
        o := &obj{
            msg:      "hello everyone!",
            receiver: receiver,
        }
        ch <- o
    }
    recv := func(idx int) {
        defer wg.Done()
        o := <-ch
        fmt.Printf("%d received at %d\n", idx, o.receiver)
        o.receiver--
        if o.receiver > 0 {
            ch <- o // forward to the others
        } else {
            fmt.Printf("last receiver: %d\n", idx)
        }
    }

    go sender()
    for i := 0; i < receiver; i++ {
        wg.Add(1)
        go recv(i)
    }

    wg.Wait()
}

The output order is random:

5 received at 25
24 received at 24
6 received at 23
7 received at 22
8 received at 21
9 received at 20
10 received at 19
11 received at 18
12 received at 17
13 received at 16
14 received at 15
15 received at 14
16 received at 13
17 received at 12
18 received at 11
19 received at 10
20 received at 9
21 received at 8
22 received at 7
23 received at 6
2 received at 5
0 received at 4
1 received at 3
3 received at 2
4 received at 1
last receiver: 4
Author: Ilia Choly

Updated on February 17, 2022

Comments

  • Ilia Choly
    Ilia Choly about 2 years

    I have multiple goroutines trying to receive on the same channel simultaneously. It seems like the last goroutine that starts receiving on the channel gets the value. Is this somewhere in the language spec or is it undefined behaviour?

    c := make(chan string)
    for i := 0; i < 5; i++ {
        go func(i int) {
            <-c
            c <- fmt.Sprintf("goroutine %d", i)
        }(i)
    }
    c <- "hi"
    fmt.Println(<-c)
    

    Output:

    goroutine 4
    

    Example On Playground

    EDIT:

    I just realized that it's more complicated than I thought. The message gets passed around all the goroutines.

    c := make(chan string)
    for i := 0; i < 5; i++ {
        go func(i int) {
            msg := <-c
            c <- fmt.Sprintf("%s, hi from %d", msg, i)
        }(i)
    }
    c <- "original"
    fmt.Println(<-c)
    

    Output:

    original, hi from 0, hi from 1, hi from 2, hi from 3, hi from 4
    

    NOTE: the above output is outdated in more recent versions of Go (see comments)

    Example On Playground

  • mlbright
    mlbright almost 11 years
    don't you need to wait for all goroutines to finish?
  • Rick-777
    Rick-777 almost 11 years
    It depends what you mean. Have a look at the play.golang.org examples; they have a main function that terminates once it reaches the end, regardless of what any other goroutines are doing. In the first example above, main is in lock-step with the other goroutines, so there's no problem. The second example also works without problem because all messages are sent via c before the close function is called, and this happens before the main goroutine terminates. (You might argue that calling close is superfluous in this case, but it's good practice.)
  • olov
    olov about 10 years
    assuming that you want to (deterministically) see 15 printouts in the last example, you do need to wait. To demonstrate that, here's the same example but with a time.Sleep just before the Printf: play.golang.org/p/cEP-UBPLv6
  • olov
    olov about 10 years
    And here's the same example with a time.Sleep and fixed with a WaitGroup to wait for the goroutines: play.golang.org/p/ESq9he_WzS
  • user31208
    user31208 over 9 years
  • user
    user over 8 years
    Great lib you have there! I have found also github.com/asaskevich/EventBus
  • user
    user over 8 years
    And not a big deal, but perhaps you should mention how to unjoin in the readme.
  • user
    user over 8 years
    I don't think it's a good recommendation to omit buffering at first. Without buffering you aren't really writing concurrent code: not only can't you deadlock, but the result of the handling on the other side of the channel is already available on the next instruction after the send, and you may unintentionally (or even intentionally, in the case of a novice) rely on that. Once you rely on having a result immediately, without explicitly waiting for it, adding a buffer gives you a race condition.
  • user
    user over 8 years
    And a race condition is harder to debug than a deadlock, because the Go runtime detects deadlocks and tries to help you with a list of stuck goroutines, but it can't do anything about race conditions. In other words, leaving out buffering at the start doesn't lead to a more thread-safe application architecture.
  • Rick-777
    Rick-777 over 8 years
    No, this is a misunderstanding of the point. If you learn to think of goroutines in an abstract CSP way, you'd envisage something akin to a data-flow graph diagram. The nodes in the graph represent the concurrent goroutines. With this model, you would share nothing, so there would be no race conditions (and the race detector should confirm this). Then you'd be left with the issue of deadlocks to deal with. At this point, it is easier to minimise or eliminate buffering in the conceptual model until the deadlocks have been understood and designed out.
  • Rick-777
    Rick-777 over 8 years
    Then re-add buffering where it gives better performance overall.
  • Rick-777
    Rick-777 over 8 years
    In simple code like that shown above, note that unbuffered channels between goroutines force the goroutines to interleave. They are still running concurrently but not necessarily in parallel. (Concurrency is not parallelism, Rob Pike)
  • jhvaras
    jhvaras about 7 years
    Memory leak there
  • Alexander I.Grafov
    Alexander I.Grafov about 7 years
    :( Can you explain details @jhvaras?
  • ThePartyTurtle
    ThePartyTurtle over 4 years
    Ahh this was helpful. Would a good alternative be to create a channel for each Go routine that needs the info, then send message on all channels when necessary? That's the option I can imagine.
  • starriet
    starriet about 2 years
    1) "...goroutines must wait in line.": Is there any reference? I'm just curious if one goroutine is selected randomly or they wait in line (both situations sound similar but slightly different). 2) we should add sync.WaitGroup if we wanna see all 25 outputs. Without it, the main goroutine will finish earlier.
  • Admin
    Admin about 2 years
    yes, it is random. I don't understand all the fuss about this question....
  • starriet
    starriet about 2 years
    @mh-cbon I posted this answer because the original question hasn't been answered yet (the question was: "Is this somewhere in the language spec or is it undefined behaviour?"). OP even misunderstood his/her code examples, but no one told about it.