Channels in Go are like pipes: you send values in one end and receive them at the other, connecting parts of your program that are running at the same time. The select statement is like a switchboard operator, letting your code wait on multiple pipes at once and react to the first one that’s ready. Together, they help you build programs that do many things simultaneously in a coordinated way. Let’s walk through some practical ways to use them.
Think of a Worker Pool like a team in a kitchen. You have a bunch of orders coming in (tasks) and a fixed number of chefs (workers). You don’t hire a new chef for every single order; that would be chaos. Instead, you have a queue of orders and your existing chefs pick them up as they finish their current one. In code, you create a channel for tasks and start a few goroutines that listen to that channel.
package main

import (
	"fmt"
	"sync"
	"time"
)

type Task struct {
	ID int
}

type Result struct {
	TaskID int
	Output string
}

func worker(id int, tasks <-chan Task, results chan<- Result, wg *sync.WaitGroup) {
	defer wg.Done()
	for task := range tasks {
		// Simulate work
		time.Sleep(100 * time.Millisecond)
		results <- Result{
			TaskID: task.ID,
			Output: fmt.Sprintf("Worker %d processed task %d", id, task.ID),
		}
	}
}

func main() {
	numWorkers := 3
	numTasks := 10

	tasks := make(chan Task, numTasks)
	results := make(chan Result, numTasks)
	var wg sync.WaitGroup

	// Start the worker pool
	for i := 1; i <= numWorkers; i++ {
		wg.Add(1)
		go worker(i, tasks, results, &wg)
	}

	// Send tasks
	go func() {
		for i := 1; i <= numTasks; i++ {
			tasks <- Task{ID: i}
		}
		close(tasks) // This signals to the workers that no more tasks are coming
	}()

	// Wait for workers to finish, then close results channel
	go func() {
		wg.Wait()
		close(results)
	}()

	// Collect results
	for result := range results {
		fmt.Println(result.Output)
	}
}
This approach controls how many goroutines are active at once, which is great for managing memory and not overwhelming systems like databases or APIs.
Sometimes you have one fast source of data and you need many goroutines to handle it. This is Fan-Out. Imagine a ticker tape machine spouting out stock prices, and you have ten analysts each reading the tape to make decisions. You create one channel that receives all the prices, and multiple worker goroutines read from that same channel. The channel distributes the messages to whichever worker is free.
The opposite is Fan-In. Now imagine each of those analysts shouts their conclusion. You need one person to listen to all of them and write a single summary report. Multiple producer goroutines send results into a single channel that one consumer reads from.
func fanIn(inputs ...<-chan string) <-chan string {
	combined := make(chan string)
	var wg sync.WaitGroup

	// For each input channel, start a goroutine that forwards its messages
	for _, input := range inputs {
		wg.Add(1)
		go func(ch <-chan string) {
			defer wg.Done()
			for msg := range ch {
				combined <- msg
			}
		}(input)
	}

	// Close the combined channel once all forwarders are done
	go func() {
		wg.Wait()
		close(combined)
	}()
	return combined
}

func main() {
	ch1 := make(chan string)
	ch2 := make(chan string)

	// Start two producers
	go func() {
		ch1 <- "report from sensor 1"
		ch1 <- "another from sensor 1"
		close(ch1)
	}()
	go func() {
		ch2 <- "alert from sensor 2"
		close(ch2)
	}()

	// Combine them into one stream
	combinedCh := fanIn(ch1, ch2)
	for msg := range combinedCh {
		fmt.Println("Received:", msg)
	}
}
This gives you a single, simple stream of data to process, even though it’s coming from many places.
A Request-Response pattern is like sending a letter with a return envelope included. You package your request with a dedicated channel where you expect the reply. This creates a clean, two-way conversation between goroutines.
type Request struct {
	Data    string
	ReplyCh chan Response
}

type Response struct {
	ProcessedData string
	Err           error
}

func processor(requests <-chan Request) {
	for req := range requests {
		go func(r Request) {
			// Simulate processing
			processed := fmt.Sprintf("Processed: %s", r.Data)
			// Send the response back on the provided channel
			r.ReplyCh <- Response{ProcessedData: processed}
		}(req)
	}
}

func main() {
	reqChan := make(chan Request)
	go processor(reqChan)

	// Make a request
	replyCh := make(chan Response)
	reqChan <- Request{Data: "Hello", ReplyCh: replyCh}

	// Wait for the response
	resp := <-replyCh
	fmt.Println(resp.ProcessedData) // Output: Processed: Hello
}
It keeps the communication neat; the processor doesn’t need to know who is asking, just where to send the answer.
In the real world, things fail. A service might hang. Timeout Protection is your safety net. You use select with time.After to give up if something takes too long. I use this constantly when fetching data from external APIs.
func fetchFromAPI(apiURL string, timeout time.Duration) (string, error) {
	resultCh := make(chan string, 1)
	errCh := make(chan error, 1)

	go func() {
		// Simulate a slow or failing network call
		time.Sleep(2 * time.Second)
		// In reality, you'd do http.Get here
		resultCh <- "api data"
		// errCh <- fmt.Errorf("network error")
	}()

	select {
	case result := <-resultCh:
		return result, nil
	case err := <-errCh:
		return "", err
	case <-time.After(timeout):
		return "", fmt.Errorf("request to %s timed out after %v", apiURL, timeout)
	}
}
Without this, a single stuck goroutine could make your whole program freeze, waiting forever on a channel receive.
Related to timeouts is Cancellation Propagation. You often need to stop a whole tree of operations, not just one. The standard way is to use a context.Context, but the underlying idea often involves a done channel. When you close that channel, every goroutine listening for it gets notified and can shut down cleanly.
func longRunningTask(done <-chan struct{}, input int) <-chan int {
	output := make(chan int)
	go func() {
		defer close(output)
		select {
		case <-time.After(time.Second * 5): // Simulating a long task
			output <- input * 2
		case <-done:
			fmt.Println("Task cancelled")
			return
		}
	}()
	return output
}

func main() {
	done := make(chan struct{})
	resultCh := longRunningTask(done, 42)

	// Decide to cancel after 1 second
	go func() {
		time.Sleep(1 * time.Second)
		fmt.Println("Sending cancellation signal")
		close(done)
	}()

	result, ok := <-resultCh
	if ok {
		fmt.Println("Got result:", result)
	} else {
		fmt.Println("No result, task was cancelled")
	}
}
Closing the done channel broadcasts the cancellation signal to all goroutines that are selecting on it. It’s a clean and efficient way to manage the lifecycle of concurrent work.
Not all messages are equally important. Priority Selection lets you check a high-priority channel first. If there’s nothing there, you can fall back to checking the regular channels. The trick is pairing a non-blocking select (one with a default clause) with a second, blocking select.
func priorityManager(urgent, regular <-chan string, done <-chan struct{}) {
	for {
		// First, check only the urgent channel.
		select {
		case msg := <-urgent:
			fmt.Println("URGENT:", msg)
			continue // Go back to the start, check for more urgent messages first.
		default:
			// No urgent message right now.
		}

		// Now block on all channels; urgent is still included here too.
		select {
		case msg := <-urgent:
			fmt.Println("URGENT (in main select):", msg)
		case msg := <-regular:
			fmt.Println("Regular:", msg)
		case <-done:
			fmt.Println("Shutting down priority manager")
			return
		}
	}
}
This ensures your system remains responsive to critical events even while handling a steady stream of normal work.
If you call an API too fast, you might get blocked. Rate Limiting controls how often you perform an action. You can use a ticker channel as a metronome, only allowing an action on each tick.
func rateLimitedCall(requests <-chan string, limitPerSecond int) {
	// time.Tick returns a channel that delivers a value at the specified interval.
	limiter := time.Tick(time.Second / time.Duration(limitPerSecond))
	for req := range requests {
		<-limiter // Wait for the next tick
		go func(r string) {
			fmt.Printf("Making call for: %s\n", r)
			// Perform the API call here
		}(req)
	}
}
For a more sophisticated limiter that can handle bursts, you’d use a buffered channel as a token bucket, but the core idea is the same: use channel operations to gate your actions.
Sometimes you need to tell many goroutines about an event at the same time, like a configuration reload or a shutdown signal. A Broadcast Signal does this. The key is that closing a channel is itself a broadcast: every goroutine waiting to receive from a closed channel gets the zero value immediately.
type Broadcaster struct {
	mu        sync.Mutex
	listeners []chan struct{}
}

func (b *Broadcaster) Subscribe() <-chan struct{} {
	b.mu.Lock()
	defer b.mu.Unlock()
	ch := make(chan struct{})
	b.listeners = append(b.listeners, ch)
	return ch
}

func (b *Broadcaster) Broadcast() {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, listener := range b.listeners {
		close(listener) // This notifies the listener
	}
	// Clear the old listeners; new subscribers will get a new channel
	b.listeners = nil
}

func main() {
	var broadcaster Broadcaster

	// Goroutine 1 subscribes
	go func() {
		signal := broadcaster.Subscribe()
		<-signal
		fmt.Println("Goroutine 1 received broadcast")
	}()

	// Goroutine 2 subscribes
	go func() {
		signal := broadcaster.Subscribe()
		<-signal
		fmt.Println("Goroutine 2 received broadcast")
	}()

	time.Sleep(10 * time.Millisecond) // Let goroutines subscribe
	fmt.Println("Broadcasting...")
	broadcaster.Broadcast()
	time.Sleep(10 * time.Millisecond) // Let goroutines print
}
This is simpler and often more efficient than sending individual messages to each listener.
Finally, you can connect channels into a Pipeline. Each stage is a function that takes an input channel, does some work, and returns an output channel. You chain these together to process data in steps. It makes your code very modular and easy to reason about.
// Stage 1: Generate numbers
func generate(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		for _, n := range nums {
			out <- n
		}
		close(out)
	}()
	return out
}

// Stage 2: Square numbers
func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		for n := range in {
			out <- n * n
		}
		close(out)
	}()
	return out
}

// Stage 3: Format numbers as strings
func toString(in <-chan int) <-chan string {
	out := make(chan string)
	go func() {
		for n := range in {
			out <- fmt.Sprintf("Result: %d", n)
		}
		close(out)
	}()
	return out
}

func main() {
	// Set up the pipeline: generate -> square -> toString
	numbers := generate(2, 3, 4, 5)
	squared := square(numbers)
	results := toString(squared)

	for msg := range results {
		fmt.Println(msg)
	}
	// Output:
	// Result: 4
	// Result: 9
	// Result: 16
	// Result: 25
}
Each stage runs concurrently. As soon as the generate stage produces a number, the square stage can start working on it, and so on. This is a powerful model for stream processing.
These patterns are tools. You start with simple channel sends and receives, and then you combine them with select and other goroutines to build the coordination your program needs. The beauty is that the complexity is contained within clear communication patterns. You’re not managing mutexes and shared memory directly; you’re designing how the parts of your program talk to each other. It takes practice, but once these patterns click, writing clear, robust concurrent code in Go becomes much more intuitive. Start with a worker pool or a simple pipeline, and go from there.