Is there a queue implementation?

94,742

Solution 1

Either vector or list should work, but vector is probably the way to go. I say this because vector will probably allocate less often than list and garbage collection (in the current Go implementation) is fairly expensive. In a small program it probably won't matter, though.

Solution 2

In fact, if what you want is a basic and easy-to-use FIFO queue, a slice provides all you need.

queue := make([]int, 0)
// Push to the queue
queue = append(queue, 1)
// Top (peek at the next element without removing it)
x := queue[0]
// Discard the top element
queue = queue[1:]
// Is it empty?
if len(queue) == 0 {
    fmt.Println("Queue is empty!")
}

Of course, this assumes we can trust the inner implementation of append and slicing to avoid useless resizing and reallocation. For basic usage, this is perfectly sufficient.

Solution 3

Surprised to see that no one has suggested buffered channels yet, at least for size-bounded FIFO queues.

// Or however many you might need + buffer.
c := make(chan int, 300)

// Push
c <- value

// Pop
x := <-c

Solution 4

Most queue implementations are in one of three flavors: slice-based, linked list-based, and circular-buffer (ring-buffer) based.

  • Slice-based queues tend to waste memory because they do not reuse the memory previously occupied by removed items. Also, slice-based queues tend to be only single-ended.
  • Linked-list queues can be better about memory reuse, but are generally a little slower and use more memory overall because of the overhead of maintaining links. They can offer the ability to add and remove items from the middle of the queue without moving memory around, but if you are doing much of that, a list is the wrong data structure.
  • Ring-buffer queues offer all the efficiency of slices, with the advantage of not wasting memory. Fewer allocations means better performance. They are just as efficient at adding and removing items from either end, so you naturally get a double-ended queue. As a general recommendation, then, I would suggest a ring-buffer-based queue implementation. This is what is discussed in the rest of this post.

The ring-buffer-based queue reuses memory by wrapping its storage around: as the queue grows beyond one end of the underlying slice, it wraps around to the other end of the slice. See deque diagram.

Also, a bit of code to illustrate:

// PushBack appends an element to the back of the queue.  Implements FIFO when
// elements are removed with PopFront(), and LIFO when elements are removed
// with PopBack().
func (q *Deque) PushBack(elem interface{}) {
    q.growIfFull()
    q.buf[q.tail] = elem
    // Calculate new tail position.
    q.tail = q.next(q.tail)
    q.count++
}

// next returns the next buffer position wrapping around buffer.
func (q *Deque) next(i int) int {
    return (i + 1) & (len(q.buf) - 1) // bitwise modulus
}

This particular implementation always uses a buffer size that is a power of 2, so it can compute the remainder with a bitwise AND, which is a little more efficient than the modulus operator.

This means the slice needs to grow only when all its capacity is used up. With a resizing strategy that avoids growing and shrinking storage on the same boundary, this makes it very memory efficient.

Here is code that resizes the underlying slice buffer:

// resize resizes the deque to fit exactly twice its current contents. This is
// used to grow the queue when it is full, and also to shrink it when it is     
// only a quarter full.                                                         
func (q *Deque) resize() {
    newBuf := make([]interface{}, q.count<<1)
    if q.tail > q.head {
        copy(newBuf, q.buf[q.head:q.tail])
    } else {
        n := copy(newBuf, q.buf[q.head:])
        copy(newBuf[n:], q.buf[:q.tail])
    }
    q.head = 0
    q.tail = q.count
    q.buf = newBuf
}

Another thing to consider is whether you want concurrency safety built into the implementation. You may want to avoid it so that you can do whatever works best for your concurrency strategy, and you certainly do not want it if you do not need it; it is the same reason you would not want a slice that has some built-in serialization.

There are a number of ring-buffer based queue implementations for Go if you do a search on godoc for deque. Choose one that best suits your tastes.

Solution 5

Edit: a cleaner implementation of a Queue:

package main

import "fmt"

type Queue []interface{}

func (q *Queue) Push(x interface{}) {
    *q = append(*q, x)
}

func (q *Queue) Pop() interface{} {
    h := *q
    var el interface{}
    l := len(h)
    el, *q = h[0], h[1:l]
    // Or use this instead for a Stack
    // el, *q = h[l-1], h[0:l-1]
    return el
}

func NewQueue() *Queue {
    return &Queue{}
}

func main() {
    q := NewQueue()
    q.Push(1)
    q.Push(2)
    q.Push(3)
    q.Push("L")

    fmt.Println(q.Pop())
    fmt.Println(q.Pop())
    fmt.Println(q.Pop())
    fmt.Println(q.Pop())
}

Or just embed a "container/list" inside a simple implementation and expose the interface:

package queue

import "container/list"

// Queue is a queue
type Queue interface {
    Front() *list.Element
    Len() int
    Add(interface{})
    Remove()
}

type queueImpl struct {
    *list.List
}

func (q *queueImpl) Add(v interface{}) {
    q.PushBack(v)
}

func (q *queueImpl) Remove() {
    // Guard against an empty list; Front() returns nil when there is
    // nothing to remove.
    if e := q.Front(); e != nil {
        q.List.Remove(e)
    }
}

// New is a new instance of a Queue
func New() Queue {
    return &queueImpl{list.New()}
}
Author: rev
Updated on March 15, 2022

Comments

  • rev
    rev over 1 year

    Can anyone suggest a Go container for a simple and fast FIFO queue? Go has 3 different containers: heap, list and vector. Which one is more suitable to implement a queue?

  • peterSO
    peterSO about 9 years
    NOTE: Go 1 deletes the container/vector package outright. Code that uses container/vector should be updated to use slices directly. Go 1 Release Notes: Deleted packages. SliceTricks How to do vector-esque things with slices.
  • peterSO
    peterSO about 9 years
  • znkr
    znkr almost 9 years
    Problem with this is that the element will be copied quite often. That might not be an issue at all (copying is fast) but it's something to keep in mind.
  • Dave C
    Dave C about 8 years
    @Florian, this example code uses []int where copying is what you want. If instead it was type Foo struct {/*lots of fields*/} you would usually use []*Foo and have the queue be pointers and you wouldn't copy elements at all just the pointer. (Then you'd also want to do queue[0] = nil; queue = queue[1:] to discard the first element and remove the reference to it from the queue).
  • kostya
    kostya almost 8 years
    The problem with this implementation is that its memory usage is proportional to the number of dequeue calls.
  • GreySage
    GreySage over 7 years
    Perfect for a quick small queue and you just don't care about the memory concerns.
  • Rob
    Rob about 7 years
    For small queues that don't need persistence, this should be the default thing to do. Even if you are writing into an unbounded queue on disk, writing and reading from a channel is often a good way to do it.
  • Eric
    Eric over 6 years
    Indeed, this seems by far the easiest and most logical to me.
  • Nathan Tuggy
    Nathan Tuggy over 6 years
    This would be a rather better answer if it excerpted some of the more relevant code from your repo.
  • Peter Froehlich
    Peter Froehlich over 6 years
    I assumed that whoever wanted to see the code would just click the link. Sorry, I am totally new here. I'll update the answer with some snippets.
  • Nathan Tuggy
    Nathan Tuggy over 6 years
    Don't get me wrong, it's not a bad answer as it is, and certainly isn't going to be deleted as some superficially similar "link-only" answers are, but it could be a bit better than it is with more of the same: explanations of actual code, which are the most important thing for an SO answer.
  • Peter Froehlich
    Peter Froehlich over 6 years
    The funny thing is that posting this answer made me revise the code and now my queue is actually faster than a channel. More meat for a revised answer, coming soon.
  • wingerse
    wingerse over 6 years
    aand where's dequeue?
  • rai.skumar
    rai.skumar over 6 years
    Left out implementation intentionally (use enqueue method to understand how dequeue will get implemented) :)
  • wingerse
    wingerse over 6 years
    you mean q.values = q.values[i:] ? That's gonna waste memory.
  • wingerse
    wingerse over 6 years
    It wasn't from me.
  • Bora M. Alper
    Bora M. Alper over 6 years
    Although I am unsure, I am afraid that this might cause data loss as well if the operation is not carefully synchronised / protected using mutexes or other measures, unlike using channels as FIFO queues. Imagine a situation where a function that modifies the queue is called by two different goroutines. Since the append-and-assign operation is not atomic, you might lose data.
  • Sir
    Sir almost 6 years
    That gist is so poorly designed though =/
  • anaken78
    anaken78 almost 6 years
    Isn't x = <- c a blocking call? If c is empty then your goroutine could hang until the queue is replenished. That's not the behavior you want from a simple queue data structure.
  • mcandre
    mcandre over 5 years
    Slices suck at deleting elements. Slice is NOT a replacement for mutable vectors.
  • mcandre
    mcandre over 5 years
    For my purposes, popping occurs only once max capacity would be reached. Unfortunately, this behavior is hard to implement with a typical channel read/write pattern. I'm going to resort to dedicated queue structures for this problem.
  • tul
    tul over 5 years
    This is not a good approach, as every time queue[1:] is done, it moves the start of the slice to point to the next element, but does not release the spaced used by the dequeued element. Effectively it has unbounded memory growth in the slice storage. Additionally, if the queued element is a pointer, or a struct containing pointers, then the underlying slice storage will retain the memory to the dequeued element, causing a further memory leak.
  • Kostas
    Kostas over 5 years
    @anaken78: Nothing that a select/default clause can't fix, right? gobyexample.com/non-blocking-channel-operations
  • Marwan Burelle
    Marwan Burelle about 5 years
    I don't know the full underlying implementation of slices in Go. As I said after the code, I suppose that the implementation is not too basic. For production code, with a persistent queue, I'll probably do it differently ...
  • Igor Mikushkin
    Igor Mikushkin almost 5 years
    I've just tested it with Go 1.11.2 with pointers to a struct. No memory leaks found. Looks like a simple and good way. No nil setting needed as well. play.golang.org/p/QAujWCS20xc
  • Nuno Cruces
    Nuno Cruces almost 5 years
    Comments by @kostya and @tul are inaccurate. append will create a new backing array whenever there is not enough capacity to hold new elements. So, as long as you throw out the old slice queue=queue[1:], memory usage is not unbounded. It still might take a while to reclaim that memory if the slice is large.
  • Josh Hibschman
    Josh Hibschman about 4 years
    Agreed @NunoCruces. The first element will be garbage collected when enough new elements are added to the slice to cause reallocation --then removed elements can be discarded. go.googlesource.com/go/+/master/src/runtime/slice.go#76
  • Nuno Cruces
    Nuno Cruces about 4 years
    Right @y3sh. And if you're not sharing those queues, you can nil/zero the first element (as dave-c said) to claim elements sooner. This makes it a reasonable implementation for small slices.
  • Jin Lim
    Jin Lim over 3 years
    can I make 'channel' without size?
  • Nimrod Yonatan Ben-Nes
    Nimrod Yonatan Ben-Nes about 3 years
    @JinLim You can, just omit the second value passed to the make function, but then of course it won't be a buffered channel and thus you won't be able to use it as a queue.
  • bryanmac
    bryanmac over 2 years
    This is the best queue implementation, wrapping around with mod. The only thing I would add is that when "full" you can double the capacity and copy over by unwrapping the circular buffer. There's even an optimization to do it in two "mem copies".
  • zed
    zed over 2 years
    you should check for length 0 and pop nil... otherwise it will panic. Also, this is not thread safe.
  • David Rz Ayala
    David Rz Ayala over 2 years
    Good catch, checking for length 0 and pop nil is missing. But it not being thread safe is expected, ArrayList for Java or List for c# are not thread safe. Basic collections are not supposed to be thread safe as that hits performance.