Here is my version:
```nim
# Compile and run with: nim c -r -d:useRealtimeGC -d:release main.nim
import strutils
import times

const
  windowSize = 200_000
  msgCount = 1_000_000

type
  Msg = seq[byte]
  Buffer = seq[Msg]

var worst: float

proc mkMessage(n: int): Msg =
  result = newSeq[byte](1024)
  for i in 0 ..< result.len:
    result[i] = byte(n and 0xff)  # mask so the conversion can't overflow a byte

proc pushMsg(b: var Buffer, highID: int) =
  let start = epochTime()
  let m = mkMessage(highID)
  b[highID mod windowSize] = m  # overwriting the old message creates garbage
  GC_step(10_000)               # let the GC work for at most 10 ms (10_000 us)
  let elapsed = epochTime() - start
  if elapsed > worst:
    worst = elapsed

proc main() =
  GC_disable()  # no automatic collections; GC_step above is the only GC work
  var b = newSeq[Msg](windowSize)
  for i in 0 ..< msgCount:
    pushMsg(b, i)
  let worstMs = formatFloat(worst * 1000, format = ffDecimal, precision = 2)
  echo("Worst push time: ", worstMs, "ms")
  echo(GC_getStatistics())

when isMainModule:
  main()
```
Things I've changed:
* Removed extraneous spaces after ``(`` and before ``)``.
* Rewrote code to make it more idiomatic.
* Replaced ``ref array[1024, byte]`` with ``seq[byte]`` (the former is not
  idiomatic at all); a sketch of the old variant appears further below.
* Switched from ``cpuTime`` to ``epochTime``; the former only measures CPU
  time(!). There is a short demo of the difference right after this list.
* Added some real-time GC API calls.
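To make the ``cpuTime`` pitfall concrete, here is a tiny standalone demo (it assumes nothing from the benchmark): ``cpuTime`` stands still while the process sleeps, whereas ``epochTime`` tracks wall-clock time, which is what matters when measuring pause latency.
```nim
import times, os

let c0 = cpuTime()
let e0 = epochTime()
sleep(100)  # idle for 100 ms; the process burns no CPU time
echo "cpuTime delta:   ", cpuTime() - c0    # ~0.0 s: sleeping costs no CPU time
echo "epochTime delta: ", epochTime() - e0  # ~0.1 s: wall-clock time still passed
```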
Strangely, switching to ``seq`` causes an overall slowdown (the max push time
is unaffected, but the application takes longer to complete).
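For comparison, this is roughly what the ``ref array`` message type looked like; the names here are my reconstruction, not the original code:
```nim
type
  MsgRef = ref array[1024, byte]   # hypothetical name for the old message type

proc mkMessageRef(n: int): MsgRef =
  new(result)                      # a single GC-managed, fixed-size allocation
  for i in 0 ..< 1024:
    result[i] = byte(n and 0xff)   # ref arrays auto-dereference on indexing
```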
On my machine the typical worst push time is 10 ms. Go 1.5 (maybe I should
upgrade) gets 8 ms, so Nim is 2 ms off, which isn't bad. As /u/matthieum
pointed out on Reddit, the power of Nim is that you can specify exactly where
the GC is allowed to run (as I've done with the ``GC_step`` call above).
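If picking the collection points by hand feels too manual, the realtime GC also exposes a declarative knob: ``GC_setMaxPause`` asks the collector to keep each pause under a given budget (in microseconds). A minimal sketch, with the 2 ms budget and the allocation loop chosen arbitrarily:
```nim
# Compile with -d:useRealtimeGC, as before.
GC_setMaxPause(2_000)  # ask for at most ~2 ms per collection pause

var junk: seq[seq[byte]]
for i in 0 ..< 200_000:
  junk.add(newSeq[byte](1024))  # steady allocation pressure...
  if junk.len > 256:
    junk.setLen(0)              # ...with references dropped regularly, creating garbage

echo GC_getStatistics()
```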
BTW, ``stack`` isn't a GC.