
No it doesn't. The proof for amortized O(1) only works if you double the array size on every expansion, which Go only does until a "ceiling" is hit; after that the slice expands by less.

This results in O(n) complexity.

And this assumes the absence of memory pressure; under memory pressure it very quickly becomes O(n^2), and god help you if it hits swap.



I had looked at append's behavior when I wrote that post, and for large slices it was increasing the size by 25% each time. That (or any constant growth factor) gives you O(1) amortized time.
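To see why a constant growth factor is enough, you can count element copies directly. A minimal sketch, simulating a hypothetical 1.25x growth rule (not Go's exact policy, which also has a doubling phase and size-class rounding) and checking that total copies stay within a constant multiple of the number of appends:

```go
package main

import "fmt"

// copiesFor simulates appending n elements one at a time to a slice that
// grows by 25% on each reallocation (a hypothetical growth rule, not Go's
// exact one), and returns the total number of element copies performed.
func copiesFor(n int) int {
	capacity, length, copies := 1, 0, 0
	for i := 0; i < n; i++ {
		if length == capacity {
			copies += length // a reallocation copies every existing element
			grown := capacity + capacity/4
			if grown == capacity {
				grown = capacity + 1 // ensure progress for tiny capacities
			}
			capacity = grown
		}
		length++
	}
	return copies
}

func main() {
	n := 10_000_000
	c := copiesFor(n)
	// The copies/append ratio stays bounded by a small constant
	// (roughly 1/(r-1) for growth factor r), so amortized cost is O(1).
	fmt.Printf("appends: %d  copies: %d  ratio: %.2f\n", n, c, float64(c)/float64(n))
}
```

With r = 1.25 the ratio lands around 4-5 copies per append and does not grow with n, which is exactly the amortized O(1) claim.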


Really? I thought it was limited to something like a fixed number of megabytes of increase per call.


Appending to a slice I get these capacities:

  cap:  1
  cap:  2
  cap:  4
  ...
  cap:  512
  cap:  1024
  cap:  1312
  cap:  1696
  cap:  2208
  cap:  3072
  cap:  4096
  cap:  5120
  cap:  7168
  ...
  cap:  192797696
  cap:  240997376
So it looks like it starts by doubling, then the growth gets irregular between 1024 and 4096, and after that it multiplies by roughly 1.25.
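For anyone who wants to reproduce the capacities above, a loop like this prints the capacity at each reallocation (the exact sequence depends on the Go version and the element size, so your numbers may differ):

```go
package main

import "fmt"

// Print the slice capacity each time append triggers a reallocation.
// The exact sequence varies across Go versions and element types.
func main() {
	var s []int
	prevCap := -1
	for i := 0; i < 1_000_000; i++ {
		s = append(s, i)
		if cap(s) != prevCap {
			fmt.Println("cap: ", cap(s))
			prevCap = cap(s)
		}
	}
}
```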




