No it doesn't. The proof for amortized O(1) only works if you double the array size on every expansion, which only happens until a "ceiling" is hit, after which the slice expands by less.
This results in O(n) complexity per append.
And this is assuming an absence of memory pressure; if there is memory pressure it very quickly becomes O(n^2), and god help you if it hits swap.
I had looked at append's behavior when I wrote that post, and for large slices it was increasing the size by 25% each time. That (or any fixed proportion) gives you O(1) amortized time.
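A quick simulation illustrates the point. The 25% factor below mirrors the growth described above, but note it's an assumption for the sketch: the real growth policy lives inside the Go runtime and varies between versions. What matters is that with *any* fixed proportional growth, the total number of element copies stays within a constant multiple of the number of appends.

```go
package main

import "fmt"

// simulateAppends models a hypothetical backing array that grows by ~25%
// whenever it fills, and returns the total number of element copies
// performed across n appends.
func simulateAppends(n int) (copies int) {
	capacity, length := 1, 0
	for i := 0; i < n; i++ {
		if length == capacity {
			copies += length           // a grow copies every existing element
			capacity += capacity/4 + 1 // grow capacity by roughly 25%
		}
		length++
	}
	return copies
}

func main() {
	n := 1_000_000
	copies := simulateAppends(n)
	// With growth factor r, total copies are bounded by roughly n/(r-1),
	// so copies/append is a constant (~4-5 for r = 1.25), i.e. O(1) amortized.
	fmt.Printf("appends=%d copies=%d copies/append=%.2f\n",
		n, copies, float64(copies)/float64(n))
}
```

Smaller factors trade more copying (a larger constant) for less wasted memory, but the amortized cost per append remains constant either way; only genuine memory pressure, as the parent comment notes, changes that picture.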