Am I reading it correctly that `clear` does different things for maps and slices? Why doesn't it remove all the items from the slice like it does with the map, or set the values in the map to the zero value like it does for slices? That seems like an easy thing to get tripped up on
That _is_ removing all the items from it; my point is that if you pass a map with `n` entries to clear, you end up with a map with 0 entries. If you do the same with a slice with `n` elements, I'd imagine most people would expect to end up with a slice with 0 elements, but instead you have a slice with `n` copies of the zero value.
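To make the difference concrete, here's a minimal sketch (assumes Go 1.21+, where `clear` was added):

```go
package main

import "fmt"

func main() {
	m := map[string]int{"a": 1, "b": 2}
	clear(m) // removes every entry: the map is now empty
	fmt.Println(len(m)) // 0

	s := []int{1, 2, 3}
	clear(s) // zeroes the elements, but the length is unchanged
	fmt.Println(len(s), s) // 3 [0 0 0]
}
```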
But it's not "removing items", at least not for all meanings of the word "removing". You can see this with something like:
```go
package main

import "fmt"

func main() {
	s := []string{"hello", "world", "foo", "bar"}
	fmt.Println(s) // [hello world foo bar]
	s = s[:0]
	fmt.Println(s) // []
	s = append(s, "XXX")
	s = s[:2]
	fmt.Println(s) // [XXX world]
}
```
The last line prints `[XXX world]` because the slice is still backed by the same array, and nothing was ever "deleted": only the slice's length was updated.
This is also why `delete(slice, n)` doesn't exist: `delete` only operates on maps.
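For completeness, a small runnable sketch of `delete` doing the map-only removal that has no slice counterpart:

```go
package main

import "fmt"

func main() {
	m := map[string]int{"a": 1, "b": 2}
	delete(m, "a") // removes the entry entirely; defined only for maps
	fmt.Println(m) // map[b:2]

	// delete(s, 0) on a slice s would be a compile error:
	// there is no cheap "remove and shift" the runtime could do in place.
}
```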
I suppose `clear(slice)` could allocate a new array, but that's not the same behaviour as `clear(map)` either, and doesn't really represent the common understanding of "clearing a slice". The only behaviour I can think of that vaguely matches what "clearing a slice" means is what it does now.
Okay, yeah, that definitely isn't what I expected. It's pretty wild to me that `s = s[:2]` will ever work fine if `len(s) == 1`; I would have assumed that it would always be the same regardless of how the slice was created. Playing around with it, it seems like this means that if you pass a subslice to a function, that function can get access to things from the entire slice, including the portions that weren't in the slice passed in[1]!
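Something like this sketch shows what I mean (the function name `peek` is just illustrative): the callee receives a 2-element slice, but because its capacity extends over the caller's backing array, it can reslice and read, or even write, elements it was never "given":

```go
package main

import "fmt"

// peek receives a slice of length 2, but cap(sub) is still 4,
// so reslicing up to the capacity exposes the caller's whole array.
func peek(sub []string) {
	wider := sub[:4]          // legal: 4 <= cap(sub)
	fmt.Println(wider)        // [a b c d] — all four elements
	wider[3] = "overwritten"  // writes through to the caller's array
}

func main() {
	s := []string{"a", "b", "c", "d"}
	peek(s[:2])    // only "passes" the first two elements...
	fmt.Println(s) // [a b c overwritten] — ...but the last one changed
}
```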
I think I understand now why `clear` can't work on slices the way I think it should, but only because slices themselves don't work the way I feel even more strongly that they should.
Slices in Go are a tad counter-intuitive, I agree, but I think the approach does make sense. It allows you to use "dynamic sized arrays" for most cases like you would in Python and not worry too much about the mechanics, at the price of some reduced performance, but in cases where this kind of performance does matter it allows you to be precise about allocations and array sizes. So you kind of get the best of both worlds.
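A quick sketch of the "being precise" side of that tradeoff: preallocating with `make` so appends never reallocate, and using a full slice expression (`s[low:high:max]`) to cap capacity so a callee can't reslice past what it was handed:

```go
package main

import "fmt"

func main() {
	// Preallocate: length 0, capacity 100, so the appends below
	// never copy the backing array.
	s := make([]int, 0, 100)
	for i := 0; i < 3; i++ {
		s = append(s, i)
	}
	fmt.Println(len(s), cap(s)) // 3 100

	// The three-index form caps the capacity at 2, so the hidden
	// elements of the backing array are no longer reachable from sub.
	sub := s[:2:2]
	fmt.Println(len(sub), cap(sub)) // 2 2
}
```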