This is at least the biggest release since 1.18 with generics, possibly bigger. I’m excited because the changes demonstrate a transition from the traditional Go philosophy of almost fanatical minimalism to a more utilitarian approach.
Loop variable capture is a foot-gun that in the last six years has cost me about 10-20 hours of my life. So happy to see that go. (Next on my list of foot-guns would be the default infinite network timeouts — in other words, your code works perfectly for 1-N months and then suddenly breaks in production. I always set timeouts now; there’s basically no downside)
Interesting to see them changing course on some fundamental decisions made very early on. The slices *Func() functions use cmp() int instead of less() bool, which is a huge win in my book. less() was the elegant yet bizarre choice: it often needs to be called twice, and isn’t as composable as cmp().
The slog package is much closer to the ecosystem consensus for logging. It’s very close to Uber’s zap, which we’re using now. The original log package was so minimal as to be basically useless. I wonder why they’re adding this now.
I’ve already written most of what’s in the slices and maps packages, but it’ll be nice to have blessed versions of those that have gone through much more API design rigor. I’ll be able to delete several hundred lines across our codebase.
What’s next? An http server that doesn’t force you to write huge amounts of boilerplate? Syntactic sugar for if err != nil? A blessed version of testify/assert? Maybe not, but I’m happy about these new additions.
> I always set timeouts now; there’s basically no downside
Beware of a naive http.Client{Timeout: ...} when downloading large payloads. I've always set http.Client.Timeout since day one with Go due to prior experience, but was bitten once when writing an updater downloading large binaries, since the Timeout is for the entire request start to finish. In those scenarios what you actually want is a connect timeout, TLS handshake timeout, read timeout, etc.
https://blog.cloudflare.com/the-complete-guide-to-golang-net... does a good job explaining how to set proper timeouts, except there's a small problem: it constructs an http.Transport from scratch; you should probably clone http.DefaultTransport and modify the dialer and various timeouts from there instead.
In general, setting timeouts beyond the whole-request timeout is pretty involved and not very well documented. I wish that could be improved.
> An http server that doesn’t force you to write huge amounts of boilerplate?
I just started my first Go tutorials this week. One of them was go.dev's Writing Web Applications [0]. I was actually struck by the lack of boilerplate (compared to frameworks I've used in Java/Python/etc.) involved.
I get that it's a toy example, but do you know of any better write-ups on what a production Go web server in industry looks like?
I don't think there necessarily is a default production webserver setup. People use different routers or frameworks, or go bare bones because they can.
You asked for an example, and here is one. This is my side project "ntfy", which runs a web app and API and handles hundreds of thousands of requests a day and thousands of constantly active socket connections. It uses no router framework, and has a modified (enhanced) version of http.HandlerFunc that can return errors. It also implements an errHTTP error type that lets handler functions return specific HTTP error codes with log context and an error message.
It is far from the most elegant, but to me Go is not about elegance, it's about getting things done.
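Not the actual ntfy code, but the pattern described (a HandlerFunc variant that returns an error, plus an errHTTP type) sketches out roughly like this:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// errHTTP carries a status code alongside the message, so handlers
// can return precise HTTP errors instead of writing them inline.
type errHTTP struct {
	Code int
	Msg  string
}

func (e *errHTTP) Error() string { return fmt.Sprintf("%d: %s", e.Code, e.Msg) }

// handlerFunc is like http.HandlerFunc but may return an error.
type handlerFunc func(w http.ResponseWriter, r *http.Request) error

// wrap adapts handlerFunc to http.Handler, translating errors into
// HTTP responses (and a natural place for logging) in one spot.
func wrap(h handlerFunc) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if err := h(w, r); err != nil {
			var e *errHTTP
			if errors.As(err, &e) {
				http.Error(w, e.Msg, e.Code)
				return
			}
			http.Error(w, "internal server error", http.StatusInternalServerError)
		}
	})
}

func main() {
	http.Handle("/topic", wrap(func(w http.ResponseWriter, r *http.Request) error {
		if r.Method != http.MethodPost {
			return &errHTTP{Code: http.StatusMethodNotAllowed, Msg: "use POST"}
		}
		fmt.Fprintln(w, "ok")
		return nil
	}))
	// http.ListenAndServe(":8080", nil)
}
```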
> This is at least the biggest release since 1.18 with generics, possibly bigger.
I sort of see what you're saying, but then again, the addition of a couple of small generic packages (slices, maps, cmp) and one larger package (log/slog) isn't exactly a huge amount of new surface area. Definitely not as big a qualitative change as generics themselves, which added I think about 30% more content to the Go spec.
> The slog package ... I wonder why they’re adding this now.
Because it's very useful to a ton of people, especially in the server/service world where Go is heavily used. To avoid a 3rd party dependency. To provide a common structured logging "backend" interface. See more at https://go.googlesource.com/proposal/+/master/design/56345-s...
I agree we can be enthusiastic, but the Go team is still spending a lot of time getting APIs right, finding solutions that fit well together, and so on. I don't think it's the downward spiral of "let's pull in everything" we've seen in P̶y̶t̶h̶o̶n̶ some other languages.
The popular Python “requests” HTTP library doesn’t have a default timeout. There’s a 2015 GitHub issue asking for a default timeout, even if only an opt-in environment variable to avoid breaking API compatibility. There are a lot of comments on the issue, but no commitment to implement it or to close it as “won’t fix”.
As far as I can tell, the network timeout (specifically SetDeadline, SetReadDeadline and SetWriteDeadline) is handled by the Go runtime, not by the OS. Given how complex real-world systems are, I wouldn't hold my breath on that one.
It is interesting to see them add things like the "clear" function for maps and slices after suggesting to simply loop and delete each key one at a time for so long. Is this a result of the generics work that makes implementation easier vs. the extra work of making a new "magic" function (like "make", etc.)?
That `clear` on a slice sets all values to their type's zero value is going to be extremely confusing especially coming from other languages (Rust, C#, C++, Java, ...) where the same-named function is used on list-ish types to set their length to zero.
Doubly-so when `clear` on a map actually seems to follow the convention of removing all contained elements.
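To make the asymmetry concrete (a quick example on 1.21):

```go
package main

import "fmt"

func main() {
	m := map[string]int{"a": 1, "b": 2}
	clear(m)
	fmt.Println(len(m)) // 0: all entries removed

	s := []int{1, 2, 3}
	clear(s)
	fmt.Println(len(s), s) // 3 [0 0 0]: zeroed, but same length
}
```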
Sure, although as a Go user, the behavior described is exactly what I’d expect. These new functions are no different from functions that you could write yourself.
I guess, but that seems expected to me at this point, and consistent within the semantics of how slices and maps work (and other values).
Maps are kind of like
type map *struct{ len int; ... }
Slices are kind of like
type slice struct{ len int; ... }
We get a lot of convenience by having the pointers auto-dereferenced, but the cost is that the semantics are still different and there are no syntactic markers to remind us of the fact.
I don't think any language has really given us something that is completely intuitive here. Python's semantics with the list type are a constant surprise to newcomers. C++’s semantics surprise newcomers. Rust's semantics surprise newcomers. Surprises all around. The best you can hope for is something that is internally consistent.
The slice in Go is more or less equivalent to &[] in Rust or std::span in C++. The whole idea of passing a pointer by value is key to understanding the semantics of most modern programming languages. Like, is Java pass-by-value or pass-by-reference? You can argue the point, but whatever label you decide is appropriate for Java, it’s useful to think of Java as passing pointers by value. Same with Python, Rust, Go, etc. This is not intuitive for people who are new to programming.
> The slice in Go is more or less equivalent to &[] in Rust or std::span in C++.
Not really, because they are mutable, they can mutate the underlying memory, and they can re-allocate. They are a weird mix of &mut []/Vec or std::{span,vector}.
In contrast, a Rust &[] can mutate the underlying storage (if it's an &mut []), but cannot spin out a new storage on its own and start a new life without a backing structure. I'm not intimately familiar with std::span, but I would wager the semantics are close.
Go slices can, which is why they are always tricky, especially for beginners. Not only does = not really do what is intuitively expected, and not only will every beginner be bitten in the ass by forgetting the `x =` in `x = append(x, y)`, but it is impossible, when calling a function expecting a slice, to know whether that function only wants a view on some memory or actually expects to modify it; a crucial difference that is very clear in the Rust or C++ type systems.
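Both traps in a few lines (my own minimal example):

```go
package main

import "fmt"

func main() {
	x := []int{1}

	// Classic beginner trap: append may return a new slice header
	// (and possibly a new backing array); dropping the result loses
	// the appended element.
	_ = append(x, 2) // wrong: x still has len 1
	x = append(x, 2) // right
	fmt.Println(x)   // [1 2]

	// And nothing in the signature says whether f mutates: callers
	// can't tell a read-only view from a mutable borrow by the type.
	f := func(s []int) { s[0] = 99 }
	f(x)
	fmt.Println(x[0]) // 99
}
```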
To be honest I kinda hate the reply to “X is like Y” comment when someone says “X is not like Y because of difference Z”. It's just… so pedantic. The whole reason we say “X is like Y” instead of “X is the same as Y” is because X is not the same as Y. I’m just really tired of seeing this response on HN over and over. I was pretty damn explicit when I said “more or less” and you’re here to argue about whether it is legal for me to say “more or less” in this context. I mean, geez, what a drag.
If you talk about how Go slices are tricky for beginners, but you cite C++ as some kind of gold standard against which Go should be compared, then I think you’ve lost the plot—C++’s type system is a complete and utter trash fire for people who are new to programming. Rust, as well, is very difficult for people to get into. Even the Python semantics for lists get people tripped up all the time.
a = [[]] * 5                # five references to the same inner list
b = [[] for _ in range(5)]  # five distinct lists
I bring this up because there is no language that gets things right for beginners and still provides the tools which professional programmers expect to have. And if you want to pick an example of a language that is particularly bad for beginners, C++ is it. C++ is shit for beginners. Complete shit. I bring up the Python example because it’s something I’m always explaining to people who are learning Python—Python is ok, but slicing a list in Python creates a new list containing a copy of the slice's contents.
The nuances of how references and values work is something that you have to work through, and then you have to come to terms with the conventions for the particular language you are using. IMO, Go’s slices are fine… you really just have to be careful about aliasing a slice you don’t own, but then again, that’s true for languages like C++, Python, Java, and C# as well. Rust is the only one that’s really different here.
> The whole reason we say “X is like Y” instead of “X is the same as Y” is because X is not the same as Y
Being able to change the underlying data is a pretty big difference. Technically, their only solid common point is that they address contiguous spaces in memory.
> you cite C++ as some kind of gold standard
I never did; I highlighted the difference between immutable views vs. whatever Go slices are.
In this scenario, we are comparing two cars, a bicycle, a jet ski, and three types of airplane. Yes, the cars are similar, within that context. Many languages, like Python and Java, do not have an array slice type. And the similarities between C++, Rust, and Go are relevant—the length is a property of the slice itself, and since the slice is passed by value, it is not modified by a function that accepts a slice as an argument, even if the objects the slice points to are modified by that function.
If you see a different context, then you misinterpreted what I wrote.
It is easy—trivial, even—to imagine scenarios where a particular “X is like Y” does not make sense. What you should do, as a reader, is try and understand what the writer means, rather than try to figure out some way to interpret a comment so that it is wrong, in your view.
The easy way out—saying “X is not like Y because of difference Z”—does not meaningfully contribute to the discussion.
> The slice in Go is more or less equivalent to &[] in Rust or std::span in C++.
My understanding is, to use the Rust/C++ term, slices in Go are owned, but they are not in Rust or C++. That is, they're a pointer + length in the latter two, but a pointer, length, and capacity in Go.
Types in C++ don’t carry ownership information inherently either, but they’re still thought of in these terms. I know Go doesn’t often use these terms, which is why I clarified.
I think the distinction is useful specifically because it explains why Go slices work differently than in at least those two languages.
I have a particular axe to grind when it comes to the word “ownership” of objects in programming. In C++ and Rust there is a very natural sense of ownership in that the owner of an object is who may deallocate the object, and that ownership may be shared with std::shared_ptr<T> in C++ or Rc<T> / Arc<T> in Rust. Ownership is such a useful concept in these languages because it is generally true that somebody must deallocate the object, and it must happen safely.
As a very natural consequence, people who spend long hours working in C++, Rust, C, or other similar languages start to associate, very closely, the notions of ownership and correctness. And indeed, ownership is broadly useful outside C++, Rust, and C. Even in a garbage-collected language like Java or Go, it is generally useful to have clear ownership. You don't modify objects that you don't own, or use objects outside their scope.
But occasionally, you come across a piece of code where ownership gets in the way. Perhaps some garbage-collected algorithm that transforms data with pointers going all over the place. It probably sounds like a mess, but that is not necessarily true either—it can be perfectly good, correct, readable code.
So while ownership is a useful concept for talking about specific pieces of Go code, or specific pieces of Java code, it is not applicable to all Go or Java code, and that’s fine. It’s kind of like talking about code in terms of functions—nearly every language on the planet makes heavy use of functions (or some equivalent), but it’s also true that code does not have to be organized in functions, and you will occasionally see code that does not use functions.
Every language has sharp edges, but Go's whole MO is to avoid footguns at the expense of verbosity (IMO). The for-loop shadowing issue that's fixed this release is a great example of Go deciding to do the intuitive thing rather than the "correct" thing, because that's how people work.
I don't think the implementation details matter to a user of a map or a slice (or an array for that matter) - they're language builtins (as opposed to span, vector and map in c++ which are library types).
In my experience, go has tons of footguns that come because of the verbosity. Rather than having clear abstractions that handle edge cases for you, you get to reimplement these things yourself every single time.
Case in point, clear. Or "typed nils". Or accidentally swallowing errors because you had to handle them manually. Or reimplementing higher-level job control on top of channels every single time.
Maybe generics have fixed this, I threw in the towel on golang before they released them.
But as an example, if you wanted to have any sort of higher-level management of goroutines (for example, a bounded number of background workers) you get to rewrite or copy-paste that code every place you want to accomplish that. A library couldn't exist to abstract away the idea of a pool of background workers because it can't know in advance what types you want to send over your channels.
Again, I wouldn't be surprised if post-generics there's a library now to do this for you. But for years if you wanted anything higher level than raw channels, you're basically on your own.
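There is now, for what it's worth: golang.org/x/sync/errgroup grew a SetLimit for bounded concurrency, and writing your own generic version is a one-pager. A sketch (the names are mine):

```go
package main

import (
	"fmt"
	"sync"
)

// Pool runs fn over inputs with at most n concurrent workers and
// returns the results in input order: the kind of helper that
// couldn't be written type-safely before Go 1.18 generics.
func Pool[In, Out any](n int, inputs []In, fn func(In) Out) []Out {
	out := make([]Out, len(inputs))
	sem := make(chan struct{}, n) // bounds concurrency at n
	var wg sync.WaitGroup
	for i, in := range inputs {
		wg.Add(1)
		go func(i int, in In) { // pass loop vars as args (pre-1.22 safe)
			defer wg.Done()
			sem <- struct{}{}
			defer func() { <-sem }()
			out[i] = fn(in)
		}(i, in)
	}
	wg.Wait()
	return out
}

func main() {
	squares := Pool(2, []int{1, 2, 3, 4}, func(x int) int { return x * x })
	fmt.Println(squares) // [1 4 9 16]
}
```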
Go slices are passed by value so there's no way for clear() to resize the underlying array without reassignment.
I suppose it could have been x = clear(x) or clear(&x), but certainly if you understand Go semantics then seeing any function call do Foo(slice) already signals that the call can't modify the length since there's no return value.
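Right. Since only the header (pointer, len, cap) is copied, a callee can never shrink the caller's slice; a tiny example:

```go
package main

import "fmt"

// truncate only changes its local copy of the slice header;
// the caller's slice keeps its length.
func truncate(s []int) {
	s = s[:0]
	_ = s
}

func main() {
	x := []int{1, 2, 3}
	truncate(x)
	fmt.Println(len(x)) // still 3

	x = x[:0]           // reassignment at the caller is the only way to shrink
	fmt.Println(len(x)) // 0
}
```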
This is a great example of why I dislike Go. It is not obvious that a slice is passed by value while a map is not, or why. Every action on them feels a bit weird because of that, and now you have functions like "clear" that take a very non-obvious action. Personally, I'd rather have pass-by-value be a compile error and only allow pass-by-reference (better: they should have made both maps and slices be pointers). I'm not sure I'd ever use a function that sets every value to its zero type.
I agree the semantics seem weird, I've occasionally wanted the equivalent of x = clear(x) but I can't think of a time when I've wanted to set all the values to the zero value.
Which boils down to "doing what clear(slice) does cannot be implemented efficiently today" but I'm not sure how having an efficient way to do something folks don't want is useful?
That's actually a great explanation of why it's not easy to implement the clear function the way it makes sense for slices. However, this is a built-in, not a normal function, so they could make it do whatever they like, including doing the intuitive and desired thing, no? It seems to me that they've just created another "loop variable gotcha" type situation...
> It is interesting to see them add things like the "clear" function for maps and slices after suggesting to simply loop and delete each key one at a time for so long.
Slowly walking back dogmatic positions is just how the Go team works.
I say this as a person that wrote Go full time for a handful of years.
In my experience, that's exactly how this plays out every single time.
Dev: Can we have a function to clear a map?
Go: No, it's easy enough to write the 5 lines of code to just do it yourself every time.
Dev: Okay, I don't see why I should have to write those 5 lines every time but fine. Isn't looping over everything going to be slower than just… having a function that can empty the internals?
Go: We've implemented a compiler optimization to detect this pattern and rewrite it to the faster code it would have been if we had implemented it for you.
Dev: Isn't that… way harder than just writing the method? Anyway, I noticed this solution doesn't actually always work because of this edge case.
Go: Just handle the edge case every time then.
Dev: That's the point. I can't.
Can you link to any unit of work that solves this edge case? Because all I can find are bugs and issues created 2015/2016/2017 that were closed and unresolved.
This isn't remotely close to the first or even the tenth time I've seen this exact pattern play out. Finally there's some straw that forces the golang team to backpedal on a dogmatic position, but along the way there's dozens of comical defenses of the current state of things.
I would like to see some content from the Go team on generics or clear that fits your claim. Yes, there are many in the community who speak the way you suggest, but you don’t seem to know much about the Go team's PoV.
From what I can tell the issue here is that Rob Pike thinks the label "generics" is inaccurate? Seems like a far cry from what you have accused them of. I think there's not only a lack of evidence to convince me, I think your claim is just straight-up unsubstantiated. I think an unbiased, responsible observer would have to conclude similarly.
I used to really like Go. Now that I don't work with it, I find that the further I go on without it, and with using other tools, the less and less I'd want to go back.
I would argue Go's inability to manage NaN keys is irrelevant to the desire for "clear": the NaN-keys issue should be fixed _regardless_ of clear.
Even when iterating, a NaN key isn't equal to itself, so delete still wouldn't remove it. Really, they just should have forbidden float64-keyed maps, but it's too late for that, I guess.
Clearing a container is usually a much simpler and faster operation than looping through all and removing them individually. That's not a question of tidying something up.
There were compiler optimizations for clearing by iterating. I haven’t looked at the code, but I suspect this won’t be much more efficient than iterating was with the optimizations.
Huh, I'm glad to see generic Min/Max functions, but the fact that they're built-ins is a little odd to me. I would have expected them to put a generic math library into the stdlib instead. The fact the stdlib math package only works with float64s has always struck me as a poorly thought out decision.
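For reference, a quick look at the builtins (my example):

```go
package main

import "fmt"

func main() {
	// Variadic, and they work on any ordered type.
	fmt.Println(min(3, 1, 2))  // 1
	fmt.Println(max("a", "b")) // b

	// As builtins they keep untyped constants untyped, so mixing
	// integer and float constants just works and yields a float64.
	x := min(0, 1.5)
	fmt.Println(x) // 0
}
```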
With ordinary functions, the arguments are assigned types too soon, and you get integer types for 0 and 1 in the above code. In C++ you might make the types explicit:
I can't think of a use case for that. If all the inputs are consts, then you know the values and can just assign it to be the less of a or b. Am I missing something here?
While I suspect open coding may make the optimization a little easier, there's no reason it couldn't optimize out a slice and fixed 2-3 iteration loop with the same result.
The proposal's real conclusion was "the decision cannot be resolved by empirical data or technical arguments."
Well, the obvious ones are of course Min and Max functions, which is resolved with this. Other ones I commonly find myself wanting to use with integers would be math.Abs and math.Pow I guess. Otherwise they are mostly functions useful with floats, so ultimately I understand the logic, though even in that case, it would be nice if they were usable with float32s as well without casting back and forth.
Personally I try to avoid using floats for calculations if I can (unless it's obviously warranted), I've encountered far too many foot guns from using them, though honestly the same can be said about integers in some situations too. I wish there was a package like math/big that was more accessible, I find the current interface for it pretty abysmal.
I'm a bit surprised that the slog package was added to the stdlib, but it does seem to use the API that I think is the most ergonomic across libraries I saw in Go (specifically, varargs for key values, and the ability to create subloggers using .With), so I guess it's nice most of the community will standardize around it.
If all goes well, you won't have different libraries using different loggers anymore, in some not too distant future, which should improve easy composability.
I literally just updated all of my golang logging to use zerolog so I could get severity levels in my logs. Bad timing on my part! I guess I'll re-do it all with slog; I prefer stdlib packages to third-party packages.
https://pkg.go.dev/golang.org/x/exp/slog#hdr-Levels seems to fall into the same trap that drives me _stark raving mad_ about the 18,000 different(!) golang logging packages: there doesn't seem to be a sane way of influencing the log level at runtime if the author doesn't have the foresight/compassion to add a command-line flag or env-var to influence it. A similar complaint about the "haha I log json except this other dep that I import who logs using uber/zap so pffft if you try to parse stdout as json"
That bugs me too. I consider it a red flag for a library to log to anything except a `log.Logger` passed in from the caller. Now I'll expand that to include a `slog.Logger` as well. If the library is logging directly to stderr or stdout, that is a sign that it probably has other design issues as well.
huh? there's no dogma involved here, it's just an observation of the properties of the type
a context is created with each request, and destroyed at the end of it
and values stored in a context are accessible only through un-typed, runtime-fallible methods -- not something you want to lean on, if you can avoid it
In practical terms there are pros & cons I guess, but in general doesn't loading a Context with session variables make code more concise and easier to understand? DB connections, loggers, and the like. If you really want to pass Contexts around in all your API signatures, then at least try to make the most of it.
Fer sher. But a passed-by-Context logger could be used (for example) to override a library package's default (stdlib?) logger.
But what is the SOP / Best Practice here ? Do many libraries have some sort of SetLogger(..) initialization call, so that loggers don't clutter the API ? Or are error returns info-(over-)loaded ?
Nice, my push for actually using the sha256 instructions on amd64 finally got released. 3x-4x increase in hash speed on most x86 which is really nice for content addressable storage use cases like handling container images.
Huh, that is interesting how they do that. They are enabling SHA instruction support based on CPUID and without respect to the value of GOAMD64. I did not realize Go was doing that.
Yup, that's standard, including in other ecosystems. It's what I do in ripgrep for example when your target is just standard `x86_64` (v1). GNU libc does it as well. And I believe Go has been doing it for quite some time. (The specific one I'm aware of is bytes.Index.)
This was especially important back before the days of v1/v2/v3/etc of x86_64, since Linux distros distributed binaries compiled for the lowest common denominator. So the only way you got fast SIMD instructions (beyond SSE2) was with a CPUID check and some compiler features that let you build target specific functions. (And I'm not sure what the status is of Linux distros shipping v1/v2/v3/etc binaries.)
Env vars make it easier to automate in CI. The actual script to build for each os/arch is the same but only the vars change. It's convenient. You can always prefix the command with the env vars on the same line if you want a one-liner.
It could make it easier for build systems to be multi platform. You don’t have to keep track of custom args and add them to every call, you can just set the environment once.
Worth noting that the release announcement was written by Eli Bendersky, of https://eli.thegreenplace.net/ fame. It's a fantastic technical blog with literally decades of content.
Overall, a release more about engineering than the language. Even the new APIs are mainly optimizations, and the optimizations are netting ~10% (pretty good for a mature toolset).
The WASI preview shows Google is committing engineering resources to WASM, which could grow the community a touch.
A good first step for better WASM support, however it's currently incompatible with tinygo's WASM target.
For example, I'm working on a custom WASM host (non-browser) and have a tinygo WASM package with import bindings like this:
//go:wasm-module rex
//export wait_for_event
func wait_for_event(timeout_usec uint32, o_event *uint32) bool
Both these comment directives are tinygo-specific of course, and now Go has added its own, third and different, directive.
When I add Go's desired `//go:wasmimport rex wait_for_event` directive, it complains about the types `*uint32` and `bool` being unsupported. Tinygo supports these types just fine and does what is expected (converting the types to uint32). On the surface, I understand why Go complains about it, but it's such a trivial conversion to have the compiler convert them to `uint32` values without requiring the developer to use unsafe pointer conversion and other tricks.
Hopefully I can find a way to keep both tinygo and Go 1.21rc2 happy with the same codebase going forward, and be able to switch between them to evaluate their different strengths and weaknesses.
The type conversion will improve in new releases. FYI, recent TinyGo releases support go:wasmimport too. The desire is definitely to allow users to use either, or at least easily migrate. Thank you for trying it out!
There’s been an emphasis in slog on Handler composition over directly implementing a ton of features. Personally I love it - there are things I’ve needed, that slog can do, that few other loggers make easy/possible.
Zerolog will still be relevant for raw performance (slog is close to zap on perf - doesn’t win benchmarks, doesn’t look out of place either), fewer really need it but some really do.
I've been using it for a few weeks now. Overall pretty happy with it. Has good default API, and can be basically arbitrarily extended as needed. We even have a custom handler implementation that allows us to assert on specific logs being emitted in our stress/fuzz testing.
Seems like a really substantial release to me. The new built in functions min, max, and clear are a bit surprising, even having followed the discussions around them. The perf improvements seem pretty great, I’m sure those will get much love here.
Personally, I’m most excited about log/slog and the experimental fix to loop variable shadowing. I’ve never worked in a language with a sane logging ecosystem, so I think slog will be a bit personally revolutionary. And the loop fix will allow me to delete a whole region of my brain. Pretty nice.
Am I reading it correctly that `clear` does different things for maps and slices? Why doesn't it remove all the items from the slice like it does with the map, or set the values in the map to the zero value like it does for slices? That seems like an easy thing to get tripped up on
That _is_ removing all the items from it; my point is that if you pass a map with `n` entries to clear, you end up with a map with 0 entries. If you do the same with a slice with `n` elements, I'd imagine most people would expect to end up with a slice with 0 elements, but instead you have a slice with `n` copies of the zero value.
But it's not "removing items", at least not for all meanings of the word "removing". You can see this with something like:
s := []string{"hello", "world", "foo", "bar"}
fmt.Println(s) // [hello world foo bar]
s = s[:0]
fmt.Println(s) // []
s = append(s, "XXX")
s = s[:2]
fmt.Println(s) // [XXX world]
Which will print back "XXX world" because it's using the same array, and nothing was ever "deleted": only the slice's length was updated.
This is why "delete(slice, n)" doesn't work and it only operates on maps.
I suppose clear(slice) could allocate a new array, but that's not the same behaviour as clear(map) either, and doesn't really represent the common understanding of "clearing a slice". The only behaviour I can think of that vaguely matches what "clearing a slice" means is what it does now.
Okay, yeah, that definitely isn't what I expected. It's pretty wild to me that `s = s[:2]` will ever work fine if `len(s) == 1`; I would have assumed that it would always be the same regardless of how the slice was created. Playing around with it, it seems like this means that if you pass a subslice to a function, that function can get access to things from the entire slice, including the portions that weren't in the slice passed in[1]!
I think I understand now why `clear` can't work on slices the way I think it should, but only because slices themselves don't work the way I feel even stronger that they should.
Slices in Go are a tad counter-intuitive, I agree, but the approach does make sense I think. It allows you to use "dynamic sized arrays" for most cases like you would in Python and not worry too much about the mechanics, at the price of some reduced performance, but in cases where this kind of performance does matter it allows you to be precise about allocations and array sizes. So you kind of get the best of both.
> The new built in functions min, max, and clear are a bit surprising, even having followed the discussions around them.
Was that discussion pre-generics?
Most of the functions and packages introduced in Go 1.21 are stuff people already get from community libraries (lodash being probably the most popular, despite the utterly nonsensical name that doesn't relate to anything it does), so it is just potentially cutting extra dependencies for many projects.
As a non-developer who has only gone as far as "hello world" in Go, I'm baffled by the idea that the log/slog thing is new - that seems like an absolutely basic language feature. TBH I'd say the same about min/max, but could forgive those being absent since Go isn't known for being numerically-focused...
> As a non-developer who has only gone as far as "hello world" in Go, I'm baffled by the idea that the log/slog thing is new - that seems like an absolutely basic language feature.
Then you'd be even more surprised to learn that the vast majority of languages do not have a standard logging library in core.
Most have one or a few common libraries that the community developed instead, but they are not in the stdlib; and if the stdlib has one, it's usually a very simple one (Go's standard logger was too simple, for example).
Just the standard "logging" - might not meet the definition of "structured logging", but at a glance it seems about as featureful as what is being added to Go right now.
Python has no equivalent of logger.With or other k/v pairs, which is what makes it structured logging and why it's interesting at all. Go has had unstructured logging since its early days.
I don't really follow what the benefit of the k/v thing is relative to just passing in a suitable string. I'd just assumed that the automation of "debug", "info" etc was what made it structured.
There's been a "log" package since forever, but slog adds structured logging with fields and some other things. I don't think many standard libraries have that built in?
Most languages include unstructured logging libraries in the standard library, including Go. Structured logging is usually provided by third party libraries.
The only other one I know of would be C# with Microsoft.Extensions.Logging. It's so ubiquitous that 3rd-party libraries work with its abstractions. Slog is a really good thing for Go.
> New slices package for common operations on slices of any element type. This includes sorting functions that are generally faster and more ergonomic than the sort package.
> New maps package for common operations on maps of any key or element type.
> New cmp package with new utilities for comparing ordered values.
It is a big release, and the number of new stdlib packages (4) is relatively high for a Go release. That said, apart from the addition of some minor builtins (min, max, clear), the language isn't changing. That happened back in 1.18 with the introduction of generics.
Copy operations in Go are normally destination first, source second. This includes builtins like copy() and library functions like io.Copy(). Making it "src, dest" would make this one case the opposite of all the others.
Note that the order mimics variable assignment. You copy an integer with:
var src, dest int
dest = src // dest first, src second
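Sketching both forms side by side (the string values are arbitrary):

```go
package main

import (
	"fmt"
	"io"
	"os"
	"strings"
)

func main() {
	// Builtin copy: destination first, source second, mirroring dest = src.
	src := []byte("hello")
	dst := make([]byte, len(src))
	n := copy(dst, src)
	fmt.Println(n, string(dst)) // 5 hello

	// io.Copy follows the same dest-first convention.
	io.Copy(os.Stdout, strings.NewReader("world\n"))
}
```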
Really glad to see some of these new packages (slices, maps, etc.) making use of generics. Should reduce the need for a lot of helper functions.
Also really excited to see loop variable capture finally getting sorted out. It is a constant pain point with new devs, and I have no good answer when they ask "but WHY is it like this?"
Because, historically, it's been like that all over, it's not just Go. For example, Python has the same loop variable reuse.
Probably comes from a time when compilers were a lot simpler, and all local variables were allocated stack space for the whole duration of the function call.
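For the new devs asking "WHY": here's the classic foot-gun, written with the long-standing workaround so it behaves the same on every Go version. Deleting the `v := v` line under the old per-loop semantics made all three closures print 3.

```go
package main

import "fmt"

func main() {
	var funcs []func()
	for _, v := range []int{1, 2, 3} {
		v := v // pre-fix workaround: a fresh v per iteration
		funcs = append(funcs, func() { fmt.Println(v) })
	}
	for _, f := range funcs {
		f() // 1, 2, 3 (without the workaround, old semantics printed 3, 3, 3)
	}
}
```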
Nice - but hang on a second, I thought you couldn't shadow language keywords in Go. So projects bumping to 1.21 in the future should be aware that they may run into compile-time errors all of a sudden… doesn't that actually break the compatibility promise?
I don't think Multipath TCP has been tested in enough environments to become the default yet. It's compatible with TCP, yes, but it's mostly useful for e.g. mobile devices that have multiple links like Wi-Fi and 4G, and it lets users to maintain TCP connection to a certain service even when moving across networks. Go seems to be server-oriented first, and there are some potential downsides to multipath TCP in a datacenter environment (e.g. potentially higher CPU usage, etc).
From what I heard, the reason for not making it the default is that it's not yet available across different platforms, especially Windows, and most who'll need this are data centers. Five years is too long, given that the Linux kernel has already accepted MPTCP.
Touché. When I noticed how happy I was that they added a min function, Stockholm syndrome came to mind.
Tbh I don’t see most of the standard lib benefitting from generics. For example, json.Unmarshal wouldn’t be dramatically better with generics — in practice, I rarely see runtime errors where I passed the wrong kind of thing to that function.
I personally love the slow pace of go development. I love that I don’t need to refactor my code every year to take advantage of whatever new hotness they just added. The downside is that stuff that’s annoying now will be annoying forever (like those times when you want a more expressive type system), but I’m willing to live with that.
Because great care was taken for the 1.0 release to be a complete design. Most language changes since then have just been fixes. That's why Go 1.0 code is basically the same as Go 1.21 code.
The compiler will tell you if the types aren't compatible, and this is only for primitive comparable types. What `min()` implementation could you have that even does something different?
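For reference, the new builtins are checked like ordinary generic functions over ordered types, so incompatible types fail at compile time; a small sketch:

```go
package main

import "fmt"

func main() {
	fmt.Println(min(3, 1, 2))  // 1 (takes one or more arguments)
	fmt.Println(max(2.5, 7.1)) // 7.1
	fmt.Println(min("b", "a")) // a (strings are ordered too)

	// clear zeroes a slice's elements and deletes all entries from a map.
	s := []int{1, 2, 3}
	clear(s)
	m := map[string]int{"a": 1}
	clear(m)
	fmt.Println(s, len(m)) // [0 0 0] 0
}
```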
Wait is this now heap allocating a value in every iteration of every loop? I hope that allocation is optimized out in every case where there isn't a closure over the loop variable?
The fine details parallel the correctness analysis: all the evidence shows people expect per-iteration semantics with considerable frequency, and don't rely on per-loop semantics with measurable frequency. But it's impossible to completely automate that assessment. Likewise, it's impossible to automatically detect code that will spuriously allocate because of the semantic transition.
Regardless of how the compiler is optimising this, I 100% agree that the old behaviour is unexpected and it’s caught me at least once. Really happy to see this (until recently) unexpected change.
I don't actually use Go, but I have used many other languages that behave like the old behaviour. I learned once that I have to build the closure correctly to get the value I want, and now know to do it. I don't have any statistics on whether I've made that mistake again, but anecdotally I can't remember a case where I have. In their analysis they found a lot of cases of that mistake, though. So I guess fair enough.
However, I wonder what it will mean if someone who mostly writes Go will now use another language? Will they be more prone to make that mistake?
It's hardly standard behaviour. I mean, in Java for example there didn't use to be value types, so everything was a pointer and the effect of this would be the same as the new behaviour in Go.
The only lesson to be learned here is that languages are different. But I think the new Go behaviour is more ergonomic.
In Java you can only close over final variables, so you can't close over the loop variable at all. (Unless that changed since last time I used Java, which - granted - was a long time ago.)
The problem being fixed doesn’t affect only closures, but the body of the for loop itself. So for example taking the address of the loop variable would unexpectedly return the same value for the duration of the loop.
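Concretely: under the old semantics, every `&v` taken in the loop body was the same address. The long-standing workaround, which behaves identically on every Go version, is to take the address of the element instead of the loop variable:

```go
package main

import "fmt"

func main() {
	xs := []int{1, 2, 3}

	// Addressing the slice element (not the loop variable) always
	// yields three distinct pointers, old semantics or new.
	var ptrs []*int
	for i := range xs {
		ptrs = append(ptrs, &xs[i])
	}
	fmt.Println(*ptrs[0], *ptrs[1], *ptrs[2]) // 1 2 3
}
```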
The way to view it is "unless there is syntactic sharing, it is a for loop, same as before". The compiler uses a syntactic test (with little knowledge of control flow or value use) to exclude loops from the change. This excludes most loops.
After the change, escape analysis figures out if the changed iteration variable actually needs heap allocation; in an internal sample of code that was actually buggy (i.e., biased, guaranteed to have at least one loop like this) for 5/6 of the loops escape analysis decided that heap allocation wasn't needed.
The reason this optimization isn't part of the language change proposal is that escape analysis is "behind the curtain"; ignoring performance, a program should behave the same with or without it, and it is removing heap allocations all over the place already. Escape analysis is also extremely difficult to explain exactly, so you would not want it in the spec, and "make escape analysis better" (that is, change it) is one of the prominent items in the bag of things to do for Go.
I really hope Go gets something like MERN for Node.js or Django for Python, so I can use it as a ready-to-go backend framework. There are gin and echo etc., they're just not as widely adopted as MERN or Django.
In some of my use cases I need to make sure the source code is fully protected; neither Node nor Django can do that well. Go would be perfect as it is compiled, but there is nothing like MERN or Django in Go (yet). Another option would be Java, but I don't know Java.
What big thing are you expecting? Go is more of a stable language, a reliable and boring language for building software now that in ten years you can still maintain. Go isn't peaking, Go isn't an exciting or cool or hype language, it's just... there. And that's just fine.
Too many languages just started borrowing features from others, saying "yes" to every suggestion, until they got out of control and all over the place. Go says "no" more often than not. Which isn't always a good thing, mind; generics took a long time because they wanted to understand the problem and not add more than needed, as happened with Java. And before this release there were no builtin min/max at all; math.Min and math.Max only supported float64. Lots of small annoyances like that.
If you already have your own functions or variables named max, min, or clear in-scope, they will shadow the new built-in functions and your code will continue to use your own version of the functions. No breakage to existing identifiers that match the new function names.
(This is the same behavior as the append built-in function today, for example. These things in Go are _not_ reserved keywords, they are simply global functions that can be overridden at other scopes.)
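A minimal sketch of that shadowing behaviour: a package-level min simply wins over the predeclared one, so pre-1.21 code keeps compiling unchanged.

```go
package main

import "fmt"

// This package-level min shadows the new builtin throughout the package,
// exactly as a user-defined append would shadow the append builtin.
func min(a, b int) int {
	if a < b {
		return a
	}
	return b
}

func main() {
	fmt.Println(min(3, 5)) // calls the package-level min above, not the builtin
}
```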
In what way? Overall as a language, identifier shadowing is a feature of the language in nested scopes. Are you saying built-in identifiers (that aren't language keywords) should be treated specially and work differently than user-declared identifiers?
It's terrible, IMO, because every package with a generic word for a name is now a variable name I can't use. A simple example which I find unreasonable:
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	filepath := filepath.Dir("./")
	// filepath.Dir("./") -> filepath is now a string; can't use the filepath package anymore
	fmt.Println(filepath)
}
Now I have to make up variable names because `filepath` will shadow the package. How is this sensible in any shape? Zig just does this better by having @ in front of builtins.
You're complaining that the nomenclature for packages is not differentiated in a way that allows user code to have variable names matching package names.
You can still allow this, of course, by aliasing the package import.