When Go 1.18 added generics in March 2022, the community split into two camps. Some had been asking for them for years and welcomed them as the missing piece in modernizing the language. Others suspected they would erode the simplicity that made Go attractive. Both had a point, and three years later we have enough perspective to judge without the heat of the initial moment.
What’s most interesting is that the outcome hasn’t quite gone either way. Generics in Go haven’t exploded into mass usage, but neither have they languished as a feature nobody touches. They occupy a very specific place in the language, and understanding what that place is says a lot about how community thinking has evolved.
What hasn’t happened
The most common prediction at first was that generics would replace interface{} in all contexts where it was used. That hasn’t really happened. If you look at idiomatic code in the most active Go projects (Kubernetes, Docker, CockroachDB, the standard library itself), the public surface is still dominated by traditional interfaces, concrete function signatures, and use of any where flexibility is genuinely needed.
The main reason is that generics, in the form finally approved for Go, have design constraints that make them less universal than C++ templates or Rust’s trait bounds. You can’t do just anything: the operations applicable to a generic type are limited to those guaranteed by the constraint interfaces you declare. That’s a design virtue, because it keeps compile times reasonable and avoids the kind of incomprehensible error messages C++ produces, but it comes at a cost in expressiveness.
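A minimal sketch of what “limited by the constraint” means in practice (the `Number` constraint here is a hypothetical one written for the example, not a standard-library type):

```go
package main

import "fmt"

// Number is a hypothetical constraint: only types in this set are allowed,
// and only the operations they all support (here, +) may appear in the body.
type Number interface {
	~int | ~int64 | ~float64
}

// Sum can use + because the constraint guarantees it for every member type.
func Sum[T Number](xs []T) T {
	var total T
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))      // ints satisfy Number
	fmt.Println(Sum([]float64{1.5, 2.5})) // so does float64
	// Sum([]string{"a", "b"}) would not compile: string is not in the set.
}
```

The compile-time rejection of the string call is exactly the trade-off described above: less expressive than templates, but errors arrive at the call site, in plain language.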
The second reason is cultural. Go’s philosophy has always preferred the concrete over the abstract, and the community has internalized that enough that the default reflex is “write the concrete function first, generalize only if clearly needed”. The existence of generics hasn’t changed that reflex.
What has happened
Where generics have rooted strongly is in the deep layers of libraries. Open code from golang.org/x/exp/slices, from recent sync utilities, or from database clients like pgx, and you’ll find generics everywhere. That’s the kind of code where generalization makes sense: fundamental operations on collections, concurrency utilities, deserializers.
The pattern that has emerged is clear. Generic functions are used when:

- The logic is truly identical across types, not just similar.
- There’s performance lost to any and reflection, and the cost is noticeable.
- The generic signature is clearer for the user than an interface.
Outside those cases, the idiomatic choice is still to write the concrete function, and you see a healthy resistance to “everything generic just in case”.
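The third criterion, a clearer signature, is easiest to see side by side. A sketch (both functions are illustrative, not from any library):

```go
package main

import "fmt"

// FirstAny is the pre-generics shape: the element type is erased,
// so every caller must assert it back and can panic if wrong.
func FirstAny(xs []any) any {
	return xs[0]
}

// First keeps the element type through the call: no assertion,
// and a mismatch is a compile error instead of a runtime panic.
func First[T any](xs []T) T {
	return xs[0]
}

func main() {
	n := FirstAny([]any{1, 2, 3}).(int) // assertion required
	m := First([]int{1, 2, 3})          // already an int
	fmt.Println(n, m)
}
```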
The slices case is illustrative. The library was available experimentally for some time under golang.org/x/exp/slices and was promoted to the standard library in Go 1.21. It offers functions like slices.Contains, slices.Index, and slices.Sort, implemented with generics. Anyone who wrote Go before 2022 had reimplemented these functions over and over for each type, because there was no reasonable alternative. Now they’re one line, and that’s a real, daily benefit.
What has surprised
One effect I didn’t anticipate is the impact on database clients. sqlc and especially pgx version 5 offer type-safe row scanning into structs via generics. Before, this was done with reflection and a good amount of ceremony; now you have a pgx.CollectRows[T] function that returns a []T directly, and the compiler tells you if the struct doesn’t match the columns.
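The shape of that API can be sketched without a database. Everything below is a hypothetical stand-in written for illustration (the `Rows` interface, `fakeRows`, and this `CollectRows` are not the real pgx types; in pgx v5 the actual call looks like `pgx.CollectRows(rows, pgx.RowToStructByName[User])`):

```go
package main

import "fmt"

// Rows is a hypothetical stand-in for a database cursor.
type Rows interface {
	Next() bool
	Values() []any
}

// CollectRows mirrors the shape of the pgx v5 pattern: drain the cursor,
// convert each row with the supplied function, return a typed slice.
func CollectRows[T any](rows Rows, toRow func([]any) (T, error)) ([]T, error) {
	var out []T
	for rows.Next() {
		v, err := toRow(rows.Values())
		if err != nil {
			return nil, err
		}
		out = append(out, v)
	}
	return out, nil
}

// fakeRows simulates two rows of (id, name) data for the demo.
type fakeRows struct {
	i    int
	data [][]any
}

func (f *fakeRows) Next() bool    { f.i++; return f.i <= len(f.data) }
func (f *fakeRows) Values() []any { return f.data[f.i-1] }

type User struct {
	ID   int
	Name string
}

func main() {
	rows := &fakeRows{data: [][]any{{1, "ada"}, {2, "linus"}}}
	users, err := CollectRows(rows, func(vals []any) (User, error) {
		return User{ID: vals[0].(int), Name: vals[1].(string)}, nil
	})
	fmt.Println(users, err)
}
```

The point of the generic parameter is that `users` comes back as `[]User`, not `[]any`: the type assertion happens once, inside the conversion function, instead of at every call site.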
This pattern has spread to JSON serialization, config parsing, and several testing frameworks. It’s the scenario where generics add the most: you keep type checking across an abstraction that previously lost it entirely.
The other place generics have taken unexpected hold is typed channels and concurrency primitives. Libraries like sourcegraph/conc offer worker pools, result groups, and similar primitives with generics, and the experience is qualitatively better than the equivalent with interface{} and type assertions.
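A toy version of the pattern such libraries package up (this `Map` is a sketch written for the example, not the sourcegraph/conc API), showing that no `any` or type assertion appears anywhere:

```go
package main

import (
	"fmt"
	"sync"
)

// Map runs fn over each input on its own goroutine and returns
// results in input order. Each goroutine writes to its own index,
// so no extra synchronization beyond the WaitGroup is needed.
func Map[T, R any](inputs []T, fn func(T) R) []R {
	results := make([]R, len(inputs))
	var wg sync.WaitGroup
	for i, in := range inputs {
		wg.Add(1)
		go func(i int, in T) {
			defer wg.Done()
			results[i] = fn(in)
		}(i, in)
	}
	wg.Wait()
	return results
}

func main() {
	squares := Map([]int{1, 2, 3, 4}, func(x int) int { return x * x })
	fmt.Println(squares)
}
```

The qualitative improvement is that the result type flows out of the pool: with `interface{}` channels, every consumer of `squares` would need an assertion.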
What didn’t take off
The big exception is what some expected to be the star use case: complex generic data structures such as balanced trees, typed heaps, and graphs. These exist in the ecosystem, but adoption has been moderate. The reason is simple: for modestly sized collections, native slices and maps are enough, and where a specialized structure is genuinely needed, developers prefer battle-tested libraries with years of production use, which exist in non-generic form and work fine with the occasional type assertion.
Generic methods are another story. Go doesn’t allow a method to declare its own type parameters; type parameters can only appear on the type itself. This limits patterns like “a Map method on a collection” that other languages allow. The community has learned to live with that via free functions (Map(coll, fn) instead of coll.Map(fn)), but it’s a rough edge that will probably be smoothed in future versions.
Looking ahead
Three years in, my read is that Go has absorbed generics without losing identity. They’ve gone into the place where they add real value (low-level libraries, collection utilities, data clients) and stayed out of places where they would have added noise (everyday business code, public service APIs). That’s reasonably healthy.
What’s coming, based on proposals being debated, is loosening specific cases: allowing more type inference where today annotations are needed, and perhaps allowing generic methods in limited circumstances. None of this breaks the current model, just polishes it.
If someone asks whether they should start using generics in their Go code, my answer is still the one I gave three years ago: write the concrete function first. If you then find yourself copying it for three different types and the logic is identical, generalize. That modest criterion is the one that has held up best.
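The rule of thumb in code form, using the standard cmp.Ordered constraint from Go 1.21 (the function names are illustrative):

```go
package main

import (
	"cmp"
	"fmt"
)

// Step one: the concrete function you actually needed today.
func maxInt(a, b int) int {
	if a > b {
		return a
	}
	return b
}

// Step two, only after copying maxInt for a third type: the
// generalization, identical logic, constrained to ordered types.
func Max[T cmp.Ordered](a, b T) T {
	if a > b {
		return a
	}
	return b
}

func main() {
	fmt.Println(maxInt(2, 5))
	fmt.Println(Max(2.5, 1.5))
	fmt.Println(Max("ant", "bee"))
}
```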