He still has a point. In theory you might need a language like Idris/Agda, but in practice it still makes a big difference.
It is true that you will see that a function can return an error and that you choose to ignore it. It's also true that you can do the same in many other languages that use sumtypes.
But it is still different. While ignoring an error in Go is as easy as putting an underscore next to the happy case, in languages with sumtypes that doesn't work.
The equivalent in other languages would be to return a struct and then just access one value and ignore the other one. In that case, the practical implications would be the same.
But when using a sumtype, a few things change.
First, you cannot just access the happy-case value: you are (or at least can be) forced to also "access" the unhappy-case value, whether in a pattern match, a fold function, and so on.
You now have to return something, even if it is an empty value, or "escape" by throwing an error.
On top of that, what happens if a function can partially succeed? Take a GraphQL request as a practical example, where this is quite common.
With Go's style of error handling, how do you model that? I.e. say you need to refactor a function that previously either succeeded or failed into one that now can partially succeed and fail.
In a language with sumtypes I would now switch from a sumtype Success|Error to a more complex type Success|Error|PartialSuccess, which makes it a breeze to refactor my code because the compiler will tell me all the places where I have to consider the new case and what it looks like.
I'm genuinely curious: how would you model that in Go, and what implications would such a refactoring have on existing code?
You can always implement tagged unions in any language with untagged unions, so in a broad sense you can emulate sum types in situations where they make sense but use simpler code elsewhere. I might do that in C. Depends on the situation. It also obviously isn't a proper answer, I am sure you will agree, to just emulate the feature that I am saying is unnecessary. That works in Lisp where you can elegantly add language features with proper macros. In C, you cannot.
I would probably simply do in C the same thing as usual:
int
function1(int arg1, int arg2, int *out1, struct foo *out2)
{
        if (part1(arg1, out1))
                return 1;
        if (part2(arg2, *out1, out2))
                return 1;
        return 0;
}
// Oh hmm, some callers can do something useful with a partial result.
// Assume the internals are more complex, because obviously in this simplified example you
// could just make them call part1 directly.
enum { SUCCESS, PARTIAL_SUCCESS, FAILURE }
function2(int arg1, int arg2, int *out1, struct foo *out2)
{
        if (part1(arg1, out1))
                return FAILURE;
        if (part2(arg2, *out1, out2))
                return PARTIAL_SUCCESS;
        return SUCCESS;
}
This is compatible with old callers, even, who treat any nonzero result as failure and any zero result as complete success (the normal pattern in C).
Yes, the caller needs to check the result and avoid looking at out2 if you don't get SUCCESS, and avoid looking at out1 if you get FAILURE. But this sort of thing is de rigueur in C. Your compiler (or a linter, and optional warning flags are essentially linters anyway) will warn you if you ignore the result and, if you switch on the result, will warn you if you ignore a case.
But obviously it is left up to you to avoid the "don't touch X if Y" stuff. Eh, that is in my experience not the hard bit of writing C. The hard bit is anything involving dynamic lifetimes or shared mutable state.

The nice thing is that you can avoid this in C! Most people don't. The easy path is calling malloc everywhere and getting yourself into a muddle. The simple path, which is better in the long run, is to use values and sequential, imperative code. And if you do that, you realise that C's design makes way more sense. That is how it was designed to be used. Dynamic lifetimes of objects? It is like trying to use Rust to represent linked lists. People who say 'Rust sucks because doubly linked lists lol' are morally equivalent to people who say 'C sucks because malloc and free lol'. It is like... yeah, you aren't meant to do that!
> You can always implement tagged unions in any language with untagged unions, so in a broad sense you can emulate sum types in situations where they make sense but use simpler code elsewhere
I'm a bit confused now, since I don't see how this is related to the point I was making. You are right - with the exception that sumtypes are still more powerful since you cannot e.g. emulate GADTs with tagged unions, but for most cases in practice, I agree. Still, what's the point?
I also think we have a general misunderstanding, since you are saying:
> That works in Lisp where you can elegantly add language features with proper macros
But Lisp is dynamically typed, and talking about union types in a dynamically typed language is meaningless - so that doesn't make sense to me in this context.
And about C (which is statically typed): C does not have union types (and hence also no tagged union types). What C does have are (untagged) unions, but that's not the same thing. The crucial difference is that union types are ad-hoc whereas C's unions are statically defined. I think it is a bit confusing since C calls them unions - but in the context of this discussion it's important that they are very different things.
E.g., with union types you can do:
type union1 = string | int
type union2 = string | boolean
type union3 = union1 | union2 | float
// same as type union3 = string | int | boolean | float
The compiler must be able to resolve those things automatically. I hope I'm not completely mistaken here, but I believe there is no way to combine unions like this at the type level in C. You would have to write them out by hand or generate the code. But if there is a way, please correct me.
> This is compatible with old callers, even, who treat any nonzero result as failure and any zero result as complete success (the normal pattern in C).
The idea or motivation, though, was that in a language with sumtypes (or tagged unions) the old callers would not be compatible. Trying to compile code against `function2` should fail. But it should not fail in an arbitrary way - it should fail with the compiler saying "hey, look, you handled the error case and the success case, but you also have to handle the partial-success case; and here is what the data you need to handle looks like if it is partial-success: ...". That is what sumtypes give you, and I find this enormously useful in practice. In a language without sumtypes you will not get this level of support from the compiler - that is the point I was trying to make.
I imagine it would be quite different in practice.