As someone who has done it in C, I would both agree and disagree --- reading the file format will definitely be easier since C lets you pick bits/bytes/words/etc. off the stream directly, but on the other hand the high-level structures (looping, etc.) would probably be easier with Python.
Overall, seeing as OP's Python implementation is less than 300 LoC while my C implementation was closer to 750, the Python might be easier. Then again, I followed the algorithms/flowcharts in the spec, which do faster table-based Huffman decoding than the bit-by-bit approach of this Python version, and I also implemented arbitrary(!) chroma subsampling as the spec allows, so it's not a direct comparison.
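For anyone unfamiliar with the distinction, here's a rough sketch in Python of the two decoding styles (the names and code shapes are mine, not from either implementation). Bit-by-bit decoding probes a code table once per bit; a lookup table indexed by the next K bits resolves a whole symbol in one step:

```python
# Hypothetical sketch: bit-by-bit vs table-based Huffman decoding.

# Bit-by-bit: extend the buffer one bit at a time until it matches a code.
def decode_bitwise(bits, codes):
    """bits: iterator of 0/1 ints; codes: dict mapping bit-string -> symbol."""
    buf = ""
    for b in bits:
        buf += str(b)
        if buf in codes:
            return codes[buf]
    raise ValueError("incomplete code")

# Table-based: precompute a table indexed by the next k bits at once, so
# each symbol costs one lookup instead of one probe per bit.
def build_table(codes, k):
    table = {}
    for code, sym in codes.items():
        pad = k - len(code)
        for i in range(2 ** pad):          # every completion of a short code
            idx = (int(code, 2) << pad) | i
            table[idx] = (sym, len(code))  # symbol plus bits actually consumed
    return table

codes = {"0": "a", "10": "b", "11": "c"}
table = build_table(codes, 2)
# Peeking 2 bits "01" gives table[0b01] == ("a", 1): emit 'a', consume 1 bit.
```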
The main benefit I see here of Python over C is when your program doesn't work (yet): Python has nicer error handling built straight into the language.
If you just want to present a working implementation, both C and Python can express readable versions.
Easier for whom? Most programmers would not be able to follow that code. On the other hand, a really simple C or Python algorithm is more or less universally readable.
It's not so much about the algorithm, but the parsing itself.
Parser combinators give you one of the most straightforward ways to express parsers.
You're right that the language you express your parser combinators in, versus the language you express an alternative solution in, will also have an impact.
You can in theory express parser combinators in Python. But it's gonna be a bit ugly, because Python has relatively clunky syntax for functions.
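To make that concrete, here's a toy parser-combinator core in plain Python (hypothetical, not from any particular library). A parser is just a function from an input string to either a (value, rest) pair or None on failure; the nested defs and lambdas are exactly the clunkiness being described:

```python
# Toy parser combinators: a parser is a function str -> (value, rest) | None.
def char(c):
    # Parser that matches exactly one given character.
    return lambda s: (c, s[1:]) if s[:1] == c else None

def seq(p, q):
    # Run p, then q on the remainder; pair up both results.
    def parser(s):
        r1 = p(s)
        if r1 is None:
            return None
        v1, rest = r1
        r2 = q(rest)
        if r2 is None:
            return None
        v2, rest2 = r2
        return ((v1, v2), rest2)
    return parser

def alt(p, q):
    # Try p; if it fails, try q on the same input.
    return lambda s: p(s) or q(s)

ab = seq(char("a"), char("b"))
# ab("abc") -> (("a", "b"), "c"); ab("xbc") -> None
```

In Haskell each of those definitions is roughly a one-liner, which is the gap being pointed at.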
I just found a parser combinator library for Python https://pypi.org/project/parsita/ But it looks like they have to abuse some Python magic to make the code readable.
Alright, but jpeg decoding is nearly all algorithm, there's almost no parsing. Do you have a particular haskell jpeg decoder in mind that uses parser combinators?
I think being executable is really key, because one way I check my understanding of code is to make modifications to the code and see if they do what I expect them to do.
IMO the ideal setup would be for an article to contain pseudocode and to have supplementary executable code. This would allow the article to explain concepts without boilerplate and unnecessary details, but also ensure that those details are available for readers who wish to investigate further.
I still think it's an advantage to write the executable code as close to pseudocode as possible.
It’s not so much that Python requires boilerplate - all executable code does. For example, the article includes code to read the input data from a file, which is not relevant to the algorithm and could be omitted from a pseudocode version.
Similarly, executable code requires unnecessary detail, like the article’s `Stream` class. In pseudocode this could be replaced by simply saying “get the next N bits”.
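For illustration, a minimal bit-reader of the kind the article's `Stream` class presumably provides (a sketch under assumptions; the real class may differ). The entire class exists to implement the one line of pseudocode "get the next N bits":

```python
# Minimal MSB-first bit stream, as JPEG entropy-coded data is read.
class Stream:
    def __init__(self, data: bytes):
        self.data = data
        self.pos = 0  # bit position from the start of the buffer

    def get_bit(self) -> int:
        byte = self.data[self.pos >> 3]
        bit = (byte >> (7 - (self.pos & 7))) & 1  # most significant bit first
        self.pos += 1
        return bit

    def get_bits(self, n: int) -> int:
        # The pseudocode's "get the next N bits", spelled out.
        value = 0
        for _ in range(n):
            value = (value << 1) | self.get_bit()
        return value

s = Stream(bytes([0b10110000]))
# s.get_bits(4) -> 0b1011 == 11
```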
I don't think we're defining "boilerplate" the same way, then. Reading from a file isn't something that every executable does, so when you add code for it, it's because you want your program to do it. Boilerplate is things that have to be added to every program regardless of whether they communicate anything that differentiates your program: main(), for example, or class in Java when you're not actually trying to define a class that will be instantiated by later code.
Hmm, I think you're using the word "boilerplate" differently than it is usually used. The changes you're proposing are functional changes that can't be transparently applied by a compiler/interpreter without changing behavior.
Currying is a subset of partial application, which Python supports via the standard library. Universal currying would be terser in some situations, but it conflicts with Python's args/kwargs implementation. Frankly, I don't think currying is a good thing even in languages such as Haskell where it doesn't conflict, as it makes it much less clear what the flow is when functions can be called with different signatures.
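Here's what that looks like in practice with `functools.partial`, next to a comment on what universal currying would give you (function names here are made up for the example):

```python
# Partial application via the standard library.
from functools import partial

def add(x, y, z):
    return x + y + z

add5 = partial(add, 5)     # fix the first argument
add5_6 = partial(add5, 6)  # fix the second as well
# add5_6(7) -> 18; add5(6, 7) -> 18

# In a curried language, add(5)(6)(7) would just work with no wrapper,
# but then add(5, 6) and add(5)(6) mean the same call -- the signature
# ambiguity described above.
```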
Anonymous function arguments, I'm just not sure what you're talking about there.
Explicitly converting higher order function results to lists is not boilerplate, it's a meaningful difference. The explicit conversion allows you to determine when code is actually executed, instead of having it executed eagerly. Universal lazy application like in Haskell has broader implications which are generally negative, particularly with regards to performance and being able to reason about execution: production Haskell often turns this off, and for good reason.
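A small demonstration of that point: with a side-effecting function you can see exactly when evaluation happens, and it's the explicit `list()` call that triggers it:

```python
# Lazy map vs explicit forcing: side effects reveal when code actually runs.
log = []

def double(x):
    log.append(x)  # records the moment the function is actually called
    return 2 * x

lazy = map(double, [1, 2, 3])
# At this point nothing has run: log is still [].
forced = list(lazy)  # the explicit conversion is where the work happens
# Now log == [1, 2, 3] and forced == [2, 4, 6].
```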
The lambda keyword is arguably boilerplate, but I'm skeptical of whatever alternate syntax you're proposing there. C#-style anonymous functions wouldn't work with Python syntax. If you're proposing people use the λ symbol, I'm going to frankly say that's a bad idea, because most developers' keyboards don't have that symbol. Like it or not, English with a Latin keyboard is the lingua franca of software--if you want to target another language that would be reasonable, but that's a big choice with broader implications.
Lambdas take up too much space and are really ugly. They're visual noise and serve no purpose. Besides, the name 'lambda' is bad and unintuitive. There's a reason why no new languages have chosen 'lambda' for anonymous functions.
map, filter, zip, etc. should produce concrete values, not iterators. It's annoying to constantly turn iterators into lists. If Scala's map or fold took in a list and produced an iterator, that would be super annoying. There's no point. If you want an iterator, turn the collection into an iterator. There's no reason for a function to take a list and return an iterator, unless that function is 'toIter.'
Ugly python:
list(map(lambda x: 2*x, [1, 2, 3, 4]))
Julia:
map(x -> 2*x, [1, 2, 3, 4])
Or even better Julia:
2 .* [1, 2, 3, 4]
I dunno, it's like Python tries its best to make code super verbose and ugly. I just don't understand why it is the way it is. It doesn't make sense.
> There's a reason why no new languages have chosen 'lambda' for anonymous functions.
Beware of people who say "there's a reason" and then don't say what that reason is.
I suspect the reason in this case is that in languages which use C-ish syntax, it would make no sense to have lambda as a keyword. Python doesn't use C-ish syntax, so that reason doesn't apply.
> It's annoying to constantly turn iterators into lists.
So don't. Why are you constantly turning iterators into lists?
> There's no reason for a function to take a list and return an iterator,
Here are 5 reasons:
1. There are many cases where the iterator will never be evaluated, meaning you save O(n) time.
2. Even in cases where you evaluate the iterator, you'll often have performed many transformations upon that iterator. Forcing 5 transformations all at once means you have 1 loop instead of 5, meaning again you save O(n) time.
3. It encourages writing functional code, which doesn't depend on order of execution. (This is why Simon Peyton Jones says Haskell has lazy evaluation: it keeps them honest about side effects).
4. Explicit is better than implicit. Guessing that a user will want a list is a reasonable guess, but you're doing a lot of work based on that guess, and some percentage of the time you'll be wrong. It's better to do the minimum work necessary and let the user explicitly determine what type of result they want.
5. Consistent extensibility: map, zip, etc. take lots of iterable types. You could attempt to return the same iterable type as you receive, but then there isn't an easy way for users to use zip on user-defined types. It's better to call the type's __iter__ function and then just use that, as this allows users to define higher-order functions on their own types.
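Reasons 1 and 2 are easy to demonstrate: chained lazy transformations fuse, so consuming only part of the pipeline does only part of the work (example and variable names are mine):

```python
# Chained iterators fuse into a single pass and evaluate only on demand.
import itertools

data = range(1_000_000)
pipeline = map(lambda x: x * x,
               filter(lambda x: x % 2 == 0, data))
# Nothing has been filtered or squared yet. Taking the first 3 results
# does only the work those 3 need, not a million filters plus a million maps.
first_three = list(itertools.islice(pipeline, 3))
# first_three == [0, 4, 16]
```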
It's pretty bizarre to assert that structures made up by some mathematician in a void should be the basis of a programming language intended to solve real-world problems, especially in response to giving real-world reasons for why it is the way it is.
To be honest, writing a jpeg decoder in C seems easier and more natural than doing it in python.