Generalized Macros (ianthehenry.com)
97 points by ianthehenry on April 19, 2023 | 74 comments


This post discusses a variety of Lisp macro that doesn't merely expand into something else, but actually reaches out to rewrite its surrounding context.

> So people have spent a lot of time thinking about ways to write macros more safely – sometimes at the cost of expressiveness or simplicity – and almost all recent languages use some sort of hygienic macro system that defaults to doing the right thing.

> But as far as I know, no one has approached macro systems from the other direction. No one looked at Common Lisp’s macros and said “What if these macros aren’t dangerous enough? What if we could make them even harder to write correctly, in order to marginally increase their power and expressiveness?”

The first example discussed is a defer macro that can be invoked with "no indentation increase and no extra nested parentheses".
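The mechanics are easier to see in a toy model. Here's a rough Python sketch (not the article's Janet implementation; forms are modeled as nested lists and tuples, and the names are illustrative) of a `defer`-style rewrite that consumes its right siblings instead of just expanding in place:

```python
# Toy model: forms are nested Python lists/tuples; a ("defer", expr) form
# rewrites its *right siblings* into a cleanup wrapper, roughly the way
# the article's macro reaches out into its surrounding context.
def expand(body):
    out = []
    for i, form in enumerate(body):
        if isinstance(form, tuple) and form and form[0] == "defer":
            # everything to the right becomes the protected body
            rest = expand(body[i + 1:])
            return out + [("try", rest, ("finally", form[1]))]
        out.append(form)
    return out

print(expand([("open", "f"), ("defer", ("close", "f")), ("use", "f")]))
# -> [('open', 'f'), ('try', [('use', 'f')], ('finally', ('close', 'f')))]
```

Because `defer` swallows its right siblings, the user writes it flat, with no extra nesting, which is exactly the "no indentation increase" trick.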

As a macro-lover and an indentation-hater, I think this is a brilliant and hilarious idea.


I implemented something like this for Clojure a while back using reader macros (which Clojure deliberately doesn't support, but thankfully you can hack them in).


Really interesting article. Maybe not a good idea as a language feature, but certainly an interesting one.

This is my new favorite typo:

    (defmacaron . lefts [key & rights]
      ~(,;(drop-last lefts)
        (get ,(last lefts) ,(keyword key))
        ,;rights))
We're working on a macro proposal for Dart and I wonder if users would like them more if we called them "macarons".


I noticed that as well, but it's not a typo: https://github.com/ianthehenry/macaroni/blob/master/src/init...


Oh whoops! That’s what I called them in my prototype so I didn’t shadow Janet’s built-ins. Forgot to update that one when I copied it to the blog post :)


I am diabetic so no, I could not use this feature.


Re prior art: I'm at least vaguely reminded of "expansion-passing style" by Dybvig, Friedman, and Haynes. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.64...

(Just from a quick skim of this long post.)


Riffing on related work, Macros for DSLs[1] notes non-locality (and there's Racket's DSL emphasis[2] in general). Apropos composition, I liked the PADL'23 Modern Macros video[3] (seemingly twice submitted to HN without traction).

[1] Macros for Domain-Specific Languages https://par.nsf.gov/servlets/purl/10220787 https://docs.racket-lang.org/ee-lib/index.html [2] From Macros to DSLs: The Evolution of Racket https://drops.dagstuhl.de/opus/volltexte/2019/10548/pdf/LIPI... [3] PADL'23 Modern Macros https://www.youtube.com/watch?v=YMUCpx6vhZM


Lisp has long sung the praises of homoiconicity as a feature. Lately I have started to think homoiconicity is really a wart. Macros are a system to program the code, while the code is a system to program the application. They are two different cognitive tasks, and making them nearly indistinguishable is not ideal.

Lisp has a full-featured macro system, and thus hands down beats the many languages that possess only a handicapped macro system or none at all. That it uses the same (or a similar) language to achieve this is merely accidental. In fact, I think Lisp is an under-powered programming language due to its crudeness, but its unconstrained macro system lets it compensate on the programming side to a certain degree. As a result, it is not a popular language and never will be, but it is sufficiently unique, and also extremely simple, that it will never die.

What if we had a standalone general-purpose macro system that could be used with any programming language, with two syntax layers, so programmers can put on a different hat to work on either? That's essentially how I designed MyDef. MyDef supports two forms of macros. Inline macros use the `$(name:param)` syntax; block macros use `$call blockmacroname, params`. Both are syntactically simple to grasp and distinct from the host language, so programmers can put on different hats to comprehend them. The basic macros are just text substitution, but both inline and block macros can be extended with (currently, my choice) Perl to achieve unconstrained goals. The extension layer can access the context before or after, and can set up context for code within or outside, thus achieving what Lisp can, but using Perl. We could extend the macros with Python or any other language as well; it is just a matter of how far we expose the macro system's internals.

Inline macros are scoped, and block macros can define context. These are the two features I find missing in most macro systems, and the ones I can't live without today. Here is an example:

    $(set:A=global scope)
    &call open_context
        print $(A)
    print $(A)

    subcode: open_context
        set-up-context
        $(set:A=inside context)
        BLOCK # placeholder for user code
        destroy-context
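To make the "scoped substitution" part concrete, here's a minimal Python sketch (illustrative only, not MyDef's engine; the `{`/`}` block markers are my invention for the sketch) of text macros whose definitions shadow within a block and don't leak out:

```python
# Minimal sketch of scoped text-substitution macros:
# $(set:NAME=value) defines a macro in the current scope, $(NAME) expands
# it, and entering/leaving a block pushes/pops a scope, so inner
# definitions shadow outer ones without leaking.
import re

def expand(lines):
    scopes = [{}]                              # stack of macro scopes
    out = []
    for line in lines:
        stripped = line.strip()
        if stripped == "{":                    # enter a block: push a scope
            scopes.append(dict(scopes[-1]))
            continue
        if stripped == "}":                    # leave the block: pop it
            scopes.pop()
            continue
        m = re.fullmatch(r"\$\(set:(\w+)=(.*)\)", stripped)
        if m:                                  # a macro definition line
            scopes[-1][m.group(1)] = m.group(2)
            continue
        out.append(re.sub(r"\$\((\w+)\)",      # substitute $(NAME) uses
                          lambda mm: scopes[-1].get(mm.group(1), mm.group(0)),
                          line))
    return out

print(expand(["$(set:A=global)", "{", "$(set:A=inner)", "print $(A)", "}",
              "print $(A)"]))
# -> ['print inner', 'print global']
```

The inner definition of `A` is visible only inside the block, which is the behavior most token-level preprocessors (like M4) don't give you.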


> I lately start to think homoiconicity is really a wart.

I sorta agree. The simplicity of the syntax and representation makes it particularly amenable to dynamic modification, above basically every other language. There are a bunch of other properties that contribute, though, such as a very dynamic type system, first-class functions, and extremely simple syntax, so it's really a combination of factors.

That being said, I think homoiconicity is actually a useful feature; runtime macro expansion is the dangerous part. What I'd really like to see is a Lisp with a very well-defined execution order. That is, macros must be expanded at compile time, with clear markers for what runs when. I'm not talking about something like `(macroexpand ...)`, more like `(comptime (my-macro ...))`... `(define (my-func) (my-macro+ 'a 'b 'c))`, where it's explicit that a macro executes at compile time, and usage of that macro must be denoted with a special character (`+`, in my example).
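A sketch of what that marking convention buys you, in Python (nested lists standing in for s-expressions; `twice+` and the `+` suffix rule are made up for illustration): the expander can reject any macro-looking call it doesn't know at compile time, and nothing macro-like survives into runtime.

```python
# Sketch of the "explicit comptime" idea: only names ending in "+" are
# treated as macros, and they are expanded in a separate pass before
# anything runs. Unknown "+" names fail at expansion time, not runtime.
MACROS = {"twice+": lambda x: ["begin", x, x]}

def comptime_expand(form):
    if isinstance(form, list) and form:
        head = form[0]
        if isinstance(head, str) and head.endswith("+"):
            if head not in MACROS:
                raise NameError(f"unknown macro {head}")
            # expand, then keep expanding the result
            return comptime_expand(MACROS[head](*form[1:]))
        return [comptime_expand(f) for f in form]
    return form

print(comptime_expand(["define", ["my-func"], ["twice+", ["print", "a"]]]))
# -> ['define', ['my-func'], ['begin', ['print', 'a'], ['print', 'a']]]
```

After this pass the tree contains no `+`-suffixed heads, so the runtime never needs a macro expander at all.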

I think a general-purpose macro language is only as useful as the average code-gen/templating language. It's just string manipulation, which can be harder to reason about than true metaprogramming, and it might result in ugly/verbose output and frequent context-switching between two entirely different languages. What's particularly powerful about Lisp macros is that using them doesn't look any different from normal application code, and writing a macro is only marginally different from writing a normal function.


> I think a general purpose macro language is only as useful as the average code-gen/templating language. It's just string manipulation, which can be harder to reason about than true metaprogramming ...

I would like you to reconsider. Predicting a program's output is hard. So in order to comprehend a macro programmed as code, one needs to run the macro in one's head to predict its output, then comprehend that output in order to understand the code. I think that is an unreasonable expectation. That's why reasonable usage of metaprogramming stays close to templating, where the programmer can reason about the generated code directly from the template. For higher-powered macros, I argue that no one can reason about both layers at the same time. So what happens is that the programmer puts on his macro hat to comprehend the macro, then simply uses a good "vocabulary" (the macro name) to encode that comprehension. And when he puts on his application-programming hat, he takes the macro on an approximate understanding, as vocabulary, or what some might call a syntax extension. Because we put on the two hats at different times, we don't need homoiconicity to make the two hats look the same.


Or you do:

    (require (ast macroexpand))

    (display (macroexpand my-macro arg1 arg2))

and call it a day.

My argument isn't that the context switch isn't there; it's that the context switch will happen regardless, and having to think in two different languages is more mental overhead than necessary. I do agree that homoiconicity is not a requirement, but it is nice to be able to do it all in one go with no additional tooling, editor, etc. In reality, a sizeable chunk of Lisp programmers execute their code in a REPL continuously as part of their core development loop: there's no tooling (context) switch, no language context switch, and barely a macro vs. application code context switch.

To illustrate, Rust macros are basically that. They have a substantially different syntax to normal Rust code that make it very difficult to quickly grok what is going on. It's a net negative IMO, not a positive.


> To illustrate, Rust macros are basically that. They have a substantially different syntax to normal Rust code that make it very difficult to quickly grok what is going on. It's a net negative IMO, not a positive.

Yeah, more like a syntax extension than a macro. But I am saying that you need both. Sometimes you need powerful macro ability to extend the language; sometimes you just need templating to achieve expressiveness. With Lisp, I get that you are programming all the time, never templating, right? But I guess you only appreciate templating once you use your macro system as a general-purpose system. The benefit of a general-purpose macro system is that you learn one tool for all languages, rather than re-learning the individual wheels. And when you judge a language, you are no longer bothered by its syntactic warts, because you can always fix the expressive part with your macro layer.


> That being said, I think homoiconicity is actually a useful feature, but runtime macro expansion is the dangerous part.

Practically, why would you ever want a runtime macro expansion?


Well, part of the problem is passing around quoted forms (i.e. `(quote ...)`, or in many Lisps, `'(...)`). If such a form contains a macro, and a sizeable chunk of Lisp stdlibs are macros, you probably need to implement runtime macro expansion at least partially. So in order to avoid implementing runtime macro expansion, you need to break the semantics of what a quoted form is, or disallow them entirely. The implementation for the former could get really complicated depending on the cases you want to support; resolving symbols in the corresponding scope comes to mind as particularly challenging. Removing quoted forms entirely just isn't an option: they're required for one of the most powerful parts of Lisps in general, built-in syntax templates. So we're back to breaking the semantics of quoted forms, which I think can be done reasonably, even if it's difficult to implement.
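A small Python model of the issue (nested lists as s-expressions; the `when` macro here is a stand-in for any stdlib macro): a quoted form is just data built at runtime, so any macro call inside it can only be expanded when somebody finally evaluates it.

```python
# Toy model of why quoted forms pull macro expansion into runtime:
# this "when" macro rewrites into an "if" form, and a quoted list
# holding a "when" call can't be expanded until it is eval'd later.
MACROS = {"when": lambda cond, *body: ["if", cond, ["begin", *body], None]}

def macroexpand(form):
    # recursively expand macro calls in a nested-list "s-expression"
    if isinstance(form, list) and form and form[0] in MACROS:
        return macroexpand(MACROS[form[0]](*form[1:]))
    if isinstance(form, list):
        return [macroexpand(f) for f in form]
    return form

quoted = ["when", True, ["print", 1]]   # built at runtime, e.g. via (quote ...)
print(macroexpand(quoted))
# -> ['if', True, ['begin', ['print', 1]], None]
```

A compile-time-only language has to either run this expansion step whenever quoted data reaches `eval`, or forbid macros inside quoted forms, which is exactly the semantic break described above.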


In other words, runtime macro expansion is a side effect, a compromise, a wart, rather than a design goal, right?


It can certainly be a design goal if you want it to be. Sometimes truly dynamic programming is what you want and need. I, personally, wouldn't want to write any code like that, because I mostly write software for other people that runs while they aren't looking, not for myself that runs when I execute it. Lisps are popular in academic settings where the program's behavior is the topic of interest and maximum flexibility is a tool to accelerate progress. These scenarios usually have the person who wrote the code directly involved in its execution, as opposed to service development, where you want to be more sure it'll actually work before you deploy it.


The most common case is probably trying to pass `and` to something that applies it to a list, `fold` or `reduce` or similar, and being told some variant of "no deal, `and` is a macro, not a function", with workarounds like `(reduce (lambda (x y) (and x y)) list)`.


Mainly, because it's always run-time. Your program's compile-time is the compiler's run-time. Compilers can be available in a shipped application.


Thanks for posting. A while ago I was actually looking for MyDef for some reason, but I couldn't remember the exact name and couldn't find it.

You could have a replacement for M4 there.

I made a macro preprocessor in 1999 called MPP which was also able to preserve whitespace when producing multi-line expansions, making it suitable even for indentation-sensitive formats. MPP has scopes and also a namespace system. I didn't develop it further because right around that time, I discovered Lisp and, you know ...


m4 is like that. Lots of templating languages are too. Code generation is useful and roughly what you end up with if the language isn't expressive enough, e.g. C or go. Is yours doing anything different to those?

What the lisp approach gives you is the macro has the parsed representation of the program available to inspect and manipulate. It doesn't have to expand to some fixed text, it can expand into different things based on the context of the call site.

Various languages have decided that syntactically distinguishing macros from functions is a good idea. Maybe the function is foo and a macro would be $foo. For what it's worth I think that's wrong-headed, and the proper fix is to have 'foo' represent some transform on its arguments, where it doesn't matter to the caller whether that is by macro or by function, but it's certainly popular.


M4 only does token-level, inline macros. M4 macros are identifiers that can't be distinguished from the underlying language. M4 macros do not have scopes. M4 does not have context-level macros. M4 does not have full programmability to extend syntax.

I can define any macro Lisp can define, just not with Lisp syntax. I do not have a full AST view of the code, due to the system's generality: it isn't married to any specific underlying language. But I can have extensions that are tailored to a specific language and do understand its syntax. For example, the C extension can check existing functions and inject C functions. MyDef can always query and obtain the entire view of the program, and it is up to the effort put into writing extensions to decide to what degree the macro layer can parse it. Embedding an AST parser for a local code block is not that difficult.

It's like the innerHTML thing: I always find the text layer (as strings) more intuitive to manipulate than an AST. If needed, making an ad-hoc parser in Perl is often simple and sufficient, for me at least.


Interesting idea. I'll play around with it when I get home.

I use Kernel a lot, which allows you to write first-class operatives which can influence the bindings of their caller, but they don't allow modifying the body of the calling function.


Can you recommend an implementation for experimenting with Kernel? I'm unsure about the implications of the wrap/unwrap primitives and how they interact with apply, so more interested in a correct/reference implementation than a high performance one.


klisp is the most complete implementation I've used. Website is now down and original bitbucket host too, but mirror here: https://github.com/dbohdan/klisp

For slightly better performance, there's bronze-age-lisp, which uses klisp and x86 assembly. Mirror: https://github.com/ghosthamlet/bronze-age-lisp

Performance will never be great due to the nature of the language, which is incompatible with usual forms of compilation.

I'll try to give a brief explanation of wrap and unwrap.

A combination is of the form `(combiner combiniends)`, where combiner must be either an operative or applicative. In the case that it is operative, the combiniends are passed verbatim to the operative, without being reduced. If the combiner is applicative, the combiniends are reduced, by evaluating each item in the list using the metacircular evaluator, until a list of arguments is returned. The arguments are then passed to the underlying combiner of the applicative.

Operatives are constructed using `$vau`, and applicatives by using `wrap` on another combiner. Usually the underlying combiner is operative, but the language used in the report clearly permits you to wrap other applicatives too. So if you evaluate an applicative whose underlying combiner is applicative, and the underlying combiner of that is operative, then the list of combiniends is reduced twice before being passed to the final operative. I've honestly not encountered a single use case for this in my time using Kernel, but who knows. It might just be easier to consider that `wrap` wraps operatives into applicatives.

The description of the evaluator from section 3 of the report is a pretty clear explanation of what happens.

  * If the expression to be evaluated is a pair, then:
    * The car of the pair must be a combiner
    * If the combiner is operative, call the operative with the cdr of the pair
    * If the combiner is applicative, evaluate the cdr of the pair to produce an argument list 'd; eval the cons of the underlying combiner of the applicative with 'd.

    ($define! eval 
        ($lambda (o e)
            ($if (not (environment? e)) (exit))
            ($if (pair? o)
                 ($let ((c (car o)))
                      ($if (operative? c)
                           (call c (cdr o) e)
                           ($if (applicative? c)
                                (eval (cons (unwrap c) (eval-list (cdr o) e)) e)
                                (error "not a combiner in combiner position"))))
                 o)))

    ($define! eval-list
        ($lambda (l e)
            ($if (null? l)
                 ()
                 ($if (pair? l)
                      (cons (eval (car l) e)
                            (eval-list (cdr l) e))
                      (error "operand to applicative must be a list")))))


Yep, it's the `(wrap (wrap operative))` which stands out as strange. Thank you for the evaluator - factoring as a call which only acts on operatives makes it clear.

The inner call I expected was more like:

    ($cond ((operative? c)
            (apply c (cdr o) e))
           ((applicative? c)
            (apply (unwrap c) (eval-list (cdr o) e) e))
           (#t (error "bad times")))
where that would only convert an applicative into an operative once. Interesting that you haven't come across a use for the multiple reduction on arguments.

The difficulty I have with the multiple wrap model is you don't know how many times to evaluate the arguments until you have evaluated the car of the list (I think your eval is missing an evaluation on the head of the list, probably at the let binding, as otherwise ((lambda (x) x) 42) fails).

- If the car of the list is an operative, call it on the arguments. Simple.

- If the car of the list is a applicative (wrap operative), eval the arguments, call it on the result.

Those extend easily to eval the head of the list first, once, and then test what sort of function it is. However given a function which evaluates the arguments N times (via N wraps over an operative), for N > 1, it feels like the head of the list should also be evaluated N times. But it can't be, because we don't know N until we know the head of the list.

That could be made to work with semantics of continue evaluating the head of the list until a fixpoint (e.g. a self-evaluating function), and then look at the tail, but that's a significant increase in complexity over evaluating the head exactly once.

The alternative setup would be (= (wrap (wrap an-operative)) (wrap an-operative)) where N wraps have the same effect as one. You lose some syntactic expressivity - functions that actually want to evaluate the arguments multiple times would have to call eval themselves instead of writing an extra wrap - but I think you get simpler core semantics.


> I think your eval is missing an evaluation on the head of the list, probably at the let binding, as otherwise ((lambda (x) x) 42) fails)

You're correct. I was going off the top of my head and forgot the precise implementation, but it's stated clearly in the report.

    Otherwise, o is a pair. Let a and d be the car and cdr of o. a is evaluated in e; call its result f.
> However given a function which evaluates the arguments N times (via N wraps over an operative), for N > 1, it feels like the head of the list should also be evaluated N times. But it can't be, because we don't know N until we know the head of the list.

This is why we cons the underlying combiner with the reduced argument list, then recursively call eval. It will keep on reducing until the combiner is operative.

> functions that actually want to evaluate the arguments multiple times would have to call eval themselves instead of writing an extra wrap

There's a discussion in the report as to whether wrap should even be in the language, since you can create a wrap with an operative, but then unwrap is non-trivial.

Honestly, I think it's much simpler to only wrap operatives. I don't think our minds are good at comprehending what will happen as a result of evaluating lists of arguments multiple times, and it's easier to implement - you can make the evaluator simpler by removing the recursive call to eval for applicatives and replace it with a direct call to the underlying operative.

> The alternative setup would be (= (wrap (wrap an-operative)) (wrap an-operative)) where N wraps have the same effect as one.

I think attempting to wrap an applicative should error in the same way attempting to unwrap an operative would, since our expectation would be that `wrap` takes an operative as its argument rather than any combiner.

TBH, I've not really searched for problems for which evaluating the arguments multiple times would be a nice solution. I think they probably exist but as you suggest, the programmer could approach this by providing an operative which performs multiple reduction.


I wonder if this might be made even more powerful by using a zipper instead of a single left-right pair; then the generalized macro could traverse the entire top-level form?


I think that a zipper would provide a much nicer interface for making "distant" transformations, but just to be clear you can traverse the entire AST with this approach by recursively returning macros that return macros and then re-assembling the entire tree once you get where you're going. Just, ah, not the easiest code to write :)
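For concreteness, here's a minimal list-zipper sketch in Python (illustrative; a real AST zipper would also track parent frames to move up and down the tree): the focus carries its left and right siblings as explicit context, which is the "nicer interface for distant transformations" being described.

```python
# A minimal list zipper: a focused element plus its left/right siblings.
# Moving the focus and replacing elements are pure operations, so a
# transformation can wander to a "distant" sibling and edit it.
from dataclasses import dataclass

@dataclass
class Zipper:
    lefts: list    # siblings to the left, nearest first
    focus: object
    rights: list   # siblings to the right, nearest first

    def left(self):
        return Zipper(self.lefts[1:], self.lefts[0], [self.focus] + self.rights)

    def right(self):
        return Zipper([self.focus] + self.lefts, self.rights[0], self.rights[1:])

    def replace(self, new):
        return Zipper(self.lefts, new, self.rights)

    def to_list(self):
        return list(reversed(self.lefts)) + [self.focus] + self.rights

# Walk right one step, rewrite that sibling, then rebuild the whole form.
z = Zipper([], "a", ["b", "c"]).right().replace("B")
print(z.to_list())
# -> ['a', 'B', 'c']
```

Compare that with the article's scheme, where reaching a distant node means returning macros that return macros and reassembling the tree by hand.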


Macros tickle a certain peculiar part of my brain. I have this idea of making a lisp that at its core is just some static single assignment language, or some other type of IR. And then all the traditional control flow features would be implemented using macros. You could even implement optimization passes as macros. I haven’t been brave enough to actually try it, but I imagine if I did I would need some more powerful macro features like this.


Is this the idea behind Shen and KLambda?


Regular macros can give you this in a disciplined way:

You make a macro called crazy-macrolet which is used like this:

  (crazy-macrolet ((crazy-macro (left-forms right-forms arg ...)
                     ....)
                   (other-crazy-macro (...)))
    body)
Inside body, you use the local crazy macros. A little language made of these crazy macros can be wrapped up in a big macro.

  (defmacro bobs-crazy-dsl (&rest forms)
    `(crazy-macrolet ((bobs-crazy-macro ...))
        ,@forms))
Then, you can only use bobs-crazy-macro in code wrapped in (bobs-crazy-dsl ...).

crazy-macrolet needs to implement a code walker in order to expand those macros.

Macros can be context-dependent without having access to the literal forms to the left or right (or elsewhere).

In TXR Lisp, I implemented tagbody as a macro, providing a measure of CL compatibility. The go operators are local macros. They do not expand in a context-free way; they communicate with the surrounding tagbody.

So for instance if go is asked to jump to a nonexistent label, it errors:

  1> (expand '(tagbody (go a) b))
  ** go: no a label visible

  1> (expand '(tagbody (go a) a))
  (let ((#:tb-id-0019
         (gensym "tb-dyn-id-"))
        (#:next-0020
         0))
    (sys:for-op ()
      (#:next-0020)
      ((sys:setq #:next-0020
         (block* #:tb-id-0019
           (sys:switch #:next-0020
             #(((return* #:tb-id-0019
                         1))
               ()))
           ())))))
So obviously, (go a) is behaving differently based on whether it can "see" that there is an a in its context, on the left or right side.


This post persuaded me not to take much of a look at Janet. It is also related to why I think Lisps haven't become the most widely used programming languages.

Lisp enthusiasts like to point out the power of macros, and macros are the raison d'être for Lisp's homogeneous s-expression syntax. Most other features in Lisp (such as first-class closures and higher-order functions) can exist without s-expressions, but the powerful thing about s-expressions is that they enable Lisp macros.

But with great power comes great responsibility. When I'm writing a program I want as little responsibility as possible while still being able to solve the problem at hand. I don't want to be responsible for memory management and bounds checking, and I don't want to be responsible for the hygiene of my macros at both the definition and the call site.

With C, the responsibility of memory management and bounds checking comes with a power that people actually need to solve problems. For me these problems usually come up in the context of writing my own hobbyist interpreters/compilers, but there are a lot of real world cases where these come up. But often you don't need the capabilities of C, and I'd argue as a result that there are a lot of cases where using C is a bad idea because it's not the best way to solve your problem.

And here's the hot take: the power of Lisp macros isn't actually ever worth the responsibility in my experience. The problem Lisp macros solve is "this code is more verbose/ugly/boilerplate-y/etc. than I want it to be", which just isn't the problem you're writing the program to solve. Whenever you reach for a macro, there's another tool you could be reaching for to solve the actual problem at hand. At the very least, you can just write the code the macro would expand into. There's inherently never a case where the macro is the only way to solve the problem.

If you're good at writing macros, you won't always get burned by them, but nobody is ever perfect at writing macros, so everyone gets burned sometimes. If you're writing software that you actually need to work, the risk is rarely worth it.

When I have written production code in a Lisp (mostly Clojure) I've rarely reached for macros, and often bugfixes have been removing a macro that was of the "so preoccupied with whether or not they could, that they didn't stop to think if they should" variety. And if you spend enough time avoiding and removing macros, you start to wonder why you're destroying your eyesight trying to match parentheses, when the entire reason for the parentheses is to enable something you have to avoid and remove.

And don't get me wrong: macros are cool. "Because I like them" is a totally valid reason to write macros and Lisp.


> At the very least, you can just write the code the macro would expand into

Whenever you write code, you could just write the function it calls.

One purpose of macros is to create descriptive notations. That's one way to create very readable, maintainable code, ... Lisp itself uses macros in the language everywhere: to define functions, to define classes, to define namespaces, to provide control structures, ... the core special operators are at a minimum and the large amount of syntactic operators are written as macros. As a developer I can use the same power to shape the operators of my domain, beyond what functions offer me and with compile time efficiency & syntactic control/abstraction.


> Whenever you write code, you could just write the function it calls.

This is overly-glib. Calling a function doesn't come with the risk that calling a macro does, which I'm sure someone named "lispm" is aware of.

> Lisp itself uses macros in the language everywhere: to define functions, to define classes, to define namespaces, to provide control structures, ... the core special operators are at a minimum and the large amount of syntactic operators are written as macros. As a developer I can use the same power to shape the operators of my domain, beyond what functions offer me and with compile time efficiency & syntactic control/abstraction.

This just shows you don't understand the power/responsibility argument you're responding to.

As a developer I don't have the resources to design and test my code to the extent that the language creators do.

Lisp itself uses direct memory access to define the garbage collector, and you could use the same power to shape the memory usage of your domain, beyond what garbage collection offers and with compile time efficiency. But surely we can agree that's a terrible idea in most cases.


There is a lot of irrational fear of macros.

> As a developer I don't have the resources to design and test my code to the extent that the language creators do.

Sure you could wait many years for the 'language designer' to provide you with new syntactical support or just write it in an afternoon yourself. It's the same crazy idea as writing a new class with its methods, instead just using the pre-made class hierarchy from the language or library designer.

Surprise: programming macros can be learned and it does not hurt.

> But surely we can agree that's a terrible idea in most cases.

Not sure who this we is, but it is not me.

It's exactly what the Lisp Machine did: you could use the same power to shape the memory usage of your domain. It had a bunch of different garbage collectors working side by side and a programmable memory management. All that was written in Lisp itself.


> There is a lot of irrational fear of macros.

I'm just going to copy from my other post:

Land Of Lisp[1] has a section titled "Macros: Dangers and Alternatives". The elisp docs[2] contain a section on "Common Problems Using Macros". Paul Graham's On Lisp[3] contains a chapter on variable capture which consists mostly of sections on avoiding variable capture problems, followed by a chapter called Other Macro Pitfalls.

I suppose it's possible that the authors of Lisp dialects are being irrational...

> Sure you could wait many years for the 'language designer' to provide you with new syntactical support or just write it in an afternoon yourself. It's the same crazy idea as writing a new class with its methods, instead just using the pre-made class hierarchy from the language or library designer.

Given all the work that has been done in modern languages to avoid the pitfalls of classes such as multiple inheritance, this is a better example of my point than you think it is.

> Not sure who this we is, but it is not me.

> It's exactly what the Lisp Machine did: you could use the same power to shape the memory usage of your domain. It had a bunch of different garbage collectors working side by side and a programmable memory management. All that was written in Lisp itself.

The Lisp Machine is not "most cases". It's unsurprising that an attempt to run Lisp directly on hardware would need to do some direct memory access, but it's also absurd to extrapolate that to claim that this is something that is a good idea in most cases. Your average Lisp program does not and should not directly manage memory, and claiming otherwise is obviously just trying to win a silly point. I don't believe for a second that you do direct memory access in your Lisp programs on a regular basis, unless you're writing some very low-level stuff like hardware or a Lisp compiler/interpreter, and again, that's not "most cases". You're simply making a disingenuous argument here.

[1] https://www.oreilly.com/library/view/land-of-lisp/9781593272...

[2] https://www.gnu.org/software/emacs/manual/html_node/elisp/Pr...

[3] https://redirect.cs.umbc.edu/courses/331/fall10/resources/li...


> Macros: Dangers and Alternatives

Here is a simple LispWorks program using macros to provide manual memory management:

  CL-USER 17 > (defresource rarray (size)
                 :constructor (make-array size)
                 :initial-copies 0)
  RARRAY
The above macro form defines a new manually managed resource named RARRAY. There is a constructor for allocating new RARRAY objects. Here the constructor just allocates an array of a certain size.

  CL-USER 18 > (using-resource (ra rarray 10)
                 (setf (aref ra 3) 10)
                 (incf (aref ra 3) 32)
                 (aref ra 3))
  42
The above USING-RESOURCE macro looks up an array of size 10 from the resource pool. If there is none, it allocates one. In the context of the macro form, the array is marked in the pool as used and provided to the code. Upon leaving the dynamic context of USING-RESOURCE, the array will be returned to the pool and marked as unused.

Bonus: DEFRESOURCE, USING-RESOURCE, SETF and INCF are all macros.


> the power of Lisp macros isn't actually ever worth the responsibility in my experience

Expand your experience.

> At the very least, you can just write the code the macro would expand into.

You can "just" get it wrong when the code is used in many places.

Consider this as the hot take: macros reduce the number of characters you have to type. I haven't seen anything that beats macros in this metric.


More importantly, they reduce the number of characters you have to read and maintain.

Macros allow writing code that's both compact and readable [0], in lesser languages you have to choose.

[0] https://github.com/codr7/cl-redb/blob/ff3a34a31ced7a9668fc95...

[1] https://gist.github.com/codr7/4bb9442c0c66411643eddd8db0164a...


> More importantly, they reduce the number of characters you have to read and maintain.

So does giving all your variable names one-letter names and putting your entire codebase on one line. Surely we can agree that character count is a poor measure of readability or maintainability.

> Macros allow writing code that's both compact and readable [0], in lesser languages you have to choose.

Macros aren't the only abstraction that does that, even within Lisp.


> So does giving all your variable names one-letter names and putting your entire codebase on one line. Surely we can agree that character count is a poor measure of readability or maintainability.

Which is why pg advocates symbol count. (Although he seems to love brevity too.)


I urge you to try that theory on the expansion linked in my comment and see where it takes you compared to the macro code.

No, but it's the most effective one.


> No, but it's the most effective one.

That's a perfectly valid opinion which is probably based on your experience, just as my opinion that macros aren't a good tradeoff is based on my experience.

I understand the initial joy of macros, and I'm well aware they're powerful for generating code. But in my experience, they are also not well encapsulated, and as a result, areas of code which use macros tend towards being write-only, where the code becomes hard to read and you're afraid to make changes lest the fragile pile of expansions you've created come crashing down. In the short run, they seem great, but in the long run, the errors they create don't seem worth it.

I don't know why I've experienced this and you haven't, but I'm sure you have some different experience with macros that leads you to have a different feeling about them than I do. If you disagree with me, I'd appreciate it if you approached that disagreement without assuming I'm an idiot with no experience, and without making arguments like "fewer characters = better" which even you don't believe.


You're right, I do think macros make a good tradeoff. And you're not alone in thinking otherwise, I remember attending a heated panel debate at the International Common Lisp Conference about the pros and cons of macros.

In my case it's based on 20-ish years of using Common Lisp for personal projects.

Admittedly mostly prototyping and figuring things out, but that's my main use case for CL since using it at work is mostly politically impossible.


[flagged]


> > assuming I'm an idiot with no experience

> That's what you're trying hard to look like, unfortunately; if you're not, then stop.

I encourage you to take a deep breath and consider how you would like to treat people before you make future comments on the internet. Perhaps it's unwarranted optimism on my part, but I would guess you want to behave better than this.

I've responded in other comments to everything else you said in this post. You're not saying anything which hasn't been said more kindly by other people in this thread.


Please don't plant booby traps in discussion.

When I mention in a posting the topic of whether or not I'm an idiot and such, that's a signal to people that it's safe to debate that in whatever way they want. (I've been an idiot here and there, and may not be done. I could dig up objective evidence of that and expect people to agree.)

> I've responded in other comments to everything else you said in this post.

Maybe, but not to the message between the lines. Your position can be summarized as "everything is subjective and relative; my negative experience with a tool or technique speaks to the topic in an equally valuable way as do successful experiences by people making something reliable, tested, documented and maintainable by someone other than its author".

That strikes me as objectively untrue, and conducive to coming to casually dismissive conclusions.

"Well, I think Rust is unproductive (at least for me) because I couldn't figure out the insane compiler messages. But some people who have never seen a type are struck by the novelty. That's okay, 'I like it' is perfectly valid. Everyone's experience is equal!"


> Expand your experience.

That's universally good advice, which you should follow too.

But neither of us is going to experience all that exists to experience, so it makes sense to prioritize. I've experienced enough pain debugging Lisp macros to make an educated decision to deprioritize further exploration in that area.

> You can "just" get it wrong when the code is used in many places.

True, which is why I generally use other abstractions to avoid repeating myself.

> Consider this as the hot take: macros reduce the number of characters you have to type. I haven't seen anything that beats macros in this metric.

Sure, but within reasonable languages that's not a metric which matters. Typing has never been the bottleneck of my development. Even in absurdly verbose languages like C++, IDEs generate a lot of the code for you: the bottlenecks in C++ are things like stitching together different ways of doing the same thing which were used in libraries or different parts of the codebase.


I appreciate it, and I'm impressed that you're so gratuitously positive.

I think you mean that within reasonable programs it's not a metric that matters, because most programs solve problems that don't need much autogeneration. For problems that do need autogeneration, macros are one of the strongest ways.

A good way to become convinced of the power of macros is to try and write a web app in Arc to generate lots of HTML.

IDEs that generate a lot of the code are proof that macros are needed, because when you want to change the autogenerated code you have to do it by hand. Autogeneration can be done with macros, and is simpler with macros.


> I think you mean that within reasonable programs it's not a metric that matters, because most programs solve problems that don't need much autogeneration. For problems that do need autogeneration, macros are one of the strongest ways.

No, I meant languages (as in programming languages), not programs.

> A good way to become convinced of the power of macros is to try and write a web app in Arc to generate lots of HTML.

I'm already convinced macros are powerful. The problem is that they're also extremely error prone, and can be extremely difficult to debug when errors inevitably occur.

This isn't controversial. Land Of Lisp[1] has a section titled "Macros: Dangers and Alternatives". The elisp docs[2] contain a section on "Common Problems Using Macros". Paul Graham's On Lisp[3] contains a chapter on variable capture which consists mostly of sections on avoiding variable capture problems, followed by a chapter called Other Macro Pitfalls.

These are the people who like Lisp writing these things. The controversial thing I'm saying is that I don't think the power/danger tradeoff is worth it. That's pretty subjective and it's reasonable to disagree with that, but you can't reasonably disagree that macros are error-prone when even the people writing Lisp variants are saying they are error-prone.

The best HTML generation I've experienced is with Ruby templates. They're not as terse as Lisp macros, but I don't end up having to debug them very often, and when I do end up debugging them, the bugs are usually trivial to find and fix.

> Autogeneration can be done with macros and is simpler with macros.

I have never spent hours debugging an IDE autocomplete. I have spent many hours debugging macros.

You're only looking at the positives of macros and ignoring everything I've said about the negatives.

[1] https://www.oreilly.com/library/view/land-of-lisp/9781593272...

[2] https://www.gnu.org/software/emacs/manual/html_node/elisp/Pr...

[3] https://redirect.cs.umbc.edu/courses/331/fall10/resources/li...


Thanks for sharing your experience with Ruby templates and for the links.

> I don't think the power/danger tradeoff is worth it.

The power/danger tradeoff might not be worth it for the overwhelming majority of programs. But for some kinds of programs, I don't see how this power can be obtained with anything less powerful than macros.


> This post persuaded me to not take much of a look at Janet.

Do you mean to say that you were intent on taking a good look at Janet when you woke up this morning, but because of this post specifically your mind has been radically changed?


Eh, "intent" and "radically" are strong words, but it was on the back burner as something to look into further because a previous post had mentioned the macro system was a bit different, and now it's not.


The author wrote a whole book on Janet, with 11 of the 13 chapters focusing on something else than macros: https://janet.guide/. I've been reading it in the last few days and it's very nice, there are lots of things to play around with outside of macros.


> you start to wonder why you're destroying your eyesight trying to match parentheses

Manually matching parentheses is a non-issue since tools like Parinfer were created: https://shaunlebron.github.io/parinfer


Parinfer only helps you write parentheses, it doesn't help you read them.


That’s what rainbow parens are for


    (defmacro ->
      "Threads the expr through the forms. Inserts x as the
      second item in the first form, making a list of it if it is not a
      list already. If there are more forms, inserts the first form as the
      second item in second form, etc."
      {:added "1.0"}
      [x & forms]
      (loop [x x, forms forms]
        (if forms
          (let [form (first forms)
                threaded (if (seq? form)
                           (with-meta `(~(first form) ~x ~@(next form)) (meta form))
                           (list form x))]
            (recur threaded (next forms)))
          x)))


It’s funny to praise language niceties and speak against macros in the same post. Macros are just the easiest way to add new language features to your code.

My fav example is automatic SQL table and column typo checking. The macro introspects the db and you get syntax errors if the table/column isn't found. The same idea can also be used for CSV and JSON. Or an API spec file on a different server: the compiler can make HTTP requests to show you syntax errors.
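To make that concrete, here's a minimal Clojure sketch of the idea. The names are hypothetical and a hard-coded set stands in for live db introspection, but the mechanism is the same: the check runs at macro-expansion time, so a typo fails the compile rather than surfacing at runtime.

```clojure
;; Stand-in for introspecting the live database schema.
(def known-columns #{:id :name :email})

(defmacro select-col [col]
  ;; This check runs at macro-expansion (compile) time, not at runtime.
  (when-not (known-columns col)
    (throw (ex-info (str "unknown column: " col) {:column col})))
  `(str "SELECT " ~(name col) " FROM users"))

(select-col :email)    ; expands fine => "SELECT email FROM users"
;; (select-col :emial) ; fails at expansion time: "unknown column: :emial"
```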


> My fav example is automatic SQL table and column typo checking. The macro introspects the db and you get syntax errors if the table/column isn't found. The same idea can also be used for CSV and JSON. Or an API spec file on a different server: the compiler can make HTTP requests to show you syntax errors.

But the problem in all these cases isn't "I want syntax errors if this table/column doesn't exist or my http request doesn't fit spec", it's "I want to make sure my table/column exists and my http requests are in spec". Unit tests can do that. What unit tests can't do is go into the code under test and modify it so that it breaks in production. The macros you're describing can do that.

The macros you're describing are cool, but they don't do anything that can't be done with unit tests, and they're not without risks. You're essentially running test code in production.


Again the same point applies. Either you become a fan of low-level languages with little to no time-saving features, or you are willing to use helpful high-level language features (or add your own).

You can use unit tests as replacement for type checking system, for example. Test code in prod and all that.

I’ve also built macros which snapshot check output of functions for tests. So the test just calls a bunch of functions, and it saves sexps in a test dir with the expressions commented. Difficult to do that in Go or Java. Would you just use ruby/python? or some bespoke non macro code gen system?

Unit tests can also verify macros work.

It seems your point is something closer to how macros are too hard for people to maintain. That’d be more interesting of a point to make (and details for you to provide if you have any.)


> Again the same point applies. Either you become a fan of low-level languages with little to no time-saving features, or you are willing to use helpful high-level language features (or add your own).

Again, the same point applies. In my experience, macros aren't a time-saving feature in the long run, as compared to alternatives.

Macros are a big time-saver up-front. Obviously. But they make code harder to understand, and in the long run, understanding code is the most important thing in most codebases.

There are exceptions, but those exceptions tend to be the ones already implemented by your Lisp interpreter. Yes, I understand that many Lisp constructs are macros rather than core language features.

And I think that if you actually look at the implementation of those macros in production-quality Lisp interpreters, you'll discover that some of them are significantly more complicated than you thought they were, because they're handling edge cases you didn't think of. The benefit of having macros battle-tested by thousands of users in disparate use-cases is that it susses out all those edge cases. You and I aren't going to have thousands of people writing code against our macros in most cases: it's just us and our teammates using the macros, and as a result, we get to find those edge cases ourselves, i.e. we have bugs.

> It seems your point is something closer to how macros are too hard for people to maintain.

Yes.

> That’d be more interesting of a point to make (and details for you to provide if you have any.)

I'm not sure what details I could provide which I haven't already provided.


> This post persuaded me to not take much of a look at Janet.

That's a curious statement, given that the stuff discussed doesn't exist in Janet.


I think you're making very reasonable points in this thread :)

> This post persuaded me to not take much of a look at Janet.

I feel I should say that you shouldn't form too much of an opinion about Janet just because one person used it to prototype a weird macro system. This concept is not, like, part of Janet or anything -- Janet macros are basically Common Lisp's, but with an elegant solution to the function hygiene problem -- and I don't think that I'm even a very representative Janet user (as a big macro fan). Janet is a very nice Lua and Perl alternative even if you never use it to write a single macro! Its text parsing facilities alone are worth a look.

> The problem Lisp macros solve is "this code is more verbose/ugly/boilerplate-y/etc. than I want it to be", which just isn't the problem you're writing the program to solve.

I think this misses what I see as "the point" of macros, which is to be able to make a tiny language core. Consider "and": I would be sad to program in a language without a short-circuiting "and." So most languages special-case that, right? But lisps don't. Macros mean that you don't have to make "and" a built-in part of the language. Or, I dunno, "defn." "for." Janet's only iteration primitive is "while," and the standard library uses macros to implement for, each, list comprehensions, etc.
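As a concrete sketch of that point, short-circuiting "and" really is just a few lines of macro. This is simplified from clojure.core/and, with a hypothetical name so it doesn't shadow the built-in:

```clojure
(defmacro my-and
  ([] true)
  ([x] x)
  ([x & more]
   ;; v# is an auto-gensym, so the macro can't capture a user variable.
   `(let [v# ~x]
      (if v# (my-and ~@more) v#))))

(my-and 1 2 3)                    ; => 3
(my-and nil (println "skipped"))  ; => nil, later forms never evaluated
```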

And I feel like it's totally fair to not care about that at all. After all, why does it matter to you, the programmer, whether "and" is special-cased in the language or implemented as a macro in "user space?"

> Whenever you reach for a macro, there's another tool you could be reaching for to solve the actual problem at hand. At the very least, you can just write the code the macro would expand into. There's inherently never a case where the macro is the only way to solve the problem.

Of course! But I feel like this doesn't make a compelling point against macros to me, because you could say exactly the same thing about first-class functions, or generic types, or any other language feature. And people have!

> If you're good at writing macros, you won't always get burned by them, but nobody is ever perfect at writing macros, so everyone gets burned sometimes. If you're writing software that you actually need to work, the risk is rarely worth it.

This is, I think, a very good point. Which is why lots of languages (Racket, Scheme, Clojure) have macro systems that make it almost impossible to write macros that "burn you," if I understand your meaning correctly. But others don't! Janet is a bit weird in that it's a recent language that does not have a hygienic-by-default macro system.

> When I have written production code in a Lisp (mostly Clojure) I've rarely reached for macros, and often bugfixes have been removing a macro that was of the "so preoccupied with whether or not they could, that they didn't stop to think if they should" variety. And if you spend enough time avoiding and removing macros, you start to wonder why you're destroying your eyesight trying to match parentheses, when the entire reason for the parentheses is to enable something you have to avoid and remove.

I can certainly see how such an experience would sour you on macros. I am fortunate that I have never had to maintain buggy legacy macros -- sounds awful. I would again point out that you could have a similar experience with... inheritance, async/await, static type checking, I dunno. Anything applied poorly can seem like a terrible idea. You could write off entire languages if you only saw terribly written code in that language. But macros applied well can be great!

When I think of all the language features that have been added to, say, JavaScript over the last twenty years, and how many of them could have been written as macros, in a way that works for all browsers, without a need for something like Babel... it's a little silly that JavaScript developers had to wait for async/await to become an official language feature, when Clojure just implemented it as a library.
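For reference, that library is core.async, whose go macro rewrites straight-line code into a state machine. A tiny sketch (this requires the org.clojure/core.async dependency, which is a library, not a language feature):

```clojure
(require '[clojure.core.async :refer [go chan >! <! <!!]])

(def c (chan))
(go (>! c 42))            ; async producer; go returns a channel immediately
(<!! (go (inc (<! c))))   ; <! "awaits" inside go without blocking a thread
```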

Counterpoint: it's reasonable to argue that it's bad to add language features to a language, because it makes your code harder to understand for the median developer. But of course the lack of macros doesn't stop language fragmentation, it just relegates that to a smaller group of programmers who have the time or inclination to write full parsers and compilers -- see JSX, Svelte... Clojure itself! Or Kotlin, or any other JVM language.

(Not that macros prevent fragmentation. If anything, the fact that macros make it so easy to implement a lisp is why we have so many different lisp implementations...)

> And don't get me wrong: macros are cool. "Because I like them" is a totally valid reason to write macros and Lisp.

Yeah :)

I wanted to respond to something else you said elsewhere:

> As a developer I don't have the resources to design and test my code to the extent that the language creators do.

I think that blurring the line between "code users can write" and "code language authors can write" is the point! To give programmers the resources to design and test code on the same level as language implementors. (Which, again: why should I care?)


> I feel I should say that you shouldn't form too much of an opinion about Janet just because one person used it to prototype a weird macro system. This concept is not, like, part of Janet or anything -- Janet macros are basically Common Lisp's, but with an elegant solution to the function hygiene problem -- and I don't think that I'm even a very representative Janet user (as a big macro fan). Janet is a very nice Lua and Perl alternative even if you never use it to write a single macro! Its text parsing facilities alone are worth a look.

Sure, the alternative to hygienic macros was what originally made me curious about it, but now I've learned enough to satisfy my interest. However, I'll retract my statement that I'm not going to look into it further, because I had forgotten I was also interested in how they get it to embed in C programs.

> Of course! But I feel like this doesn't make a compelling point against macros to me, because you could say exactly the same thing about first-class functions, or generic types, or any other language feature. And people have!

Right, but the complete argument isn't "there are other ways to do this", the argument is "there are other ways to do this, and nearly every one of them is less error-prone".

> When I think of all the language features that have been added to, say, JavaScript over the last twenty years, and how many of them could have been written as macros, in a way that works for all browsers, without a need for something like Babel... it's a little silly that JavaScript developers had to wait for async/await to become an official language feature, when Clojure just implemented it as a library.

I'll point out that JavaScript already had async callbacks and promises (implemented as a library) when async/await was added as a third way to do basically the same thing, resulting in JS codebases that now have half-baked glue to make the three ways work together. I'm not sure anybody was waiting for async/await to do anything that couldn't already be done, and the churn of reimplementing working code to use a new feature didn't do anyone much good. But that's sort of a tangent.

> Counterpoint: it's reasonable to argue that it's bad to add language features to a language, because it makes your code harder to understand for the median developer. But of course the lack of macros doesn't stop language fragmentation, it just relegates that to a smaller group of programmers who have the time or inclination to write full parsers and compilers -- see JSX, Svelte... Clojure itself! Or Kotlin, or any other JVM language.

Language fragmentation is a problem, but it's not the problem I'm talking about.

If I write code in a popular programming language such as Python, JavaScript, C++, Clojure (sans macros), etc., I create bugs, and I can be reasonably certain that those are my fault. I've been writing C longer than anything, and I've never found a bug in GCC or Clang in over 20 years (okay, there are a few that were retroactively declared features and forever-supported, but that's a separate issue). As The Pragmatic Programmer says, "Select isn't broken", and Coding Horror says, "It's always your fault"[1]. It's not necessarily rare for a popular programming language to have bugs, but it's extremely rare that you'll be the first one to find them.

The same is true for popular libraries and whatnot that ship with the language, which is why I'm not particularly concerned that "defn" or "for" are macros. Those are macros in most implementations, I'm aware, but they're really-well-tested macros because pretty much every Lisp developer to ever write a significant amount of Lisp has tested them. "defn" and "for" aren't broken.

If I write code in the half-baked DSL written by Bob two cubicles over using macros, that code definitely has bugs, and it's very likely I'll be the first to find them. Not to hate on Bob too much: if I wrote macros they'd have bugs too.

And sure, as you've said, Bob can write buggy functions too. The difference is, functions and I have really good boundaries. When I call a function it doesn't touch my code, and I don't touch its code, and the expressions I pass into Bob's functions only get executed once, and the stack traces all have very understandable corresponding line numbers, and most of the time it's very easy to figure out if the bug is in Bob's function or my code calling Bob's function. And if it's in Bob's code I write a unit test and fix it, and if I'm feeling cheeky I send him a screenshot, and if it's in my code I fix it and git rebase my mistake out of existence to hide my shame.

Macros don't have those boundaries. The expression I pass as an argument to a macro might get called once, twice, ten times, or not at all, with any side effects of that occurring each time. Symbols might get leaked. If you pass (+ (* m x) b) into a macro, it can do stuff like flatten a parent s-expression too far and turn that into (+ * m x b), and even that simple issue can be hard to debug because the line numbers get split up, so you have to figure out what's going on. So you can't really tell whether the problem is in the macro or in the code calling the macro. And you don't even know when you have to be careful about this, because it's not always obvious whether the code you're calling even is a macro.
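That multiple-evaluation pitfall fits in a few lines. Here's a hypothetical square macro (not anything from clojure.core) that shows it, plus the standard fix:

```clojure
;; Naive macro: splices its argument in twice, so side effects run twice.
(defmacro square [x]
  `(* ~x ~x))

(def counter (atom 0))
(square (swap! counter inc))  ; expands to (* (swap! counter inc) (swap! counter inc))
@counter                      ; => 2, even though it looks like a single call

;; The usual fix: bind the argument once to an auto-gensym.
(defmacro safe-square [x]
  `(let [v# ~x] (* v# v#)))
```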

Hygienic macros do help, but they don't eliminate all of these problems.

And half the time when I git blame, it wasn't even Bob who wrote the macro, it was me, five years ago. Ain't that embarrassing.

> I think that blurring the line between "code users can write" and "code language authors can write" is the point! To give programmers the resources to design and test code on the same level as language implementors. (Which, again: why should I care?)

And that's my point: macros don't give you thousands of programmers to test your language you made out of macros. So that's why I care whether it was me or Bob or the Common Lisp team who implemented the language: when the Common Lisp team implements the language it doesn't matter if they use macros or assembly because thousands of people will run the code before I even get a chance and they'll suss out the vast majority of the bugs and issues before I have to deal with them. When me or Bob implements the language, it's me, Bob, or the intern who has to suffer the consequences.

Codebases that use extensive macros to create DSLs eventually become write-only. The power and readability you see in toy examples and in the short run in your own code, rarely plays out in the long run, and when it does play out it's because of extensive testing and work--a lot more work than Bob and I have the bandwidth for.

[1] https://blog.codinghorror.com/the-first-rule-of-programming-...


> the expressions I pass into Bob's functions only get executed once

Not in functional programming languages, where one can pass functions or data structures which store functions. The code passed gets executed an arbitrary number of times and in different contexts.

> macros don't give you thousands of programmers to test your language you made out of macros

One does not need thousands of programmers. That's misguided. Macros can also be tested and used in any size of development context.


I'm not going to engage further with you if you continue to approach this as a religious zealot who will do anything to defend your precious macros, including:

1. Quoting me out of context (literally not even whole sentences) and ignoring anything I say that you don't have a convenient, shallow response to.

2. Assuming I have absolutely no knowledge of Lisp basics like higher-order functions. Yes, I'm aware of higher-order functions, and that's an exception which doesn't fundamentally change my point.

3. Posting irrelevant code snippets with no explanation.

It's quite possible for two people to have different experiences that lead them to have two different beliefs. My experience is that macros don't turn out to be a good tradeoff in the long run in most cases. I'm sure you have some experience that leads you to believe that macros are the second coming of Buddha or whatever, but that's not my experience. And given it's just a feature of a language which isn't that widely used, this isn't some life or death situation where either of our opinions are some sort of moral failing. The stakes are not that high and you can afford to be kind and thoughtful.

If you're willing to approach the discussion in that way I'll be happy to discuss this with you further. Otherwise this will be my last response to any of your posts.


> Right, but the complete argument isn't "there are other ways to do this", the argument is "there are other ways to do this, and nearly every one of them is less error-prone".

I feel like there's such a wide variety of things that you could be trying to do with macros that this is definitely true a lot of the time.

> The same is true for popular libraries and whatnot that ship with the language, which is why I'm not particularly concerned that "defn" or "for" are macros. Those are macros in most implementations, I'm aware, but they're really-well-tested macros because pretty much every Lisp developer to ever write a significant amount of Lisp has tested them. "defn" and "for" aren't broken.

So I'd argue for this differently: defn and for and other pervasive macros aren't safe because they're well-tested, they're safe because they're trivial. You know that (defn name ...) is short for (def name (fn ...)). You know exactly the code that that macro expands to. And you can choose to type (def name (fn ...)), or you can choose to type (defn name ...). Same with short-circuiting "and", or "for", or "+=", or whatever.
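To illustrate just how trivial: here's a sketch of defn as a macro (my-defn is a hypothetical name; the real clojure.core/defn also handles docstrings, metadata, and multiple arities):

```clojure
;; The whole macro: defn is just def + fn.
(defmacro my-defn [fname args & body]
  `(def ~fname (fn ~args ~@body)))

(my-defn add2 [x] (+ x 2))
(add2 40)  ; => 42
```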

It sounds like you're mostly talking about complex, hairy macros -- macros where you don't know exactly what code they expand to. But I dunno, the fact that macros can create monstrosities doesn't mean that you should have to type (def (fn ...)). It's a feature with an extremely broad scope, and can definitely be mis-used.

> I'll point out that JavaScript already had async callbacks and promises (implemented as a library) when async/await was added as a third way to do basically the same thing, resulting in JS codebases that now have half-baked glue to make the three ways work together. I'm not sure anybody was waiting for async/await to do anything that couldn't already be done, and the churn of reimplementing working code to use a new feature didn't do anyone much good. But that's sort of a tangent.

This is a very good point. Destructuring assignment, maybe? Arrow functions? I think we agree that some language features are good additions, even if I picked a bad example :)

> Codebases that use extensive macros to create DSLs eventually become write-only. The power and readability you see in toy examples and in the short run in your own code, rarely plays out in the long run, and when it does play out it's because of extensive testing and work--a lot more work than Bob and I have the bandwidth for.

I dunno! I can certainly see how an extensive macro-based DSL could degrade to something write-only.

I don't know when the long run starts, but my experience with macros is that they're just a huge productivity benefit that I wouldn't want to give up. I haven't seen a codebase degrade into an idiosyncratic mess -- perhaps, in part, because there's a lot more friction to writing macros in OCaml than in Clojure, so macros are only used where they provide a substantial and obvious benefit. Or perhaps it's that typechecking makes it much harder to write poorly-behaved macros. Or perhaps I just got lucky with my cubicle assignment :)


> So I'd argue for this differently: defn and for and other pervasive macros aren't safe because they're well-tested, they're safe because they're trivial. You know that (defn ...) is short for (def (fn ...)). You know exactly the code that that macro expands to. And you can choose to type (def (fn ...)), or you can choose to type (defn ...). Same with short-circuiting "and", or "for", or "+=", or whatever.

That's probably true of a lot of macros, but I would be surprised if the macros included in Lisp are all that trivial.

And the biggest perceived payoff of macros developed in user-space is going to be the macros that aren't trivial.

> This is a very good point. Destructuring assignment, maybe? Arrow functions? I think we agree that some language features are good additions, even if I picked a bad example :)

Sure, I see your bigger point, and arrow functions are a good example of it. I mean, obviously you can write `function(foo) { ...; return bar; }` and then use bind() to fix the fact that it doesn't close around the right things at all, but arrow functions are so clearly less error prone that there's no real argument that they aren't an improvement.
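To make the bind() pain point concrete, here's a small sketch (the `tracker`/`labelOld`/`labelNew` names are illustrative, not from the thread): a plain `function` callback gets its own `this`, so pre-arrow code had to bind() it by hand, while an arrow function closes over the enclosing `this` lexically.

```javascript
const tracker = {
  prefix: "item:",
  labelOld(items) {
    // Plain function: without .bind(this), `this.prefix` inside the
    // callback would not refer to `tracker` at all.
    return items.map(function (x) {
      return this.prefix + x;
    }.bind(this));
  },
  labelNew(items) {
    // Arrow function: `this` is the enclosing method's `this`, no bind() needed.
    return items.map((x) => this.prefix + x);
  },
};
```

Both methods return the same result; the arrow version just can't be broken by forgetting the bind().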

I actually wasn't aware that destructuring assignment was available in JS, but I know it from Erlang and it's great, so I'll have to look into that.
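For what it's worth, destructuring assignment has been standard JavaScript since ES2015. A quick sketch (variable names are illustrative):

```javascript
// Array destructuring, with a rest pattern:
const [first, second, ...rest] = [1, 2, 3, 4];

// Object destructuring, with a default for a missing key:
const { host, port = 8080 } = { host: "localhost" };

// It also works in function parameters, Erlang-pattern-match style:
function describe({ host, port }) {
  return `${host}:${port}`;
}
```

It's not full Erlang-style pattern matching (there's no match failure or multiple clauses), but it covers the common "unpack the shape you expect" cases.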

This is something I think about a lot because I'm writing a compiler/interpreter for my own programming language: programming languages often introduce too many features before they're really thought out well enough--meanwhile lots of those features would be just fine as libraries. The result is an inconsistent language with lots of ways to do the same thing that don't play well together. C++ has this problem so badly that they've developed toolsets for subsetting--i.e. choosing the set of features of the language you use and returning errors if you use features outside that set. And then there's stuff like C which just solves that problem by rarely adding features.

The most powerful languages, I think, tend to be ones that didn't add a ton of features, and made good choices on the features they did add. For one example, I think Erlang got its concurrency features really, really right, and a lot of other languages that went with different threading models are going to regret it and end up bolting on an Erlang-style model later.

> I don't know when the long run starts, but my experience with macros is that they're just a huge productivity benefit that I wouldn't want to give up. I haven't seen a codebase degrade into an idiosyncratic mess -- perhaps, in part, because there's a lot more friction to writing macros in OCaml than in Clojure, so macros are only used where they provide a substantial and obvious benefit. Or perhaps it's that typechecking makes it much harder to write poorly-behaved macros. Or perhaps I just got lucky with my cubicle assignment :)

I dunno! I haven't written much OCaml, though I've written some F# which is supposed to be pretty similar. I do think making powerful-but-dangerous features harder to use can be a good strategy for dissuading their use except when they're really needed, so you might be on to something there.


There's a stronger abstraction than "generalized macros" and the author of this article missed it.


enlighten us please. I'm really curious.


Maybe it's Racket grammars? (aka syntax-rules)


The author is reaching for a code walker.


care to enlighten us?

