Re: Yet another 'when to use macros' question....

Majorinc wrote:
In article <1138995027.317422.174060>, kkylheku@xxxxxxxxx says...

The point is that macro expansion is just a "drop in the bucket"
compared to all the other processing that has to be done in order for
Lisp data to go in, and quality machine code to pop out!

Well, you are partly right - but only partly. Although the time
required for macro expansion can be quite low, in the general case

When you're optimizing, there is only so much you can do about the
pathological "general case", right?

Just because pathological worst cases can exist, that doesn't mean we
should shy away from arming ourselves with useful tools like macros.

it is not bounded by a linear (or, in fact, any) function of the
size of the code.

Uh, the /execution/ of that code is also not bounded by any function of
the size of the code.

You keep saying that code that uses only functions is better than code
that uses macros.

This is your reason?

All other processing you described is linearly
related to the size of the code.

I only described the processing vaguely. I would not be so hasty to
jump to the conclusion that compiling code of size N is O(N). Code of
size N could give rise, within the compiler, to a graph structure of
size N, and some algorithms over graph structures of size N are
non-polynomial. Some types of optimizations require the searching of
large spaces.

What you seem to be saying is that you have a great deal of trust in
the compiler being well-written, as well as the macros that come from
the compiler vendor, but that you have less trust in the user-defined
macros. Why? Both of them are just code. Both of them are largely
written in Lisp. Both of them can be compiled (yes macros themselves
compile all the way to native code). Code that has a high computational
complexity can be put into a macro or into a compiler.

Which one are you more likely to have source code for? Which one is
more likely to be under your control? Your macros or the vendor's
macros and compiler?

If you don't trust user-defined macros, why do you trust other
user-defined code?

If you put together some code dynamically, and then you compile it and
run it, or eval it, just once, there is no telling where the most time
will be spent. Will it be spent in macro-expanding? Compiling after
macroexpansion? Or in actually running that code? Who knows?
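
As a sketch of those alternatives (the helper names here are made up
for illustration, not any standard API), the two ways of running a
freshly generated form just once might look like:

```lisp
;; Two ways to run a dynamically built form exactly once.
;; Which is cheaper depends on the implementation and on how
;; much work the form itself does.

(defun run-once-evaled (form)
  ;; EVAL may interpret the form, or may compile it first,
  ;; depending on the implementation.
  (eval form))

(defun run-once-compiled (form)
  ;; COMPILE macroexpands and compiles the form in full;
  ;; we pay that up front, then the execution cost on top.
  (funcall (compile nil `(lambda () ,form))))
```

Timing the two against each other on your actual workload is the only
way to find out where the time really goes.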

So you think that because macroexpansion is unnecessary, you can
eliminate one of these --- without changing any other variables in the
equation?

There are other reasons why macro expansion takes a lot of time.
But why speculate when we can look the horse in the mouth: in the
example I posted, macro expansion of PUSH is responsible for
99% of the total time ... and it can be even worse.

Your "non macro" version used SETQ, which is also a macro! It is not a
primitive.

Like SETF and PUSH, SETQ is a general-purpose assignment operator that
can assign a new value to any place. It's syntactically restricted to
symbols, but a symbol can be a symbol macro that expands to an
arbitrary place, and SETQ has to handle that.

E.g., in:

(symbol-macrolet ((x (cdr y)))
  (setq x 42))

the (SETQ X 42) does the same job as (SETF (CDR Y) 42). The SETF
expander for the CDR place has to be retrieved and called.

In Lisp, there isn't any clearly delineated set of primitives. What is
primitive and what is not is purely an implementation choice. You can
remove just about anything from Lisp and implement it back as a macro.
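
For instance, WHEN need not be primitive; a hypothetical MY-WHEN can
rebuild it as an ordinary user-defined macro over IF and PROGN:

```lisp
;; WHEN rebuilt as a user-defined macro over IF and PROGN.
(defmacro my-when (test &body body)
  `(if ,test (progn ,@body) nil))

(my-when (> 3 2) 'yes)   ; => YES
(my-when (< 3 2) 'yes)   ; => NIL
```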

The only constraint is that you don't remove so many things together
that what is left cannot implement them back. For instance if you
remove every I/O related function, you can't put back I/O since you
don't have any portable way to get into the operating system calls.

How about LET? You think that binding variables over a block of code is
primitive? If you don't have LET, you can use LAMBDA:

(let ((x 3) (y 4)) (+ x y))

can be rewritten as:

((lambda (x y) (+ x y)) 3 4)

A macro can do this rewriting job.
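
A minimal sketch of such a macro (MY-LET is a made-up name, and it
ignores details like declarations and bare-symbol bindings that real
LET supports):

```lisp
;; LET rewritten into an immediate LAMBDA application.
(defmacro my-let (bindings &body body)
  `((lambda ,(mapcar #'first bindings) ,@body)
    ,@(mapcar #'second bindings)))

(my-let ((x 3) (y 4)) (+ x y))   ; => 7, same as the LET form above
```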

So if you generate and compile a million functions a minute, you are
invoking this entire compiling process a million times a minute.

Compiling is good only for parts of code that can be compiled
once and executed many times. In the compile-once, execute-once
case, it is better to use an interpreter.

But which one? The general one built into the language (generate source
and feed it there) or something in your own program? How do you know
it's better? What if that piece of code that is run once contains big
loops and does a lot of processing? Is it always better to interpret
it then?

Suppose I have a regular expression that I only want to use once to
find a single instance of a pattern inside some text. The choice isn't
just between EVAL and COMPILE. The choice is between just interpreting
the raw expression (the string), and analyzing that expression and
turning it into Lisp source code. Then the choice for that code is
between EVAL and COMPILE. So there are three main choices: don't
generate code, do generate code but use eval, and do generate code and
compile it.
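
With a toy pattern language of literal substrings standing in for full
regular expressions (all the function names here are hypothetical),
the three choices might be sketched as:

```lisp
;; Choice 1: interpret the raw pattern directly; no code generation.
(defun match-interpreted (pattern text)
  (search pattern text))

;; Choices 2 and 3: first analyze the pattern into Lisp source ...
(defun pattern-to-code (pattern)
  `(lambda (text) (search ,pattern text)))

;; ... then either EVAL the generated code ...
(defun match-evaled (pattern text)
  (funcall (eval (pattern-to-code pattern)) text))

;; ... or COMPILE it before the single use.
(defun match-compiled (pattern text)
  (funcall (compile nil (pattern-to-code pattern)) text))

(match-interpreted "lo" "hello")   ; => 3
```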

Note that some Lisps, like Corman, don't have an interpreter. Everything
is compiled. EVAL compiles to machine code, and so EVAL and COMPILE do
the same thing. The tradeoffs between EVAL and COMPILE are specific to
the Lisp implementation and maybe to other factors.

Why would you generate a million functions in a minute? Firstly, to
even begin to justify that, these functions would have to be different
from each other.

It is natural for many problems in the AI domain. Think about
theorem proving or satisfiability testing that might rely on
representing logical formulas as Lisp expressions,
genetic programming ...

If you had logical formulas represented as Lisp expressions, don't you
think you'd have control over the syntax of the operators that are
allowed in them? If they were implemented as macros, wouldn't you be
the one implementing them and making them as efficient as possible, if
that mattered? So couldn't you avoid the general cases where
macro-expansion runs with poorly bounded times?
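
For example, if IMPLIES is an operator in your formula language (a
hypothetical one here), you control exactly what its expansion costs:

```lisp
;; A user-defined logical connective whose expansion is one
;; cheap template fill -- nothing pathological about it.
(defmacro implies (a b)
  `(or (not ,a) ,b))

(implies t nil)   ; => NIL
(implies nil t)   ; => T
```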