Re: Resolved: NOT (w/ informal proof). --Was Re: Static vs. Dynamic typing (big advantage or not)---WAS: c.programming:OOP and memory management
From: cstb (jas_at_cruzio.com)
Date: Wed, 25 Aug 2004 00:39:03 -0700 To: "Dmitry A. Kazakov" <firstname.lastname@example.org>
"Dmitry A. Kazakov" wrote:
> On Mon, 23 Aug 2004 18:24:57 -0700, cstb wrote:
> > In OO, it is (or should be) sensible to refer to a particular
> > set of messages as a "type", because our only available operations
> > are "assignment" and sending a message to an object.
> That is type interface.
If you like. I call it "protocol".
> Assignment is no different from any other "message".
Fine. Then all we can do is send a message,
the selector for which is an element of some protocol.
> All these are primitive operations defined on the type. No need
> to mention objects, BTW.
Yes, there is a need to mention objects.
Objects are instances of a class.
"Types", a.k.a. "protocols", a.k.a. "interfaces"
are named sets of method selectors.
But if what you mean is that you could define a similar
notion of type and also apply it to non-objects
(e.g. a set of function names), then sure - you could do that.
> > The details
> > of how code actually operates on memory locations "owned" by the
> > receiving object are completely determined by the receiving object
> > itself - memory layout and the intended interpretation of bits is
> > encapsulated within the object.
> Wrong. It is defined by *the* type.
Beg your pardon. These details cannot be considered as "defined"
by the notion of type discussed herein, because said notion
is just a set of symbolic selectors (or method names, or
function names, if you like). I mean that quite literally:
A set (no duplicates) of symbolicNames.
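To make that literal reading concrete, here is a small Python sketch (Python standing in for Smalltalk; the protocol name `DRAWABLE` and the helper `conforms_to` are illustrative, not from any library). A "type" is nothing but a set of selector names, and conformance is just "responds to every selector":

```python
# A "type" in the sense discussed here is literally a set
# (no duplicates) of symbolic selector names -- nothing more.
DRAWABLE = frozenset({"draw", "bounds"})

def conforms_to(obj, protocol):
    """An object satisfies a protocol iff it responds to every selector."""
    return all(callable(getattr(obj, name, None)) for name in protocol)

class Circle:
    def draw(self): ...
    def bounds(self): ...

class Point:               # responds to only one of the two selectors
    def bounds(self): ...

print(conforms_to(Circle(), DRAWABLE))   # True
print(conforms_to(Point(), DRAWABLE))    # False
```

Note that nothing in this definition mentions memory layout, class, or hierarchy - which is exactly the point.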
> > The only possible "error" we can make,
> > when referring to an object, is that of sending a message which the
> > object does not understand.
> In a statically typed language you cannot make that error, because you are
> not allowed to apply *wrong* interfaces. It is a contract violation.
Right. And the point of my argument is that it is *incorrect*
(as in 'unnecessarily limited') to claim that sending a message
to an object is a-priori an error whenever the receiving object
does not have a statically bound method implementing that message.
Now - do you have an implementation of such a method?
No. But you could still accomplish the task, right?
( You won't of course. But the point is, you *could*.
You could intercept the method call, and replace
it with something that would achieve the intended result.
In this hypothetical case, one presumes you would
construct the replacement solution "dynamically",
and then activate it. But one could imagine another
scenario - in which you knew I would be sending
a message, but did not know what I would name it.
You might implement a method>>doesNotUnderstand: aMessage
in which you accomplish what you know I will intend,
and when I actually do send the above message,
your DNU method sends me twenty bucks.
Yes, I know, this is anathema to a statically bound thinker,
but if you let your imagination go, I *know* you will find ways
to utilize such a thing, ways that do not seem wrong at all
even though this example probably does. Just consider it
for a while. Think about ways to use a proxy. Think about
places where the actors involved in the proxy are really 100%
general - the code must work for *any* type. )
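In Smalltalk the hook is doesNotUnderstand:; a rough Python analogue (via `__getattr__`, which fires only when ordinary lookup fails - all names here are illustrative) shows a fully general forwarding proxy that works for *any* receiver type:

```python
class LoggingProxy:
    """Intercepts every message this proxy does not itself implement
    and forwards it to the wrapped target -- the moral equivalent of
    implementing doesNotUnderstand: in Smalltalk."""
    def __init__(self, target):
        self._target = target
        self.log = []

    def __getattr__(self, selector):
        # Reached only when normal attribute lookup fails, i.e. when
        # the proxy "does not understand" the message.
        self.log.append(selector)
        return getattr(self._target, selector)

p = LoggingProxy([3, 1, 2])
p.sort()                       # the proxy has no sort; the list receives it
print(p._target)               # [1, 2, 3]
print(p.log)                   # ['sort']
```

The proxy never enumerates the messages it handles - it could not, since it must work for messages whose names are not known until they arrive.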
> > And yet even then - it is the object
> > itself that decides what to do about the "error". It may actually
> > forward the "erroneous" message to some other object (without ever
> > needing to have an implementation of a method for that message).
> > We can not know whether a message is erroneous when sent
> > - it is not our business to know - only the receiver can know that.
No, just a different perspective than the one you're used to.
> Contracts are imposed on *both* sides equally. You are trying to
> treat bugs and errors as if they were same things.
No, that isn't what I am trying to do.
I am trying to describe "complete and total encapsulation",
what you could infer from actually being that strict,
and how applying the constraints of such inferences
might result in something valuable.
In a way that you will be able to get hold of,
even though you don't much care for it yet.
> They are not. There is no need to handle bugs in a program, it is the
> compiler's business. That's the difference between dynamically and
> statically typed languages. The latter treat a wider class of faults as bugs.
Yes! I absolutely agree -
except you'll need to replace that word "faults"
with the word "circumstances".
Statically typed languages make perfect sense
if there really is no possible way to change any of those
circumstances, so that they are no longer "faults".
It turns out, there are ways to do that.
And the result is *more power*.
I do not mean "more freedom, of the sort that the undisciplined
will enjoy", which only leads to *more undetected bugs*.
I mean "more power", with no loss of rigor, and with
a greater ability to specify, reason about, and enforce
constraints, in the DBC sense, if you like.
> > Finally, since two completely distinct objects, of entirely
> > different classes in entirely different hierarchies can in fact
> > react properly, and consistently to the same message, "type" can
> > really only be associated with "the set of messages which might
> > be sent to a particular *parameter* of a message, i.e. a protocol.
> Are we talking about different implementations of the *same* type or about
> incidental similarities between *different* types?
Ah - this depends entirely on the definition of "type".
Remember, mine is "a set of symbolic method names".
Therefore, I mean: "the same type".
If we used a different definition of type (type = class),
then I would appear to be speaking of incidental similarities
between different types. But I'm not. It just looks that
way if you peer at it through the wrong definition of "type".
> > Associating "type" with the class of an object is, quite literally,
> > a "gross approximation" of the set of messages which might be sent
> > to a particular *parameter* of a message. This notion of type
> > is sufficient, because the actual protocol utilized is *always*
> > a subset of the set which is synonymous with a "class". But this
> > notion is *not* necessary, and is, in general, an over specification,
> > which needlessly constrains a parameter to "class", rather than
> > to the (usually much smaller) actual protocol relied on.
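A later Python feature happens to express exactly this distinction - `typing.Protocol` constrains a parameter to the selectors actually relied on, structurally, rather than over-specifying it to a class. A sketch (the protocol and class names are invented for illustration):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Closeable(Protocol):
    # The *actual* protocol this code relies on: a single selector,
    # not an entire class or hierarchy.
    def close(self) -> None: ...

def shutdown(resource: Closeable) -> str:
    """Relies on exactly one message; any conforming object will do."""
    resource.close()
    return "closed"

class Socket:                  # no common ancestor with anything above
    def close(self) -> None:
        pass

print(isinstance(Socket(), Closeable))   # True -- structural, not nominal
print(shutdown(Socket()))                # closed
```

Constraining `resource` to a concrete class would be the "gross approximation"; the one-selector protocol is the necessary and sufficient contract.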
> Smaller protocol = Constrained type = *other* type. Do it *explicitly*.
You can if you choose - but you give up power, literally.
Try setting your point of view aside, and "suspend your disbelief"
just for a few moments. Reread my earlier post "in its entirety",
try to find a way to look at it which is meaningful, and consistent.
I know your way - I know the benefit of "A discipline of programming".
But the usual mapping from that to Objects is erroneous - err, incomplete.
Yes, Ada is a far, far better mapping than C++ or Java, but still not quite there.
Too much procedural influence remains. Just look at the BNF -
the difference in complexity is a strong suggestion that something has gone
terribly right in Smalltalk.
Start there - Objects done right. Now, add contracts back in, slowly,
as you figure out how to do it, but without breaking anything else.
(See e.g. "Strongtalk", as one way to do it. "Protocols" is another).