Re: Know any good OOA/D book

On Nov 24, 12:32 pm, topmind <topm...@xxxxxxxxxxxxxxxx> wrote:
johnzabro...@xxxxxxxxx wrote:
On Nov 23, 1:08 am, topmind <topm...@xxxxxxxxxxxxxxxx> wrote:
You may be interested in this: my table-oriented version of Robert
Martin's payroll example:


However, you seem to misunderstand my motivation slightly.  Perhaps as
a result, you are tailoring your message incorrectly.

Regardless of the approach I use, for large-scale software, I need to
stand on the shoulders of giants.

One trick is to *not* make large-scale software. Break it up into
pieces. RDBMS are very helpful for doing such. The RDBMS acts like the
Nile while the smaller apps are like villages on the Nile's edge.

I won't claim that all domains are subject to this, but most biz apps
I've seen are. Some tend to make big monolithic apps for political
reasons or habit, not out of sound thinking. Maintenance labor
generally does not scale linearly with app size. Thus, you usually
save more by keeping the apps fairly small. Sharing between the
teams would be by mutual convention, not out of a forced central
mandate.

Non-technical reasons dominate technical reasons.

Large applications are often created due to how systems are adopted.
In the words of Phil Greenspun, who is infinitely wiser, wittier and
arguably more successful than Paul Graham, "It is cheaper and better
to adopt a false religion than remain a skeptical atheist, seeking
after truth oneself." Most capital-A Architects at Fortune x00
companies make their decisions based on their personal experiences,
often reflective of the common practices at other Fortune x00
companies.

Furthermore, the in-house technical staff salaries at these companies
are dwarfed by other departments' salaries, such as marketing and
sales. Such a payroll structure makes it cheap to build a system
using layers as the sole arches in the architecture, and the overall
system is therefore highly structured, often at the cost of decreased
dynamic behavior. If you look at these systems, their application of
dynamic behavior is limited to a narrow set of "cross-cutting
concerns" that are "woven" into the program logic. Examples
include security, business rules validation and logging. These are
referred to as aspects, and discussed in industry as "separate
concerns" apart from the "business application server" and its core
tasks. However, often they're the most object-oriented parts of the
system. What is sad is that without these aspects as arches in their
architecture, there would be a lot of duplicate code and a lot of
business logic mixed in with other concerns. Aspects save them from
the tyranny of their layers by providing some geodesic shunts to prop
the otherwise monolithic structure up.
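
To make the cross-cutting idea concrete, here is a minimal sketch in
Python (my own illustration; the names and the payroll example are
assumptions, not taken from any system discussed here) of a logging
aspect woven around business logic with a decorator:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("payroll")

def logged(func):
    """A tiny 'aspect': weaves logging around any business function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.info("entering %s args=%r", func.__name__, args)
        result = func(*args, **kwargs)
        log.info("leaving %s -> %r", func.__name__, result)
        return result
    return wrapper

@logged
def compute_pay(hours, rate):
    # Core business logic stays free of the logging concern.
    return hours * rate

print(compute_pay(40, 15.0))  # 600.0
```

The point is that the logging concern lives in one place instead of
being duplicated inside every business function.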

Even so, size is not the best indicator of quality. It's not safe to
say that larger is better or smaller is better, because what really
matters is whether parameters are hardcoded and whether the data model
is easy to change. It's unfortunate that many books discuss
maintenance and basic ideas like coupling and cohesion, but few
discuss how to design data models resilient to evolution. UK
researcher P.M.D. Gray is a noted exception. His research has focused
almost entirely on how to design systems with evolution in mind, and
he doesn't shy away from the issues involved in integrating legacy
systems the way some academics do.

I need to be able to read large source code bases, derive their
strengths and weaknesses at an architectural level, and figure out how
to use existing code as "source code capital" to feed the investment
in my project.

Another desire of mine is to make sense of messes, as a consultant,
and clean it up -- quickly.

I've learned that reading a lot of prose, such as yours or even
Robert Martin's, doesn't help me nearly as much as understanding, at
an architectural level, either what went wrong or why I still can't
use a programmer's solution even when he made every right decision at
every step of the way.  A good example of this is the impedance
mismatch
between batch compilers (which are optimized for throughput) and
interactive compilers (which are optimized for responsiveness to real-
time changes in the source code, such as a user typing inside an
IDE).  A batch compiler provides no hooks for interactivity, because
doing so would be negligent design and compromise the quality of the
product, which is primarily based upon speed.  As a self-professed
"procedural/relational" programmer, this should be an eerily familiar
shortcoming to you: SQL stored procedures tend to do a poor job
reporting error messages, even on development SQL-based DBMS servers.
This is because the stack frame overhead for stored procedures is
significant enough that generating a full stack frame for error
reporting would reduce throughput.  A Pragmatic Programmer would
write an extension to SQLServer PowerShell that, on error, for a given
command where batch abort switch is specified, will automatically
launch Google's Search API to return the URIs and previews of the top
N results, fed into a bidirectional pager.  As he/she diagnoses the
errors, he adds to a persistent hash table notes on how to debug
similar error messages in the future.  It is not perfect, but it is
better than stumbling around in the dark.
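
A minimal sketch of that last idea, in Python rather than a SQL Server
PowerShell extension (the file name and note contents are my own
hypothetical illustration): a persistent hash table mapping error
messages to debugging notes, consulted before falling back to a web
search.

```python
import json
import os

NOTES_FILE = "error_notes.json"  # hypothetical location for the notes store

def load_notes():
    """Load the persistent error-message -> debugging-notes table."""
    if os.path.exists(NOTES_FILE):
        with open(NOTES_FILE) as f:
            return json.load(f)
    return {}

def save_note(notes, error_message, note):
    """Record how a similar error was diagnosed, for next time."""
    notes.setdefault(error_message, []).append(note)
    with open(NOTES_FILE, "w") as f:
        json.dump(notes, f, indent=2)

def diagnose(notes, error_message):
    """Return prior notes if we've seen this error; else suggest a search."""
    if error_message in notes:
        return notes[error_message]
    return ["no prior notes; search the web for: " + error_message]

notes = load_notes()
save_note(notes, "Msg 208: Invalid object name", "check schema prefix on table name")
print(diagnose(notes, "Msg 208: Invalid object name"))
```

Not perfect, as I said, but it accumulates diagnostic knowledge instead
of losing it after each debugging session.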

As an aside, I enjoy reading or hearing Foxpro programmers talk about
"their way" of doing things.  But I'd be remiss if I put it into
practice, because it is not nearly automated enough for me at the
architectural level.  It relies too much on manual discipline and
programmer skill to organize complexity.

I doubt there is any way NOT to rely on skill and experience to get a
good system. It's a minimum requirement. Messes can and are made in
OOP all the time. In what other profession do you get the best output
without having the best at the helm? (Other than cutting-edge
breakthroughs that require too much risk for the established to
stomach.)

Non-technical problems dominate technical problems.

Most industries that are failing today are doing so because the
employee culture has not changed to reflect the changes in the real
world. The auto industry is a good example. 30 years ago, if you
tried to start a company and planned to model your benefits package
after GM's, then there would be no way you could survive.
Furthermore, Toyota crushed GM with superior management rather than a
superior technical workforce.

Dr. W. Edwards Deming, the American consultant who taught Japanese
companies like Toyota about quality, believed most companies were very
poor at human resource management for a variety of reasons. One of
Deming's pet peeves was management evaluating worker performance
without monitoring the performance of the system. Since the system
can only be improved by management, Deming despised managers who
focused solely on evaluating workers, because it actually erodes
quality by questioning workers' performance even when workers are
performing within the system's natural statistically random
variations. Some common effects are: (1) Workers can begin to
compensate in ways that actually introduce more defects into the
system. (2) Worker morale is lowered, and management sees this and
institutes morale improvement tactics instead of strategies to fix the
system itself.

I am a toolsmith, and have a deep and abiding affection for great
tools.

I would never hand an average developer a chisel and after the project
ask, "Why aren't you Michelangelo?" Instead, I would build tools for
them to use that compensate for their weaknesses. Often, their
weaknesses tend to actually be the system's weaknesses as well. Once
I address the problem at the system level, then I've effectively
lowered the entry barrier for new, average developers. Moreover, as
the system allows developers to become more efficient than developers
at other companies, I can spend more time mentoring these average
developers and turn them into superstars. Contrast this with the
general industry perception that it is exceedingly difficult to find
good junior developers.

We recently hired a chemical engineer who resigned from a big
pharmaceutical company, and wanted to pursue a different career path.
We prefer smart people above all else, especially people with fire in
their belly. After two months of training and addressing simple
tasks, he is helping to improve the system as well. Typically, it is
hard to hire experienced technical people, because programming is a
young man's game. The average career in high tech is 6 years. Most
programmers either retreat into management, become consultants for
better pay and flexible hours, or change careers entirely. A software
engineer in high-tech with 15 to 20 years of experience is rare, and
usually it is big technical companies who appreciate them most. In my
opinion, you need a good system, not good worker performance
evaluations, to fight this market place pressure.

Paul Graham has suggested that the purpose of OOP is to keep mediocre
bored drooling programmers from screwing up too many things by herding
them with bureaucratic interfaces and scaffolding. Maybe OO is indeed
better at this. I wouldn't really know because I leave such
organizations when I get a chance.

I would prefer to learn from Paul Graham by reading his code instead
of his opinions on culture.

There is a myth in our profession that programmers share the same
psychology. It would be more apt to say that some people just
practice scientific thought and others are missing it. College is
supposed to teach people the role of scientific thought, but as
Richard Feynman notes, some do not pick up on the lessons because
they're not explicitly stated and it is hoped people will learn it by
example. However, within a scientific community, there is no shared
psychology. Shared psychology in science would be "consensus
science", which isn't science at all. The power of science is that it
only takes one person with the right idea to be right, and all the
others can be wrong despite their consensus.

I do use a data dictionary, but my systems are still object-oriented.
I consider a data dictionary to be orthogonal to object-orientation.
In fact, if you look at the "cutting edge" programming languages, then
you'll see they effectively use some form of a data dictionary, too.
For instance, Clojure's approach to managing identity and state, and
therefore concurrency, separates immutable values from change over
time.

A data dictionary allows me to model my objects in a very agile way,
and makes refactoring my object model much easier.  However, AND THIS
IS VERY IMPORTANT, I define a set of architectural constraints that
allow me to specify conformance to OO.  For me, OO is just a synonym
for "well-designed software following a specific set of architectural
constraints".  I've developed these constraints over years of
developing software, reading about software, studying others' code,
and generally being interested in being far more productive than the
average programmer.  I consider it the programming equivalent of a
Personal Efficiency Program -- not perfect, but guaranteed to get your
mind off the wrong things and focused on Getting Things Done Now and
in the Future.
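
As an illustration of the general idea (a sketch of my own in Python,
not the actual tooling described above, which is under NDA; all field
names are invented): a small data dictionary describing fields in one
central place, from which object attributes and validation are derived.

```python
# A toy data dictionary: one central place describing each field.
# All names here are illustrative.
DATA_DICTIONARY = {
    "employee_id": {"type": int,   "label": "Employee ID"},
    "name":        {"type": str,   "label": "Full Name"},
    "hourly_rate": {"type": float, "label": "Hourly Rate"},
}

def make_record_class(class_name, dictionary):
    """Derive a class from the dictionary, so the object model
    follows the dictionary instead of being hand-coded twice."""
    def __init__(self, **kwargs):
        for field, spec in dictionary.items():
            value = kwargs.get(field)
            if value is not None and not isinstance(value, spec["type"]):
                raise TypeError(f"{field} must be {spec['type'].__name__}")
            setattr(self, field, value)
    return type(class_name, (object,), {"__init__": __init__})

Employee = make_record_class("Employee", DATA_DICTIONARY)
e = Employee(employee_id=7, name="Ada", hourly_rate=15.0)
print(e.name, e.hourly_rate)  # Ada 15.0
```

Refactoring becomes easier because adding or renaming a field is one
edit to the dictionary; the object model follows automatically.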

Code example?

That would be breaking NDA.

I'm scholarly and also humbly willing to admit I suck at engineering
and constantly want to do better. I usually tell strangers I'm not
even a programmer, and that they made me a janitor after "the incident
with the laptop at the water cooler".

Most of my ideas are aped from a wide area of disciplines within
computer science and software engineering.

I feel most systems have poor *integration of concerns*, in part
because they use methodologies that focus on process instead of
tools. Most object development methodologies presume you will
primarily use layers as the arches in your architecture. However,
layers are a poor way to architect systems, because they emphasize
static structure over dynamic behavior. Object technology intends to
allow developers to reconcile a design paradox: stability in the face
of evolving requirements. The spirit of object-oriented computing is
evolution without upheaval.

I think you'll agree a data dictionary supports evolution well. A
data dictionary is also a tool, instead of a process. Unfortunately,
the practice of building a data dictionary is a technique that began
disappearing from the programmer's tool belt around the same time
OOPSLA started. I own four books on building data dictionaries,
though, and they complement my object technology bookshelf well.

There are tools in addition to data dictionaries for managing
complexity, but they are also underutilized.

By the way, deprecate your notes on Control Tables -- they are a
throwback to Process Activation Tables in the 1960s.  Jonathan
Edwards, a research fellow at MIT, presented a much better approach at
OOPSLA two years ago: Schematic Tables.
Most architectures underwhelm me with their support for specifying
decisions and behavior substitution.  Schematic Tables don't.  Your
"Control Tables" is "not even wrong", but understandable given you
probably wrote it a while ago and just haven't kept up with advances in
object technology.  Keep it around for historical keepsake, but Mark
As Deprecated.

In practice I do not rely on direct control tables that much. They
have their place for certain apps, but one should not get carried
away with them.

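For readers unfamiliar with the term, a control table drives behavior
from data rather than hard-coded branching. A minimal sketch in Python
(my own illustration of the general technique; the payroll rules are
invented):

```python
# A control table: each row pairs a condition with an action, so
# behavior lives in data instead of a chain of if/elif statements.
def standard_rate(hours):
    return hours * 15.0

def overtime_rate(hours):
    return 40 * 15.0 + (hours - 40) * 22.5

# Rows are (predicate, action); the first matching row wins.
PAY_CONTROL_TABLE = [
    (lambda h: h <= 40, standard_rate),
    (lambda h: h > 40,  overtime_rate),
]

def pay_for_hours(hours):
    for predicate, action in PAY_CONTROL_TABLE:
        if predicate(hours):
            return action(hours)
    raise ValueError("no rule matched")

print(pay_for_hours(38))  # 570.0
print(pay_for_hours(45))  # 712.5
```

Changing the rules means editing table rows, not restructuring code,
which is where the technique earns its keep.
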
Where most cutting edge techniques are underwhelming me, and where
most prior technology has underwhelmed me, is the ability to create
comments that match the specification.  The only decent way I've found
to do this is through DSLs that are effectively data dictionaries, but
it is a costly process to develop them.

Perhaps if tools made creating such easier. If table-oriented
programming became popular, then more table-friendly tools would
exist. Look at all the effort and complexity put into the big bloated
Object-Relational mappers. Imagine if all that effort was re-focused.

Frans Bouma recently asked Microsoft's Connected Systems Division
people how they were going to make creating DSLs easier. Clemens
Szyperski, of Component-Oriented Programming celebrity, basically
stated that Microsoft doesn't know how to solve this problem:
"Something better, like 'define DSLs by example' could likely be build
on top, once we understand how to do that in the first place."
Clemens also says that the Oslo framework for building DSLs depends on
the developer's ability to "know how to think about and design DSLs
that are essentially data transformers" and that Microsoft will "aim
to offer a smooth story for how to do that". Oslo doesn't even allow
you to develop a general purpose programming language. Oslo is simply
a data transformer. Yet, even with the limited data transformer
language model, Microsoft can't come up with a good scheme that allows
mere mortal developers to specify their own DSLs. See:

Jim Coplien has said it best: Based on current research in development
techniques, expecting DSLs to be the future industry-wide is a
misplaced faith. It's been tried before and it's failed. The basic
problem, Coplien observes, is that you are taking average developers
and making them language designers, compiler writers and toolsmiths.

I do use a DSL to abstract away T-SQL. The DSL is partially visual
and partially textual, and constantly evolving. However, the DSL
itself is supported by lots of tools. I am a toolsmith. Writing
tools does not bother me. In fact, I'm great at building tools to
automate tasks. However, approximately 50% of the world's programming
workforce is untrained in formal methods of any kind.

Also, I do believe there is no greater competitive advantage you can
have in an industry than a tool that fits your business model like a
glove. The biggest disappointment as a toolsmith is building great
but proprietary tools and having to leave them behind to take a
better-paying job offer, risking the loss of all the productivity
shielding I've surrounded myself with.