Re: Migrating ISAM to Relational Database



On Apr 14, 4:29 pm, "Joel C. Ewing" <jcREMOVEew...@xxxxxxxxxxxx>
wrote:
Pete Dashwood wrote:
"Rick Smith" <ricksm...@xxxxxxx> wrote in message
news:1320k5l1kvroc05@xxxxxxxxxxxxxxxxxxxxx
"Pete Dashwood" <dashw...@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message
news:589a52F2g4c1cU1@xxxxxxxxxxxxxxxxxxxxx
[snip]
This has now been posted... Accessing the following link will reveal 3
documents that are worth reading if you are considering migrating ISAM to
RDB....

http://homepages.ihug.co.nz/~dashwood/dashwood/RDBStuff/

Any or all feedback appreciated.
In 4.ISAM2RDB.doc,

1. Page 3, Dealing with OCCURS (Repeating Groups),
items 1 and 3. You seem to disregard the space savings
that ODO and RECORD VARYING provide.

Yes, that's probably true, although I would have done so unconsciously.

My personal opinion (and it is ONLY that :-)) is that these constructs are
just pointless and useless. Unless COBOL dynamically allocates space (and it
doesn't), the only "saving" that is made with ODO is on external media.
Internally, an ODO definition always takes the maximum space that it
could. The compiler has to allocate the maximum because it can't dynamically
allocate space at run time.
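
To make that concrete, here is the sort of layout I mean (names and
sizes are invented purely for illustration):

       01  ORDER-RECORD.
           05  ORD-KEY         PIC X(10).
           05  ORD-LINE-COUNT  PIC 9(4).
           05  ORD-LINE        OCCURS 1 TO 500 TIMES
                               DEPENDING ON ORD-LINE-COUNT.
               10  ORD-ITEM    PIC X(8).
               10  ORD-QTY     PIC 9(5).

In WORKING-STORAGE that 01 always occupies its full 500-entry footprint
(14 + 500 * 13 = 6514 bytes with these PICtures); the DEPENDING ON value
only changes how many of those bytes travel on a WRITE, which is the
external "saving".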

I don't use this construct, and I discourage others from doing so too. A
relational DB allows "tables" of "infinite" dimension (limited only by
available disk space, and that gets cheaper every year), so the external
saving is just unnecessary if you use an RDB, anyway.

Never needed it; don't use it. :-)

RECORD VARYING... may have some marginal use and is certainly important when
processing legacy files.
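
A minimal sketch of the file-definition side, reusing the layout above
(again, names and byte counts are only illustrative, and this follows
the way I recall the mainframe compilers handling it):

       FILE SECTION.
       FD  ORDER-FILE
           RECORD IS VARYING IN SIZE FROM 27 TO 6514 CHARACTERS
               DEPENDING ON ORD-REC-LEN.
       01  ORDER-RECORD.
           05  ORD-KEY         PIC X(10).
           05  ORD-LINE-COUNT  PIC 9(4).
           05  ORD-LINE        OCCURS 1 TO 500 TIMES
                               DEPENDING ON ORD-LINE-COUNT.
               10  ORD-ITEM    PIC X(8).
               10  ORD-QTY     PIC 9(5).

       WORKING-STORAGE SECTION.
       01  ORD-REC-LEN         PIC 9(4) COMP.
      *> A READ places the length of the record just read in ORD-REC-LEN;
      *> before a WRITE the program sets it (14 + 13 * ORD-LINE-COUNT here),
      *> so only the occupied entries are actually written.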

Like so much in this business, it depends.

If ODO saves a significant amount of raw file space to store the data on
external media, this can have a number of beneficial effects that go well
beyond the mere cost of your DASD media:

(1) savings in processor time, I/O activity, media, and real time to back
    up the external data for Disaster Recovery;
(2) savings in the cost of disk media at a DR recovery site (which may be
    expensive or difficult to increase depending on your contract);
(3) savings in processor time, I/O activity, and real time to reorganize
    or rebuild the database;
(4) savings in processor time, I/O, and elapsed time to sequentially
    access a significant percentage of the database, because more used
    bytes are transferred with each physical block read (rough numbers
    below);
(5) savings in the number of buffers required (affecting the size of the
    working set and real storage requirements) to hold the same number of
    records in cache and get acceptable response time for random access.
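
To put rough numbers on item (4), purely for illustration: a record
format that allows up to 500 thirteen-byte repeating entries but averages
40 in use carries roughly 520 bytes of repeating data per record instead
of 6,500. That is roughly one twelfth of the bytes to pump through the
I/O subsystem on every sequential pass, backup, and reorg.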

If you are in an environment where you are never constrained by
processor time, real memory, I/O response times, daily batch windows,
DASD availability, or DR costs, then by all means ODO is irrelevant. In
all other cases, one looks for the major resource hogs, or "loved ones"
with poor response times, and does whatever it takes to address the
problem, including use of ODO where appropriate.

We too have had COBOL programmers who hated to deal with variable length
records. But, the marginal extra cost to manage variable length records
within a COBOL program can easily be insignificant when compared with
what it costs to pump unused bytes through the I/O subsystem over and over.

COBOL does not bother to dynamically allocate storage to ODO items at
run time, because with virtual storage there is no significant saving
in allocating COBOL ODO data items at anything less than the max
required. Unused portions of a large array do not contribute to the
working set of the program or the real storage required to execute. In
the z/OS environment, real 4KiB pages wouldn't even be assigned to
portions of a large array until the first reference required it. So
long as you don't do something silly, like initializing the entire array
in advance just in case you might need all of it, then the cost of
unused portions is essentially zero in that environment.
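
In other words (an invented fragment, just to show the distinction):

       01  WS-ENTRY-COUNT      PIC 9(5) VALUE 0.
       01  WS-BIG-TABLE.
           05  WS-ENTRY        OCCURS 100000 TIMES
                               INDEXED BY WS-IX.
               10  WS-NAME     PIC X(30).
               10  WS-AMOUNT   PIC S9(7)V99 COMP-3.

      *> ... later, in the PROCEDURE DIVISION ...

      *> The "something silly": touches every page of the table, used or not.
           INITIALIZE WS-BIG-TABLE

      *> Touches only the occurrences actually loaded, so the unused
      *> trailing pages never join the working set.
           PERFORM VARYING WS-IX FROM 1 BY 1
                   UNTIL WS-IX > WS-ENTRY-COUNT
               INITIALIZE WS-ENTRY (WS-IX)
           END-PERFORM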

Although it's probable your remarks on ODO were only intended to apply
to record formats used in I/O, I want others reading this to be clear
that there are other cases in COBOL where ODO is the only reasonable way
to go. One case where ODO should ALWAYS be used is for a sorted data
item array with a variable number of items that will be used repeatedly
with a SEARCH ALL. Not only does proper setting of the "depending on"
variable eliminate the need to initialize unused trailing items in the
array, but it guarantees the resulting binary search uses the minimal
number of compares for the search. For arrays whose max size is much
greater than their average usage, failure to use ODO here can have a
significant negative impact on performance.
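
For anyone who hasn't used the combination, a minimal sketch (all names
invented):

       01  WS-RATE-COUNT       PIC 9(4).
       01  WS-RATE-TABLE.
           05  WS-RATE-ENTRY   OCCURS 1 TO 5000 TIMES
                               DEPENDING ON WS-RATE-COUNT
                               ASCENDING KEY IS WS-RATE-CODE
                               INDEXED BY RT-IX.
               10  WS-RATE-CODE   PIC X(6).
               10  WS-RATE-PCT    PIC 9V9999.
       01  IN-RATE-CODE        PIC X(6).
       01  WS-RATE-PCT-FOUND   PIC 9V9999.

      *> Load the entries in WS-RATE-CODE order and set WS-RATE-COUNT to
      *> the number loaded; SEARCH ALL then binary-searches only the
      *> occupied occurrences, with no need to initialize the rest.
           SEARCH ALL WS-RATE-ENTRY
               AT END MOVE ZERO TO WS-RATE-PCT-FOUND
               WHEN WS-RATE-CODE (RT-IX) = IN-RATE-CODE
                   MOVE WS-RATE-PCT (RT-IX) TO WS-RATE-PCT-FOUND
           END-SEARCH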
...

--
Joel C. Ewing, Fort Smith, AR jREMOVEcCAPSew...@xxxxxxx

Joel

I agree with you, though I think that these days file compression on
the fly is also available in some environments and is probably much
more effective at reducing file sizes. I quite agree with your
comments on minimising search compares, though I am not so sure about
only initialising the parts of a table in use. While I see the
benefits there, it also sets a trap for the unwary maintenance
programmer at 3.00 a.m. on a call-out, though I can also see that
clear and/or appropriately documented code would minimise the risk.

Have you seen John Piggott's proposal, which takes the topic quite a bit
further and is now incorporated in the draft standard for the next
revision? It is very similar to the technique used by the Pick O/S,
though its use for files was left as a possible future enhancement.
Then people would be able to truly talk about COBOL files, as this
format would then only be readable by COBOL programs on non-Pick
operating systems, though I suppose suppliers might also write
some utilities for them. It would make reading dumps harder and
interpretive debuggers harder to implement and follow.

It would be of great benefit to programs using massive data structures,
though for general use it would probably add unnecessary complexity.

Robert
