Re: 3270 emulator?
- From: "Pete Dashwood" <dashwood@xxxxxxxxxxxxxxxxxxxxxxxxx>
- Date: Fri, 24 Jun 2011 16:34:13 +1200
Nomen Nescio wrote:
"Pete Dashwood" <dashwood@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
Fritz Wuehler wrote:
"Pete Dashwood" <dashwood@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
It's sad if they would like to move onward but can't.
I guess I don't feel "sad" about bad business decisions ;-) I might
have several emotions but sad isn't one of them!
I feel sad because of the waste.
Waste is an intrinsic part of big business. There's no way around it,
and it's not necessarily bad.
There ARE ways round it, and minimizing it is far better than just accepting it.
People still got paid for doing the wrong
thing. They learned and didn't make the mistake again or they didn't
learn and got fired or stayed on. Who knows. But in the end money
changed hands, people worked, etc. If that isn't economy in action I
don't know what is.
And, as an IT professional, I don't like to see companies locked in
to solutions for purely technical reasons.
I don't know of a better reason to stay locked in. Marketing?
Don't be locked in at all, for ANY reason. Why would you let the tail wag
the dog? Your business is the dog. Your admin systems are the tail. You
develop or purchase admin systems to meet your needs. In today's technology
this doesn't limit you to a particular platform or language.
In principle your statement is a good one, everything should
be up for consideration whenever you have a new system to design,
but there is also the idea of using what works and not spending
unproductive time trying other angles that turn into blind alleys or
have other issues associated with them.
But if you extend that argument you would never use anything new and
would therefore miss out on the benefits associated with it. :-)
No, you let the other people try the new stuff and you carefully study
what they got and lost by doing that.
But under your scenario, if they all took your advice, there would be nobody
trying anything. Fortunately, the pressures of real business lead people to
look for better solutions. However, I agree that your description is
applicable in SOME companies. (And I have worked in some of them)
(You may have overlooked the fact that your competitors are not likely to be
forthcoming on details of their experience with a certain system so you can
benefit from it.)
All of the largest accounts I
know are intentional and careful late adopters.
We move in different worlds. I agree this world exists but it isn't
something I'd want to be part of.
They are in the
business of managing other people's money and they don't take chances
they can avoid. One of the most important ways they do that is simply
not to run after new technology
for the sake of new.
I don't think anybody does that. The rewards and risks are usually examined
carefully before moving to new technology.
If they can support their business running
backlevel or very backlevel software on backlevel hardware that's
exactly what they
do, on purpose.
I respect their right to do that and I don't think they are "wrong" to do
so. BUT, it's not a world I want to be part of.
It might take 2 or 3 years to upgrade their operations
because of how much testing and signoff is required. It might take 6 months
or a year to put a single new piece of code into production once it's
been written or purchased. Those companies make careful studies of
the published reports and they are in close contact with their
hardware and software vendors, many of us actually have offices at
customer sites just like IBM
did in the past. It all depends on your risk model so I want to
present another side that people might not be familiar with.
Sometimes there is a risk, but the risk is usually calculable,
and the rewards can be balanced against it.
The risk for a big operation is mostly incalculable and has some
important intangibles like name/quality recognition etc., which is why
they are so cautious.
Just because you are risk averse does not mean that risks cannot be
calculated. It is a simple mathematical fact that they can be. If you don't
WANT to do it, that is your prerogative, but there is a whole branch of
mathematics which deals with it.
Some of them worry more about reputation and loss of
future revenue and client retention than they do about a failure itself.
Companies, in many respects, are a lot like people. (Maybe because they are
comprised of people...) If the "culture" of a company reflects insecurity
and low confidence that is really not the fault of the IT department.
However, if it filters through to IT then the systems produced will be
tentative and fragile.
Many of them know exactly how much every second of
down time costs them (and the numbers are so outrageous I won't even
attempt to post them here) and the money isn't even their biggest
concern. Their reputation
is worth more than their money, and they don't want outages or loss of
service. At all.
I think MOST companies (not just the ones in the world you are describing)
would feel that way.
How they go about achieving that is where we are diverging.
The way they make that happen is a careful balance of
running old code and hardware and a long chain of testing and code
promotion. IBM is making running old hardware harder now than they
did in years gone by. Some of these companies have the muscle to make
IBM bend and some don't.
I agree that is ONE way. I described another one which can also achieve that goal.
I firmly believe that application requirements (now, and as far as
foreseeable, in the future) should drive system design. It should
not be driven by things like: "We are a COBOL shop and you can't do
that in COBOL..." (just as an example). The technology is not
important, beyond the fact that you have certain databases already
existing and these will be utilised by whichever technical approach
you decide to adopt.
Yes and no. There are other factors like what do we have today, how
does it cost us in people, software, and hardware. How much will we
have to pay for different people, software, and hardware to upgrade
to the next
thing, what will it buy us in terms of real dollars, how long will it take
to pay for itself, and most of all, what if it doesn't work?
"Most of all"...???! I can honestly say I have never approached a system
implementation with the idea of "What if it doesn't work"? The question of
fallbacks and contingencies is addressed LONG before that. Besides, it is my
job (very often) to make sure that it DOES work. At a programming level,
would you write a program with the idea in your head: "This will never
work"? Why would you waste time? If you KNOW it can never work, don't do it.
If you just THINK it MIGHT never work, then fix it so it DOES work. :-)
those are just the tip of the iceberg. I'm glad I don't make those decisions.
I'm glad I don't work in places where that is the process.
The trouble with getting programmers to design applications is that
they design with a view to what they know technically.
And you cannot blame us for that ;-) Remember in the old days the
programmers actually sat down with the end-user and they designed the
systems more or
less together? Now big companies just sell you what they have, never
what you want. That's a mistake, but that's the way it is. Even if
they develop in house, the guy writing the code in a big shop seldom
meets the end user. Some system analyst goes to the end user and brings back a
broken system to design since he never actually writes code. We can see
that's not the way to work but nobody listens.
I described at some length an alternative process. Having tried the
traditional "standard" (waterfall, SDLC) approaches for years, with somewhat
mixed results, I was happy to try the new approaches. They work better and
deliver more timely solutions to the business. For those reasons, I have
continued using them. It is certainly true that they suit Object Oriented
Development very well but you COULD bend them to fit procedural COBOL as
well. The fundamentals are iteration and interaction rather than a serial
inline process of stages with each stage requiring review and sign off and
then repeating that if it has to be changed. (Or the famous: "We'll do that
in the next Release...").
(How many times in your career have you been told by a programmer:
"That cannot be done." when what he really means is: "I don't know
how to do that.") It always amuses me when you show them the
solution and they then say: "Aw, well, if you're gonna do it like THAT..."
I have not heard that but as I said I have been away from
applications for several decades. Almost all of what we do hasn't
been done before so "it
can't be done" isn't in our vocabulary. "It can't be done by this
Friday" is something you might hear us say, though ;-)
I snipped it for brevity but I don't see any conflict between
traditional coding methods and tools with the approach you suggest
(focus group like approach) and we have done that at several sites I
have worked with in the past. It's a good method, but only when you
don't have much of a portfolio.
At some point you have products and then the salesmen have to sell
those. Nobody wants to keep developing new products; it's risky,
expensive, and not guaranteed, while the products you do have just
require enough effort by salesmen and minimal upgrading to retain
existing licenses and keep the
quotas filled on new licenses. Not necessarily the way I would work, but
this is the reality in big software companies.
Mine is a very small software company :-) Nevertheless, we have established
considerable credibility and I have never had a dissatisfied customer or
anyone who needed dunning to pay their bill to us. BTW, I LOVE developing
new products and am working on some at the moment... :-)
Sadly, some of it has to go on hold because of the current climate.
Either that or buy more
existing products from another company. New product development is
the most expensive parts of the business and most "software"
companies don't want to deal with that part.
And yet it is also the "fun" part and the lifeblood of the company. When I
first decided to write tools to help me move off COBOL there were many
people who said what I was proposing could not be done. ("You can't
automatically modify COBOL code to move from indexed to Relational
Databases. There are too many possible styles of coding and too many options
that might have been used. It'll never work."... "You'll never be able to
normalise a generated database. The best you can do is reflect a COBOL
record in a table."... "You won't be able to provide a generic object that
can implement every action a programmer might do against an indexed file"...
and so on.) It didn't happen overnight (I spent a couple of years on it) but
it DID happen. As other people started using the same tools and I got more
feedback the tools improved. I firmly believe in the evolution of software
(iteration and interaction).
Yep, me too. I like agile style development because I can see if it
works BEFORE I build it.
We're working in different worlds because any talk about agile in the
companies we sell to gets us invited out the door.
Yes, in the mainframe world "agile" has unfortunate connotations for many
people. They equate it with "No specs."... "Written on the fly"... "not
properly debugged"... "seat of the pants programming"... harrumph...
None of the above is entirely true although some of it is partially true
:-). People who are very reactionary don't even want to find out more.
I encountered this in one company in the U.K. but I persuaded them to try a
small, non-critical, pilot (low risk...) on the understanding it would not
take more than 4 months and if it didn't succeed to everyone's satisfaction
I would then manage the development using SDLC for as long as it took, at
half my usual rate. :-)
The business loved it, the programmers loved it and as far as I know this is
now their standard development approach. (It was around 15 years ago; they
may have dropped internal development altogether by now.)
They want stable,
and it doesn't matter how old it is or how much it costs. It must
never break or do anything to reduce availability of critical systems.
The late adopters are not the only ones looking for that particular Grail...
There seems to be a popular misconception that only mainframes can
process transactions securely. NETWORKS can also provide
performance, reliability, and serviceability, not to mention cost
effective scalability, and locality.
That is all theory and it hasn't been proven, if anything recent
events show quite the opposite.
:-) Just saying it is "all theory" doesn't make it so. I posted an example
of one network I use personally that has achieved 100% uptime over the past
several years. How is that "theory" or "unproven"?
As I said before, modern networks have built in redundancy that guarantees
reliability. You seem to be claiming that ONLY mainframes have reliability.
My point is that that is simply not the case. The fact that there are now
reliable networks doesn't mean that mainframes are any less reliable, it
just means that there are other options which are just as good.
Cloud or network failures have no limits; they affect
much more work than a failure in a controlled back end environment. We have
seen major failures lately of companies like Amazon for technical or other reasons.
Really? I didn't notice that. How long were they down for? (I am a frequent Amazon customer.)
Certainly PUBLIC networks are susceptible to hacking (although this is
becoming more difficult) and there is an argument that if you connect a
remote terminal to ANY computer there is a possibility of the system being compromised.
I haven't seen a hack of a mainframe in 4 decades
and I don't expect to see one.
Beware of smugness. Although the mainframe offers fewer "points of attack"
it is not immune. If you don't want to take my word for it, perhaps IBM can convince you.
Note the conclusion in the above.
Can't get a virus on a mainframe?
I would agree that, at least for the moment, mainframes are more resistant
to hacking than a Network. However, efforts to secure private networks are
not standing still...
They don't have to be taken down for "preventive maintenance" they
can automatically bring more resources online as required (load
levelling) and they can be fail safe. (The whole point of ARPANET,
from which the internet developed, was that a nuclear strike could
not bring it down, because the system would find alternative
pathways to get around destroyed servers...)
That design hasn't been proven, nor have any of the failovers worked
without hitches and glitches.
You don't think the US Military destruction tested it before implementing?
Actually whether they did or not is irrelevant, because it HAS been proven.
I saw network recovery at first hand when a digger cut a fibre cable up the
road from me... :-) I was working on my system and never even knew the cable
had been cut until some hours later. It recovered within milliseconds. In
contrast, we had a power failure for a couple of hours caused by a car
hitting a power pole. I guess electricity must be harder to re-route than data.
Network isn't a silver bullet just like
OO isn't a silver bullet. The only thing that seems to never fail is
a good old mainframe!
And THAT isn't a silver bullet either :-)
If we examine your very interesting last sentence regarding lines of
code and complexity needing to be minimised, COBOL loses again.
Object systems typically contain far fewer lines of code than the
same system developed in COBOL. A while back, just to satisfy my own
curiosity, I did some analysis of my own codebase before and after
COBOL. The COBOL solutions typically required anything from 1.5 to 6
times the coding of their replacements written in C#. This is not a
definitive scientific analysis with proper controls in place but it
was enough to persuade me that writing C# is a lot less effort than
writing COBOL and the code produced is much more powerful. (Powerful
in the sense that it can activate pre-written classes in a single
line to do things that would take thousands of lines in COBOL. Maybe
not a fair comparison but we are talking about minimising the code base.)
I don't know what you're including, but since Java and C# use
substantial libraries, I think you need to factor those into your count.
I put it in large capitals... look again.
As a C# programmer I have access to over 100,000 Classes of pre-written,
debugged code. To duplicate this in COBOL would be billions of lines. But
no-one has attempted to duplicate it in COBOL so it simply isn't available
unless you use a COBOL .Net compiler. (The reasons why you wouldn't want to
do that are more than I will go into here)
I never had to write the .Net libraries (yet I use them every day) so they
cannot figure in the lines of code I DO have to write. These are the lines
which must be "maintained".
And if you don't include locally written classes it's
clearly not an apples and apples comparison.
I can write a "local" Class in C# in around 1/3 of the lines it would take
to do it in OO COBOL. But it is a silly argument because the problem is not
with the lines of code and maintaining them (that is a COBOL view, it isn't
relevant in OOP...), but rather with the paradigms that C# and COBOL address.
When it comes to actual
lines of code doing commercial processing I don't think COBOL can be beaten.
It HAS been beaten. Mine is not the only codebase that shrank considerably
when we threw COBOL out.
since it's so ideal for reporting...
COBOL is abysmal for reporting when compared to packages like StimulSoft or
Crystal. Anyone who has ever spent time counting fillers in a COBOL print
line knows there is no comparison between doing this and simply dragging and
dropping the field you want, to the place in the line where you want it...
financial calculations and rounding (despite that other
thread). All those things take quite a bit of effort in other languages.
No they don't. They did once; not any more. There are now decimal classes
available specifically targeted at commercial processing. Do try and keep up.
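To give a concrete, hedged illustration (this is my own sketch using only the standard System namespace; AddVat and the 15% rate are invented for the example, not taken from any product discussed here): C#'s built-in decimal type is a base-10 type designed for money arithmetic, and Math.Round supports banker's rounding directly:

```csharp
using System;

class MoneyDemo
{
    // decimal is a 128-bit base-10 type, so amounts like 0.15m are exact;
    // there is no binary floating-point representation error to correct for.
    public static decimal AddVat(decimal net, decimal rate)
    {
        // Round to 2 places using banker's rounding (round-half-to-even),
        // a common commercial convention.
        return Math.Round(net * (1 + rate), 2, MidpointRounding.ToEven);
    }

    static void Main()
    {
        // 19.99 * 1.15 = 22.9885, which rounds to 22.99
        Console.WriteLine(AddVat(19.99m, 0.15m));
    }
}
```

The m suffix marks a decimal literal, and intermediate results stay in decimal, so the computation behaves like COBOL packed-decimal arithmetic rather than like float or double.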
Think of all the work picture fields save you, and the
automatic conversion between
packed and binary and display fields etc. No other language does that
as easily, certainly not Java or C#.
I vehemently disagree. There are new facilities in C# v4 which you may not be
familiar with, but even without those, it is not a problem to convert data
types in C#. (I haven't written Java for about 6 months so I don't know
what's available there.)
You don't have to cast fields or
call conversion routines in COBOL, you just MOVE one field to
another, and the compiler does the work once, instead of every time
you process a transaction in Java/C#.
I agree that COBOL is good in this regard, but so is C#. Casting is simple
and just part of the syntax. I don't understand your objection to it.
Conversion to display doesn't even require an explicit cast:
displayString = compField.ToString();
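A slightly fuller sketch of my own (the field names are invented for illustration; think of compField as the COBOL COMP item): the edited output a PIC clause gives you maps onto a .NET format string, and a narrowing MOVE maps onto an explicit cast:

```csharp
using System;
using System.Globalization;

class ConvertDemo
{
    static void Main()
    {
        decimal compField = 1234567.5m;   // stands in for a COMP/COMP-3 field

        // COBOL: MOVE COMP-FIELD TO DISPLAY-FIELD
        string displayString = compField.ToString(CultureInfo.InvariantCulture);

        // Roughly PIC Z,ZZZ,ZZ9.99: thousands separators, two decimal places.
        string edited = compField.ToString("#,##0.00", CultureInfo.InvariantCulture);

        // Roughly MOVE to PIC 9(7): an explicit cast, truncating the fraction.
        int whole = (int)compField;

        Console.WriteLine(displayString); // 1234567.5
        Console.WriteLine(edited);        // 1,234,567.50
        Console.WriteLine(whole);         // 1234567
    }
}
```

So the conversion is written once, in one token or one format string, rather than hand-coded; the difference from COBOL is that it is explicit in the source instead of implied by the picture clauses.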
Perhaps you have noticed it is a diminishing number.
I really haven't; I have seen quite a few new technologies deployed
in big companies, again only for front end processing like GUI and
customer facing work. They still buy and deploy big (and I think very
poorly written) application systems in COBOL and waste/spend hundreds
of millions on it. I'm sure the other products they buy are just as
badly written. True, they've
cut down on the number of in house COBOL coders but I'm not really
sure whether it's a consequence of moving from the old days where
people rarely bought applications to today where virtually everything
is purchased rather than written in house and even many are supported
by the vendor. That's just
a different work model and it doesn't mean COBOL isn't being written.
I agree the landscape has changed quite a bit but I don't think there's
one simple reason and I don't think it's just COBOL.
We can agree there is no one simple reason for the decline in COBOL jobs.
I'm not anti-COBOL (quite the contrary; it gave me an excellent
living for many years and is still doing so to a much lesser extent.
I am a strong advocate of bringing COBOL code into the 21st century
(where it is sensible to do so) rather than discarding it. But it
needs to be encapsulated so it can integrate properly and so it
doesn't need lots of maintenance.)
There are some simple ways to do that, but I agree it's not usually
No, my comment was really historical. Today's processors and data
subsystems can usually manage it, but in the early days of the first
online systems (mid to late 1970s), it was common practice for
transaction updates to be deferred to overnight batch so that the
user interface could be maintained in real time. There was no
network and a centralised mainframe had to deal with it all.
Yes, that's true.
Having worked with both CICS and IMS in the early days I agree they
are robust transaction processing systems. But even then, there were
better TP monitor and transaction processors available. (TASKMASTER
and SHADOW were two that I used and found better. TASKMASTER was
simply superb, but it didn't have a blue label on it and the people
who wrote it had no idea how to market it. Cf. VHS and
Betamax for videotape...)
I suppose it will sound odd since I have never heard of taskmaster or
shadow but I really can't imagine them (or anything else, really)
being more robust than CICS or IMS today- they're best of breed.
So, not having used the products I described, you dismiss my comments
because you "can't imagine" anything better than what you have? :-)
Does that seem fair to you?
As the products you describe are pretty much the ONLY option on today's
mainframe if you require teleprocessing, it isn't hard to call them "best of breed".
Surely that wasn't always the case but it has been that way since at
least the 1980s. People do need to
buy something with solid support, but knowing how critical
RAS are, nobody would buy Blue if something better was around. It all
comes down to money and time is money.
"Nobody would buy Blue if something better was around..." Sorry, I have to disagree.
I grew up in an era when IBM was the only game in town. It would have been
impossible to further my career UNLESS I had IBM experience on my CV, so I
made a point of getting it. But I also made a point of working with other
equipment. Burroughs, Univac, NCR, Control Data Corporation, Honeywell, and
ICL, all before PCs were readily available in the workplace.
I can report unequivocally that several of the companies listed had
"something better" than IBM, yet some of them are no longer trading and
others have merged for survival.
I have seen very competent IT managers almost lose their jobs and have their
competence questioned because they bought something other than IBM. The
argument ran: "If your IT Director can't see that IBM is the best, he must be incompetent."
Hopefully, you have never heard the expression: "Nobody ever got fired for
buying IBM". It's true. Even if the whole thing went pear-shaped, the buyer
could go to the Board and say: "Hey, I bought IBM. What more could I do?" In
the days before PC networks computers were more often selected on the basis
of politics than on the basis of what was best or even "better"...
The fact that a certain product has pre-eminence in the market place does
NOT mean that there isn't "something better". It means the product is
acceptable and it is well marketed and supported.
I should hasten to add that the IBM of today, is NOT the company described
above. They successfully re-invented themselves under much better management
when the whole house of cards came tumbling down after the arrival of PCs
(which, ironically, they invented...)
CICS and IMS are both good products (although I always liked IMS/DC better
than CICS) but they are not as flexible, responsive to dynamic load
variances, or as scalable as a distributed network.
(but no-one from this forum). At least some are people who found our
free COBOL structure analysis tool on the COBOL21 site. (This is the
Tool that Robert Wagner
I loved him in "It Takes a Thief!" I was not aware he was so
multifaceted! Well done!
I wish I DID know what gets them going because things are VERY quiet
at the moment :-)
Well if you've done all you can I think it's safe to guess the
economy is to blame. We could all use a turnaround.
Thanks for your response, Fritz.
Thanks to you as well, Pete. Sorry for the infernal delays in
responding. They're usually due to technical considerations beyond
our control (not
posted via a mainframe, etc.)
Thanks for your post. It seems pretty likely that we won't change our
positions but sometimes these discussions are fun.
I enjoyed your mail.
"I used to write COBOL...now I can do anything."