Re: Used interrupts on both 68k & PIC, want 68k w/onboard memory & JTAG/BDM

linnix wrote:
From the very start, the 68k had a 32-bit programming architecture.
For cost reasons, the implementation used a 16-bit ALU and datapath, but
all the registers were 32-bit, and all instructions supported 8-bit,
16-bit and 32-bit widths (even though the 32-bit versions took twice as
many clock cycles). This meant that when 32-bit ALUs became
economically feasible, the 68k just got faster with the same software,
unlike the x86 architecture that got seriously ugly in the move to 32 bits.

In a way, we have to thank the x86 marketers for beating the 68k.
Otherwise, many programmers would have stayed with assembly, and C would
not be as popular. C masks the ugly x86 architecture.

People have used C on the 68k for about as long as there has been C. 68k CPUs were a popular choice for early Unix workstations, such as Sun's first machines, unlike the x86, which only gained serious *nix popularity with Linux; the 68k was also the original target for gcc.

Writing a C compiler for the 68k is peanuts compared to writing one for the x86, since the 68k has a wide set of mostly orthogonal registers, plenty of address registers, and addressing modes ideal for C. Getting the best out of an x86 device is a black art, and it was a long time before C compilers could compete with professional x86 assembly programmers. So I expect most serious x86 development was still being done in assembly long after C (and other high-level languages) were standard on the 68k (the Mac OS being a notable exception, written mostly in assembly for some reason).

The legacy of assembly on the x86 is one of the reasons the instruction set is so hideous - it has had to keep 100% binary compatibility, because you can't just recompile assembly code for a new architecture. The 68k architecture, on the other hand, has seen many binary-incompatible changes (such as the removal of the rarer addressing modes) to improve efficiency.