I had to chuckle when the OP said "and 151 instructions." The VAX was the poster-child of CISC. That's like, what, 20% of the instructions ARM has for SIMD alone?
Recently, I had to learn C++17 through 20. At the same time I was learning x86 assembly, and x86 assembly was far, far easier. Downright simple and straightforward in comparison.
Aren't high-level languages supposed to make our lives easier? Isn't RISC supposed to make CPUs simpler? What happened to our world?
It was not the number of instructions in the VAX ISA, it was their complexity and the multi-level indirect addressing modes.
Some of the instructions are truly mind-boggling even by today's complexity standards, e.g. queue operations (four! – INSQHI, INSQTI, REMQHI and REMQTI), polynomial evaluation (POLYx), index computation, CRC calculation via an optionally supplied polynomial table (CRC), a comprehensive set of character operations, operations on packed numbers and others. The VAX-11 documentation comes with examples in a high-level programming language (COBOL, FORTRAN or PL/1) and shows how each translates into VAX-11 instructions, e.g.:
COBOL:
01 A-ARRAY.
02 A PIC X(10) OCCURS 15 TIMES.
01 B PIC X(10).
MOVE A(I) TO B.
VAX-11 assembly:
INDEX I, #1, #15, #10, #0, R0
MOVC3 #10, A-10[R0], B
Or, the bitwise use of INDEX:
PL/1:
DCL A(-3:10) BIT (5);
A(I) = 1;
VAX-11 assembly:
INDEX I, #-3, #10, #5, #3, R0
INSV #1, R0, #5, A ; Assumes A is byte aligned
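To make the INDEX examples above easier to follow, here is a small C++ sketch of the arithmetic the instruction appears to perform, inferred from the two examples (indexout = (indexin + subscript) × size, with a subscript bounds check); this is my reading, not a cycle-accurate model, and a real VAX raises a subscript-range trap rather than throwing:

```cpp
#include <cstdint>
#include <stdexcept>

// Sketch of the VAX INDEX instruction's arithmetic, inferred from the
// examples above: indexout = (indexin + subscript) * size, with a bounds
// check on the subscript (a real VAX raises a subscript-range trap;
// here we throw instead).
int32_t vax_index(int32_t subscript, int32_t low, int32_t high,
                  int32_t size, int32_t indexin) {
    if (subscript < low || subscript > high)
        throw std::out_of_range("subscript range trap");
    return (indexin + subscript) * size;
}
```

For the COBOL example, `vax_index(1, 1, 15, 10, 0)` yields 10, so `A-10[R0]` addresses the first element at `A`; for the PL/1 example, `vax_index(-3, -3, 10, 5, 3)` yields 0, the bit offset of `A(-3)` for the INSV.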
> Recently, I had to learn C++17 through 20. At the same time I was learning x86 assembly, and x86 assembly was far, far easier. Downright simple and straightforward in comparison.
Somewhere along the line in the development of high-level languages, "tedious and semi-unmanageable" got replaced with "difficult". Assembler isn't difficult, nor is it meant to be; any emulator or embedded developer could tell you this. Assembler just has few tools to ease development and code structuring, and gets very flat and overwhelming in large code-bases; these are the problems C/C++, Pascal, Algol, etc. were designed to solve.
Individual assembly instructions are basically trivial to grasp (the actual syntax is probably the hardest part.) The challenge is synthesizing larger, useful programs from the little tidbits of data manipulation. It's a great exercise in bottom-up hierarchical integration. As deaddodo notes, this is why higher level languages were developed, to raise conscious focus away from having to use such tiny primitives.
We are probably agreeing, but Pascal at least was explicitly designed to allow and to teach “structured” programming. In some ways, it is designed to limit your options. There is no denying, though, that it makes code easier to read and also makes intent much clearer. This in turn makes not just creating but of course maintaining and extending large code bases much easier.
In terms of complexity, it depends on how you measure. In assembler, it does not take long to learn how to compare and jump. That gives you the power of while, do/while, for, if/then, and of course “goto” (which your “more advanced” language may lack). I would argue, though, that it is easier to understand the syntax of if/else than the equivalent assembler. Again, if not easier to write, it is at least more obvious when reading (for less experienced eyes).
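The compare-and-jump point can be illustrated within C++ itself: the same logic written once in structured form and once in the flat, label-and-goto style an assembler forces on you (function names are made up for illustration):

```cpp
// The same absolute-value logic, structured vs. assembler-style.

// Structured form: intent is obvious at a glance.
int abs_structured(int x) {
    if (x < 0)
        return -x;
    else
        return x;
}

// Flat compare-and-jump form, as assembler would express it:
// one comparison, a label, and a goto. Same behavior, more to parse.
int abs_flat(int x) {
    if (x >= 0) goto done;   // "branch if not negative"
    x = -x;                  // "negate"
done:
    return x;
}
```

Both compile to essentially the same machine code; the difference is entirely in what the reader has to reconstruct.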
Well, we're not disagreeing. Your first paragraph is basically just a summary of my point, while the second paragraph reiterates it. Assembler is not necessarily difficult, it's tedious; which includes parsing it and keeping it in mind. Most modern assemblers offer some sort of macro system to help with some of the aforementioned niggles.
But then, every so often, people do the whole trip in reverse, starting from sandboxed JavaScript on a locked-down phone or console and putting together an exploit chain all the way down to kernel-mode assembly shellcode.
We add layers upon layers of abstraction, but the layers below are never really obsolete, it's always very useful to understand them!
Or you're running code compiled by the trusted compiler that generates some intermediate bytecode to run on your Burroughs or IBM System/38, and you're locked out of writing to the "true" underlying hardware by "the man" on some locked-down mainframe.
I don't understand this particular criticism of C++, and I generally think there are a lot of legitimate criticisms of C++. Yes, the language spec has become complicated. But you don't have to use every single feature in the language. As the slogan goes, if you don't use it, you don't pay for it.
I find that the style I like best is what John Carmack calls "C flavored C++". To me this means plain structs and functions, references instead of pointers when appropriate, and conservative use of: templates, constexpr, operator overloading, strings, vectors, threads, and synchronization primitives. This subset of the language is relatively easy to learn and pleasant to use. Still, I will have no problem jumping ship to one of the newer C++ replacement languages when one that I like becomes mature enough.
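As a rough illustration of what that "C flavored C++" subset might look like in practice (the types and functions here are invented for the example, not Carmack's code): plain aggregate structs, free functions, references where C would use pointers, and a conservative touch of std::vector.

```cpp
#include <vector>

// Plain aggregate struct: no constructors, no inheritance, no hidden behavior.
struct Point {
    double x;
    double y;
};

// Free function taking a reference where C would take a pointer.
void translate(Point &p, double dx, double dy) {
    p.x += dx;
    p.y += dy;
}

// Conservative use of std::vector as "a better malloc'd array".
double sum_x(const std::vector<Point> &pts) {
    double total = 0.0;
    for (const Point &p : pts)
        total += p.x;
    return total;
}
```

Nothing here requires templates, overloading, or inheritance to read; a C programmer can follow it on first sight.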
Years ago, when I considered C++ (and discarded it), it seemed that it was hard to write C++ without understanding most of the features. Let's not argue about THAT. But is there now an accepted small subset that can stand alone as a first layer of study and usage? Giving a real understanding of that subset and the ability to write in that subset? Does any text describe this?
This idea of layers in learning a language seems fundamental to me - (Perl 5 was - still is - amazing for that. Perl 6 / Raku is great but it lost that.)
> But you don't have to use every single feature in the language
With a large language like C++ it is indeed quite common that teams agree on (or have imposed on them) a subset to use, yes.
> As the slogan goes, if you don't use it, you don't pay for it.
That's not quite true though. You don't get to choose which features have been used by the people who wrote the code you're told to fix.
> I find that the style I like best is what John Carmack calls "C flavored C++".
And this is a matter of personal preference. I would have embraced a strict, statically type-safe, OO language with focus on code-reuse, maintainability and performance, not based on C (I do like C for what it is, but it clashes badly with the other goals).
>You don't get to choose which feature has been used by the people who wrote the code you're told to fix.
True, but that's more of a sociological problem than a technical problem. It's a question of how projects and teams enforce programming styles. I do see the advantages of a language like Go where it's been designed with a minimal set of features to restrict the range of crazy things you can do. But if your project has certain constraints, like not using GC, then Go is a non-starter. The big advantage of C++ is that it's suitable for making literally any kind of software, from embedded to web applications.
More powerful tools are often more complex to learn. A car has more controls than a push bike, and an aeroplane has more still. I'm not saying all the complexity in C++ is justified (!!!) but it's not too much of a surprise that it's more complex to learn than assembler.
If you use C++ like an assembler, then it's not complicated. You only use functions, loops, arrays and structs. And GOTOs.
Congratulations, you have now replicated what an assembler can do, but it has become much easier (you don't have to remember details of the architecture, memory layout, etc.)
But C++ puts a lot of syntactic sugar (abstractions) on top that, if you learn and understand them, can help you write even more concise and expressive code. This is where complexity comes in. Higher abstraction brings complexity. Whether that is good or bad depends on your knowledge of the language/concepts.
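That trade-off is visible within C++ itself. Here is the same linear search written twice (a minimal illustration, names invented): once with assembler-level primitives, and once with a standard-library abstraction that is shorter but only once you have learned what std::find does.

```cpp
#include <algorithm>
#include <cstddef>

// Assembler-like: indices, an explicit loop, early exit. Maps almost
// 1:1 onto compare-and-jump instructions.
int find_flat(const int *a, std::size_t n, int key) {
    for (std::size_t i = 0; i < n; ++i) {
        if (a[i] == key)
            return static_cast<int>(i);
    }
    return -1;
}

// The abstracted version: more concise and expressive, but it asks the
// reader to already know the std::find iterator convention.
int find_abstract(const int *a, std::size_t n, int key) {
    const int *hit = std::find(a, a + n, key);
    return hit == a + n ? -1 : static_cast<int>(hit - a);
}
```

Both behave identically; the second trades a little up-front learning for less code at every call site.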
how many addressing modes are there? in my mind, CISC vs RISC is more about the vast array of addressing modes on CISC vs load/store on RISC, rather than the actual size of the ISA
probably worth comparing to early MIPS or SPARC instead of modern ARM with extensions?
Exactly, it's not about the number of instructions but the modes those instructions can take. As I understand it, having primarily load-and-store / register-to-register instructions and avoiding complicated mode encodings makes various processor hw optimizations much easier, and makes compiler writing a lot easier, too.
Classic CISC chips were "fun" to write assembly for by hand... and RISC chips are not. But if you're writing a compiler...
i would say the arm is fun to write assembly for by hand, in part because of its deviations from risc principles; i wrote a bit about the details last night in https://news.ycombinator.com/item?id=38899295
yeah my standard for "fun CISC" is the 68000, the first machine I truly wrote assembler for (after handrolling 6502 by POKEing opcodes). It was a pleasure. Though I don't miss big-endian, I can't think like that anymore.
Makes me wonder what the m88k instruction set was like, sadly a quick search makes it look like the only programming manuals were (now expensive) physical books and there's no simple PDF online.
in terms of systems design it looks pretty decent: 32 gprs, 51 instructions, load-store architecture, 20 megahertz, one instruction per clock for alu instructions, hardware floating point coprocessor, can do fp and memory concurrent with alu instructions, hardware multiplier, delay slot bit in branches, separate instruction and data memory buses. the big thing not specified in the manual is the price; a 20-mips processor costing US$10000 in 01988 is different from a 20-mips processor costing US$10. but of course what killed it was powerpc
apparently the mmu was a separate chip? with the cache on it? and that's why this manual doesn't document the page table structure? so it was only questionably a microprocessor
I thought it was already considered a failure before the arrival of the Power alliance? I seem to recall hearing about reliability etc problems with the few who tried it? I think NeXT went as far as building a box around it?
the 68000 had similar problems, you couldn't reliably restart an instruction after a page fault. the sun-1 worked around this by using two 68000s. as i understand it, usually you can work around reliability problems in cmos by stricter testing (at the fab, reducing yield), lower clocks, higher voltage, better bypassing and esd protection, longer setup and hold times, bigger heatsinks, derating industrial parts to commercial grade (or military parts to industrial grade), laser trimming, or occasionally higher clocks and/or lower voltage. not always, of course
parts that don't work at all are often unrecoverable, except through ridiculous workarounds like the sun-1 (one computer for the price of two!) or the 286 return to real mode via a cpu full reset (https://retrocomputing.stackexchange.com/questions/11954/how...) but parts that work sometimes can often be babied enough to get them to work reliably
yeah the 68k story is funny, I've heard that before, though I remembered it as Apollo that did that dual 68k trick.
It always confused me why Motorola didn't just axe the original 68000 entirely when it released the 68010 (with the fix for the page-fault mis-design). There was no reason to keep producing the former after the latter came out; it couldn't have cost them any more to make the '010, and it was pin-for-pin compatible (I've stuffed an 010 in Atari STs here with 0 problems, for example).
i think i was misremembering which company it was and you're right
pin-for-pin compatible parts always have some differences: power consumption, setup and hold times, ground bounce, slew rates, noise margins, hysteresis on slow transitions, drive strength, stability in the face of capacitive output loads, endurance when subjected to overvoltage or overcurrent conditions, etc.
dave jones of eevblog did a video a few years ago https://youtu.be/1VlKoR0ldIE about how he shipped a bunch of defective μCurrent burden-voltage-eliminating current sensors. in his tests it worked fine, but many of his customers started reporting inaccurate readings. after sufficient debugging, he figured out that the problem happened only when its output was connected to a voltmeter with a high input capacitance, which caused its output to oscillate, introducing both noise and an offset into the readings. it turned out that one of the op-amps was going into oscillation because it couldn't handle driving that much output capacitance, a common problem with op-amps, but why was this never a problem before? he'd switched over to a second source that made a chip meeting the same specifications, but which turned out to be more vulnerable to output oscillations
so, when you put a pin-compatible chip into a design in place of the original part, you have to carefully test the result to ensure that it still works properly. most times it will, but if it doesn't, you could have a product recall on your hands. worse, your customers may be incorporating your product into their own products
if you pull chips without warning, for example when you start producing competitive chips, every company who designed them into a product has to suddenly switch over to their supposedly pin-compatible replacements. some of them will have their products fail unless they redesign them. they will have to pull their products from the market. everyone who designed a product incorporating your original chip gets fired, and when they finally manage to get a new job, they swear to never again use a part from your company
and that's why lots of companies have been producing the μa741 and its clones like the lm741 since 01968, and why out of the 15522 different op-amps that digi-key lists as in stock, 1397 are listed as 'obsolete' and another 1042 are listed as 'last time buy'. and it's why a z80 you buy from zilog today still uses hundreds of milliamps
i'm not an electrical engineer, i just play one on hn
- the instruction encoding is a little odd, with 5 formats* (or more?) and the opcode split into two parts with a couple of register numbers between them, but nothing like the bizarreness of risc-v;
- 16-bit immediates orthogonally available for all integer arithmetic and bitwise instructions (not just, say, addi);
* 3-reg (incl. base+scaled-index, jmp, lots of other things), 10-bit-imm, 16-bit-imm (which arguably includes bb0/bb1/bcnd), crs/crd, base+imm
integer:
- 31 32-bit gprs plus a zero register;
- r31 is conventionally the stack pointer, but there are no push, pop, load-multiple, store-multiple, preincrement, predecrement, postincrement, or postdecrement facilities, so there are no special hardware privileges for a stack pointer register to potentially have;
- strangely, there are separate signed and unsigned add and subtract, which suggests that it's not two's-complement. that would be tooo bizarre for 01988 but i can't find any clarification anywhere. maybe it just affects how carries are handled?
- there is an integer divide instruction, two actually;
- though it doesn't have an arm-like optional barrel shift on operand 2, it does have a pdp-8-like optional bit inversion on it;
- they remembered to include the extract- and set-bit-field instructions the arm forgot which were added in thumb(2?), and they even have fancy things like ffs and ffc, and due to the bit inversion option it even supports abjunction (and-not), or-not, and xnor;
- in fact, the extract- and set-bit-field instruction can even read the offset and width from a register (they don't have to be immediate);
- and shift-right and shift-left are special cases of them;
- the manual suggests tricks with the add instruction and the zero register to manipulate the carry bit, but i don't see suggestions for how to emulate mov, which is odd;
memory:
- load-store architecture;
- apparently no dedicated halfword or even byte load or store opcodes, but the ld and st instructions have a type field which allows them to do signed and unsigned halfword and byte loads and stores, orthogonally, and even doubleword loads and stores;
- if you set the b.o. bit in the psr (cr1), your loads and stores are little-endian, which i guess motorola thought smelled bad;
- only two addressing modes (base+unsigned-immediate and base+scale×index-register), which is a lot less than arm but still more than risc-v; they claim seven, but that's because they're double-counting base+scaled-index and also counting their jump target types;
- there's a lea instruction (lea) but it's not super-powerful because it only does those two modes, and the scale only goes up to 8;
- unaligned access is not implemented, not even 4-byte-aligned access for doublewords, but there's an option to truncate the address instead of trapping (maybe useful for pointer tagging);
- there's an external line to tell the mmu whether it's in supervisor or user mode, plus a bit in load/store instructions to tell supervisor-mode code to access user memory, so plausibly they could use separate memory mappings, though i don't know if the actual mmu they shipped does this;
- there's apparently no instruction cache flush instruction, so i suspect the (off-chip) caches had to implement a very expensive i386-like memory model;
control flow:
- there's a delay slot on branches, but a bit to turn it off;
- there is no condition code register, but there are compare instructions, which store their results in a gpr. it also has some compare-and-branch instructions like risc-v, some which trap to supervisor mode and some which branch pc-relative by a signed 18-bit offset (bb0, bb1, bcnd), but no branch-if-greater-than style instructions with two source registers;
- nevertheless, the psr has a carry bit in it, which you can use with .ci or .co suffixes on arithmetic, just no carry-conditional branch instructions;
- the call instruction (bsr/jsr) uses a link register (r1) instead of a stack;
etc.:
- built-in one-issue-per-cycle ieee-754 fpu, including gradual underflow, both signaling and non-signaling nans, and both 64- and 32-bit floats, but without transcendentals or even square root;
- no obvious way for a kernel to avoid saving fpu state when context-switching from a task that didn't use it;
- i don't think you can read or write the psr in user mode, so it's standardly virtualizable, but that also means you can't switch byte orders or misaligned access handling style in user mode;
- however, you can set the fpcr in user mode, so you can change the rounding mode for your interval arithmetic library;
- their only atomic instruction for smp synchronization is xmem, exchange register with memory, which is a bit weak (though not as bad as the 68000, which iirc only had test-and-set);
- the suggested calling convention has 13 call-clobbered registers (8 for parameters);
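to make the bit-field point concrete, here's a c++ sketch of an extu-style extract-unsigned-bit-field where the offset and width are runtime values (as when they come from a register rather than an immediate); this is my reading of the semantics, not a cycle-accurate model, and the width-0-means-32 convention is an assumption borrowed from other bit-field isas:

```cpp
#include <cstdint>

// Sketch of extract-unsigned-bit-field with offset and width supplied
// at runtime (i.e. from a register). Assumes offset < 32; width 0 is
// treated as "the full remaining word", a convention several bit-field
// ISAs use (an assumption here, not verified against the 88k manual).
uint32_t extu(uint32_t src, unsigned width, unsigned offset) {
    uint32_t shifted = src >> offset;
    if (width == 0 || width >= 32)
        return shifted;                          // full remaining word
    return shifted & ((uint32_t{1} << width) - 1);  // mask to the field
}

// Logical shift right falls out as the special case of an extract
// whose field runs from the shift amount up to bit 31.
uint32_t lsr(uint32_t src, unsigned amount) {
    return extu(src, 32 - amount, amount);
}
```

which shows why an isa with register-operand bit-field extract gets its shift instructions essentially for free.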
this should read '- there's a lea instruction (lda)'
also it reads like i think it's weird that it has 5 formats, which isn't what i meant
apparently in 01989 you needed to pay about US$450 for a processor and US$650 for each cache/mmu chip, and you did need two of them as i had inferred from the datasheet, which made it a rather pricey building block
> Aren't high-level languages supposed to make our lives easier?
Assembly code is easy to learn and easy to code simple programs, but it’s a whole toolbox full of foot-guns and it’s easy to make a buggy unmaintainable mess.
Many HN comments, as well as other online sources, suggest software developers generally do not like "simple". The majority find comfort in complexity.
RISC architecture has already changed everything. Or did you miss when Apple Silicon basically lapped everybody in performance per watt, while having raw performance numbers comparable to, if not exceeding, x86 hardware?
Sometimes, it really is the case that a great new technology doesn't exist until Apple invents it.
It's crazy, in another terminal I have an SSH session open to an RK3588 ARM SoC SBC sitting on the desk behind me. On it I am running benchmarks of my application -- which contains a custom virtual machine for a language interpreter and a custom database engine.
On throughput -- compared to my Ryzen 7 Thinkpad, it's getting 1/2 the performance on the DB engine side of things and 1/3rd for the VM. (This is on a single core, I don't have a good bench for multicore performance yet.)
The thing is... the laptop cost ~$3000, and is clocked way higher. The SBC cost ~$200. It's warm to the touch, but not hot. The little DC power brick it came with is totally fine, and there's no fan, just a heat sink.
I could stick a dozen of these in a 19" enclosure, and with the right programming have an absolutely awesome amount of computer for the price and power use.
I just wish I could get an ARM workstation-class machine that wasn't a Mac.
:( That's the bit we still don't have right. Apart from a few special cases, we really struggle to use a small number of threads, or a GPU, without enormous amounts of effort - let alone distributed processors.
I'm sure part of the problem is we depend on high levels of backwards compatibility with systems assuming mono-processing ... but I'd like to think there is a way.
Now if someone came up with a really usable and general model for software across distributed processors...
VAX (an acronym for Virtual Address eXtension) is a series of computers featuring a 32-bit instruction set architecture (ISA) and virtual memory that was developed and sold by Digital Equipment Corporation (DEC) in the late 20th century.
There is a UK company that makes vacuum cleaners under the brand of Vax. They suck well, indeed, although they do not last due to the atrocious build quality – I had one years ago (granted, I had to pay homage and buy a Vax vacuum cleaner), and it fell apart 2 years later.
NetBSD is one of those projects I wish got more resources (donations). They have some cool things going on (ex: rump, exotic hw) that no one else has, but lacks the resources to fully exploit them :(
I didn't see a reference to a joystick anywhere and there's none shown in the photo...?
Furthermore :) this spring I visited a PDP-11/73 running Spacewar with a joystick attached. Presumably that would have been attached as a QBus peripheral, and Vaxen could support QBus, so I'd actually be pretty surprised if there hadn't been a Vax with a built-in joystick. Depending how you define "built-in" that is.
Edit: Come to think of it I'd be very surprised if there weren't any flight simulators running off Vaxen and that would definitely involve a very much built-in joystick (along with the rest of the flight deck). I know a lot of those used PDP-11 so it would be the same lineage.
Can I join in? “hey let’s build a microcoded processor” is like my own personal Bat signal.
Are we starting from something like the Am2900 (Mick and Brick's book)? Need to get familiar with the VAX instruction set and get that stress test program too. We could hook it up to Verilator and expose the whole thing to SDL to get graphical output, so the same code that runs the "simulator" runs the actual hardware?
VAX is certainly the CISCiest of them all --- looking at the opcode map makes x86 seem like a RISC in comparison, as there's very little structure in it beyond the (copious amounts of) addressing modes.
I don't know much about the subject matter, but the minimalist website is so refreshing. The content is front and center, unvarnished and not occluded by various UI bells and whistles. I miss this version of the web. Navigating between pages actually seems to be as fast as (if not faster than) many "modern" single-page / progressive web apps.
If you use a trackpad, you can use the scale gesture (pinch-to-zoom, but outwards) to increase the magnification of most web pages. Very useful for those of us past age 40.
On iOS, you can also use the same gesture directly on the screen.
I like it because there is no nonsense, it is a pretty simple page that gets what it needs from browser standards without external libraries. The simplicity helps make it small and fast.
Even without external libraries there are lots of design choices that could change and perhaps improve the look of the page.
The website doesn't have a font choice, other than "sans-serif". It follows your preference; if you don't like it, choose another default Sans Serif font in your browser's settings.
This comment made me realize the historical term in the headline actually, for once, refers to the term’s historical namesake, unlike “VHS” and “Datasette” of late
“VAX” referring to VAX. Not to be taken for granted!
Nobody's sold new VAX systems since the 90s. VMS was ported first to the DEC Alpha architecture, then to Itanium, so even for VMS, VAX is two architectures ago.
And Itanium itself is at the end of the road now, but I think still supported for the moment.
A couple of years ago HPE sold VMS to VMS Software Inc, who have ported it to x86-64. So it’s three architectures ago.
The VAX version apparently wasn’t included in the deal (to the chagrin of hobbyists, as they can’t get a legally licensed version of VAX VMS anymore), which perhaps shows how relevant it is to modern VMS customers.
As I understand it, it is part of the deal – but VMS Software needs to make a full release for an architecture before they can license it – and VAX is clearly not worth the effort.
Sucks for me as I have an old VAXstation lined up for some spare time messing about :(
Incidentally HP themselves can and (I think) do sell and transfer licenses, but that's expensive and even then reputed to be hard to arrange as VMS is so obscure within the org. Definitely no hobbyist licenses though.
No. Actual VAX died shortly after Alpha became a thing, and Alpha died when Itanium came along. OpenVMS was ported to Itanium in the mid-2000s, but HP shifted to x86-64 around 2015 as Itanium sales began to quickly drop. At this point, there's x86-64, ARM, RISC-V in the west, with Itanium's cousin Elbrus in Russia and some MIPS stuff in China. IBM still makes and sells POWER, but it's a niche of a niche market.
If someone managed to make Alpha or VAX come back, he/she might end up sued into oblivion by HPE… and then possibly hired by them.
At this point you gotta wonder if the DEC name would have better marketing/branding power than HP.
Have to say, I'd be more stoked to buy a "Digital Equipment Corporation" branded server or workstation than "HPE ProLiant" or "HP Z900".
The leadership of Carly Fiorina and her successors, the murder they committed on the Palm brand and product line, not to mention their whole world of junky printers... I think that has ruined the reputation of that brand forever for me.
https://mail-index.netbsd.org/port-vax/2021/07/03/msg003900....
https://mail-index.netbsd.org/port-vax/2021/07/03/msg003903....