
Many older BASIC implementations were interpreted, and it looks like Tiny BASIC was as well:

http://en.wikipedia.org/wiki/Tiny_BASIC

It's strange to look back and see just how popular virtual machines were at the time. BASIC typically used one, as did many other languages. Smalltalk is famous for using a virtual machine, for example. Microsoft's original Mac apps all ran bytecode in a virtual machine.

It seems crazy, because these computers were already tremendously slow, relatively speaking, and adding a virtual machine makes it much worse. However, it was ultimately a useful tradeoff because these machines were even more limited on RAM than they were on CPU power, and using a virtual machine with bytecode that allowed for an efficient instruction encoding could save a lot of space. It doesn't matter how fast your code runs if it doesn't fit in RAM, after all.
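
To make the size argument concrete, here's a minimal sketch (in C, with invented opcodes, not any actual BASIC's encoding): each operation is a single byte in a bytecode stream, and a small dispatch loop pays a speed penalty on every instruction in exchange for that density.

    /* A minimal stack-machine bytecode interpreter (sketch only;
       the opcodes and encoding are invented, not any real BASIC's). */
    #include <stdio.h>
    #include <stdint.h>

    enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

    /* Computes and prints (2 + 3) * 10 in ten bytes of bytecode. */
    static const uint8_t program[] = {
        OP_PUSH, 2,
        OP_PUSH, 3,
        OP_ADD,
        OP_PUSH, 10,
        OP_MUL,
        OP_PRINT,
        OP_HALT
    };

    int main(void) {
        int stack[16], sp = 0;           /* tiny operand stack */
        const uint8_t *pc = program;     /* program counter into the bytecode */
        for (;;) {
            switch (*pc++) {
            case OP_PUSH:  stack[sp++] = *pc++;               break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp];  break;
            case OP_MUL:   sp--; stack[sp - 1] *= stack[sp];  break;
            case OP_PRINT: printf("%d\n", stack[--sp]);       break;
            case OP_HALT:  return 0;
            }
        }
    }

The whole expression fits in ten bytes of "code"; the equivalent native instruction sequence would usually be larger, and the dispatch loop itself is a one-time cost shared by every program you run on it.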




Speed is, of course, relative. Interpreted BASIC is faster than working things out by hand. For something critical, assembly language was always available.

On a TRS-80 Model I, compiling means the compiler, the input, and the output all have to live in 4K of RAM (or 16K in later versions).

Considering that the Level I Tiny BASIC interpreter lived in a 4 KB ROM, Level II lived in a 12 KB ROM, and mass storage for most early machines was audio tape (not only slow but also notoriously prone to not loading files correctly), compiling code would have been great for masochists, not so good for people who were just trying to get something done.

And that's before considering the complexities of tuning a compiler to optimize code.


Having been there: there's an intermediate step of tokenized code. You store ASCII strings as... ASCII, but as you enter source code, a "THEN" as in IF/THEN gets tokenized into hex 0xD6 or something. So the poor CPU doesn't have to run a full lexer at runtime to figure out whether the "T" belongs to "TO" or "THEN"; it just matches 0xD6, which is much faster. This works really well if you have 128 (or so) or fewer tokens in your language. It can also save a huge amount of memory, depending on your coding style, I suppose.
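
Roughly like this toy sketch in C (the keyword table and the 0x80 token base are made up for illustration, not the real TRS-80 token values):

    /* Toy line tokenizer sketch: replace BASIC keywords with single
       bytes >= 0x80 as a line is entered. The keyword table and token
       values here are invented, not any real BASIC's. */
    #include <stdio.h>
    #include <string.h>

    static const char *keywords[] = {
        "PRINT", "IF", "THEN", "GOTO", "FOR", "TO", "NEXT"
    };
    #define NKEYWORDS  (sizeof keywords / sizeof keywords[0])
    #define TOKEN_BASE 0x80   /* tokens occupy the high half of the byte range */

    /* Tokenize src into dst; returns the tokenized length. */
    static size_t tokenize(const char *src, unsigned char *dst)
    {
        size_t out = 0;
        while (*src) {
            size_t k;
            for (k = 0; k < NKEYWORDS; k++) {
                size_t len = strlen(keywords[k]);
                if (strncmp(src, keywords[k], len) == 0) {
                    dst[out++] = (unsigned char)(TOKEN_BASE + k);
                    src += len;
                    break;
                }
            }
            if (k == NKEYWORDS)          /* not a keyword: copy the byte as-is */
                dst[out++] = (unsigned char)*src++;
        }
        dst[out] = '\0';
        return out;
    }

    int main(void) {
        unsigned char line[256];
        size_t n = tokenize("IF X>10 THEN PRINT X", line);
        printf("%zu bytes after tokenizing\n", n);   /* 12, vs. the 20-char source */
        return 0;
    }

A real tokenizer also has to worry about string literals and greedy matches (so a variable like "TOP" doesn't turn into the TO token plus "P"), but the space win is the same: every multi-character keyword collapses to one byte.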

Tokenization also allows some syntax error detection to occur as you type code in, which was interesting. I don't remember enough about this. Obviously some mistakes won't tokenize at all or will tokenize into gibberish.

So Tiny BASIC stored plain old ASCII in memory and saved plain old ASCII to cassette tape. Level II MS BASIC stored tokens in memory, although it could optionally save pure ASCII to cassette tape. This caused some interesting software distribution and compatibility issues, as it was sorta kinda halfway possible to save something on Level I and load it into Level II if you were careful, and vice versa.


The TRS-80 Model III Level 2 didn't run Tiny BASIC anyway; it ran licensed MS BASIC. Same as BASICA on DOS. Applesoft BASIC was MS BASIC plus some graphics.

Level 1 BASIC was pretty much a Model I 1979 thing only. I believe Level 1 was technically available for the Model III, but...

The article was more or less contemporary with the Model 4, which was 80 columns, used a licensed LDOS instead of TRSDOS, and I'm pretty sure was Level 2 BASIC only. So by the time of the article, Level 1 BASIC was about two generations and four years out of date.

Also, I recall Radio Shack sold the Level 2 upgrade EPROM for something ridiculously cheap like $19, so a Level 1-only machine was probably a 1979 experience (before the release of Level 2) or somewhat unusual in not having been upgraded.




