Looking through a bunch of code in the archive, it was interesting to come across the macro for TELL, which is used everywhere in games as a sort of printf, and it has some special printf-like codes to print out names of objects and such. There are also some functions that have been declared with DEFINE rather than ROUTINE. Both the macros and the DEFINEd functions use the features of the greater MDL at will.
What is interesting in particular is how it possibly reveals their setup: they had a full MDL interpreter running on their workstations, and ZIL is a subset that could be cross-compiled to a stripped-down target: the Z-machine.
It looks like ROUTINE, OBJECT, ROOM and such are macros (in sources unknown) that would define standard MDL objects, possibly stored in some structure for findability. Then, the cross-compiler would be a program that takes these objects and writes out some Z-code. The workflow probably allowed (1) playing the games directly in the interpreter without compilation (2) making small changes to a running game and (3) recompiling the Z-code after having made those small changes, without needing to reload all the game's source code.
Except for the cross-compilation step, this is the sort of environment the original Zork would have been written in, and I find it hard to imagine they would have accepted anything less.
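For a rough flavor of what such a printf-like macro provides, here's a Python caricature. Everything in it is invented for illustration (the real TELL codes and their behavior are not reproduced here): plain strings print literally, a hypothetical ("D", obj) code stands in for "print this object's description", and "CR" stands in for a newline.

```python
def tell(*parts):
    """Toy TELL: interpret a few printf-like codes and build the output."""
    out = []
    for p in parts:
        if p == "CR":                               # newline code
            out.append("\n")
        elif isinstance(p, tuple) and p[0] == "D":  # describe-object code
            out.append(p[1]["desc"])
        else:                                       # literal text
            out.append(str(p))
    return "".join(out)

lamp = {"desc": "brass lantern"}
assert tell("You see the ", ("D", lamp), ".", "CR") == "You see the brass lantern.\n"
```

The point is just that one variadic macro can mix literal text with object references, which is why it shows up everywhere in the game sources.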
I enjoy thinking about the future of IF. I see it as a gaming/storytelling medium with huge potential, but with vast chasms still to cross before it gets there.
NLP etc. has great potential to improve the parsing capabilities, to be sure, but it's not obvious to me that that will dramatically improve the experience. Is `pick up the shiny brass thing` really better than `GET LAMP`? IMO, it might make IF more accessible to non-techy, non-text parser gamers, but that's about it.
I think the next leap for IF games will involve non-linear stories that are better suited to the medium. The success of games like "Papers, Please" in particular has sparked my imagination in this area.
I recall a Ludum Dare entrant which was an android app that pretended to be a walkie-talkie to control some security operatives you were telling to search a building to investigate some noises. Very spooky. And a very good use of some basic voice parsing and NLP.
Game jams are literally the best time to try risky concepts. If it fails, you’ve spent only a small amount of time, and there’s a good chance that people will play your game and give you feedback. I can’t think of a better time to take a risk.
I didn't mean it was a risk in any negative sense. I was just looking for the word "ambitious" and couldn't find it in my sleep-deprived state...
Seriously, I'm so tired I looked up "ballsy thesaurus" and clicked related words for probably five minutes before I gave up looking and just typed 'risky' hoping it'd be good enough.
The most memorable IF I’ve played through was one[1] in which you were a captured spy being interrogated in semi-linear fashion. The game was a retelling of your capture, where every wrong choice amounted to ‘It could not have happened that way, go back and tell me what you really did.’
The only way to win was to slip one lie into the retelling, and then when the game catches up to the present moment...
Struggling with the limitations of the parser was one of the recurring frustrations with Infocom games. "GET LAMP" is fine, but anything more complex quickly becomes challenging. I remember getting stuck in "The Lurking Horror" because I couldn't figure out how to climb down off a ledge, despite having a ladder. Use ladder? Climb ladder? Climb down? Climb down with ladder? Climb ledge with ladder?
The problem to solve is game length and replayability.
If you don't make moon logic puzzles, your puzzles can be solved relatively quickly unless you're very, very good at making fair puzzles which are also legitimately difficult. Therefore, you're going to need more puzzles than an FPS or action game which incidentally has some puzzles in order to achieve the same total game length, and making content like that is legitimately hard.
Similarly, in order to make your games playable more than once, you need branching paths, such that you have different routes for the players to go down each time, and I really shouldn't need to explain the concept of "combinatorial explosion" in this forum. Add to that the expense of things like art assets and voice acting (unless you really think text adventures are going to come back in a big way) and it becomes prohibitively expensive to not railroad everyone down one main plot with, at most, a few minor side-treks which quickly rejoin the main line.
The reason why IF Games are so focused on puzzles is that they essentially can't do anything else. Even dialogs are either about solving a puzzle or choosing a branch in the story.
The point about AI in IF would be to break that mold. Dialogs can be more challenging, for example you could make a game about interrogating people. Or you could tell NPCs to do certain stuff for you. Or these NPCs could have some advanced planning or decision making.
To make IF more interesting, especially for those outside a very narrow niche, you would need them to be a lot easier to navigate, and offer more challenges that don't look like puzzles and don't hit you with a brain-twister at every turn.
> The reason why IF Games are so focused on puzzles
I don't follow the IF world closely (though I often want to set aside the time to get into it), but surely they are also heavily focused on the prose?
If you treat the text as just an interface to get to the underlying formal game mechanics, then sure, IF is just simple puzzles because there's no hand-eye coordination going on.
But if you treat the text as a fundamental part of the experience, then the puzzles are only a small piece.
By analogy, films are the world's shittiest games because, even though the graphics are nice, the gameplay is obviously pretty meager. The player doesn't get to make any choices at all! But with a film, the graphics are the point.
Particularly with newer games, this is absolutely true. There are a fair number of recent interactive fiction games that are leaning pretty heavily on the "fiction" part and, contrary to the poster you're replying to, aren't necessarily focused on stumping you with puzzles. That's not to say that there aren't puzzle elements in almost all IF games, but increasingly, the language matters -- and sometimes it's woven into what puzzles there are.
When IF focuses on prose, and doesn't contain puzzles, it's harder to actually call it a game.
The one benefit of text games over other types of games is that you can essentially implement whatever world you want, without a trillion-dollar budget for graphics and sound effects. A holo-deck powered by imagination.
Speaking of games in holo-decks... The killer-app for AI-powered IF may be erotic fiction.
This is why I think the "Papers, Please" paradigm is so interesting.
The overall storyline is replaced with a repetitive but novel task with increasing difficulty, and various other loosely coupled storylines are grafted onto the task in different ways. The task also replaces the explicit puzzles.
It is as though, rather than being told the story, you are viewing vignettes of the story through a very narrow window.
I do think there is a Sturgeon's Law problem where it's hard enough to come up with linear stories which are compelling. Based on some of the experimentalism in modern-day game development in places like itch.io, hopefully they can come up with some compelling genres.
I'm not going to go down some half-finished path in Bandersnatch when Black Mirror has enough problems with single stories.
Seems like smart speakers are a good fit, my eight year old spent a little while playing with an IF Alexa skill not long ago. Of course, backtracking leads to a lot of agonizing repetition, which isn't really different than seeing it in text, but at least there you can skim over descriptions you've already seen.
Well, I would like to think that the ties of interactive fiction to text as the mode of interaction are likely the first to be broken.
Both speech recognition and speech synthesis have progressed by leaps and bounds. Computer graphics have progressed to a point where human characters can be made expressive and nuanced.
Imagine an engine that takes games scripted as interactive fiction, procedurally generates the game world from the prose (far fetched, but first steps exist as research papers), populates it with the characters from the text and makes them act out the written dialog while honoring the implicit stage directions given in the written form ("he whispers in your ear" etc...). All of that presented in VR, which would make having a dialog with these virtual characters more natural.
Looking at the tech that exists today, I believe that we are about halfway there.
I love this idea, but encourage people not to get hung up on the UI--ie how you interact with the game world--and instead think about what's going on in the game world.
What if the world included a number of fully independent agents with their own motivations, values, and policies and equipped with pattern matching, goal solving, optimization, and even a hint of common sense[1]. The game state would never be the same twice, and it would depend on how you interacted with the agents and they with each other.
TL;DR: If you open a door, you might let the Grue out of its maze.
I suspect that making it not feel like you're pushing buttons to get the same lines of dialogue out of wandering bots is going to require solving some pretty hard AI problems.
You could probably make it work now with, say, a game world full of broken robots or something.
Very cool! Being an IF collector, I acquired the source code to Restaurant at the End of the Universe years ago. It's generous to call it unfinished—it's barely a game. I wasn't sure whether or not I could publish it, but it's great that this and much more are now available to everyone.
I also have some internal Infocom emails and photos and things. Not sure if posting those would be welcome by the participants since most of them are still alive.
edit: if I had access to that data, I'd be interested in finding any hidden source files -- there's a copy of the source to a version of the ZAP assembler hiding in one of the minizork repos Jason Scott posted, for example. Might there be some others?
> Not sure if posting those would be welcome by the participants since most of them are still alive.
Genuinely curious... is it okay to post private details of a person once they're dead? There are things that I want to keep private while I'm alive, and that I hope stay private when I'm dead. Does other people's curiosity trump my wishes just because I'm no longer around?
In the US, it's reasonably well established that you have no privacy rights after death[1].
That doesn't necessarily make it safe to publish details about someone after they are dead, though, as property rights do exist through the estate of the deceased, so you may still end up having to defend yourself from a defamation suit.
In terms of it being "okay" there are unwritten rules about allowing some not-well-defined period of mourning before publishing private details, particularly ones that reflect poorly on the deceased.
Publishing e.g. private correspondence of people from just 100 years ago seems completely non-controversial though.
I have not programmed in ZIL or Inform, but I have made implementations of the Z-machine, as well as a "Tricky Document" describing various optimizations that can be used in Z-machine code (such as Black-Johansen text packing, the SET->BCOM optimization, storing introduction text in the input buffer, etc.).
Wow, these are layers upon layers of archaic stuff. The article got me curious about Inform 7, so I watched the introductory screencast on Vimeo https://vimeo.com/4221277 . I'm blown away! It appears to be easy to learn, feature-rich, and also well documented. Oh, and it's also free!
If you are interested in IF, go check it out.
Then you got your hashtag brackets, which pop up in the corner of the TV screen reminding you that there's another way to CHTYPE:
#FORM (+ 1 2) ;"same as <+ 1 2>"
And your hashtag voucher brackets, which are really just defective angle brackets you can exchange for a free hashtag:
<> ;"evaluates to #FALSE ()"
Finally, you got your bogus brackets, things that look like they could be a new kind of exciting brackets, but when you look closely they're just prefix operators that tend to appear before brackets:
'(OSCAR WILDE) ;"same as <QUOTE (OSCAR WILDE)>"
%<+ 1 2> ;"evaluate immediately"
%%<CRLF> ;"evaluate and discard result"
> So parens are just a quoted list? Seems a little unnecessary
It's a very useful distinction. In regular Lisp, lists are the only data type that is not self-evaluating. This leads to confusion because the serialization of a list can mean different things depending on context. Consider:
(eval (add 1 2))
(eval '(add 1 2))
Those forms both produce the same result when evaluated. It happens by two very different execution paths, but this is not immediately apparent when you run this code. So a beginner may well be surprised to learn that the following:
(eval x)
(eval 'x)
may or may not produce the same result. So it's genuinely helpful to have two types of lists, one of which is self-evaluating, and the other which isn't, and to have syntax that distinguishes the two.
Another example: all of these produce the same result:
(+ 1 2)
(eval (+ 1 2))
(eval (eval (+ 1 2)))
and so on. But this is true only because (+ 1 2) evaluates to 3 which is self-evaluating. Contrast that with, say:
(eval (cons 1 2))
which immediately produces an error. Trying to figure out why "add" "works" and cons doesn't and how to "fix" this can be very frustrating for a beginner.
Parens give you list literals, like in most modern languages.
An interesting thing about MDL is that both FORM and LIST have the same "PRIMTYPE" so all the usual lisp-like list manipulations apply to both. A strange thing is the way these are implemented: pretty much every object (with fixed-size allocation) has a NEXT pointer, and a FORM or a LIST is an object that has a pointer (possibly null) to the first object. Taking the tail of a list entails allocating a new LIST object that points to the second element.
You may wonder: how can an element be part of two lists at the same time? Well, you copy the object since it's in a 36-bit word. (Objects that do not fit in such a word are put in lists using a special DEFER object that points to it.)
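That tail-taking scheme is easy to picture with a toy model in Python. This sketch is purely illustrative (the names are invented, and it's nothing like the actual 36-bit-word implementation): every element node carries a NEXT pointer, a LIST is just a header pointing at the first node, and taking the tail allocates only a fresh header while sharing the node chain.

```python
class Node:
    """One fixed-size cell: a value plus a NEXT pointer."""
    def __init__(self, value, next_node=None):
        self.value = value
        self.next_node = next_node

class ListHeader:
    """A LIST is just a pointer (possibly null) to the first node."""
    def __init__(self, first=None):
        self.first = first

def rest(lst):
    """Taking the tail allocates a new header; the nodes are shared."""
    return ListHeader(lst.first.next_node)

# Build the chain 1 -> 2 -> 3 and wrap it in a LIST header.
c = Node(3)
b = Node(2, c)
a = Node(1, b)
whole = ListHeader(a)

tail = rest(whole)
assert tail.first is b            # same node chain, no copying
assert whole.first.value == 1     # the original list is untouched
```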
I don't agree with your assessment; parens are not giving a list literal since (1 <+ 2 3> 4) has been reduced to (1 5 4). Calculation cannot take place in a literal, by definition.
What this is doing is binding the parentheses to a list constructor. I.e. (1 2 3) is really something like <list 1 2 3>. It's more like a quasiliteral (like Lisp's backquote).
Unfortunately, the fact that the printed representation of a list is the same as the code which calculates it makes things somewhat "muddled", pun intended.
What if we have this: '(1 2 3 (4 5 6)). What is (4 5 6) here? It can't be the list (4 5 6), because (4 5 6) is code that constructs that list. Using the same printed rep for both is a very bad idea.
I meant list literal in the Python-like sense, so apologies for the confusion there.
In MDL, parens and angle brackets denote LISTs and FORMs, respectively. These are both objects with the same PRIMTYPE (a linked list) but are tagged differently. The evaluator evaluates these objects differently from each other: a LIST has each of its elements evaluated, with the results put in a new LIST. It truly is distinct from a FORM that has the same effect.
Distinctions like this were designed to make the corresponding parts of Lisp (like what you mentioned) less confusing.
Indeed, in your quote example, that (4 5 6) is a LIST, not a FORM, avoiding the very bad idea. Quasiquotation pretty much doesn't need to exist because of the ! sequence "operator" for splicing lists into lists and forms.
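The LIST/FORM evaluation rule is small enough to sketch in Python. Everything here is a toy analogy, not real MDL (the tag classes, the ENV table, and mdl_eval are all invented): a type tag distinguishes the two kinds of sequence, and the evaluator applies a FORM as a call but maps itself over a LIST, building a new LIST of the results.

```python
class Form(list):
    """Tagged like MDL's FORM: <f a b> is evaluated as a call."""

class List(list):
    """Tagged like MDL's LIST: (a b c) evaluates its elements, stays a list."""

# Toy environment for resolving the function position of a FORM.
ENV = {"+": lambda *xs: sum(xs)}

def mdl_eval(x):
    if isinstance(x, Form):          # <+ 1 2>  ->  apply the function
        f, *args = x
        return ENV[f](*(mdl_eval(a) for a in args))
    if isinstance(x, List):          # (1 <+ 2 3>)  ->  new LIST of results
        return List(mdl_eval(e) for e in x)
    return x                         # numbers (and, in MDL, atoms) self-evaluate

assert mdl_eval(Form(["+", 1, 2])) == 3
assert mdl_eval(List([1, Form(["+", 2, 3]), 4])) == List([1, 5, 4])
```

Note how (1 <+ 2 3> 4) reducing to (1 5 4) falls straight out of the LIST branch.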
>Quasiquotation pretty much doesn't need to exist because of the ! sequence "operator" for splicing lists into lists and forms.
Hmm... I guess I don't know everything quasiquoting is used for in Lisp, but in MDL and ZIL, segments don't really solve the problem that quasiquoting does when it comes to writing macros.
That is, I usually want to use macros to generate code according to a template:
<PRSI? ,FOO>
should become
<==? ,PRSI ,FOO>
The way to write that macro is to call FORM to build the form:
<DEFMAC PRSI? (X)
<FORM ==? ',PRSI .X>>
But if it's a complex block of code, and the blanks I want to fill in are deeply nested, the template quickly becomes unreadable, because every structure turns into a form, and every form turns into a call to FORM.
Yeah, it does not completely replace quasiquotation, but in my experience the main things quasiquotation solves are (1) not having to write quote marks in front of every symbol and (2) not having to append lists yourself. MDL handles the first by making atoms self-evaluating (though the flip side is that you have to use , and . everywhere), and the second is handled by sequences. I agree that something like quasiquotation would make FORM construction in macros nicer to look at, though it is pretty nice that sequences work everywhere and not just inside quasiquotations.
As an example, consider the following possible implementation of PROG1 in terms of LET's implicit PROGN. Both evaluate a sequence of forms in order, but PROG1 returns the value of the first form, and PROGN the last:
I chose this example to demonstrate how sequences alleviate the pain of splicing things into the middle of things.
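For readers more at home in Python: the effect of those segment "operators" is loosely like iterable unpacking, which splices a sequence into the middle of a literal or a call without any explicit append calls (an analogy only, not MDL semantics):

```python
middle = [2, 3, 4]

# Splice `middle` into the middle of a new list; no appends needed.
spliced = [1, *middle, 5]
assert spliced == [1, 2, 3, 4, 5]

# The same splice works inside a call's argument list.
assert max(0, *middle, 5) == 5
```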
Backtick is not used in MDL, right? And you're somewhat free to extend the MDL used in Zilf, right? You could get the best of both worlds by adding either a PREFORM type or an REBUILD form. For example:
`< ... > or `< ... `> -> #PREFORM ( ... )
or
`struct -> <REBUILD struct>
The semantics would be that a PREFORM would be evaluated like a LIST, but then it would CHTYPE to FORM. Or, a REBUILD is an FSUBR that expects a structure, and it maps EVAL over the elements of that structure.
It doesn't run on any available MDL implementation, because it's actually for MIM, an even more obscure and undocumented extension of MDL. ZILF implements some parts of MIM, but doesn't implement READ-TABLEs at all, and the format of the table implied by that code is different from what's documented for MDL.
It was the early 80s. Readability wouldn't be invented for about another 20 years or so.
Actually kinda serious. You need to be able to spare the RAM to have the comments, whitespace, long variable names, things broken nicely into functions, etc. I've got modules I've documented where the documentation alone would fail to fit into, say, a Commodore 64's RAM, as plain text. Early 1980s "big iron" would still be considered on the high end of embedded programming today. (In the sense that a Raspberry Pi is embedded programming.)
I first learned programming in BASIC on an 8K PET. The 8K was shared by the source code, data, the executing program and display memory.
Spaces were for suckers. Each line had a few bytes overhead so you put as many statements on each line as you could. You stuck to 1 character for your common variables (only 2 characters were significant anyway). You used the abbreviated name for commands. (This was around 81, 82.)
This is from memory, so probably not actually correct code, but it's meant to print "IHN!" 10 times, wait about a second, and then repeat. T is used as a FOR loop variable twice.
They developed these on a https://en.wikipedia.org/wiki/PDP-10. It's not like they were editing/compiling on the C64. Of course targeting small systems does influence the coding style, but...
A top-end PDP, according to that page, could address ~8MB, though I'm sure they didn't have it for a while; other charts on the internet suggest a top-end PDP-10 would get up to 33 MHz.
8MB for an embedded controller is no great ask and you have to crawl a ways down the spec sheet pile to get a 33MHz processor nowadays. A top-end PDP-10 would today put it roughly on par with an Arduino [1], although the Arduino would be a bit short on RAM.
Even if you have the space to spare on comments and long variable names, you've been raised in a culture that considers those insanely expensive extravagances.
The indentation and all-caps are not modern, and the MACLISP primitive names like TERPRI are unchanged from the 60s. Otherwise, the coding and commenting style would not really raise any eyebrows today.
I first learned of literate programming from this culture instead of Knuth, e.g. https://dspace.mit.edu/handle/1721.1/41983 ("The fourth section presents a completely annotated interpreter for AMORD").
I'm just saying yeah, expectations have definitely evolved, but it's not the case that PDP-10 hackers back then had not yet discovered or valued readability.
Yes, of course some people had that opinion. It was a fringe position back then, though.
It's less fringe today, but I'm still not sure "code should be optimized for reading, not for writing" isn't still a fringe position, at least as evidenced by what people's actions say about their beliefs, rather than their words. Lip service of that is probably at least not fringe any more, which is progress of its own sort.
All I'm saying is that the Infocom Imps developed on a computer that was plenty big enough for comments and readable names to be no extravagance. The sources above show it happened in practice and not just in theory. The company started after they'd been used to this kind of environment for years. (And they were at least adjacent to actual literate-programming pioneers.)
I'm not saying they were into literate programming, I'm saying it's misleading here to talk about what coding on the C64 was like.
That's a really interesting perspective. If I think about it, an application really doesn't need to be large or complex at all to accrue a couple hundred kB of source code, libraries not included.
Angle brackets set out FORMs, parens set out LISTs. In English: FORMs are to function calls as LISTs are to list literals.
A translation from MDL to Python would look like this:
<a b c> -> a(b, c)
(a b c) -> [a, b, c]
(This is in contrast to a sibling comment: in MDL
(<+ 1 2> <+ 3 4>)
would evaluate to
(3 7)
)
However, there is a bit of a caveat in the translation. In MDL, atoms are self-evaluating, and it takes a call to LVAL or GVAL to obtain their stored values (as a shorthand, dot and comma prefixes give LVAL and GVAL calls, respectively). I haven't been able to find out from exactly which scope the evaluator gets the definition of a function. Which of the following two corresponds to what the evaluator actually does?
Thanks! (And thanks for having those docs up, they have been useful to me the past couple days.)
Another thing I couldn't find is exactly what a trailer with an empty OBLIST means. Does it use <ROOT>? It shows up in TELL for the uses of RETURN!-, which probably are to use a non-overridden version of RETURN.
(Or any idea what SETG20 is supposed to mean? It shows up in Bureaucracy.)
Yeah, if there's nothing after the final !- in an atom, it means the atom is in <ROOT>. (See section 15.5 in the manual.)
I'm not sure what the purpose of things like <RETURN!-> and <ORB!-> is. It could have something to do with the way they implemented Z-machine opcodes as MDL atoms, i.e. <RETURN!-> forces it to use the MDL SUBR instead of the opcode? Maybe they moved the MDL atoms that collided with Z-machine opcodes to a separate OBLIST, and this was needed to access them from MDL code in the same file?
I never figured that part out, so when ZILF is generating code, it uses the same OBLISTs it used while interpreting and just looks things up by their PNAMEs.
SETG20 and DEFINE20 are used in "MDL-ZIL" files, where routines are defined with DEFINE instead of ROUTINE, global variables are created with SETG instead of GLOBAL, etc. Presumably that was a way to run the games in MDL during development to avoid recompiling them. SETG20 and DEFINE20 are aliases for the MDL versions of SETG and DEFINE.
Yes, it's a cruel twist of fate that all this news about Infocom's ZIL is happening shortly after the launch of a cryptocurrency called "ZIL", in the same month as a conference called "Infocom". Twitter search has been useless.
It appears to be some kind of ethereum-like-but-different cryptocurrency focused on smart contracts (as opposed to a general computer that can implement smart contracts).