i'll reply in more detail later but for the moment i just want to clarify that i'm not za3k, although i've been collaborating with him; his priorities for the zorchpad are a bit different from my priorities for the zorzpad
you said:
> I think it's the primary use case like Java Applets and Flash preceded JS as iterations of sorta-portable and low efficiency tools which make up for it with sheer volume of high-familiarity material.
i wasn't able to parse this sentence. could you unpack it a bit?
you said:
> If you'd like, I can provide more specific suggestions.
yes, please! even if our priorities aren't exactly aligned i can surely learn a lot from you
np! I'll comment now in case the site gets weird about a double-comment on the same parent by the same user.
> > I think it's the primary use case like Java Applets and Flash preceded JS as iterations of sorta-portable and low efficiency tools which make up for it with sheer volume of high-familiarity material.
> i wasn't able to parse this sentence. could you unpack it a bit?
I meant software that doesn't care about being well-made or efficient. That doesn't matter, because it's either fun or useful, and there's a huge volume of familiar material for it.
> > If you'd like, I can provide more specific suggestions.
> yes, please! even if our priorities aren't exactly aligned i can surely learn a lot from you
This is a long topic, but some of it comes down to the differences between Decker[1] and Octo[2]:
* Decker can be used to make and share things that process data
* The data itself can be exported and shared, including as gifs
* Octo can't really, but it does predate the AI community's "character cards" by shoving game data into a "cartridge" gif
* Octo has LISP Curse[3] issues
I'm serious about the LISP curse thing. On top of the awful function pointers due to ISA limitations, everyone who likes Octo tends to implement their own emulator, tooling, etc., and then eventually starts down the path of compilers.
I haven't gone that far yet, but I did implement a prototyping-oriented terminal library[4] for it. Since Gulrak's chiplet preprocessor[5] is so good, I didn't bother with writing my own.
> i just want to clarify that i'm not za3k
Ty for the reminder. On that note, the larger font sizes you brought up seem more important at the moment. I don't think I can deal with 4×6 fonts on tiny screens like I once could. HN's defaults are already small enough.
with respect to winestock's account of the 'lisp curse', i think he's wrong about lisp's difficulties. he's almost right, but he's wrong. lisp's problems are somewhat social but mostly technical
basically in software there's an inherent tradeoff between flexibility (which winestock calls 'expressiveness'), comprehensibility, and performance. a glib and slightly wrong way of relating the first two is that flexibility is when you can make software do things its author didn't know it could do, and comprehensibility is when you can't. bugs are deficiencies of comprehensibility: when the thing you didn't know your code could do was the wrong thing. flexibility also usually costs performance because when your program doesn't know how, say, addition is going to be evaluated, or what function is being called, it has to check, and that takes time. (it also frustrates optimizers.) today c++ has gotten pretty far in the direction of flexibility without performance costs, but only at such vast costs to comprehensibility that large codebases routinely minimize their use of those facilities
comprehensibility is critical to collaboration, and i think that's the real technical reason lisp programs are so often islands
dynamic typing is one example of these tradeoffs. some wag commented that dynamic typing is what you should use when the type-correctness of your program is so difficult to prove that you can't convince a compiler, but also so trivial that you don't need a compiler's help. when you're writing the code, you need to reason about why it doesn't contain type errors. in c, you have to write down your reasoning as part of the program, and in lisp, you don't, but you still have to do it, or your program won't work. when someone comes along to modify your program, they need that reasoning in order to modify the program correctly, and often, in lisp, they have to reconstruct it from scratch
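to make that concrete with a contrived example of my own (nothing here is from anyone's real codebase): in c, the prototype is the written-down reasoning, and the compiler re-checks it at every call site:

#include <stddef.h>

/* the prototype records the type reasoning: xs points at n doubles,
   and the result is a double; every caller gets checked against it */
double mean(const double *xs, size_t n)
{
  double sum = 0;
  for (size_t i = 0; i < n; i++) sum += xs[i];
  return n ? sum / n : 0;
}

a lisp version like (define (mean xs) (/ (apply + xs) (length xs))) is shorter because it writes none of that down, but whether xs holds numbers at all, and which kinds, is exactly the reasoning the next maintainer has to reconstruct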
this is also a difficulty in python, but python is much less flexible in other ways than lisp, and this makes it much more comprehensible. despite its rebellion against curly braces, its pop infix syntax enables programmers inculcated into the ways of javascript, java, c#, c, or c++ to grasp large parts of it intuitively. in lisp, the meaning of (f g) depends on context: it can be five characters in a string, a call to the function f with the value g, an assignment of g to a new lexical variable f, a conditional that evaluates to g when f is true, or a list of the two symbols f and g. in python, all of these but the first are written with different syntaxes, so less mental processing is required to distinguish them
in traditional lisp, people tend to use lists a lot more than they should, because you can read and print them. so you end up with things like a list of a cons of two integers, a symbol, and a string, which is why we have functions like cdar and caddr. this kind of thing is not as common nowadays, because in common lisp we have defstruct and clos, and r7rs scheme finally adopted srfi-9 records (and r6rs had its own rather appalling record system), although redefining a record type in the repl is pretty dodgy. but it's still common, and it has the same problem as dynamic typing, only worse, because applying cadr to the wrong kind of list usually isn't even a runtime error, much like accessing a memory location as the wrong type in forth isn't
this kind of thing makes it significantly easier to get productive in an unfamiliar codebase in python or especially c than in lisp
40 years ago lisp was vastly more capable than the alternatives, along many axes. smalltalk was an exception, but smalltalk wasn't really available to most people. both as a programming language and as a development environment, lisp was like technology from the future, but you could use it in 01984. but the alternatives started getting less bad, borrowing lisp's best features one by one, and often adding improvements incompatible with other features of lisp
as 'lightweight languages' like python have proliferated and matured, and as alternative systems languages like c++ and golang have become more expressive, the user base of lisp has been progressively eroded to the hardest of hardcore flexibility enthusiasts, perhaps with the exception of those who gravitate to forth instead. and hardcore flexibility enthusiasts sometimes aren't the easiest people to collaborate with on a codebase, because sometimes that requires them to do things your way rather than having the flexibility to do it their own way. so that's how social factors get into it, from my point of view. i don't think the problem is that people are scratching their own itches, or that they have less incentive to collaborate because they don't need other people's help to get those itches thoroughly scratched; i think the social problem is who the remaining lisp programmers are
there are more technical problems (i find that when i rewrite python code in scheme or common lisp it's not just less readable but also significantly longer) but i don't think they're relevant to octo
TL;DR: You may be right about LISP issues, but my intent was to show how IO and social factors seem key to platform success
I'll skip discussing Python for the moment because I think there are a lot of things wrong with it. If I try, we'll be here forever; they mostly come down to not enough consistency, and it's pointless to discuss for reasons which will become clear below.
Whatever the nature of the cause/effect of the LISP ecosystem, I see Octo's issues as similar in result:
1. There is no way to load and persist useful data in any popular CHIP-8 variant
2. Once people get a taste, the lack of IO means they do one of two things:
* leave
* become emulator or assembly developers
3. The few who stay in Octo or CHIP-8 are interested in writing their own tools, libraries, and ISA variants
Keep in mind, I still see this as success. This is based on my understanding of the Octo author's post saying farewell to running OctoJams[1] and my time as a participant. It got me interested in assembly and has helped many other people learn and begin to use low-level tools.
Compare this to Uxn:
* Although flawed, it has useful IO capabilities
* People have built applications which they use to perform real work every day
* An already-familiar ecosystem of tools exists to cater to the interests of similarly-minded people
> 40 years ago lisp was vastly more capable than the alternatives, along many axes. smalltalk was an exception, but smalltalk wasn't really available to most people. both as a programming language and as a development environment, lisp was like technology from the future, but you could use it in 01984.
Yes, I think that's getting at what I mean. One of the motivating factors for 100r was that Xcode is a bad fit for people who live on a boat with solar-powered computers. So are Visual Studio and PyCharm.
Although the Uxn ecosystem is far from optimal, it's still better than those heavy IDEs. Better yet, it was already mostly here a few years ago. Even to this day, it feels like it's from some point(s) in a better timeline where the world wasn't full of leftpad/polyfill/AI or whatever the next ridiculous crisis will be.
In other words, Uxn is a pidgin[2] of the ideas that "solarpunk" types, or even just fans of small, understandable computers, like. At the same time, it does this without too much purism holding it back:
* A simple and familiar enough mish-mash of Forth and a low-color, plane-based display
* Low-power enough that it's cheaper than Xcode to run without extra effort
* Enough IO capabilities to get things done
Since it already had all of this as of a few years ago, it's had time for social effects to take off. For example, you can share a picture, a music track, or text made using Uxn-based software. Those files will still work even if the Uxn spec evolves in backward-incompatible ways again. Its design choices are also inherently good for (or limited to) producing art aligned with the aesthetics of the creators. That seems to be part of a self-reinforcing popularity loop at this point.
Although it's theoretically possible to make similar things using CHIP-8 based tools, nobody bothers. Even if you record the output of music sequencers which have been written for it, you can't really save your work or load in data from elsewhere. In theory, there are the 16 bytes of persistent state registers in XO-CHIP, which could be repurposed from a calculator-like thing into IO, but the fact that I'm mentioning it should underline the main issue in that community: purity concerns. Those limit it more than having only 16 "real" buttons does. Yes, you could do some interesting multiplexing or even just bit-shift every other button press to make a pretend terminal-like keyboard, but the purity focus inadvertently kills interest more than silly function pointer syntax does.
Decker is the complete opposite of Octo in this. It also goes even farther than Uxn in giving up purity concerns. Sure, it's complicated and inefficient, but it is inherently built around sharing widgets with others in a mishmash of HyperCard and Web 1.0 revival spirit:
1. You can make gifs with it, and this is probably its most popular use in the form of Wigglypaint[3]
2. When asked "um, can I borrow source from it?", the answer is "Absolutely!"[4]
This is why, as much as certain Python projects frustrate me, I'm not trying to fix their inherent Python-ness right now. I just accept that they are what they are. As you pointed out, it's better than the alternatives in many cases. Before you ask, I won't go as far as calling JavaScript good, but I'll admit it's shareable. :P
yeah, the chip-8 design is less ambitious than the uxn design and can only really handle very limited programs. on the other hand, it does run on much smaller hardware
i wouldn't go so far as to say uxn is flawed; that suggests it's fundamentally unsound. as far as i know, it isn't; it's a perfectly workable design, and it's better than anything else so far for frugal write-once-run-anywhere. i think a different design would be better in important ways, but vaporware is always better than actually implemented systems
> One of the motivating factors for 100r was that Xcode is bad fit for people who live on a boat with solar-powered computers. So are Visual Studio or PyCharm.
that's true, but you're damning uxn with faint praise here. i suspected that if you benchmarked left¹ (which, incidentally, does do syntax highlighting) against a default configuration of vim or gnu emacs, you'd find that left is the one that consumes the most power
but then i tried it, and also compared an extremely minimal editor called `ae`, and left seems to use a third as much power as emacs, but five times as much as vim and 300 times as much as ae
ae and left could plausibly run on the zorzpad. vim and emacs cannot; they need far too much ram. ae and left also have in common that they both lack undo, but left is dramatically more capable and easier to use
— emacs power usage —
i've been running my current emacs process for a week and (not counting cpu time used by the x server) it's used 34 minutes and 15 seconds of cpu, which is about 0.3% of one core of this amd ryzen 3500u. if we estimate that the cpu draws 29 more watts from the battery when running full-speed than when completely idle, and can run about 3 instructions per clock on each of 4 cores at 3.1 gigahertz, working out to about 36 billion instructions per second, emacs is consuming about 90 milliwatts and running about 100 million instructions per second on average
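spelling out the arithmetic there, since i reuse it below (all of these numbers are rough):

$$\frac{2055\ \text{s of cpu}}{604800\ \text{s}} \approx 0.003, \qquad 0.003 \times 29\ \text{W} \approx 90\ \text{mW}, \qquad 0.003 \times 36\times10^9\ \text{ips} \approx 10^8\ \text{ips}$$

implicitly that's $29\ \text{W} \div 36\times10^9\ \text{ips} \approx 0.8$ nanojoules per instruction, which is roughly the conversion behind the microwatt figures for the other editors below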
that's a lot higher than i expected, and possibly actually higher than left (i haven't tested) but it's certainly not in the same league as vscode. (this is emacs 28.2, and somewhat to my surprise, system-configuration-options tells me it's built --with-native-compilation, so perhaps it's not using the inefficient old bytecode interpreter.)
as a more precise test, to test emacs's built-in ide functionality rather than gtk and my proliferation of elisp packages, i ran `valgrind --tool=cachegrind emacs -q -nw --no-site-file` and wrote this c program in it:
#include <stdio.h>
int main(int argc, char **argv)
{
  char buf[256];
  printf("What's your name? ");
  fflush(stdout);
  fgets(buf, sizeof buf, stdin);
  for (char *p = buf; *p; p++) if (*p == '\n') *p = '\0';
  printf("Oh, hi, %s! Lovely to meet you!\n", buf);
  return 0;
}
syntax highlighting was enabled, but -nw runs it in the terminal. i compiled it, fixed bugs in it (at first i didn't have any but i wanted to ensure i was doing a fair test), jumped to error message locations, jumped to system header files, looked up manual pages in it, ran a unix shell in it, and ran the program in the unix shell and interacted with it
this took about four minutes and 8.8 billion instructions. (just starting up emacs and shutting it down that way takes 595 million, but i wanted to get an estimate of the steady state.) this is about 30–40 million instructions per second, not counting the instructions of the compiler, shell, terminal emulator, and x-windows server; so 100 million per second with a more elaborate configuration and larger files seems like a plausible estimate
— ae power usage —
i wrote the same program again in anthony howe's 'ant's editor' from 01991⁰, which is about 300 lines of c in its non-obfuscated form, and is roughly the simplest programming editor you can actually use; the user interface is a stripped-down version of vi where you exit insert mode with control-l, write the file with capital w, and exit with capital q. this took about 7 minutes (i kept hitting the wrong keys) and 39 million instructions, of which about 0.8 million were startup and shutdown. that's about 90 thousand instructions per second, or 100 microwatts: a thousandth of what emacs uses, and within the capabilities of a commodore 64 or apple ][. of course that again doesn't account for the window system, compiler, and terminal emulator, but it does account for ncurses computing the minimal updates necessary to the screen, so it's most of the work needed to avoid redrawing unchanged areas
38 million instructions divided by 278 bytes of output is about 137000 instructions per byte, but of course moving around in a larger file takes longer
— uxn left power usage —
running uxnemu left.rom in the same way from https://rabbits.srht.site/uxn/uxn-essentials-lin64.tar.gz takes 481 million instructions to start up and shut down. writing the same c in left.rom took about 5 minutes and 3.4 billion instructions; 3 billion instructions over 5 minutes is about 10 million instructions per second. this suggests that it uses about a third as much power as emacs and 110 times as much power as ae
10 million interpreted bytecode instructions per second is really pushing the limits of what the zorzpad can do. also its window is by default 684×374, about 33% bigger than the zorzpad's two screens put together, and it doesn't seem to be resizable (which i assume means it's not designed to be able to handle being resized)
— vim power usage —
finally, i did it again with vim in a terminal emulator, with syntax highlighting and manual page lookup, and vim took 680 million instructions, exactly one fifth of what left took. it took me less time, but i don't think vim does any background computation. as with ae, vim's time includes the time to compute minimal screen updates (confirmed with `asciinema rec vimtest`)
aha, i finally figured out that yes, indeed, my elisp code is being compiled to native code and saved as shared library files in /usr/lib/emacs/28.2/native-lisp/28.2-e4556eb6. the functions have names like F76632d63616c6c2d6261636b656e64_vc_call_backend_0@@Base (the hex string decodes to 'vc-call-backend'), and all the calls are indirected through registers, but `objdump -d` can decode it into recognizable amd64 assembly language, with function prologues and epilogues, nop-padding to 8-byte boundaries, tests followed by conditional jumps, that kind of thing. so emacs doesn't really have an excuse for being so slow
i thought that maybe the earlier version of left in c would be more efficient, so i git cloned https://git.sr.ht/~rabbits/left, checked out 4f127602e4e9c27171ef8f6c11f2bc7698c6157c, and built the last c version of the editor. a simple startup and shutdown cost 407 million instructions, and writing the same c program in it again in more or less the same way took 2½ minutes and 1.8 billion instructions. 1.4 billion instructions in 150 seconds are about 9 million instructions per second, which is surprisingly about the same as the version running in uxn. but just leaving it open for 2¼ minutes also used only 424 million instructions, so maybe it makes more sense to compare the 1.4 billion against the 8.8 billion of emacs, the 0.039 billion of ae, the 3.4 billion of uxn left, and the 0.68 billion of vim, since it seems to grow primarily with activity rather than just time open
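putting those cachegrind totals side by side, for the same c-program editing session in each editor:

emacs       8.8   billion instructions
left (uxn)  3.4   billion
left (c)    1.4   billion
vim         0.68  billion
ae          0.039 billion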
Someone just told me to have a look at the thread and I'm very happy I did! It's been really good for me to read back through your exchange. I'm not here to defend uxn or anything like that, I was only wondering: could you do the same test with uxn11 (instead of uxnemu)? I don't personally use uxnemu, I find it's too demanding for my laptop, and I would love to have the data for uxn11 in comparison to uxnemu from your system, if you can run X11 programs.
oh hi! delighted to hear from you! i hope it's clear i'm not here to attack uxn either
let's see about uxn11... initial signs are good, 175711 instructions to start up and shut down rather than hundreds of millions. but after using it to go through the whole left editing session, it's 175645 instructions; that suggests the real work is being done in a child process
yeah, strace confirms there's a child process being spawned off. i'll have to see if i can disable that to get more accurate measurements with valgrind, later today
i'm interested to hear your thoughts about how left and other uxn apps might be made usable under the constraints of the zorzpad display hardware: two 400×240 screens, each 35×58mm, with only black and white (no greyscale). left itself seems like it might be relatively easy to run usably on one of them?
Of course, I've really enjoyed your exploration of uxn, and all your comments are accurate.
Uxn11 has a special device to spawn linux processes (like playing an mp3 on disk with aplay); it can be disabled in devices/console. I wonder why it would be acting up, left doesn't make any requests to the console's special ports I think, I'll have to double-check.
Left is pretty heavy, it does A LOT; it's definitely not a simple text editor. Not only does it do a lot of work locating symbols and navigation strings, the front-end uses a proportional font and does a lot of positioning for it.
I don't think left would be a good candidate for the zorzpad. I can imagine a simpler IDE that uses a fixed-width font, doesn't try to make any sense of the tal code or do syntax highlighting, and doesn't support utf-8 glyphs - but Left is not it.
I've ported the classic macintosh notepad application, which I now use daily for taking notes; it has proportional font support, but it's monochrome, has a small window, and doesn't do anything fancy. Expanding this into a proper text editor with scrolling might be more realistic than trying to fit left to a monochrome screen.
But really, I think it'd be better to write a whole new text editor for the zorzpad; left has a very specific goal in mind, but I can imagine an editor like it designed specifically for that hardware. Writing text editors, or block editors, is an art that shouldn't be forgotten; anyone who's got an opportunity to write one should do it.
I've read as much of your works as I could find, you've been a big influence on the work I do nowadays, it's an honor to read a message from you. Thank you.
aw, thanks, i'm really flattered—i admire your work a lot, and i'm glad to hear i've contributed somewhat to it. you're very welcome
as for proportional fonts, a few years back i wrote a microbenchmark for proportional font layout with word wrap to see how cheap i could get it. it's http://canonical.org/~kragen/sw/dev3/propfont.c (maybe make sure you redirect stdout to a file if you run it), and it uses the n×6 font i designed for http://canonical.org/~kragen/bible-columns, derived from janne kujala's 4×6 font. the notes in the comments say that on my old laptop it ran at 70 megabytes per second, which is probably about 60 instructions per byte. if you redrew 32 lines of text with 32 characters each (about what you could fit on the zorzpad's two screens) it would be about 60000 instructions, about a millisecond at the apollo3's theoretical 60 dmips; that might be a reasonable thing to do after each keystroke. and there are some notes in there about how you could optimize it more
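the inner loop of that kind of renderer is roughly the following (a sketch of the approach, not the actual propfont.c; the widths table and the wrapping policy here are my assumptions):

#include <stddef.h>

/* per-glyph advance widths in pixels; in an n×6 font like the
   bible-columns one these would mostly be 3-6 */
static const unsigned char widths[128] = {0}; /* filled from the font */

/* lay out one wrapped line of up to cols pixels; returns how many
   bytes of s were consumed, and the line's pixel width in *width */
size_t layout_line(const char *s, int cols, int *width)
{
  int x = 0, x_at_space = 0, have_space = 0;
  size_t i = 0, last_space = 0;
  while (s[i] && s[i] != '\n') {
    int w = widths[s[i] & 127];
    if (x + w > cols) {              /* glyph won't fit: wrap */
      if (have_space) {              /* back up to the last space */
        *width = x_at_space;
        return last_space + 1;       /* consume through the space */
      }
      *width = x;                    /* no space at all: hard break */
      return i;
    }
    if (s[i] == ' ') { last_space = i; x_at_space = x; have_space = 1; }
    x += w;
    i++;
  }
  *width = x;
  return s[i] ? i + 1 : i;           /* consume the newline if any */
}

a full redraw of the zorzpad's 32ish lines would then be about 32 calls to this plus the glyph blitting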
running the same benchmark on my current laptop i get numbers from 64 to 68 megabytes per second, but for comparability with the editors, i should say that cachegrind measures 1,482,006,535 instructions for running 100000 iterations of rendering 126 bytes, which works out to about 118 instructions per byte and 120 000 instructions for the screen-redrawing example
(of course that computation takes about 2 milliseconds, while actually updating the screen will take 50000 microseconds according to the datasheet, 16700 microseconds according to some other people's experimental results, so the computation may not be the tall pole in the tent here)
propfont.c might get a lot faster if its pixels were bits instead of bytes, especially on something like the thumb-2 apollo3, which has bitfield update instructions. then again, it would get slower with a larger pixel font size
I'm currently away from reliable network and I can't manage to load the bible-columns image, I will try once we return to Canada.
I've added proportional fonts to Left because, after writing loads of THINK Pascal, I felt like I needed it, and I still enjoy this a lot more than fixed-width, but it annoys a lot of people, and even I like to write space-padded files from time to time, so Left has to support both.
If I was golfing an editor (one that I would actually have to use each day), I think I would make a block editor, fixed-width, whose block-paginated design would do away with gap buffers altogether by needing to move very little data around.
An early version of Left tried something different, where it would only display one @symbol at a time. In Uxn, "scope" or objects are defined with @label, and methods with &method, making it quite usable if you have a list of objects and methods a la Smalltalk's System Navigator. During editing, you never insert into the text file, but into a gap buffer made of only the object or method being edited. I always found that design pretty neat; the main downside is when you want to add non-source files, then you have to contend with editing a single large buffer.
The font encoding for Left is https://wiki.xxiivv.com/site/ufx_format.html, which allows me to calculate the sprite address and glyph width sort of quickly. The issue is that varvara has no blitter, everything is drawn as tiles, so it creates a bit of overdraw for each glyph. If you have a blitter, then I think there would be pretty smart encodings for drawing proportional fonts that would make them more efficient.
yup! I'll never change it, most of the projects I've made use it in some way. There's little variance in how I implement the drawing, but the specs are good enough for anything I might want to do with fonts :)
usually the issue with that image is not its byte size (it's 4 megs) but its ram usage (about 100 megapixels)
my best thought on how to handle proportional fonts is a slight variant on nick gravgaard's 'elastic tabstops'; i think we should use tabs to separate columns and \v and \f as delimiters to nest tables, so that we can do pre-css-style html nested table layout in plain ascii text files
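to illustrate (this is just my guess at how such a file might look; the exact nesting semantics aren't pinned down), shown as a c string so the control characters are visible:

/* a two-column table; the second cell of the data row contains
   a nested two-column table delimited by \v ... \f */
static const char example[] =
  "name\tnotes\n"
  "ada\t\vqty\titem\n1\tlens\f\n";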
with respect to moving very little data around, the bulk of the data that needs to be moved is pixels to display; a character is 1–3 bytes in the editor buffer, but in a 16-pixel-tall font, it averages about 128 pixels. that's 512 bytes in 32-bit rgba or 16 bytes in 1 bit per pixel. on the zorzpad, without interpretation overhead, i think i can roughly guesstimate about 100 instructions, and at a nominal 25 picojoules per instruction, that's about 2.5 nanojoules
i still haven't measured, but if i recall correctly, the ls027b7dh01 memory lcd datasheet says that maintaining the display statically costs about 50 microwatts, while inverting all the pixels once a second raises that to 175 microwatts, which is to say 125 microjoules to invert all 96000 pixels, which works out to 1.3 nanojoules per pixel. so updating those 128 or so pixels will cost about 170 nanojoules, which is a lot more than 2.5 nanojoules
i'm unclear on how much of this 125 microjoules is due to simply shifting in new lines of 400 pixels, regardless of their contents, and how much is due to the actual inversion of color of the pixels; after all, they invert polarity several times a second just to maintain the display without damaging it. if the expensive part is shifting in the pixels, then to display a single new 16-pixel-high letterform, we're updating 16 × 400 = 6400 pixels rather than 128, costing 8300 nanojoules. typing at 60 words per minute (5 letters plus a space) would then dissipate another 50 microwatts if you fully updated the display after every letter. when you're typing into previously empty space, you could maybe update only a third of the pixel rows after each letter to cut down on this, completing the delayed updates after a few hundred milliseconds if you stop typing
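spelling out that last figure: $6\ \text{chars/s} \times 6400\ \text{px} \times 1.3\ \text{nJ/px} \approx 50\ \mu\text{W}$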
it might turn out that the most important thing to do to minimize power usage is to minimize the amount of the screen that gets regularly updated, and last night i was thinking about how in bsd talk and on mit its, text would wrap around from the bottom of the window back to the top rather than scrolling. emacs does a less aggressive version of this where it does scroll, but it scrolls by half a screenful at once when the cursor goes off the screen, so that updates to the whole screen only happen every ten or twenty lines of movement or typing, rather than after every line. this was important on terminals like the adm3a that didn't have escape sequences to insert and delete lines
narrowing the view to a single symbol might avoid the need to spend energy drawing lines of text from other symbols, perhaps repeatedly as line insertion or deletion moves them around on the screen
the adm3a also didn't have escape sequences to insert or delete characters, and vi (strongly influenced by the adm3a) does a peculiar thing with its 'c' change command; if you say, for example, ct. to change all the text before the following period, it doesn't erase that text immediately, but puts a $ at the end of it so that you know what you're changing. this avoids the need to shift the rest of the line right after each character. (vim does not do this.) if the big energy suck is actually changing pixels, as i think it is on e-ink, rather than shifting them through a shift register, that might be a useful strategy to adopt: open up space to type into, with some kind of indicator that it doesn't contain spaces or any other characters. like a visible manifestation of a buffer gap, but measured in pixels rather than bytes
i don't think there's a reasonable way to hack a hardware blitter into the zorzpad, but doing it in software is very reasonable
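to make 'doing it in software' concrete, a 1-bpp glyph blit might look like this (a sketch; the 400-pixel-wide 1-bit framebuffer matches the ls027b7dh01, but the msb-first packing and the no-clipping assumption are just choices i made for illustration):

#include <stdint.h>

enum { FB_W = 400, FB_H = 240, FB_STRIDE = FB_W / 8 };

/* OR one glyph into the framebuffer at pixel position (x, y).
   each glyph row is a uint16_t packed msb-first with unused low
   bits zero, so an or-blit needs no explicit width. assumes the
   glyph fits on screen; a real version would clip at the edges. */
void blit_glyph(uint8_t fb[FB_H][FB_STRIDE], const uint16_t *glyph,
                int rows, int x, int y)
{
  for (int r = 0; r < rows; r++) {
    uint32_t bits = (uint32_t)glyph[r] << 16; /* row in top 16 bits */
    bits >>= (x & 7);                         /* align to bit offset */
    uint8_t *p = &fb[y + r][x >> 3];
    p[0] |= bits >> 24;                       /* spans at most 3 bytes */
    p[1] |= (bits >> 16) & 0xff;
    p[2] |= (bits >> 8) & 0xff;
  }
}

on the apollo3's cortex-m4, thumb-2 bitfield instructions could tighten the inner loop further, which is the bits-instead-of-bytes point above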
with respect to software that doesn't need to be efficient, sure, there are lots of things that are usable without being efficient. but in rek and devine's notes on working offgrid, which are part of the background motivation for uxn/varvara, they say:
> Computers are generally power-sucking vampires. Choosing different
> software, operating systems, or working from machines with a lower
> draw (ARM) or even throttling the CPU, are some of the many things
> we do to lower our power requirements. The way that software is
> built has a substantial impact on the power consumption of a system,
> it is shocking how cpu-intensive modern programs can be.
so uxn/varvara is not intended for software that doesn't need to be efficient, very much the contrary.
so, from my point of view, the fact that logic-intensive programs running in uxn consume 20× more energy than they need to is basically a mistake; rek and devine made choices in its design that keep it from meeting their own goals. which isn't to say it's valueless, just that it's possible to do 20× better
what i mean by 'logic-intensive' is programs that spend most of their time running uxn code doing some kind of user-defined computation, like compilation, npc pathfinding, or numerical integration. if your program spends most of its cpu blitting sprites onto the screen, well, that's taken care of by the varvara code, and there's nothing in the varvara definition that requires that to be inefficient. but uxn itself is very hard to implement efficiently, which creates pressure to push more complexity into varvara, and varvara has ended up fairly complex and therefore more difficult than necessary to implement at all. and i think that's another failure of uxn/varvara to fulfill its own ideals. or anyway it does much worse according to its own ideals than a better design would
how much complexity does varvara impose on you? in https://news.ycombinator.com/item?id=32219386 i said the uxn/varvara implementation for the nintendo ds is 5200 lines of c, so roughly speaking it's about 20× more complexity than uxn itself
and that's what i mean by 'and it's not that simple'. in the comment i linked above, i pointed out that chifir (the first archival virtual machine good enough to criticize, which is a somewhat different purpose) took me 75 lines of c to implement, and adding graphical output to it with yeso required another 30 lines of code. you might reasonably wonder how much complexity i'm sweeping under the carpet of yeso, since surely some of those 5200 lines of code in the uxn/varvara implementation for the nintendo ds are imposed by the ds platform and not by varvara. the parts of yeso used by my chifir implementation compiled for x-windows are yeso.h, yeso-xlib.c, and yeso-pic.c, which total 518 lines of code according to either sloccount or cloc.
still, uxn, even with varvara, is much better than anything else out there; like i said, it's the first effort in this direction that's good enough to criticize
i don't understand what you mean about octo vs. decker but possibly that's because i haven't tried to use either one
you definitely can't deal with a 4×6 font on the ls027b7dh01 display i'm using. it's 35mm tall, including the pixel-less borders, and 240 pixels tall. so 6 pixels is 0.88 mm, or a 2.5 point font. even young people and nearsighted people typically need a magnifier to read a 2.5 point font.