I know. It used to be common to build the 2D UIs of games in Flash. The Adobe design tools were OK, and there are third-party libraries for running Flash. There's even a Flash runtime in Rust.
Yes. I was one such designer. You could do a UI in Flash and then export all its animations to CryEngine. The actual design tools in Flash were mediocre, but some animators really loved them. For designers, it played well with Illustrator... if an Illustrator vector file was updated, you could re-import it and reuse the animation. You'd just draw your UI gizmos / HUD / whatever, assign anything to be a sprite, and hook it from code. Any complex vector animations could be created visually and then triggered. It even had full 3D capabilities, worked really well on mobile chipsets and could leverage a GPU, although in the UI-as-export-for-games context it would run on the CPU.
Sure, but I don't recall many programs / games / thick clients that would reorganize the layout of the application (adding or removing elements, reorganizing rows into columns, etc.). At least not 90s-era Windows programs, and certainly not DOS ones.
Resolution was taken into account in games, to compute a scaling factor, maybe add some black bars, and that was nearly all - fundamentally, because hardly any program was going to be used on a small, tall portrait screen at all.
So HTML was just the first one that had to do it pervasively, because of desktop + mobile web browsing of the same source - but it could have happened before.
> Resolution was taken into account in games, to compute a scaling factor, maybe add some black bars, and that was nearly all - fundamentally, because hardly any program was going to be used on a small, tall portrait screen at all.
Are you comparing applications or games? Because games don't have a fluid layout anyway[1], whether HTML5 or native application.
> So HTML was just the first one that had to do it pervasively, because of desktop + mobile web browsing of the same source - but it could have happened before.
No. All the GUI toolkits I have used to produce native GUI applications since at least 1998 had support for ensuring that elements were still accessible after resizing[2]. Even in 1998, Delphi's "anchor" worked better than anything in HTML today.
HTML was the last GUI development system to get this sort of thing.
[1] You can choose one of a dozen preset WIDTHxHEIGHT resolutions; you can't narrow the window and still expect to see all the game elements.
[2] I don't remember many native GUI applications that were broken when you halved the width, or halved the height.
Hmmm, HTML has had layout behaviour that works at all viewports since its inception -- it's just that web developers have often implemented markup that is inflexible. Just as you could make a static/inflexible layout in any of the tools you mentioned, so could you do so with HTML. The regular flow of content on a web page has always been basically equivalent to "word wrap" (I don't know the proper term, if there is one). The only noteworthy feature specifically regarding viewport resizing for web pages is "media queries", which wasn't necessary for ensuring a flexible layout.
It was always technically possible to have a flexible layout; it just didn't become popular/"standard" until around when mobile started being a popular medium via which to view that content. The prevalence of "best viewed at 1024x768 or higher" was just a symptom of inflexible design & implementation, not an actual technical limitation of HTML.
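As a minimal sketch of what I mean (untested, and the class names are invented): plain flow plus percentage-based sizing has always reflowed with the window, no media query required.

    /* Hypothetical markup: <div class="page"> ... paragraphs and images ... </div> */
    .page     { max-width: 60em; margin: 0 auto; padding: 0 1em; }
    .page p   { line-height: 1.5; }               /* text wraps naturally at any width */
    .page img { max-width: 100%; height: auto; }  /* images shrink with the column */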
P.S. I will definitely agree that laying out content in HTML has always been awkward, tedious, and inconsistent across browsers, hahah :)
Do you remember a popular Qt app that would turn a row of elements into a column of elements when shrinking a window, though?
Was it also common to make sure your Qt app worked fine on the small portrait resolution of the mobile computers that were going to be invented any decade now?
Flex is a layout system (a fairly good one, which I wish had existed on the web from the start: rows, columns, wrap - good); responsive is a design consideration (what do I, as a human being, decide should be visible when the screen is just too small to show everything?).
My claim is that "responsive" as a design issue is deeply linked to mobile computers, so it's no surprise that it wasn't a big concern before.
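Roughly, the split looks like this (an untested sketch; the class names, the 480px breakpoint and what counts as "optional" are all assumptions). The flex rules are pure layout; the media query is the human design decision about the small portrait screen.

    .toolbar { display: flex; flex-direction: row; flex-wrap: wrap; gap: 0.5em; }
    @media (max-width: 480px) {
      .toolbar { flex-direction: column; }   /* the row becomes a column */
      .toolbar .optional { display: none; }  /* and something has to give */
    }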
WriteNow https://en.wikipedia.org/wiki/WriteNow was a word processor available for the Macintosh from 1985. Its windows could be resized from full screen (512x342!) down to so small that only part of one character was visible. There was zero reason to support windows so small other than programmer amusement. The interesting thing was that the scrollbars were perfectly usable at any window size, and changed not just their size but their layout to do so. From memory: at a normal size window, the vertical scrollbar looked normal: something like this:
^
|
|
v
At a smaller window size, the scrollbar got narrower, with smaller arrows. This made some sense.
At a still smaller size, the scrollbar shrank again, and the arrows changed shape to be smaller. At this point the window might be displaying two lines of text and the scrollbar was only 20 pixels tall. This was a pointless window size.
At a still smaller size, the scrolling area itself would disappear because the arrows began to overlap. This was a ludicrous window size.
At a still smaller size, the vertical scrollbar changed to horizontal, with two tiny arrows only a few pixels tall, and a teeny scrollbar between them. This was completely pointless.
In the 90s we went from 640x480 to 800x600 to 1024x768 in fairly rapid succession, and Windows could be running in any of those modes depending on the user's hardware. Many apps were able to run responsively in any of them, though some would refuse to run below a minimum resolution.
Removing big nav bars and moving their contents into drop-down menus is pretty much the standard way of handling resizes across the web. Same goes for multi-column layouts. It would be a horrible idea to keep those things on a phone, just like it's a horrible idea to scale up a phone interface to a desktop browser window.
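Mechanically it's usually just one breakpoint swapping two presentations of the same links (a hedged sketch; the selectors and the 600px cutoff are made up):

    .nav-full   { display: flex; }    /* wide screens: full nav bar visible */
    .nav-toggle { display: none; }    /* drop-down trigger hidden */
    @media (max-width: 600px) {
      .nav-full   { display: none; }  /* narrow screens: the same links */
      .nav-toggle { display: block; } /* live behind the drop-down instead */
    }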
Having distinct layouts for mobile devices can be a good idea.
Having the interface add/remove elements for this dynamically is (a) obviously not necessary, as my desktop doesn't suddenly turn into a phone while I am browsing, and (b) not a good idea.
It seems like a neat generalisation, but it really isn't.
Just to point out why your statement is misinformed, your tablet turns from a desktop to a phone when you rotate it.
I've been designing web frameworks from the ground up since 1995. Two sizes is not how it works. You don't read the initial size of the browser window, or parse the device data to see if it's mobile, and decide from there which of two layouts to draw (mobile or desktop). You read it on the fly and redraw as necessary. All parts of the interface and the content working together.
Bootstrap, for instance, has xs, sm, md, lg, and xl classes and mixins. Often you don't hide interface elements but simply resize them. Font leading and size may change 20 times subtly as you resize a window.
If you don't believe me, load any major website in a desktop browser and play with resizing the window from narrow to wide.
If done wrong, it's bad because you're on a desktop and seeing a mobile interface when you shouldn't. The Charles Schwab website is an example of that. If done right, it should feel natural on any size screen, and you should have visual cues as you resize. Again, in many cases it's not about hiding UI elements completely, but about simplifying them so they degrade from word+icon, to smaller font, to just icon, to drop menu. Although it's hardly an example of cutting edge design, the overall AWS console handles this pretty well, shuffling what you need into smaller compartments. (Some individual services do better than others). If you open 4 of them on a desktop it's still fairly easy to find what you need, and hasn't yet degraded all the way to a mobile interface. In my app designs there are at least three levels, usually four.
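For instance, the word+icon → smaller font → icon-only degradation is often just a couple of breakpoints (illustrative only - the selectors and the roughly Bootstrap-ish cutoffs are assumptions):

    .btn .label { font-size: 1rem; }                                    /* word + icon */
    @media (max-width: 992px) { .btn .label { font-size: 0.875rem; } }  /* smaller font */
    @media (max-width: 768px) { .btn .label { display: none; } }        /* icon only */
    /* below that, the buttons themselves typically move into a drop menu */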
When I've looked in the past, Interface Builder was (as far as I could tell) a first-step-and-don't-return thing. Meaning you could lay out an interface visually, but when you want to start coding it becomes a code definition of that interface, and once you start updating that, you can't (easily) return to the visual interface-definition interface. Has that changed, or was I wrong?
Not tricky if you spend the time. I'm always speechless when people claim that lack of responsive screen adjustment was the death knell of Flash/AS3. Any language or stack is going to suck on some screens unless you test it on those screens and write code accordingly. Nothing had better performance at this than Flash, up until Google decided to stop having Chrome fire new window boundaries continuously during a resize, reporting them only once the click/drag was released. I wrote practically a whole operating system with window managers and apps in a resizable browser window in 2008 using that tool, in under 300k. And large portions of the graphical elements were done through the GUI. Do you realize how much more boring and time-consuming it is to write every media-size CSS subclass and have to test it, compared with just drawing multiple sizes of things and changing their positions and filters?
No, but the sprite/graphic implementation was much less tedious when you could use a good GUI to set up all the small bits instead of relying on a mental map of flex/grid/css to resolve every little padding or margin.
I often write 99% of an app now and then spend days dealing with single-pixel discrepancies on screens, whereas before, the layouts would likely be implemented before the code to swap between them had even started being written.
Not so -- there is a starter plan at significantly reduced cost that has deployment restrictions, but with the standard plan you build your own standalones and they work forever.
If you have a WordPress blog, you would be surprised how much you can edit in GUIs. .NET/Delphi/etc. are still around. Heck, you can GUI edit a Swing app right now if you want.
The trouble is that GUI editing is super limited, so it is worth learning to write the code yourself. And frankly, HTML with CSS3 is not that hard to make by hand: organize divs, spans, and other tags around what you need, then open the page in Chrome's dev tools and adjust margins etc. until it looks like what you want, then save the resulting CSS back to your file.
This is much better than fighting with a GUI tool.
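The starting point really can be that small; a hypothetical skeleton like this (names and values invented) is enough to start nudging numbers in dev tools and copying them back:

    <!-- invented example markup: structure first, cosmetics later -->
    <div class="card">
      <span class="title">Invoice</span>
      <span class="amount">$10.00</span>
    </div>
    <style>
      .card  { display: flex; justify-content: space-between; padding: 8px 12px; }
      .title { font-weight: bold; }
    </style>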
It’s better than fighting with a GUI tool if you care about the GUI. I just need something to work and then give it to someone. I still haven’t found anything productive outside something like Retool (I cannot use cloud tools for the work I do) that works for me. Most WYSIWYG tools now use drag-and-drop into containers that are responsive etc., which I find impossible and tedious to use. I want to throw stuff on a canvas, hook it up to the backend I made, test it, and then throw the code to a frontend colleague. I’ve found nothing like Delphi (which I used for 10 years in the 90s/00s) but for HTML yet; it’s all tedious, slow and usually incredibly buggy.
I am a senior (30+ years) software engineer and HTML/CSS really doesn’t work for me. I can do it, but I find it boring and annoying.
Livecode offers exactly what you're describing: GUI editing of a live user interface, with code able to be built-in to every object. https://livecode.com
It's based on HyperCard, so the language isn't to everyone's taste. But it's been updated to include color, Unicode, more advanced coding techniques, database access, multi-platform support and more. And the basic concept -- I want a button here, and a field there, and a slider there -- is still 100% there.
Professionals in every domain like to maintain a certain level of difficulty to keep amateurs out. The tools of a trade can never be too easy, the tax code can never be simplified, legal writing and procedure have to be opaque, etc., as professionals subconsciously reject job-security-threatening ideas.
This sabotage is subtle: A wide variety of technical decisions can be made at any time, with complex trade-offs, bounded by what the majority of developers is willing and able to handle. As we move our servers to the cloud to make our lives easier and get admins fired, the exact right level of complexity shows up in AWS and Azure to enable stable and well-paid cloud expert jobs. For that difficulty-goldilocks-zone, Visual Basic was too easy, and things like functional programming are too hard.
This is nonsense. The tax code is complex to close loopholes that smart money-motivated people find in it. Similarly, programs are complex because they solve complex problems. Of course you can write very simple programs, but they simply don't solve the complex problems. None of this is about gatekeeping anyone, all of this is about what value we actually provide.
There are some complex problems out there, but the vast majority of requirements are rather trivial, no? Heavy websites that don't do much, but with enough javascript to harm usability seem to be an issue HN readers agree on. Opening a car glove box is another simple problem that now has complex solutions (https://www.youtube.com/watch?v=lRB0gbYO3fE). Isn't it weird how those things come about? It can't possibly be incompetence, nor would it be outright evil.
I suspect it's a subconscious process where we like to build a framework that appeals to us, where we get to be 'senior' and shape the world the way it is convenient for us. The antidote is a clear commitment to simplicity, to meditate over Picasso's Bull and apply that to our work.
> Professionals in every domain like to maintain a certain level of difficulty to keep amateurs out.
Not all of us. I fervently wish that implementing the various platform-specific accessibility APIs, and the corresponding tricks for canvas-based GUIs in web applications, was easier, so more GUIs would be accessible. I wish I wasn't one of maybe a few hundred people in the world who had substantial experience with the UI Automation API on Windows, for instance. I'm working on an open-source project to try to package up that specialized knowledge in a reusable implementation. But it takes work to make inherently complicated things easier.
Slightly off on a tangent, maybe... but boy do I miss Macromedia Director. It was a big part of the beginning of my career.
Director allowed quick development of multimedia catalogs and even games. I wonder, nowadays, what has taken its place? Let's say that I want to throw together some media assets, synchronize their behavior, add some functional logic and package it all as a standalone binary to distribute - what should I use? An engine like Unreal, or something else? Are there any authoring tools for that?
These are totally honest questions. I don't know the "state of the art" nowadays, or whether it's even possible, with an acceptable learning curve, for someone to do the kind of things we used to do with Director.
And then there was HyperStudio on the Mac. I'm pretty sure it had native color before HyperCard. Later versions were able to export as HTML5. Ted Nelson remarked, "HyperStudio was a paradigm challenge of a serious nature."
FWIW, HyperCard never had native color. There were some plug-ins to allow it to render color graphics, which enabled Myst, if I recall correctly, but HyperCard’s native tools and language couldn’t interact with the resulting renders.
HyperCard was the way I learned visual programming when I was young. For years, I've wondered why nobody had created anything similar. This is a wonderful modernization of the HyperCard idea and a love letter to the original software.
You all probably know this… Contemporary computing behavior is for the cursor to change to a little hand as it rolls over a link or button. This was first introduced by HyperCard, so it is there as an everyday reminder of its legacy.
On the same note, I often wonder why the mouse cursor on so many OSes is identical to the one used on the Macintosh[0]. Did they really all just copy what was in the Macintosh System Software, or was there an earlier pioneer of the typical black arrow with a white outline?
Ah yeah but the "tail" on that arrow is much longer! The one I see in Linux and BSDs appears to be literally pixel-for-pixel identical to the Macintosh one, though I haven't exactly done a closeup comparison.
> Ah yeah but the "tail" on that arrow is much longer! The one I see in Linux and BSDs appears to be literally pixel-for-pixel identical to the Macintosh one, though I haven't exactly done a closeup comparison.
They are quite similar, but there are more pixel differences between current Linux/BSD pointers and the Macintosh pointer than there are between the Macintosh pointer and the Alto one (roughly a 1-pixel difference).
IOW, Macintosh is closer to the Alto one than it is to the Linux/BSD ones.
To some degree there is only one solution to this problem of arrow-cursor design. Verticals, horizontals and 45-degree diagonals render effectively, even on low-resolution screens. The arrowhead of a mouse cursor consists of: 1 vertical, 1 horizontal, 1 leftward 45-degree diagonal and 1 rightward 45-degree diagonal.
HyperCard absolutely had hyperlinks, and later versions had them in recognizable form before the web did — you could link from an arbitrary range of text in a text field.
You could link to content _in another Stack_ (file) , for that matter. HyperCard absolutely had hyperlinks :) They were just called Buttons, but you could not only directly link to something, but also attach scripting to do additional stuff, pretty much exactly like the <a> tag we use and attach click events to today. I mean, I guess it wasn't like a piece of inline text, but you would just overlay a transparent button. That's a method I used, but there were probably better ways to link text. It was a long time ago, tough to even remember haha
I don't know where you lived but where I did internet adoption was slow, and most computers operated in isolation for years before everyone was connected. Floppy disks and CD-ROM reigned supreme for quite a while. Long enough for a new version of HyperCard to have come out with hyperlink support if that had been a priority.
Yes, good points, but this just shows it wasn't only one thing that made HTML win over hypercard. It did multiple things right, and also was in the right place and the right time to get started. CERN actually had a network that was also used by scientists to exchange documents. Hyperlinks to documents on other computers made sense there. Conquering the desktop in general took a little longer.
The way I remember it, HyperCard was hosted locally - as in, you'd get it on a floppy and then copy it to your computer. So linking to another computer didn't really make sense in that pre-Internet era.
Certainly, I wouldn't want spammers to be able to insert links into my text.
However, if there was an option in my browser to see who had linked to the content, sorted by some reasonable measure, such as page-rank, don't you think that would be a good thing for the web?
My recollection is the original graphical browsers were all editors. I’m pretty sure for example ViolaWWW was an editor, and directly referenced HyperCard too.
[1] https://beyondloom.com/decker/tour.html