For my whole life I've been "drawing" real GUIs and then writing the logic code, not the GUI code. It started with VisualBasic 1.0 for DOS, then different versions of VisualBasic for Windows, then Borland C++ Builder and Delphi (I was using the former), then the NetBeans Swing designer and the WinForms designer in VisualStudio. And now building a GUI is such a problem that a huge number of almost-useless "mockup tools" have emerged (I really prefer pen and paper over drawing with a mouse), and we have to use artificial intelligence to produce the actual UI code or just code it manually...
It really is disappointing how modern web GUI environments are harder to use than what was around 20+ years ago.
Yes. We had drag and drop UIs decades ago. Now you can't even get a decent GUI program for laying out a web page. CSS/Javascript got so messy that even Dreamweaver became useless.
Part of the problem is that ad code requires a messy environment, so that ad blockers and click generators have a hard time. Google, by policy, does not permit you to put their ads in an iframe, where they belong. You can't even put Google Hostile Code Loader ("tag manager") in an iframe sandbox.
I believe the proper term for this is "Well, there's your problem." That sort of antisocial behavior is what caused the issue in the first place, and rather than solve it by being civil enough to the user that they don't try to block everything, they double down on the untrustworthiness and act /even more/ maliciously.
Computers operated within much more limited constructs back then. UI windows didn't resize, or it was reasonable to expect that they were fixed. How did those drag and drop tools back then handle creating a UI for screens ranging from 400pt wide to 2560pt wide?
> How did those drag and drop tools back then handle creating a UI for screens ranging from 400pt wide to 2560pt wide?
For simpler forms you'd just set the anchoring properties of the widgets in question (akRight/akBottom in Borland's VCL, or whatever the counterpart is called in WinForms or its predecessor). Nowadays it's even easier with things like, say, GTK's HBox/VBox.
> Computers operated within a lot more limited constructs back then.
Resource limits didn't stop web browsers of the era from rendering complicated tables in a variety of sizes.
In WinForms, it's still called Anchor (it really did take a lot from VCL), but since enums are scoped in .NET, you have AnchorStyles.Left, AnchorStyles.Top etc.
WinForms also had Dock (Left, Right, Top, Bottom, or Fill) which basically forced the control against that edge of its parent, taking up full length and/or height accordingly in one dimension, and using the explicitly specified size in the other. With multiple controls with Dock, they'd gradually fill the available space in Z-index order, stacking up against each other; then whatever control had Dock=Fill, would fill the remaining space.
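A minimal sketch of both mechanisms, just to make it concrete (a throwaway C# program; the control names are made up for the example):

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    static class DockAnchorDemo
    {
        [STAThread]
        static void Main()
        {
            var form = new Form { Text = "Dock/Anchor demo", ClientSize = new Size(500, 300) };

            // Dock = Fill: takes whatever space is left after the docked toolbar.
            var logBox = new TextBox { Multiline = true, Dock = DockStyle.Fill };

            // Dock = Top: always hugs the top edge, full width, fixed height.
            var toolbar = new Panel { Dock = DockStyle.Top, Height = 32 };

            // Anchor = Top | Right: keeps a constant offset from the toolbar's right edge,
            // so the button slides right as the window gets wider.
            var settings = new Button
            {
                Text = "Settings",
                Location = new Point(415, 4),
                Anchor = AnchorStyles.Top | AnchorStyles.Right
            };
            toolbar.Controls.Add(settings);

            // Docking is resolved against the z-order of the Controls collection;
            // adding the Fill control first and the edge-docked panel last gives the usual result.
            form.Controls.Add(logBox);
            form.Controls.Add(toolbar);
            Application.Run(form);
        }
    }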
So yeah, resizable windows were common, and easy to deal with. The real problem is dynamically sized controls themselves, e.g. due to DPI changes, or because the control contains dynamic content of unknown size. With those, anchors no longer work well, and you need some kind of box and grid layouts. Which are available - but it's kinda hard to edit them visually in a way that's more convenient than just writing the corresponding code or markup.
The closest I've seen to visually editing UI that can accommodate dynamically sized controls was actually in NetBeans, which had its own layout manager for Swing. That thing allowed you to anchor controls relative to other controls, not just to the parent container. Thus, as controls resized according to their context, other controls would get moved as needed.
Still, you needed to be very careful in the UI form editor, to make sure that controls are anchored against other controls in ways that make sense.
Fundamentally, I think the biggest problem with UI designers today is that they can't really be WYSIWYG, the way they used to be in the RAD era. The DPI of the user machine might be different, fonts might be different, on some platforms (Linux) themes can be different in ways that drastically affect dimensions etc.
> Fundamentally, I think the biggest problem with UI designers today is that they can't really be WYSIWYG, the way they used to be in the RAD era. The DPI of the user machine might be different, fonts might be different, on some platforms (Linux) themes can be different in ways that drastically affect dimensions etc.
Last time I checked the WPF designer, it used canvas by default - which allows you to put elements exactly where you want by simple drag and drop, like the old times in VB6 or Delphi; but then nothing actually reflows or resizes etc at runtime.
If you choose to use layouts, it can display them, but editing it with a mouse is no longer convenient. It's easier to just drop into XAML and hack on it there. On every project I worked on that used WPF (which is quite a few by now), nobody on the team actually used the UI designer, and everybody would switch the default mode for .xaml files in VS to open the markup directly, rather than the usual split view.
Using anchors and docking was extremely intuitive and easy in WinForms; I miss them heavily. I'm trying to build a UI with Qt Designer right now, and using layouts instead is quite a pain when building something sophisticated. Perhaps everything can be done with them, but becoming fluent in their behavior and tweaking is going to take time.
There are 2 major strategies for responding to such a difference:
1. Vector scaling. It makes the difference irrelevant, and the resulting UI will look approximately the same at whatever resolution.
2. Capacity scaling. I.e. keep the same font size, letting more information fit in the control without scrolling.
It may be a little bit hard to formalize but these two can be combined intelligently. It is also important to know which one the user prefers (e.g. I mostly prefer the second while many prefer the first).
Note I mentioned display _points_ rather than display _pixels_ - how do you create a UI that scales from a phone to a 27" monitor? This is more than just scaling up assets and turning on word wrap.
An early desktop publishing program, Ventura Publisher (v3 of 1990) could easily reflow complex document layouts and allowed alternate stylesheets to be applied to the same text. This is not exactly UI, but the layout was more sophisticated than what HTML can do. The idea of same content and different rendering is certainly not new.
It's a real shame. If you start with a blank file, HTML/CSS/Javascript are so much nicer now; if you don't need anything cutting edge, browsers are generally great, too. I think our technology is not so much the bottleneck as how it's used.
We've built an explicitly Visual Basic-like development environment for the web, which I think fits your bill: https://anvil.works.
It's got a drag'n'drop UI creator, you use Python to build the front-end and the back-end (with proper autocomplete, VB-style), and it even has a built-in database if you need one.
"Drop a title here" wtf what does the app doesn't create one for me when I double click (maybe ask me for confirmation if you think user intent is not clear)
Don't create your own icons, use already-known ones: Ab and Ab-but-with-underline don't clearly mean "Text" and "Link", but the word "Text" and the familiar anchor icon do.
Seems really cool, but the selection of controls looks too humble (no TreeView? I'm choosing a technology to build an app on right now and I need a tree) - IMHO this is a severe limitation on the platform's usefulness. I really hope more controls are going to be introduced.
Also, as far as I can understand, it won't let you export apps so one could run them on their own outside of your servers - this limits the usage too.
Quick glance, looks interesting... but if it "won't let you export apps", I would never buy in. And if you can export, then it suffers from the same lack of ease of understanding that Webflow suffered from at its initial launch.
I liked the tree view that NetObjects Fusion had years ago; it made bigger sites easier to work with.
Still trying to carve out time to try Pinegrow to see if it's got enough drag and drop and resize to make me no longer miss NetObjects Fusion.
just curious - why would web assembly make it easier for those tools to reappear?
I used to use those 90s GUI tools - VB's, VC's, Symantec's and Borland's Java GUI tools. Although they did work well for fixed UIs (absolute positioning), it was rather hard to get non-fixed UIs working.
IMHO Bootstrap's grid system was a real leap in this regard and (to me) still a pleasure to work with.
The main reason WASM and whatnot might make it easier is that, instead of depending on thousands of Javascript libraries the way so many modern web apps do, a more controlled environment might include a better set of standard libraries and controls.
Also, Javascript has evolved at a lightning pace, which is remarkable but also makes any complex tooling for it tricky. 10 years ago jQuery was the main thing; today React, Vue, Angular and so many other things are front end tools of choice.
The idea would be that WASM would let a particular language with a good standard library be used to create an easy-to-use draw-the-GUI-then-code setup.
But yes, it wouldn't support every language that could compile to WASM and you're right that would be even harder than creating a good tool for Javascript.
Such "easy to use GUI drawing libraries" would still need to be written in javascript. If you want actual things to appear in the browser you still need to interact with the DOM, which is actually harder with WASM, not easier.
And while it's technically possible to, say, just compile Qt to wasm and give it a CANVAS tag as a dumb drawing surface, you've just broken accessibility, and that's a deal breaker.
Then we need APIs that would allow GUI frameworks that render onto <canvas> to support accessibility. Such APIs have existed on desktop platforms for decades now, and popular frameworks like Qt use them, even when they draw all the widgets themselves.
And while accessibility is a big issue, I must sadly acknowledge that I have yet to have a project require compliance and validate its implementation, so it never gets done.
- You can use any existing external JS code by writing an external interface to it (FFI), so even external JS references will be type-checked by the compiler.
- The code editor is rudimentary and requires some more polish (code-completion, etc.)
- Debugging isn't supported via the IDE, yet
- There are still a few missing controls like a treeview and charts, but you can interface with JS products like HighCharts without issue
- The compiler still needs some work in the areas of interfaces, generics, and set support
We have a new Elevate Web Builder 3 coming out soon that has a new IDE and web/application server with built-in TLS, authentication, session management, role-based access control, database API, remote application deployment, and event logging. You can, effectively, manage and monitor any EWB 3 web server remotely from within the new IDE:
Initially EWB 3's web server will be available for Windows only, but we will be offering a Linux/Mac daemon version in early 2019.
The ultimate goal for the product is to provide a single-language solution to both front-end and back-end applications with one-click application deployment and deployment of server instances.
(Sorry for the "advertisement" - I just see this sentiment come up a lot here, and I think it's important for people to know that there are companies working on solutions)
If you are a C++ user you might want to check out QtCreator. It makes it as easy to create GUIs as Visual Basic did back in the day, and it's free (both as in price and freedom).
Thanks. I used to be a C++ user when there were no better languages for rapid GUI application development around (C++Builder/Delphi were the best). I switched to C# with WinForms many years ago. Now I'm going to try Qt5 with Python (just because there is no WinForms designer for Linux; Qt Creator doesn't feel as intuitive as C++Builder, VisualBasic and the WinForms designer did, though I believe it can do the job), but I'm certainly interested in the web GUI world, as writing a single app that runs on every computer (smartphones included) seamlessly via a web browser feels extremely appealing.
But they feel really obsolete. I don't feel like coding C++ or Pascal when there are modern languages like Python, Clojure, C#, Rust etc. and intelligent code editors like IDEA/PyCharm that are a lot less boring and more relevant. Sure, old-style visual RAD is still here, but it has been left behind as programming language progress skyrocketed.
Do you think they are exciting, and Pascal and C++ are not?
Python: slow, useless for threaded programming, full of obvious mistakes (anybody likes one-line lambdas?)
Clojure: a flaccid, viagra-less Lisp.
C#: basically the same as Java, the yardstick of "boring" languages
How can they be exciting?!
However, I agree that Rust is exciting, very exciting, because it has fearless concurrency, zero-cost abstractions, move semantics, trait-based generics and efficient C bindings.
Agreed. Sometimes I could use a simple website, and compared to WinForms or VB it's really hard these days.
I think if someone finds the right sweet spot between extensibility and ease-of-use for web apps they could make a killing. Something similar to what VB did for the desktop.
Maybe I am not a UI designer, but I much prefer rule-based approaches to drawing.
Rule-based means I give instructions like "this button must be on top of that button, horizontally centered", "this label must fit that text", "this image must be between this and that", etc., and let the layout engine deal with it. UIs are usually not paintings: window sizes vary, text length changes with localization, decorations change depending on the environment, etc. Approaching a UI like a canvas will certainly yield good results on the designer's machine, but will look out of place everywhere else, if it is usable at all.
I think it is the basis of what they call "responsive web design".
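Roughly what those rules look like once a grid/box engine owns the positions - a WinForms sketch for concreteness (control names made up), though the same idea applies to any layout engine:

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    static class RuleBasedLayoutDemo
    {
        [STAThread]
        static void Main()
        {
            var form = new Form { Text = "Rule-based layout", ClientSize = new Size(400, 200) };

            // One column, auto-sized rows: the engine owns every position and size.
            var grid = new TableLayoutPanel { Dock = DockStyle.Fill, ColumnCount = 1, RowCount = 3 };
            grid.RowStyles.Add(new RowStyle(SizeType.AutoSize));
            grid.RowStyles.Add(new RowStyle(SizeType.AutoSize));
            grid.RowStyles.Add(new RowStyle(SizeType.Percent, 100)); // leftover space

            // "This label must fit that text": AutoSize, no coordinates anywhere.
            var prompt = new Label { Text = "Pick an action:", AutoSize = true, Anchor = AnchorStyles.None };

            // "This button must be below that label, horizontally centered":
            // Anchor = None centers a control within its grid cell.
            var action = new Button { Text = "Do the thing", AutoSize = true, Anchor = AnchorStyles.None };

            grid.Controls.Add(prompt, 0, 0);
            grid.Controls.Add(action, 0, 1);
            form.Controls.Add(grid);
            Application.Run(form);
        }
    }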
This is basically how designing iOS/OS X apps in Xcode works. You set a bunch of constraints that are relative to one another. You can also specify what type of screen those constraints will work on (like iPhone vs iPad screens) and portrait / landscape.
> I am not a UI designer, but I much prefer rule-based approaches to drawing.
CSS isn't even rule based. If it was a constraint system, like "A must be to the left of B", and "C and D must have the same height", extended with "A and B must be at least this big, and drop C if it won't fit", it would be rule based.
This really depends on how the UI designer does layout.
Nothing says you need to let users do absolute positioning rather than rule-based layout when someone drags stuff around. You can help this along with multiple common views at different resolutions, so users don't try to force a pixel-perfect version.
It's funny how web development still doesn't offer the ease and productivity of early 2000s RAD environments.
Even this software is actually not simpler, on the contrary: It is far more complex, because you don't know how to create a given widget [assuming you already learned what widgets there are, because the software doesn't tell you]. You have to learn what the software recognizes and how you need to draw it in multiple strokes. Since recognition is ML-based, it is difficult to tweak and a black box to both user and developer ("why doesn't it recognize this...?").
Contrast with 90s form designers with a simple drag-and-drop palette. (They didn't have layouts just yet, that came a bit later). You can immediately see what widgets are offered and to instantiate them you simply drag them from their "reservoir" to the active area. Simplicity itself.
Early 2000s RAD environments had a much simpler model to work with, one that didn't accommodate things like changes in the size of a widget's contents well. If all you needed was a button of a fixed size that would always stay in, e.g., the bottom right corner of the window, they could do that. If you wanted a button that would automatically resize as its label grew longer, some of them could do that too (pretty much every framework could do it for labels, but many couldn't for other widgets).
But the moment you wanted something like: there are two buttons, one following the other in the bottom right corner of the window, with a certain fixed spacing between them, but otherwise dynamically sized to content, it all broke down. And this just happens to be one of the simplest scenarios - a basic dialog box with "OK" and "Cancel"!
How did it work in practice? We just made widgets "wide enough" to fit anything that could conceivably be thrown at them. If later that assumption was proven wrong - e.g. because translators came up with a very long string for the label - then the developers would have to go back and redo the UI.
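For contrast, once box layouts became available, that exact OK/Cancel case turns into a handful of declarations; a rough WinForms sketch (the names are made up):

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    static class DialogButtonsDemo
    {
        [STAThread]
        static void Main()
        {
            var form = new Form { Text = "OK/Cancel", ClientSize = new Size(400, 150) };

            // A flow panel pinned to the bottom edge, stacking its children from the right,
            // so the buttons end up in the bottom-right corner with no coordinates involved.
            var buttonRow = new FlowLayoutPanel
            {
                Dock = DockStyle.Bottom,
                FlowDirection = FlowDirection.RightToLeft,
                AutoSize = true,
                AutoSizeMode = AutoSizeMode.GrowAndShrink
            };

            // AutoSize lets each button grow with its (possibly translated) label;
            // Margin provides the fixed spacing between them.
            var cancel = new Button { Text = "Cancel", AutoSize = true, Margin = new Padding(6) };
            var ok = new Button { Text = "OK", AutoSize = true, Margin = new Padding(6) };

            // With RightToLeft flow, the first control added sits at the right edge.
            buttonRow.Controls.Add(cancel);
            buttonRow.Controls.Add(ok);

            form.Controls.Add(buttonRow);
            Application.Run(form);
        }
    }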
None of the issues you're describing really still exist now in the majority of those kinds of tools, though.
Keep in mind it was almost always the case that you could set widget sizes (or do anything else you wanted) "in code", too.
It's not as though you were ever forced to always use the visual designer for absolutely everything.
In general, there are no significant differences whatsoever between the way something like React actually works and the way something like WinForms works.
Yes, but those layout managers were also incompatible with UI designers, generally speaking.
(WinForms designer technically supports them. But it's less "drag and drop", and more like "drag and ... um, what the hell is this thing doing there now?").
For me it was always "drag and drop" + setting the respective properties, not sure what the problem is, other than not bothering to learn how to use them.
The only problem is how buggy the VS designer tends to be in some releases, forcing you to restart VS from time to time.
But that affects all kinds of stuff, including apps not using layout managers.
And regarding Motif, I surely recall the GUI designers being relatively good.
Likewise with Java Swing and designers like Netbeans Matisse.
> Likewise with Java Swing and designers like Netbeans Matisse.
Matisse is the only UI designer that I know that does true drag and drop (letting you position widgets exactly where you want them) while also producing flexible layout. And IIRC they had to write a custom Swing layout manager for that.
Sun prototyped it on Netbeans, made it open source and then when everyone was happy, it became yet another layout manager available in any Java compliant platform.
That is the whole point of a layout manager engine: they are extensible.
Which is why Project Houdini includes layout engine APIs as well.
This is not the sort of repo I would present to the public.
* There are no instructions on how to actually run the thing.
* There is no requirements.txt or similar, so I have no idea which version of dependencies I'd need.
* The repository is strewn with unnecessary files (.pyc/.ds_store/.so...), random-looking images with names like "plswork.png", a HTML file from some "starter kit"...
* I can't seem to find the React frontend that is mentioned in the readme -- on the other hand, it looks like `server2.py` is looking for them outside the repository (`".././reactExperiments"`).
Hello! Thank you for your comments; it is true that this repository is not in ideal shape. I shall work on making this much more user friendly by tomorrow. I had no idea that this would be seen by this many people. The React frontend is currently not in this repo; I shall add that along with documentation soon :) Thank you for your comments and your review.
It’s not that the code is not production ready. It’s more about first impressions. There are some really fundamental issues with the presentation. The first thing you do when starting a new git repo is create your .gitignore.
I’m also not a Pythonista and I’ve only been working with Python for about a year, but including required packages in the requirements.txt is like Python 102.
I think it's probably best to interpret it as something like a blog post presenting an idea than an actual usable project, and maybe in its current state it should have just been a blog post.
Certainly there's a lot to be improved in terms of git hygiene and publishing an easy-to-try-out project, but it seems a bit excessive to say that the author shouldn't have posted it at all.
I agree with the GP's points, if not its tone. Some basic instructions would really help anyone try this out, and the creator's the best person to write them down.
And it seems the creator plans to do just that. So, kudos.
Hello, thank you for your interest! I have fixed the repo structure and added instructions on how to build and run. Hopefully that is fine; if not, please tell me and I shall update it as soon as possible.
Thank you for fixing, I think it's close. Now, flask says "index.html" is missing :-) also step 2 should have "pip3 install -r" instead of "python3 -r".
Interesting idea, but I'm not sure I see any advantage over dragging movable/resizable components from a toolbar, while there are several obvious disadvantages.
If I was using a lot of parameterization and abstraction, this approach would be very annoying. I mean, it’s great if I just have to do one UI, not so great if I have to do many of them in slightly different ways.
This might make more sense for a designer, but they shouldn’t be so close to production anyways.
Now that is a super interesting idea. Create a flow scheme too so you can process transitions?
    flow:success
    f1:s1->f1:s2->f1:s3
    flow:error
    f1:e1->f1:e2
edit: another thought is that this concept could encourage people in your org who struggle with wireframe technology to express their ideas. Generationally and across cultures, smartphone use is now accepted. People also know how to draw with pencil and paper. Now all you are asking of them is a final DSL to express their thoughts. Lower barrier?
edit2: there is also something to be said for having someone step through their wireframe and flow control by taking pictures. It may take the abstract and create something tangible as they can logically piece their work together with actual pieces of paper?
It sounds like you don't. But many in my org come to me with an idea, and having them think through the logical pieces and build a functional wireframe would help under-resourced people like me so much.
Also, many people just think that business logic for sanitization and validation "just happens." The barrier to wireframing, for them, is too high so they don't. But in this idea, I could see someone submitting a wireframe to me and my response being "well what happens when a phone number is international?" I'm educating stakeholders on the functional cost of producing their idea.
This would theoretically create a feedback loop for future ideas and initiatives as now, they've begun to be educated on the process. They have direct experience.
Anyway, anything to lower that barrier in order to partner with and teach my executives and their supporting staff would be a huge win. At least for me.
> many people just think that business logic for sanitization and validation "just happens." The barrier to wireframing, for them, is too high so they don't
Maybe the barrier is where it should be. Or maybe it should be even higher! People who can't understand the logic of an interface have no business creating or suggesting interfaces. A UI is meant to be used, not looked at like a pretty picture in a frame. It should feel good and feel smooth and increase productivity... not look good. Some of the best looking UIs I've ever seen were also the most utterly user-hostile, unintuitive and productivity-lowering.
Sure, if you can afford to pay someone 500/hour or smth "outrageous" like that (hint: you need a world-class artist, with advanced knowledge of user psychology, who also has the brain of a business logic analyst or of a programmer involved in product design), you could get something that both looks gorgeous and feels smooth and increases user productivity 10x. But usually you need to make sacrifices, and the ones the user will hate you for are those that make his life harder despite seeming nice and slick at first.
Clarification, I'm talking about a tool like this one that incorporated flow control and process into the wireframes.
> Maybe the barrier is where it should be. Or maybe it should be even higher!
Across the industry, people in leadership positions assume that making UI/UX is easy. Those same people are usually the owners or major stakeholders of the project. Any avenue to put more functional ownership back onto that group, to empower and educate, is a worthy endeavor.
I take your point about usability as opposed to visual design, and that many slick-looking UIs actually lower productivity.
But:
In this example, unless not handling international phone numbers leads to failure of the project, that can be handled later, say once the project is approved and time estimation is being done. If I'm building a notes app, and someone is proposing a new sign up form to increase conversions, and it has a phone number field, handling international numbers is the last thing to worry about at this stage (unless international numbers are a significant problem with the old form leading to abandonment).
We shouldn't doom good ideas with irrelevant details, which are absolutely relevant later, but not now. Product development happens in phases of increasing fidelity, and issues need to be brought up at the appropriate time, not too early, not too late.
Imo this is one of those things best left unhandled, e.g. "just use a plain, mostly unvalidated text field, and throw an error only when you want to use that data via another system, like for a text message campaign". Mostly, in real life, if you want to target the entire freaking planet (not just 99% of phone-using people, but 99.999%), you'll come to realize that any validation is not enough and that some phone numbers need to contain arbitrary letters and symbols (better don't ask... the world is big and weird :P), and that yeah, those numbers will not be processable by things like Twilio, but human users with local knowledge will know how to actually "dial" them...
But it needs to be a conscious decision, to consciously choose to not-validate and to understand that you give up the ability to 100% target phone numbers for things like 2-factor-auth later on.
Not "forget that phone numbers need to be validated" and then, go and say, "oh, let's do phone-based 2-fa mandatory" or whatever user interaction messup like that.
I agree with your main point about empowering people.
It seems that people are coming to you to help estimate how long an idea takes to implement. If that's the case, I agree with everything you've said.
But if they're proposing an idea, say a new sign up form to increase conversions, phone number validation is an irrelevant detail to worry about at this point (unless that was a significant problem with the old sign up form).
> would a tool that educates and puts some of the cost back on the "idea person" or stakeholder be a good thing? I think it would.
So would I, as I mentioned in my reply to you. It's easy to propose ideas without regard to cost like a "minor enhancement" that takes 6 person-months.
You've hit upon one of the fundamental problems of gesture recognition systems: They are not self-revealing, and they don't support reselecting and browsing.
There is no easy way for them to reveal to the user what gestures are possible (short of showing a palette of commands, including animations of gestures which are directionally sensitive), and no clear and wide separation of distinct gestures, so they're difficult to learn and remember, and their ambiguity leads to a high error rate. And they're not suitable for applications where it's not easy and inconsequential to undo mistakes (like real time games, nuclear power plant control, etc).
For example, handwriting recognition has a hard time distinguishing between "h", "n" and "u", or "2" and "Z", so systems like Graffiti avoid lower case characters entirely, and force you to write upper case characters in specially contrived, distinct, non-standard ways, in order to make them distinct from each other (widely separated in gesture space). It's important for there to be a lot of "gesture space" between each symbol, or else gesture recognition has a high error rate.
The space of all possible gestures, between touching the screen / pressing the button, moving along an arbitrary path (or not, in the case of a tap), and lifting your finger / releasing the button. It gets a lot more complex with multi touch gestures, but it’s the same basic idea, just multiple gestures in parallel.
OLPC Sugar Discussion about Pie Menus: Excerpt About Gesture Space
I think it’s important to trigger pie menus on a mouse click (and control them by the instantaneous direction between clicks, but NOT the path taken, in order to allow re-selection and browsing), and to center them on the exact position of the mouse click. The user should have a crisp consistent mental model of how pie menus work (which is NOT the case for gesture recognition). Pie menus should completely cover all possible “gesture space” with well defined behavior (by basing the selection on the angle between clicks, and not the path taken). In contrast, gesture recognition does NOT cover all gesture space (because most gestures are syntax errors, and gestures should be far apart and distinct in gesture space to prevent errors), and they do not allow in-flight re-selection, and they are not “self revealing” like pie menus.
Pie menus are more predictable, reliable, forgiving, simpler and easier to learn than gesture recognition, because it’s impossible to make a syntax error, always possible to recover from a mistaken direction before releasing the button, they “self reveal” their directions by popping up a window with labels, and they “train” you to mouse ahead by “rehearsal”.
[...]
Swiping gestures are essentially like invisible pie menus, but actual pie menus have the advantage of being “Self Revealing” [5] because they have a way to prompt and show you what the possible gestures are, and give you feedback as you make the selection.
They also provide the ability of “Reselection” [6], which means you as you’re making a gesture, you can change it in-flight, and browse around to any of the items, in case you need to correct a mistake or change your mind, or just want to preview the effect or see the description of each item as you browse around the menu.
Compared to typical gesture recognition systems, like Palm's Graffiti for example, you can think of the gesture space of all possible gestures between touching the screen, moving around through any possible path, then releasing: most gestures are invalid syntax errors, and it only recognizes well-formed gestures.
There is no way to correct or abort a gesture once you start making it (other than scribbling, but that might be recognized as another undesired gesture!). Ideally each gesture should be as far away as possible from all other gestures in gesture space, to minimize the possibility of errors, but in practice they tend to be clumped (so “2” and “Z” are easily confused, while many other possible gestures are unused and wasted).
But with pie menus, only the direction between the touch and the release matters, not the path. All gestures are valid and distinct: there are no possible syntax errors, so none of gesture space is wasted. There's a simple intuitive mapping of direction to selection that the user can understand (unlike the mysterious fuzzy black box of a handwriting recognizer), that gives you the ability to refine your selection by moving out further (to get more leverage), return to the center to cancel, or move around to correct and change the selection.
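The direction-to-selection mapping is simple enough to sketch (a toy example, not taken from any particular pie menu implementation):

    using System;

    static class PieMenuMath
    {
        // Map the vector from the menu center (where the press happened) to the current
        // pointer position onto one of N equal slices. Only the angle matters, not the
        // path taken, so the user can keep moving and re-select until release.
        public static int? SelectedSlice(double dx, double dy, int sliceCount, double deadZoneRadius = 10)
        {
            // Inside the dead zone nothing is selected yet; releasing here cancels.
            if (Math.Sqrt(dx * dx + dy * dy) < deadZoneRadius) return null;

            // Angle measured clockwise from "straight up" (screen y grows downward).
            double angle = Math.Atan2(dx, -dy);          // -PI .. PI
            if (angle < 0) angle += 2 * Math.PI;         // 0 .. 2*PI

            // Offset by half a slice so each slice is centered on its compass direction.
            double slice = 2 * Math.PI / sliceCount;
            return (int)(((angle + slice / 2) % (2 * Math.PI)) / slice);
        }
    }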
The thing that this demo shows in minutes could've been typed up in text in seconds. It is not a productive way to do things. There's a reason these systems never caught on: they are unnecessary.
You may say, but "non-programmers" will use it! No, they won't. Designers will use real design tools to create (non-functional) visual designs. Programmers will bring those visual designs to functionality. That procedure works. It'll keep working. These systems are diversions, not improvements. Worthy of investigation, but not practical.
Yo - I did the Airbnb project linked ^^^ which I believe was the first of this wave of deep learning-powered sketch->UI projects (though standing on the shoulders of decades of R&D projects)
Our take was that we really do design on paper or whiteboard first & foremost, which is why our project emphasized the webcam + sharpie thing rather than drawing in-browser etc.
Here's a related thing I wrote about the need for design tools to design the real thing, rather than facsimiles of the thing: https://jon.gold/2017/08/dragging-rectangles/ - so so so much process waste is because developers have to re-implement static pictures of designs.
In our case, we didn't get buy-in to keep developing the project, but I'm kinda jazzed that so many people are running with the idea
Your solution covers only the "semantic" part. Just look at the data. It's basically a simple component tree. It would be more efficient to just type it up. It also would be more efficient to just drag rectangles from a toolshelf instead of defining the type of the rectangle by drawing extra hints.
As for the visual design, that's where you use a design tool like Illustrator or Photoshop. Typing that up (e.g. in CSS) is surely a pain, but sketching it all up is out of the question. I certainly do see room for improvement in the workflow here, but a sketchy interface isn't helping.
You have to question a lot of assumptions here, but also consider how designers are most efficient with the tools they already know and have used for years. Don't mistake something that you want to create for something that users will actually want to use.
Hey! I'm a big fan. Obviously this project as it is constructed currently isn't much. But the idea that machines can learn the code behind what artists draw is especially intriguing. I also think we'll get there in due time, with the great work being done on GANs and better scene understanding algorithms coming out. I was inspired by your team's idea, even won a hackathon with this idea! Thanks for your contribution, I'm a big fan.
Journey back in time with me to 1963, when that Sketchpad software to which I linked was unveiled. The same criticisms apply:
"The things this demo shows could have been typed up in text in seconds. Designers will use real design tools to create (rough) visual designs. Engineers will bring those visual designs to blueprints."
And yet half a century later, CAD is firmly in the domain of visual designers, where it seems so obvious that you would have to be crazy to think people would be designing in code. But hindsight is 20/20!
The way forward to visual programming might not be super clear, but we'll get there. If you don't think text-based REPL-style programming is limiting, I encourage you to check out Bret Victor's explorations of abstraction and direct manipulation. http://worrydream.com/
What a poor comparison. Have you actually used any CAD software? None of it works like Sketchpad. Design software in general doesn't use shape recognition, that's a pointless gimmick.
> The way forward to visual programming might not be super clear, but we'll get there.
This isn't even visual programming, nor is it a step in the right direction. My text editor has all kinds of visual tools. The data I edit however is textual, which has a lot of benefits.
> If you don't think text-based REPL-style programming is limiting...
I don't think REPLs are very useful for programming either.
> ...I encourage you to check out Bret Victor's explorations of abstraction and direct manipulation.
I'm aware of this stuff, it looks nice, but I don't think you need an entire visual programming language to get that benefit. If I need visualization, there are lots of tools to use.
Airplanes were once worthy of investigation but not practical. As were automobiles. Generally, the fact that something works isn't a compelling argument that something else won't succeed.
I do agree that it's a pretty high bar in this case though - it's changing the flow, not just improving it. So it'd have to get very polished to be able to compete, which I just don't think it will.
> I do agree that it's a pretty high bar in this case though - it's changing the flow, not just improving it.
My whole point is that this is not an improvement; it's actually a worse way to enter a simple data structure into the computer. It's even worse than using the already established UI paradigm of programs like Paint. Picking a tool and dragging out a box is faster, because you don't have to learn the visual language of how to draw these widgets.
It does look cool, because it makes the computer appear smart, but it's just not a good interface for actual use.
You may say, "but 'non-typesetters' will use it!" No, they won't. Authors will use real writing tools to create (non-legible) articles. Type-setters will bring those articles to print. That procedure works. It'll keep working. These systems are diversions, not improvements. Worthy of investigation, but not practical.
I worked on state of the art CASE software decades ago that would allegedly let users graphically model then generate their own systems without writing any code.
Yet looking around I see plenty of people still building systems by writing code.
Maybe one day the surf will come in for these ideas.
If you already have a consistent design language, and those pre-baked pieces, it seems like assembling them visually is the easy part. This would still leave the behavior logic and data flows.
I’d be interested to see new things like this though!
As one example, you might use a tablet and tablet pen to draw up a few alternative sketches for the same UI, each of which might have tens of UI elements. With that input mechanism, it seems a bit tedious to have to tell the computer "this is a text box" for each text box, especially when a human (or smart algorithm) looking at the drawing can see that pretty clearly.
this is pretty cool!
one thing on the image drawing: there is an "industry-wide" common practice when drawing a placeholder to represent an image, which is a square/rectangle crossed from corner to corner (if I'm explaining myself correctly).
this dates back to the print design days, but would be a shape that's a lot easier to draw than the pseudo picture that's common in OS UI's.
Yotako, which has been linked somewhere in the thread by tyingq (and which reminds me of the Palm Pilot days), uses a similar approach as well. It makes it easier and faster to draw, too.
Neat, but wouldn't it just be easier to have a palette of controls / components that the user could drag into a view, and cut out the need for interpreting a drawing?
I still create my UI’s with a sketchbook and pen / pencil. For me there’s nothing that matches it yet. Even though there are a bunch of digital product design tools nowadays, I still tend to begin with my sketchbook.
It’s fast and expressive, and always there and always on, just a single tool / interface (pen to paper), which is a huge advantage when just trying to get concepts down visually. The clincher in this decade though, is the necessity to think responsively while sketching, understanding that there are all sorts of device sizes now. Then either mock it up in a design tool later or straight code it up once I have the general concepts down. I’ve tried all sorts of things (was really disappointed when SubForm shut down, that one was kind of interesting). From concept to product, starting off with paper and pen is still the quickest route for me.
I think this is great. I see a future where designers can draw and create an interactive prototype. Anything beyond that is, my (educated) guess, long way off. Anybody hoping this will remove the need for designers or front end devs will be disappointed.
I'm not disappointed because I don't think removing the benefits of thought and effort by people who have studied and understand user interface design, interaction, usability, accessibility, Fitts' Law, etc, is such a great idea or good for users.
Drawing where to put the widgets (and not using constraints or grids or automatic layout or adaptive rules or responsive design, or user testing and performance measurement and empirical evaluation) isn't the hard or important part of user interface design.
Who is supposed to benefit from this? A company who refuses to hire a competent user interface designer and wants to crank something out really quick regardless of quality? Users spend much more time using an interface than you spend designing and implementing it, so optimizing the time and amount of mental effort you have to put into making a user interface isn't worth it if it doesn't result in a better, easier to use interface.
I imagine a WordPress-like site, where you draw widgets and bring it to life with a theme. Maybe nothing too rocket sciency, but I reckon there's a market for it.
>I think this is great. I see a future where designers can draw and create an interactive prototype.
We are already there with Framer X and competitors pending launch in the near future. There is a learning curve that most designers are not super comfortable with yet, but I expect that will improve quickly. We also are limited currently to React for Framer X but I think opening it up to other front-end frameworks is on the horizon. Exciting times!
Regardless, it's a cool idea that made me laugh a bit at first (seems almost absurd at first glance) but then got me thinking about possibilities. Good job!
It would be great if you could get it in a state where people could really try it out (even in an unpolished state), either locally or ideally with a web-based demo.
Hello, thank you for your comment. I have updated the code to a somewhat polished state with instructions on how to run and build the project. I hope this is fine. Please tell me if anything else is required of me in the issues section of the repo!
Too bad nowadays frontend dev is more about state management than layout and boxes. Designers have tons of tools for turning mockups into interactive prototypes. Non-technical people have drag and drop tools.
it's a great concept & prototype - really reminds me of the Palm Pilot Graffiti writing system and alphabet [0].
after just watching the video - seems like adding a plugin system for targeting various UI libraries (e.g. Bootstrap) would be really cool (I didn't read all of the text - maybe he suggested or already has this..)
I feel people who are only willing to point out negatives (without any appreciation for good parts) should refrain from giving feedback. Frankly speaking, such feedback isn't very motivating.
Nor is it actionable. It exists solely for the "critic" to signal how unimpressed they are, as if seeing the defects in something is some challenging feat.
We're in this obnoxious age of confusing useful criticism with any reaction one can come up with on the fly, no matter how superficial. Like it's their destiny to weigh in on something as rapidly as they can, and as if they're doing some critical service for the universe.
The comment above exemplifies this when they say "what, we're supposed to pat them on the head and say attaboy?" No, the problem is that you think you need to fire off some undigested response at all. If you have nothing meaningful to say, then just say nothing. It's okay.
What are people's thoughts on when this will actually be usable? UI eng is just another labor intensive automatable step for creatives. Next will be to automate the design part too. Don't draw your UI, generate it from user stories.