Sketching Interfaces (airbnb.design)
158 points by adriand on Nov 3, 2017 | 31 comments



Some similar experiments, dating back as far as 1996:

SILK: https://www.youtube.com/watch?v=VLQcW6SpJ88

Denim: https://www.youtube.com/watch?v=tCVYKgewDXc


Thanks for the links


This is really incredible.

I've always loved the idea of "box and arrow" programming for flows, but in practice the interface was always too cumbersome -- typing was faster.

But coming up with a design language that a human can sketch and a machine can interpret really breathes new life into that idea.

Love it!


Seeing things like this just makes me want to get into ML. I'm sure there's a ton of complexity behind it, but it seems like it would be so rewarding.


Just know that ML jobs are just statistician jobs with sexier names.


Depends 100% on where you are. Data Science/ML engineer titles can mean dramatically different things at different companies.


And IMHO statistics is the all-important base one needs in order to wield ML properly. For starters, one should know the difference between using the OLS algorithm and the simple linear regression model.
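To make that distinction concrete, here's a minimal numpy sketch (the data points are invented for illustration): simple linear regression is the *model*, and OLS is one *estimator* for its coefficients.

    import numpy as np

    # Toy data, invented for illustration.
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

    # Simple linear regression is the MODEL: y = b0 + b1*x + noise.
    # OLS is one ESTIMATOR for it: pick b0, b1 minimizing squared residuals.
    X = np.column_stack([np.ones_like(x), x])     # design matrix with intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS solution
    print(beta)                                   # [b0, b1] estimates

You could fit the same model with a different estimator (robust regression, maximum likelihood under non-Gaussian noise, etc.), which is exactly why the two concepts shouldn't be conflated.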


I'd be inclined to agree with Adam Michela when he says design systems restrain creativity. They are the industrialization of user interfaces.


Roads certainly limit where one can drive, but the alternative is pretty disastrous.

As with anything: there is a time and place. Design systems are wildly powerful for scaling, creating a cohesive language across a brand or product, and moving quickly.

Just because you have a design system doesn't mean the buck stops there. You can keep building new roads when you need to go somewhere the existing roads won't take you.


What they are demonstrating is clearly for experimenting with layouts and screens using their own predefined elements. With broken-up elements and basic design kits like Paper CSS, this would be a very useful tool for quick prototyping.


At some point unselfconscious design cultures must give up some of their innocence if they want to mature into self-conscious design cultures/systems [1].

1: “Notes on the Synthesis of Form” by Christopher Alexander.


Software development is applied semiotics.

I’ve been saying this for a while, but software development has never really felt like symbol-making (it reads more like an engineering profession), so people dismiss the notion. I’m beginning to feel vindicated.


Correct me if I'm wrong, but is this basically doing what Adobe & others have already done? The only difference I noticed is that they're using a camera & image recognition as the input, instead of the designer dragging & dropping one of the 150 components onto a digital canvas in a software application.

I do get that it's a demo that shows the possibility. I just feel I'm missing something that makes this stand out. I only watched the first video & skimmed the article, though.


In practice there's a big difference in thought process and workflow between drawing freehand on paper and flipping through a UI library with a drag and drop interface. I think this is true even if your task is just composing from the same well defined set of components.

If they can actually pull this off, I can see it being very useful. And if it works really well in conjunction with other experiments currently ongoing, I can imagine it even affecting the way teams are structured and how a designer's day-to-day work goes.


The recognizer seems to be fixed to a limited UI language, so is it fair to call this freehand? Rather, there is a palette that you access by roughly sketching out a predefined form.

This won’t be that useful for many designer sketches, where the visual language is mostly undefined.


A perfect example of several congruent technologies being combined in novel ways to create something that opens up entirely new ways of working. Design systems + ML computer vision. Incredible.


Looking at the embedded YouTube video, the article might have been published after Sep 7, 2017. I can only guess, though, since they don't include dates on any articles they publish.


This is insanely great. An ideal system would develop something like this to do PSD-to-HTML, instead of having it identify predefined components.


If I wanted to build something like this as a copy-paste nerd with some Python experience, where would I start?


From my 5-minute understanding of the problem, here's a non-deep-learning solution:

Step 0: You need some training data. Draw it (I believe 50 examples of each class you want to detect should be enough to start, but consider doubling that).

First you need to look at the OpenCV library, specifically the findContours function: https://docs.opencv.org/2.4/modules/imgproc/doc/structural_a... - I'm linking to the C++ API, but Python has the exact same function.
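A minimal sketch of that step (the filename and preprocessing choices are my own assumptions, and note the return signature differs between OpenCV 3.x and 4.x):

    import cv2

    # Hypothetical input: a photo of a hand-drawn UI sketch.
    img = cv2.imread('sketch.jpg', cv2.IMREAD_GRAYSCALE)
    blur = cv2.GaussianBlur(img, (5, 5), 0)

    # Invert-threshold so dark pen strokes become white foreground blobs.
    _, thresh = cv2.threshold(blur, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # OpenCV 4.x returns (contours, hierarchy); 3.x prepends the image.
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)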

Make sure your contours are free of rubbish data, because noise can affect the outcome of the classifier.
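Continuing the sketch above, one cheap cleanup pass is to drop tiny contours as noise (the area cutoff is a guessed starting point, not a recommendation):

    # Specks below ~100 px^2 are likely smudges, not UI elements; tune per image.
    MIN_AREA = 100.0
    clean = [c for c in contours if cv2.contourArea(c) >= MIN_AREA]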

With the attributes extracted from each individual contour, you pass this data to a multiclass classifier (http://scikit-learn.org/stable/modules/multiclass.html).
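What counts as an "attribute" is up to you. Continuing from the cleaned contours above, here is one hypothetical per-contour feature vector (all four choices are mine, not a standard):

    import cv2
    import numpy as np

    def contour_features(c):
        # Area, bounding-box aspect ratio, extent (fill ratio), and vertex
        # count after polygon simplification.
        area = cv2.contourArea(c)
        x, y, w, h = cv2.boundingRect(c)
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        return [area, w / float(h), area / float(w * h), len(approx)]

    X = np.array([contour_features(c) for c in clean])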

Parameterize your model, split your data into training and testing sets ... yada yada ...
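A sketch of that step with scikit-learn, assuming y holds one hand-assigned label per training contour (the random forest and the 75/25 split are arbitrary defaults, not recommendations):

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # X from the feature step above; y is your hand-labelled classes,
    # e.g. 0=button, 1=text_field, 2=image (names are illustrative).
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    print(clf.score(X_test, y_test))  # rough accuracy on held-out sketches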

With the trained model, you can feed in each detected contour and it will give you the probability of fitting each class; use that to feed an API which provides data to a React component that renders HTML.
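Continuing the sketch: per-class probabilities come from predict_proba, and emit_component below is a hypothetical hook standing in for whatever API feeds the React renderer:

    import numpy as np

    # x_new: feature vector for one freshly detected contour.
    probs = clf.predict_proba([x_new])[0]
    best = int(np.argmax(probs))
    if probs[best] >= 0.6:  # confidence cutoff is a guess; tune it
        emit_component(best, probs[best])  # hypothetical rendering-API hook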

>> PS: Ah, you'll also need an image-capture app that reads the image in real time, applies the filters, and detects contours before passing them to the classifier.

>> PS2: I'm available for projects :P


How much time would this take to build? I'm really interested in trying to build this, or maybe creating an open-source package.


I suspect someone with enough experience can hack this in a couple of days.


How can I contact you?


Are you interested in building this? I'm interested too if you're looking for collaborators!


Yes!


I updated my profile with contact details.


First of all, thanks a ton! I managed to get started on the edge-detection part with Wolfram Programming Cloud. I'll have to do some reading on the classifier; thanks for laying it out like this.


For anyone reading this:

- Apparently this is done using http://ml4a.github.io/guides/DoodleClassifier/

- Also, www.computervision.ai enabled me to get started with image files pretty fast: drag and drop to train a classifier.


The video demo showing their sketches turned into a high-fidelity design was really neat.


It's great!


Very interesting concept. It reminds me of the discussion I've seen around Smalltalk, in which Smalltalk fans say that dev environments should be live and should react immediately to changes. The idea of being able to sketch something on a whiteboard and having the UI come to life on a screen next to me, instantaneously, feels really similar.

On the other hand, who is the guy giving the talk at the bottom of the page? That was totally cringey. I felt like he didn't know what he was talking about.

Edit: wording. tone.



