And IMHO statistics is the all-important base one needs to wield ML properly. For starters, one should know the difference between using the OLS algorithm and simple linear regression.
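To make the distinction concrete (my reading of the comment, not the commenter's own example): simple linear regression is the *model* y = a + b·x + noise, while OLS is just one *estimator* for it. A minimal sketch fitting the same model two ways, once with the OLS closed form and once with plain gradient descent on the squared error:

```python
import numpy as np

# Simple linear regression as a model: y = a + b*x + noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 + 3.0 * x + rng.normal(0, 0.5, 200)

# Estimator 1 -- OLS closed form: minimize squared residuals analytically.
b_ols = np.cov(x, y, bias=True)[0, 1] / np.var(x)
a_ols = y.mean() - b_ols * x.mean()

# Estimator 2 -- gradient descent on the same squared-error loss:
# a different algorithm fitting the same model, converging to
# (nearly) the same estimates.
a, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    resid = y - (a + b * x)
    a += lr * resid.mean()          # -dL/da, L = mean(resid**2) / 2
    b += lr * (resid * x).mean()    # -dL/db
```

Two different algorithms, one model: that's the distinction being pointed at.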
Roads certainly limit where one can drive, but the alternative is pretty disastrous.
As with anything: there is a time and place. Design systems are wildly powerful for scaling, creating a cohesive language across a brand or product, and moving quickly.
Just because you have a design system doesn't mean the buck stops there. You can keep building new roads if you know where you need to go and the existing roads won't take you there.
What they are demonstrating is clearly for experimenting with layouts and screens using their own predefined elements. With broken-up elements and basic design kits like PaperCSS, this would be a very useful tool for quick prototyping.
At some point unselfconscious design cultures must give up some of their innocence if they want to mature into self-conscious design cultures/systems [1].
1: “Notes on the Synthesis of Form” by Christopher Alexander.
I’ve been saying this for a while, but software development has never really felt like symbol making (rather, some engineering-type profession), so people dismiss this notion. I’m beginning to feel vindicated.
Correct me if I'm wrong but is this basically doing what Adobe & others have already done? The only difference I noticed is they're using a camera & image recognition as the input instead of the designer dragging & dropping 1 of the 150 components on to a digital canvas in a software application?
I do get that it's a demo that shows the possibility. I just feel I'm missing something that makes this stand out. I did only watch the first video & skim the article.
In practice there's a big difference in thought process and workflow between drawing freehand on paper and flipping through a UI library with a drag and drop interface. I think this is true even if your task is just composing from the same well defined set of components.
If they can actually pull this off, I can see it being very useful. And if it works really well in conjunction with other experiments currently ongoing, I can imagine it even affecting the way teams are structured and how a designer's day-to-day work goes.
The recognized vocabulary seems to be fixed to a limited UI language, so is it fair to call this freehand? Rather, there is a palette that you access by roughly sketching out a predefined form.
This won’t be that useful for many designer sketches, where the visual language is mostly undefined.
A perfect example of several congruent technologies being combined in novel ways to create something that opens up entirely new ways of working. Design systems + ML computer vision. Incredible.
Looking at the embedded youtube video the article might have been published after Sep 7, 2017. I can only guess, though, since they don't include dates on any articles they publish.
This is insanely great. An ideal system would develop something like this to do PSD to HTML, instead of having it identify predefined components.
From my 5-minute understanding of the problem, a non-deep-learning solution:
Step 0: you should have some training data. Draw it (I believe 50 examples of each class you want to detect should be enough to start, but you can think about doubling it).
Parameterize your model, split your data into training and testing... yada yada...
With the generated model, you can feed in individual detected contours and it will give you the probability of fitting each class; use that to feed an API which provides data to a React component that renders HTML.
PS: you'll also need an image-capture app which reads the image in real time, applies the filters, and detects contours before passing them to the classifier.
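The contours-to-probabilities step above can be sketched in plain NumPy. This is illustration only, not the commenter's implementation: the class names, the crude shape features (aspect ratio and fill ratio of a binary mask), and the nearest-centroid classifier are all my assumptions; a real pipeline would use proper contour extraction (e.g. OpenCV) and a stronger classifier.

```python
import numpy as np

# Hypothetical UI component classes for the sketch.
CLASSES = ["button", "text_field", "image_placeholder"]

def features(mask: np.ndarray) -> np.ndarray:
    """Crude shape features for a binary contour mask:
    aspect ratio and fill ratio (fraction of filled pixels)."""
    h, w = mask.shape
    return np.array([w / h, mask.mean()])

def fit_centroids(masks_by_class):
    """'Training': average the feature vectors of each class's examples."""
    return {c: np.mean([features(m) for m in ms], axis=0)
            for c, ms in masks_by_class.items()}

def classify(mask, centroids):
    """Return {class: probability} via softmax over negative distances."""
    d = np.array([np.linalg.norm(features(mask) - centroids[c])
                  for c in CLASSES])
    p = np.exp(-d) / np.exp(-d).sum()
    return dict(zip(CLASSES, p))

# Tiny made-up training set: one mask per class.
btn = np.ones((10, 30))                                  # wide, filled
tf = np.zeros((10, 30))
tf[[0, -1], :] = 1; tf[:, [0, -1]] = 1                   # wide outline
img = np.ones((20, 20))                                  # square, filled
cents = fit_centroids({"button": [btn], "text_field": [tf],
                       "image_placeholder": [img]})

# A new wide filled contour should look most like a button.
probs = classify(np.ones((12, 36)), cents)
```

From here, the `probs` dict is exactly the kind of per-class fitting probability the comment describes handing to an API for a React component to render.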
First of all thanks a ton! I managed to get started on the detect edges part with wolfram programming cloud. I'll have to do some reading on the classifier, thanks for laying it out like this.
Very interesting concept. It reminds me of the discussion I've seen around Smalltalk, in which Smalltalk fans say that dev environments should be live and should immediately react to changes. The idea of being able to sketch something up on a whiteboard and having the UI come to life on a screen next to me instantaneously really feels similar.
On the other hand, who is the guy doing the talk at the bottom of the page? That was totally cringy. I felt like he didn't know what he was talking about.
SILK: https://www.youtube.com/watch?v=VLQcW6SpJ88
Denim: https://www.youtube.com/watch?v=tCVYKgewDXc