> So there is no “training” in the components part at all. It uses pre-defined components that the Figma team designed. They made complete apps with designs based on existing apps: weather, fitness, etc. If you ask the AI to create a weather app, it would use the weather app components
Yes, that is exactly what the AI is supposed to do when given a data set that is too small for training and a generic, creativity-free prompt.
The AI system generated a weather app that looked like Apple's because the Apple weather app's UI is one instance of a bunch of almost indistinguishable weather UIs. If humans don't show any variability or creativity in creating a weather app, why would you expect an AI, trained on human knowledge, to do any better?
If AI has no creativity, then it shouldn't be able to compete in the slightest with actual humans. Creativity is such a fundamental aspect of creation that you would think something lacking it shouldn't even be comparable, yet many people are worried.
Anyone who cares about public health would be worried that so many people eat slop from McDonald's. But if McDonald's food is bad and unhealthy, it shouldn't be able to compete in the slightest with real food.
Food from McDonald's is as real as healthy food. It's food; there's nothing fundamentally different, just lower quality. If that's the point, then you're saying there's also nothing fundamentally different between stuff created by humans and stuff created by an AI, just lower quality.
We'd be a healthier society if the early days of ultra processed food had been met with skepticism instead of prostration to corporations changing the nature of food for profit.
A human can be more or less creative, but there is always some creativity; for a machine, it is controversial to say whether there is any creativity at all. Going from not very creative to very creative seems a much easier task than creating a machine with some creativity to begin with.
I think there is creativity as a process (which is not what I think the machines do) and creativity in the output, as a perception/judgement of a work. What I meant is that I see machines as emulating human creativity, as in producing something that has some of the characteristics of work that is usually seen as creative. The quality of creativity, in this sense, is to be seen in the output, not the process. AI is trained on human (creative) output, so its results can seem "creative" because of that. AI does not transform multisensory inputs and lacks any sort of experience or coupling with the world. It cannot be really creative not (just) because we have not figured out the right algorithm, but also because it is not transformative enough. But the emulation itself can produce outputs that humans will see as containing creativity.
Maybe the conclusion is that we do not really need creativity for some tasks; what we need are certain aesthetic and other rules that the AI can learn through its training. Whether that makes for a more or less boring future, where everything is the same and nothing really changes, remains to be seen.
I disagree. I don't see how a model that is trained with multimodality (input and output) and that can explore the environment would acquire creativity. I can see it being more creative, but the idea that these specific characteristics should suddenly make the model creative instead of emulating seems odd. I also disagree with "not transformative enough"; they are very transformative.
I never said that a model "trained with multimodality" will acquire creativity just because of that.
How transformative they are is probably a subjective judgement, depending on what one means. For example, the way I see it, when we do poetry, we transform our experience into words. What speaks to people through poetry is not the words themselves, but the pointers to the connected experience they contain. Poetry is not about writing a nice poem by predicting the next token based on statistical correlations. Current AI is not transformative enough because it does not transform lived experience into words; it predicts tokens based on what tokens it was trained on.
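To make that concrete, "predicting the next token based on statistical correlations" looks roughly like this toy bigram sketch in Python (a deliberately crude stand-in for a real transformer; the corpus is made up). Every output word is just a draw from co-occurrence counts, with no experience anywhere in the loop:

    import random
    from collections import defaultdict

    # Count which word follows which in a tiny made-up "corpus".
    corpus = "the rain falls and the rain stops and the sun returns".split()
    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)

    # "Write poetry" by repeatedly sampling the next token from those counts.
    word, line = "the", ["the"]
    for _ in range(6):
        word = random.choice(follows.get(word, corpus))
        line.append(word)
    print(" ".join(line))

Real models replace the counts with learned weights over billions of tokens, but the relationship between training text and output has the same shape.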
This is likely it. Had it been some small company, they'd probably take the Microsoft AI chief approach and claim it's fair use, open-access internet, etc. Apple OTOH are litigious.
Information about our world is fundamental to creating something recognizable to humans; otherwise it would be just noise. That doesn't mean you just copy the interface of the Apple Weather app.
The skeptic in me says: who cares? This is just making it easier to do what app developers are already doing. The number of copycat apps in every app store is pretty ridiculous.
The idealist in me thinks it's incredibly sad that a major tool provider believes this is the future of design tools.
For quite a few in the industry, having to design a UI is a chore, not a core competency. They want to be able to go from "Here's a description of what my app does" to how humans use it, and skip the work of laying out a usable 2D visual interface space efficiently and cleanly.
Consider it analogous to compilers; people want to just make the program, they don't want to translate the program to the actual symbols the machine can act upon by hand. This tech is that line of thinking applied in the other direction.
Well, that's why there are people in another industry, whose core competency is designing UI - but writing the code behind their UI is a chore for them.
Why don't these few in the first industry cooperate with people from the second industry? It would be in the best interest of the end users of those software systems.
As always, it depends on whether a machine can automate the task.
There are people whose core competency is chip layout also; they are rapidly being displaced by automated chip layout heuristic engines, which can do it way faster and (often) better on chips far more complicated than what a human can practically do.
Didn't they train the initial version of this AI app on the design system that Apple made exclusively for Figma? If so, can't it be argued that the AI is just doing its job, based on the limited training set it was given?
The designs produced by the tool are indistinguishable from what you’d get if you told a designer you wanted the Apple weather app with slightly different colors and font sizes: https://x.com/asallen/status/1807675146020454808
“Within hours of seeing this tweet, we identified the issue, which was related to the underlying design systems that were created. Ultimately it is my fault for not insisting on a better QA process for this work and pushing our team hard to hit a deadline for Config,” Figma CEO Dylan Field said on Twitter.
Shocker, another CEO pushing for GenAI with little to no understanding of what the technology actually is.
CEOs and upper management have long pushed for things they don't understand (see Crypto). But at least in the past, the worst-case scenario was wasted money and a failed initiative.
Now it is putting out 'tools' that are borderline dangerous (see Google) and/or poorly conceived and that infringe on other people's work.
That's exactly how I read it. Sounds like they have a set of underlying templates to train from, and one of those must have been some type of Apple Weather app recreation.
My worry is that going down the rabbit hole of what is plagiarism and copyright is not productive. Humans are inherently rip-off engines by this definition: everything we create is some remixed version of acquired knowledge created by someone else. Where do you draw the line? How much novelty must there be? Can you police this remixing at scale?
Tough questions when a machine can create novels in seconds that are as good as human-written novels over many years. The value of knowledge is about to plummet.
> Where do you draw the line? How much novelty must there be? ...
> Tough questions when a machine can create novels in seconds that are as good as human-written novels over many years. The value of knowledge is about to plummet.
You draw the line in a different way, different regimes for humans and machines: a friendly one for human creativity, a prejudicial one for machine "creativity". Sort of like how it's murder to physically crush and dispose of one of your human employees, but it's fine to crush and dispose of an old computer or car.
It's probably a pipe dream, because our society is cruel and indifferent, but it's the way it should be if you actually care more about people than systems.
Maybe we should start talking in terms of scale, then. Yes, we humans are inherently rip-off machines; melding that together with our conscious experiences is where creativity is born. But we aren't capable of ripping off at industrial scale: we get inspired by other work and by a few people we look up to, and over time we develop our sense of taste and use our creativity to transform all these experiences into something new.
An industrial rip-off machine ingesting all of creation to spit out something else is not something we've seen or dealt with before. If we ignore the scale aspect of it, we are very ill-equipped to deal with the consequences.
Scale always matters. A pretty good example of that is social media: we always had the village cranks, the conspiracy theorists, etc., but given the scale at which social media amplifies them, we are now dealing with issues never seen before.
Humans have the capability to draw upon their influences to re-interpret and innovate in ways that lead to a new, unique interpretation, moving standards forward. AI always mimics, nothing more.
Eh, the rabbit hole already leads to you being in the wrong when you copy other humans. Why would it be any different if you as a human get an AI to copy another human, compared to if you as a human use a paintbrush to copy another human?
> leads to you being in the wrong when you copy other humans
How so?
> Why would it be any different if you as a human get an AI to copy another human, compared to if you as a human use a paintbrush to copy another human?
No difference indeed. AI is just a tool, like Photoshop or a paintbrush. Unfortunately, it's practically impossible to argue in court, at scale, whether two works are "substantially similar". That only happens for extremely high-profile cases at the moment; most artists are copied and never see a dime, because the definition of "substantially similar" isn't black and white.
This seems to be happening a lot with these things. "Here's an amazing new thing... oops, sorry, actually, it's terrible, let us remove that." This, the bonkers Microsoft screenshot-everything feature, various Google things...
We've been listening to AI idea guys and shovel sellers drone on about how "AI is making X job obsolete" on podcasts and Twitter and Discord, and watched them shout down anybody daring to claim that real human work can't be replaced that easily.
Glad that we've finally reached the point where the rubber meets the road, and AI products have to prove themselves as fit for purpose, instead of just preening for more investor $$$.
I imagine the engineers are aware of the issues and know it will be DOA, but that high up they feel the pressure to cash in on AI or risk falling behind, and they find it better to launch and then remove an AI feature (to show they are actively working on AI features) than to not launch anything at all.
E: Reminded of an anecdote from when I studied biomedical engineering. The professor told us about an endoscope system that worked better than the industry standard. The company was so excited about this that they went to a medical conference and told a room of doctors that it was so easy, a nurse could do it. The company went out of business.
This can be said about every AI system/software, not just Figma's. First, data is gathered for "self-supervised" training. Then some product is built on top of it to gather users. Once the users show up, their data is in turn used to fine-tune the system, in order to continue gathering data and subscriptions from those users.
The logic of AI companies is very simple and the entire value proposition is in how efficiently the company can convert user data/feedback into features that users will pay for so that the AI company can continue paying their cloud bills.
A blog post of a tweet that doesn't include the image so you have to click the tweet link to see the comparison. The tweet should have been linked and not the pointless blog post.
Some extra context the tweet doesn't provide, which the article does, is pointing to what the author and an article they link see as the uniquely confusing temperature bars of Apple's Weather app, which the Figma generations copy.
I had an interesting thought: if coding AI gets good enough, will it eventually be possible to pirate SaaS?
Probably not extremely complex, deep SaaS, but about 80% of it is more or less just a UI in front of a database and some associated services. A very good coding AI could probably clone at least the UI and the database by setting a bot loose on the system to learn it.
Not sure you’d even call it piracy except maybe in spirit. I suppose their ToS could forbid it.
Plus, the AI could understand the business logic from the buttons and what they say. If a button says "Ban user" or "Play", it's pretty clear what is supposed to happen. You just have to show it all the interfaces.
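As a rough sketch of what "setting a bot loose on the system" could look like, here's a minimal Python example using Playwright (the URL is made up, and the actual inference step is hand-waved): harvest every clickable label and hand the list to a model.

    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://some-saas.example/admin")  # hypothetical target
        # Collect the text of everything clickable; labels like "Ban user"
        # or "Play" each hint at an endpoint, a side effect, maybe a schema.
        labels = page.eval_on_selector_all(
            "button, a, [role=button]",
            "els => els.map(e => e.textContent.trim()).filter(Boolean)",
        )
        print(labels)  # feed these to a model to guess the business logic
        browser.close()

The hard part is still the crawling-and-state-exploration loop around this, but the raw signal really is just sitting there in the DOM.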
This has been the dream ever since the days of the Tom Swift novels.
A machine that turns a loose description of the sort of application I want into a functioning architecture with proper user interface and storage? Yes. Let's build that.
If coding AI actually became this good, what would be the point of "pirating" it? Why not request that an app be made tailored specifically to your interests and no one else's?
"Technofeudalism" sounds more and more prescient, GenAI like what we are seeing adds more fuel to the argument that we are working, for free, for the tech behemoths vacuuming data.
It's strange to think that a book released just last year already needs an update: Yanis Varoufakis only considered us to be working for free in these technofeuds by providing behavioural data (what we click, what we buy, etc.), but the GenAI bullshit has now upped it a notch to include all creative work as free work for tech companies.
It's sad to think that only cases like Figma, where they step on the toes of other giants with deep pockets, might actually bring some change to regulations on companies profiting from the work of others without compensation.
I believe a whole lot of more useful things could be done in AI for the amount of resources being spent on training GenAI. It's a neat tool being completely misdirected into creating neat party tricks...
I've been using LLMs a lot to guide me in studying topics I know very little about, where Google searches lead me into spam-filled garbage. I love using them for this task and hope to see many improvements in this direction. There's so much more useful stuff to explore instead of spitting out transformed copies of the ingested training data.
Shit like this [0] boils my blood, the absolute hubris.
Some people seem to think the world began within their own memories. Displaying weather data has been largely consistent across the past six or seven decades, and apps have mirrored that.
If the author had made an attempt to explore the issue beyond a single point of reference, or had any other experience to draw from, there's no reason they wouldn't have mentioned it for comparison unless they knew it would have weakened their argument.
Gruber literally downplayed the LassPass scam app by saying it wouldn't rip you off as much as other scam apps might. I wish I was exaggerating:
> Instead, the scam LassPass app tries to steer you to creating a “pro” account subscription for $2/month, $10/year, or a $50 lifetime purchase. Those are actually low prices for a scam app — a lot of scammy apps try to charge like $10/week.
(emphasis mine)
I have seen yoga practitioners less flexible than the contortions Gruber can get himself into to downplay Apple fuck-ups.
Lucky victims, I guess?
They could have been scammed for more?
He also outright claims, with zero evidence whatsoever, that to his eyes, "it doesn't look like this was made to steal LastPass credentials".
Well thanks for your insight, John.
Very much a "yeah maybe it sucks and shouldn't happen, but this is no big deal, really, so why are you getting all up in Apple's face about it?" vibe.
Now that the Adobe merger is off, we're hearing from investors and partners that an exit is desired. Generative AI is the way the company does this, by transforming the company into a lean, mean, shareholder-value-driving machine.
What a nothing-burger. Figma says they didn't do any custom training or fine-tuning, and they think the look-alike problem is due to a design system they commissioned. They don't explicitly say the design system was created by a human, but odds are... Also, the thing that was "ripped off" is the weather app. Oh, you mean the app that looks nearly identical on every platform and website ever? Wow, I'm shocked.
Well, it seems to look at UI and not at functionality. While Apple's app is beautiful, it has only gone backwards in functionality and quality, both on iOS and Apple Watch.
For example, below the Daily Forecast you have these blocks which can be sized 1x1, 2x1, or 2x2. The problem is that sometimes some of them aren't there, depending on the location you're looking at. While a whole line disappearing is fine, a 1x1 disappearing causes the one to the right of it to jump to the left, which makes it super hard to find.
On Apple Watch they switch to this circular UI, where the current hour is highlighted, and then going around it shows the temperature. Except it's now limited to 12 hours. I wake up at 6AM (Phoenix), and I would like to see the forecast for tonight at 8PM to see if we're going to be able to cook steaks on the BBQ. No can do.
Not to mention that I need to look at the screen and check where the current hour is (I'm no longer adept at reading an analog clock; I don't have them in my life anymore).
Lastly, Apple Weather offline is horrible. On iOS it shows nothing. Is it that bad to show outdated weather?
Or you look at your watch and the shortcut shows X; you tap it to open the weather app, and the weather app shows Y; count to 10 and it shows Z...
I miss Dark Sky. That one could tell me that my neighbor was getting .1" of rain and I was getting .2". It was precise. It worked. I don't know why Apple bought them.
Penny Arcade[1] had a rant about this very subject this morning:
> The big players in this space all happen to believe things about other people's intellectual property that are orthogonal to human flourishing. It appears to be endemic in their spaces. Other times, we don't have to work so hard. For example: Perplexity literally duplicates other people's work on its own site. Then, it will generate a podcast based on the uncredited work. They want the same thing as Google's Gemini, in that you'll come to it for a search experience that's owned end to end - powered by your own uncredited work.
Gen AI models have also been used to appropriate an artist's distinctive art style[2] in a way that pushes up against the edge of copyright. You've all heard about OpenAI and Scarlett Johansson[3]. This kind of stuff makes the industry look shady.
In theory existing copyright law should cover these new AI cases, but if you use something like Figma AI and it "rips off" (as John puts it) an existing app, you might not even realize that you're copying someone else's design because there's no provenance. That makes it harder to follow the law.
>> The big players in this space all happen to believe things about other people's intellectual property that are orthogonal to human flourishing. It appears to be endemic in their spaces.
They believe things that are contrary to human flourishing on a more fundamental level than their attitudes toward others' intellectual property. They believe in technopoly [1], where all things are subservient to the technology and must change to serve it, no matter the consequences for people. You see this in their attitudes toward artists and their IP, and even in the kind of future employment they envision: one where machines do the enjoyable creative work and humans do mind-numbing, inhuman monitoring and review tasks [2].
This is one of those instances where all the people complaining are entirely correct but my forecast is they're still going to lose the war because they are up against economics. They have the same problem that the anti-torrent crowd has - the technology exists and is cheap to use. There is no longer any economic value in having a distinct style, and there might not even be any way to protect it. Any teenager with no training or talent can copy a style with the tools that now exist.
Hopefully most artists don't make money from their specific style anyway. I doubt xkcd.com is panicking because other people might start drawing stick figure comics.
> In theory existing copyright law should cover these new AI cases, but if you use something like Figma AI and it "rips off" (as John puts it) an existing app, you might not even realize that you're copying someone else's design because there's no provenance.
This specific case doesn't seem like a problem. All the OSS software I'm using seems to have copied design elements wholesale from existing software projects and corporations shamelessly copy off each other.
Correct. As much as Mr. Gruber may be irritated / frustrated that Figma AI's engine just apes Apple design patterns when asked to build a novel interface, how much legal protection do we really want to give button, switch, and gauge patterns?
Think about how wildly different Photoshop and Gimp's UIs are. Do we want a world where every application has to be that different from its neighbors because the government has granted strong legal protections for interface design IP? Remember what Nintendo owning an exclusive patent on the crosspad did to every other game console's crosspad for several decades? To give an analogy: I think people generally consider the "APIs are copyrightable" decision to be a bit bullshit... Who wants stronger protections for human interface designs than we already have?
Yeah, I agree that generic UI elements should not be protected by IP law. What's baffling to me is that the design Figma AI produced is so clearly derivative in layout and graphical style choices. There absolutely is more than one good way to design a weather app. The thought that many companies might soon use Figma AI as the starting point for their designs is kind of depressing to me, seeing as there is already so little originality in current UI/visual design anyway. Things might get a lot more generic and uninspired quickly.
Not defending typing a (living) artist's name into a gen-AI tool, but the weak spot in your argument is your example of "appropriating an artist's distinct style": her work looks very similar to, and clearly inspired by, others'.
I think Figma is one of the few companies where generative AI makes sense, and that's also why they run into problems - because it's actually useful. All the other companies trying to force AI into their products will not get this heat since no-one is using their AI features.
If we didn't learn our lesson when Google's AI search results were being laughed at, I am not convinced we ever will.
I mean, I guess this one is far more "get in legal trouble with this huge company" territory. But still. There have been enough AI flops and poorly conceived, dangerous things over the last year... it feels like no one with any decision-making power cares anymore.