Hacker News | wizzledonker's comments

Only if they work in a fundamentally different manner. We can't solve that problem the way we are building LLMs now.


Definitely a good one - probably one of the best CLAUDE.md files you can put in any repository if you care about your project at all.


After a cursory glance at commit messages, looks like most of it.

Like all AI co-authored code, it's only a matter of time before this becomes unmaintainable and abandoned.


I'm deeply fascinated that people can make this work at all. Just yesterday I had a fight with an AI over some Python code; the stupid thing wouldn't admit it didn't know how to use something and just kept spitting out the same lines of broken code. Then here is someone writing an entire functional Teams client with Claude.


> Then here is someone writing an entire functional Teams client with Claude.

According to the README it’s just a wrapper of the web version, with some additional stuff on top.


Usually in those scenarios you stop, do the research yourself, and correct it. You don't keep asking it. Also, starting new conversations often helps.


If I'm doing the research and need to correct the AI, it's faster to just write the code myself. My experience is also that even if I can point to where the errors are and explain the actual API, I frequently still get the same broken result anyway.


The reason most creative media is good is that you see the vision of a creative team or individual.

If the vision is diluted due to lack of control afforded by AI tools, then the tools won’t be used.

Many times in Hollywood have we seen directors spend unjustifiable amounts of money in the pursuit of creative control.

Hand camera-tracking a dinosaur in Jurassic Park, developing a novel diffraction algorithm for The Abyss, hand-drawing 3-dimensional computer animations for 2001, building an entire practical scale model for a single fight scene in LOTR.

AI allows you to get anything, but the best movies are a direct reflection of a particular vision. AI can't provide this, and I see no way to solve it.

A natural response is: directors already outsource some creative control to VFX artists, so why not to a machine instead?

Because an artist can control everything. Even if the artist is prompting a model, at the end of the day an artist can drill right down to the tooling itself (Photoshop, for example) and achieve the vision exactly.

I don't see AI achieving this granularity while maintaining its utility. It's a sliding scale, trading utility as a time-saving device against control.

If you lean too far toward the control side, well, you might as well fire up Photoshop. If you lean too far toward the utility side, you sacrifice creative control.

When looked at through this lens, the utility of AI generation is actually limited, as it solves a nonexistent problem. One can think of it as an additional piece of tooling, useful only as a generative tool where there is less need for control, such as for background characters.

The team at Red Barrels, for example, trains a local model on their own artwork to automatically generate variant textures for map generation. Things like that are useful. There's no need to be doom and gloom about this stuff.


> lack of control afforded by AI

You should look at ComfyUI.

Control is here, it's just not widely distributed or easy to use.

If you're patient, you can fully control the set, blocking, angles. You can position your characters, relight them, precisely control props, etc. You have unlimited control over everything. It's just a mess right now.


If you watch a little further, until about the 20-minute mark, what follows is an explanation of what the primaries represent (described by you as “colorspace coordinates”), along with a reasonable simplification of what a transfer function is, describing it as part of the colorspace. I believe that's reasonable? He merely explains Rec. 2100 as if using the PQ transfer function is a given. It definitely all seems appropriate and well presented for the target audience.
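
For anyone unfamiliar, PQ here is the SMPTE ST 2084 "perceptual quantizer" curve that Rec. 2100 specifies as one of its transfer functions. A rough sketch of the encoding side in Python, just to make "transfer function" concrete (the constants are the published ST 2084 values; the helper name is only illustrative):

    # Maps linear display luminance (relative to a 10,000 cd/m^2 peak)
    # to a non-linear PQ signal in [0, 1]. Constants are the published
    # SMPTE ST 2084 values; pq_encode is just an illustrative name.
    M1 = 2610 / 16384        # ~0.1593
    M2 = 2523 / 4096 * 128   # ~78.84
    C1 = 3424 / 4096         # ~0.8359
    C2 = 2413 / 4096 * 32    # ~18.85
    C3 = 2392 / 4096 * 32    # ~18.69

    def pq_encode(luminance_nits: float) -> float:
        """Linear luminance in cd/m^2 -> PQ-encoded signal in [0, 1]."""
        y = max(0.0, min(luminance_nits / 10000.0, 1.0))
        y_m1 = y ** M1
        return ((C1 + C2 * y_m1) / (1 + C3 * y_m1)) ** M2

    print(pq_encode(100.0))  # 100-nit SDR white lands around 0.51 on the PQ scale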


I wasn’t able to resume watching, but if he never describes HLG I would call that a miss for his stated goal.

I don’t want to criticize too much, though. Like I said I’ve only watched 15 minutes, and IIRC this is also the guy who convinced a lot of cinematographers that digital was finally good enough.


The author has a FAQ related to the video, and in it he expands on "Why don't you mention HLG in the demo": https://www.yedlin.net/HDR_Demo_FAQ.html


I've seen skeuomorphic designs done with vector art; surely this can't be the only (or real) reason.


Like most things, it was probably a combination of factors:

marketing (big new design), design trend catch-up (Metro, Android), and all those other technical reasons (memory, textures, vector graphics, easier dark mode), etc.

Just my guess, but making a dark mode (more easily) possible must have been a large factor too.


Just because I’m curious, why the issue with reading GPL code? My understanding is that you would have to essentially directly copy and paste the code for the GPL license to apply to it.


Then I'd say just pick the one you are most familiar with.


None of the above, not really. That's why I asked OP what got him settled on Tauri.


This is a fantastic tool and I recommend it! I use it every day to recursively solve bottlenecks in our code base.


It's genuinely not that bad

