The next generation will include another processor to offload inference from the RISC-V processors, which are themselves used to offload inference from the host machine.
The blog claims that @pool "solves memory lifetimes with scopes" yet it looks like a classic region/arena allocator that frees everything at the end of a lexical block… a technique that’s been around for decades.
Where do affine or linear guarantees come in?
From the examples I don’t see any restrictions on aliasing or on moving data between pools, so how are use‑after‑free bugs prevented once a pointer escapes its region?
And the line about having "solved memory management" for total functions: bravo indeed…
Could you show a non‑trivial case, say a multithreaded game loop where entities span multiple frames, or a high‑throughput server that streams chunked responses, where @pool prevents leaks that a plain arena allocator would not?
It is unfortunate that the title mentions borrow checking which doesn't actually have anything to do with the idea presented. "Forget RAII" would have made more sense.
This doesn't actually do any compile-time checks (it could, but it doesn't). It will eventually do runtime checks on supported platforms using page-protection features, but that's not really the goal.
The goal is actually extremely simple: make working with temporary data very easy, which is where most memory management messes happen in C.
The main difference between this and a typical arena allocator is the clearly scoped nature of it in the language. Temporary data that is local to the function is allocated in a new @pool scope. Temporary data that is returned to the caller is allocated in the parent @pool scope.
Personally, I don't much like the precise way this works, because whether returned data is temporary should be the caller's decision, not the callee's. I'm guessing you can point the temp allocator at the global allocator to work around this, but the callee will still be grabbing the parent "temp" scope, which just feels wrong to me.
For memory only, which is one of the simplest kinds of resource. What about file descriptors? Graphics objects? Locks? RAII can keep track of all of those. (Refcounting does too, but tracing GC usually doesn't.)
You deal with those the same way you deal with them in any language without RAII: some sort of try-with-resources block, or defer.
Not making a value judgment if that is better or worse than RAII, just pointing out that resources of different kinds don't have to be handled by the same mechanism. This blog post is about memory management in C3. Other resource management is already handled by defer.
Why would you treat the two differently, though? What benefit does it bring? (defer is such an ugly, manual solution in general; it becomes very cumbersome once you want to hand control of the resource to anyone else.)
Not really obvious, given that there are garbage collected languages with RAII, like D, to quote one example.
And even constructs like try/using/with can be made RAII-like via static analysis, which defer-like approaches usually can't, because defer can wrap any expression type. Those other constructs rely on specific interfaces/magic methods being present, so they can be tracked via the type system.
So the compiler can raise an error when such a tagged type escapes its lexical scope without the respective try/using/with having been applied at the variable declaration.
Just create dummy wrappers to make a type level distinction.
A Height and a Width can be two separate types even if they're both just floats, basically.
Or another (dummy) example: transfer(accountA, accountB). Make two types that wrap the same underlying type, one being a TargetAccount and the other a SourceAccount.
Do you really want width and height, or do you actually want dimensions or a size? Same with transfer: maybe you wanted a transaction that gets executed. Worst case, use a builder with explicit function names.
Sound type systems are equivalent to proof systems.
You can use them to design data structures whose mere eventual existence guarantees the coherence and validity of your program’s state.
The basic example is “Fin n”, which carries at compile time the proof that you either made the necessary bounds checks at runtime or, by construction, never exceeded some bound.
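In Lean 4 notation, the idea looks roughly like this (a sketch, not tied to any particular codebase):

```lean
-- Fin n bundles a Nat with a compile-time proof that it is below n.
def i : Fin 3 := ⟨2, by decide⟩

-- Indexing an Array with a Fin of exactly its size turns
-- out-of-bounds access into a type error instead of a runtime check.
def safeGet (xs : Array Nat) (k : Fin xs.size) : Nat := xs.get k
```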
Some languages allow you to build entire type level state machines! (eg. to represent these transactions and transitions)
My point is that a Width type is usually not the sound type you are looking for. What you probably wanted was a Size or Dimensions type that holds width and height. The problem was maybe not two arguments being confused but, in reality, a single thing with two elements…
Well yes. The thing I particularly like about the Dirichlet function is it’s so simple to state and yet just completely breaks my intuition about so many things.
I’ll try to not be dismissive of the labour, though it’s kind of funny (or actually natural) that the heavy lifting libraries that only a few can actually write are open and free, while the shallow wrappers that everyone can write are paid and closed.
Decades ago we were calling out this kind of software, and now it’s the norm.
Another example along those lines: I wanted to extract a frame from a video on iOS. It’s impossible with the built-in tools (screenshots aside), and I found that someone built a paid app just for that.
While I'm with you in principle, over the years I've learned that we should not talk down good UI/UX, if that's what the wrapper adds. It's a crucial component of the value of a piece of software for the end user.
He actually himself writes that he doesn't want to spend too much time on his apps:
> now i have lession that i shouldnt build apps that consumes so much time.
Sounds like somebody really devoted to the perfect UI experience.
Look, I don't want to talk down this kid. Everybody starts somewhere and I like the enthusiasm. But him expecting to make $30 off everybody for plumbing together a bunch of FOSS libs is rubbing me the wrong way.
Yes just like instead of Dropbox for a Linux user, you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. From Windows or Mac, this FTP account could be accessed through built-in software.
I haven't checked the project deeply yet, so I won't comment on it.
That's not true: Dropbox was on the technical edge when it arrived. Back then this was barely possible, and the recurring cost of servers and operations is clearly not the same.
And paid FOSS GUIs are a sad state of reality.
Distribution, marketing and running it is the hard part, not building software. I consider libraries to be like roads, it's a communal good if you will. Feel free to build and run these apps yourself in the open instead of complaining. You will see how time consuming that is.
Building a road is usually very similar to building another road.
Building ffmpeg is very different from building an SSL lib. They need different tradeoffs, design strategies, domain knowledge, etc. And doing them properly is really, really hard. A lot of software out there sucks, in part because there is more focus on marketing than on correctness and reliability.
If roads had the same quality as software then traffic deaths would be an order of magnitude higher.
Try working on a library used in tech that your life depends on and you might re-consider your road metaphor.
I've been a core maintainer of several (machine learning) libraries. It's a communal effort of people who care enough about these things. They come together because they need it, it's basic infrastructure so to speak.
> Building a road is usually very similar to building another road.
It's a similar process, but so is building software. That's beside my point though, which you seem to have missed entirely.
Case in point: I regularly use a free iOS app that is clearly the result of someone's deep passion in taking what could have been a simple wrapper and turning it into an incredibly simple but powerful interface to complete a useful task efficiently and at any scale... and that task is exactly what OP was trying to do...
But as someone who's somewhat familiar with app store optimization, I guarantee the creator did none of that.
Their app name would need to be something obnoxious like "Frame Grabber Extractor: Pic from Video" to capture all the different searches people do for this task.
And the people focused on distribution are even paying for ads with the money their IAP-infested "1-week trial; $4 a week" alternatives make.
What you call the hard part is only hard because the foundational blocks are free and open source (in this example). Not that marketing would otherwise become easy; it's still difficult, but not write-ffmpeg-from-scratch difficult.
What's more, many people expect that software they try will be open source and free. That makes it really hard to create a new paid desktop app as a solo founder. Congratulations!
It's not just a shallow wrapper. Speaking as the creator of another shallow wrapper (online and free, btw): if you want to crop a video, how do you plan to do it with the ffmpeg CLI? It would be really tedious. You can easily do it visually with this wrapper and others like it, so it's not as if they provide no value.
Another example: do you remember ffmpeg command syntax? I don't! Here he takes care of generating it for you, so you don't waste time asking an LLM or searching Google and iterating when it doesn't work.
There are nearly 2 million lines of code in the FFMPEG codebase: unless you're building the next Adobe Premiere, no matter how much value you provide, you are building an extremely shallow wrapper around FFMPEG when you build an interface to crop videos.
No one is saying a shallow wrapper can't provide value, but most of the value for the end user is derived from FFMPEG, not the layer you added to it.
If we took FFMPEG and your wrapper and separated them, FFMPEG could still do the one task that your users need: it would be harder, and it would be less convenient, but it can still crop videos. Your tool would no longer do anything but draw rectangles where we'd like a crop to appear. It'd meet no user needs at all.
Also to clarify my stance, there's nothing wrong with shallow wrappers, and I've made shallow wrappers: I know finding the user need, and thinking of the right UX and figuring out distribution is all a lot of real legwork.
But I also find it's important to realize when most of the value you're providing is enabled by something you built on. There shouldn't be shame in admitting that you wrapped something that was powerful and potentially unwieldy for your segment of users and made it useful.
Of course they provide value! The car dealer selling you that new Toyota also provides value. Without him you couldn't buy that car, certainly not so easily.
Doesn't mean he manufactured it, or invented it, or conceived of the very idea of an automobile with an ICE (or EV). It's all a big collaborative effort, and imagining that all of the $40k for that car goes straight into the dealer's pocket would be absurd. Legally absurd, and ethically absurd as well.
Similar with a piece of software that builds on other work. Of course it provides value (hopefully). But on the whole, the extra value added is not the majority of the whole package.
That's not what I'm thinking. Where exactly should those $25 go? If there is such an easy place, fine, go ahead. I personally don't see one, since there are many giants on whose shoulders he is standing.
There are other options, many of them have been mentioned in this discussion.
I suppose my main point is that this apparently quite inexperienced guy, as appreciated as his enthusiasm is, should at some point understand that all the tech he uses for free didn't fall from the sky, and that just skimming the cream that floats on top for personal profit is not a sustainable model. Even though that seems to be the trend these days.
The fact of the matter is that there are many motivations for creating software. If someone profits off of my work, that I released into the world with licensing terms that allows them to do so, there’s no obligation that I be paid for it. I could have, myself, recognized the potential and done the work to make a marketable tool, but my motivation was different.
You can fault the FOSS community for promoting default libre licensing that created the “exploitable” nature of this, but the fact of the matter is that people creating software are able to make a choice. They can make a different choice if they wish.
You seem to be misunderstanding, if not misrepresenting, my point. I applaud the FOSS community for these licenses. I use them myself. And I don't expect any payment. And it's fine if anybody who invests a lot of expertise and time to build a complicated product commercializes it. Even better if they contribute back in one way or another.
This is about a green kid coming along and quickly churning out lots of half-baked solutions, asking for frankly quite a lot of money for them, apparently without acknowledging the giants on whose shoulders they are standing. Legally they have the right to do so, sure. But we as a community can give push back in that that's not how things will work in the long run. I encourage you to check out his personal home page, you'll see what I mean. (And again, I generally applaud the enthusiasm. But those things would better be suited as portfolio-building personal OSS projects on github rather than trying to squeeze the dollars.)
There is a difference between legitimate business interests after large investments, and freeloaders.
> But we as a community can give push back in that that's not how things will work in the long run. I encourage you to check out his personal home page, you'll see what I mean.
> (And again, I generally applaud the enthusiasm. But those things would better be suited as portfolio-building personal OSS projects on github rather than trying to squeeze the dollars.)
Why? They are under no obligation to do so, and then they are working for free. (A common trope related to FOSS, often argued on this very website)
> There is a difference between legitimate business interests after large investments, and freeloaders.
What large investment are you talking about? The thousands of hours of free labor that went into ffmpeg? Yeah. See also all of the open source software that went into the operating system and utilities they built everything on. That doesn’t stop people from selling proprietary software, so why is this any different?
This is free market capitalism. If they can find people to pay $29 for a copy of this wrapper, more power to them. That also happens to be a much more powerful resume bullet point.
And you might be underestimating the work that went into those libraries.
We are talking decades of work, dealing with platform issues, performance, loads of security considerations and then there is the whole licensing+patent topic.
Sure, UI work is hard, but of the whole package it's only the visible part of the iceberg, and now I'm expected to give $30 to the person who only contributed that last piece? Of course it's work too, but if at least half that money isn't being donated to the underlying FOSS projects, then I'm out.
Another suggestion: open source your app. Those who don't know how to compile/build it, or are too lazy (which will be most), can pay for the convenience, and you'll have the income you expect, but at least you'll be giving back to the community on whose work you are basing yours.
> self-taught full-stack developer who wrote the first line of code in the 2020 Corona lockdown.
You, my friend, are standing on the shoulders of giants. Time to ack them.
For my own open source libraries, I have made a conscious decision to say "I don't care". And the language in which I'm saying it is legalese. It's all in the license.
If I felt that people should give me a cut of any commercial software they build on top of my library then I would try to express that in my choice of license.
I have the same view on my own FOSS work (also expressed in its license) and generally also don't think that FOSS authors should feel that their users are obligated to give them anything (beyond the conditions of the license).
Though we as a community can still have views about kids hacking things together to profit off decades of hard work by a large community. It's different if they contribute back, obviously. There are lots of compromise models out there.
> and now I'm expected to give $30 to the person who only contributed that last piece?
No, you are not expected to pay $30 to anybody.
If the distribution of the money makes you unhappy, just pay $30 to this guy and $30 to the other project (or $150, using the x5 ratio suggested in another comment).
Or you can use the free CLI, or a free alternative.
Even better, you can write a free clone of this app and distribute it for free. Just remember to choose the licence carefully. You'd probably like the AGPL, which makes commercial use very difficult.
Software isn't a one time use commodity. Other people can make UIs - guess why people still buy them! And if you're mad about commercialization, then they should have chosen a different license. You are paying for the UI, and only the UI.
Making a good user interface is definitely not easy. Yet it's orders of magnitude easier than writing ffmpeg.
That said, there is nothing wrong with a paid wrapper around a large and complex open source library. Distributing their work more widely is not a disservice.
> Yet it's orders of magnitude easier than writing ffmpeg.
If there's one thing that I've learned, it's that "It Depends™" is my mantra.
ffmpeg is the sharp end of years of work by a whole lot of folks. It isn't just a single developer's "pet project" (although its originator[0] deserves enormous heaps of credit). It has been maintained by a whole community of really good (and dedicated) developers (and people with all kinds of other skills)[1].
It's not just a library. It's a platform. People have made entire (lucrative) careers, from just "tuning" ffmpeg.
Because of the infrastructure provided by ffmpeg, people can build some really useful implementations, and create focused applications.
I have found that making an approachable interface for a complex substrate, can be incredibly valuable, and definitely worth paying for. It can often mean the difference between soaring success, and miserable failure.
"Easier" is in the eye of the beholder. Ever watch a really, really experienced studio musician at work? They sit down, and in five minutes your scratches on a piece of paper take on a magical aspect. They make it look absurdly easy, but that comes from intense practice. There was a documentary (I don't remember the name) about a bunch of major musicians who came out of the California scene in the late 1960s/early 1970s. In it, there was a discussion about someone (I think it may have been one of the Grateful Dead, or the Eagles) who lived above Jackson Browne, a very successful singer and songwriter (BTW: the "songwriter" part is the bit that makes the money). They talk about hearing him practice as he was developing songs. He'd play just a few bars, over and over and over again, until he got it right.
Songwriters and studio musicians may not be able to command roaring crowds at Glastonbury, but they can give you the album that you'll need, to get that crowd to show up, in the first place. So success requires contributions from many different places, and each has its own measure.
Water is free, but you pay some company because they bottled it. While free, it would have cost most people a ton to go find a source and carry it back. These wrappers are a good thing.
If you're concerned about open and free software I'm not sure using iOS makes a lot of sense. Of all mobile and desktop platforms, iOS has the highest barrier to entry for the free utilities you're hoping to find. Were you surprised you couldn't find your free utility on iOS?
The authors and maintainers of foundational code/utilities like ffmpeg/curl/etc. should definitely be the ones who have all the riches they could ever want. Thousands have made fortunes off of their work.
That said, what's the free and open source version of this tool? There are some great open source video editors like Shotcut, Openshot, KDENLive, Blender, etc., but I think this tool is more like CyberChef for video?
>what's the free and open source version of this tool?
PowerToys is Free and Open Source, and has at minimum an image resizer utility. It's a good starting point for adding on richer functionality like a preview GUI, and I'm sure that the basic video and audio manipulation would be appreciated as additions. Also since it's a Microsoft sponsored project, I imagine that the signing process is drastically different than what OP has experienced.
I know that's really not satisfying to say that "someday we could have this in the FOSS space", but everything starts somewhere.
Though editors like Shotcut and KDENLive are considered non-linear, since you can layer on different effects, while OP's utility is definitely not that.
Perhaps the author would consider open sourcing if they received financial compensation for their work to date? Crowdfunding or retroactive grants can liberate code.
Context: a big chunk of my 2024 income was from grant money to build open source software that I may have tried to monetize otherwise. It’s possible.
I think you have a warped perspective. Not everyone has the time or skills to use CLI tools. People will pay to save time. The market is multifaceted and complex, and there's a market for everything. In this case you're just not the customer.
Making a friendly interface that doesn’t require the user to have to install a new tool is a value-add. Maybe the average power-user doesn’t need it, but it doesn’t seem entirely sinister.
I’m actually really fond of the model: here are the tools, you can do anything with them, but here are packaged bundles that do something specific, and the ecosystem is funded by selling those bundles, which are often just a UX for the tools, preconfigured.
Gives everyone the option of picking free or paid options, depending on people’s needs.
Upvoting you because my comment saying the same thing is getting downvotes and I really think the message is important.
However, I don't think it's fair to call this a "shallow wrapper". It's clear that a lot of work went into the design of this GUI, and making user-friendly interfaces is also important work (one that is far too often overlooked in open source communities).
Yet the fact that FFmpeg, the tool that does all the heavy background work, isn't mentioned anywhere on the website, not even in the FAQ or the footer, is at least a non-negligible ethical problem.
UPDATE: The same goes for ImageMagick that I just saw this app installs and uses too.
I have not downloaded the app, so I don't know what it contains.
The licenses for both ffmpeg and ImageMagick do not require anyone to mention them in the website.
However, if they are being re-distributed, there are clear obligations for providing source code and attributions. Omitting to do so is a violation of the legal obligations.
Consider that someone talented enough to write a library probably has a much higher salary potential than a front-end hacker. Shouldn't the latter be allowed to eat, as undignified as you may find it?
* build a codegen for Idris2 and a Rust RT (a parallel stack "typed" VM)
* a full application in Elm, while asking it to borrow from DT to have it "correct-by-construction", use zippers for some data structures… etc. And it worked!
* Whilst at it, I built Elm but in Idris2, while improving on the rendering part (this is WIP)
* data collators and iterators to handle some ML trainings with pausing features so that I can just Ctrl-C and continue if needed/possible/makes sense.
* etc.
In the end I had to completely rewrite some parts, but I would say 90% of the boring work was done correctly, and I only had to focus on the interesting bits.
However, it didn’t deliver the kind of thorough prep work a painter would do before painting a house when asked to. It simply did exactly what I asked; it did the paint and no more.
I keep seeing folks who say they’ve built a “full application” using or deeply collaborating with an LLM but, aside from code that is only purported to be LLM-generated, I’ve yet to see any evidence that I can consider non-trivial. Show me the chat sessions that produced these “full applications”.
An application powered by a single source file comprising only 400 lines of code is, by my definition, trivial. My needs are more complex than that, and I’d expect that the vast majority of folks trying to build or maintain production-quality, revenue-generating code have the same or similar needs.
Again, please don’t be offended; what you’re doing is great and I dearly appreciate you sharing your experience! Just be aware that the stuff you’re demonstrating isn’t (hasn’t been, for me at least) capable of producing the kind of complexity I need while using the languages and tooling required in my environment. In other words, while everything of yours I’ve seen has intellectual and perhaps even monetary value, that doesn’t mean your examples or strategies work for all use-cases.
LLMs are restricted by their output limits. Most LLMs can output 4,096 tokens; the new Claude 3.5 Sonnet release this week brings that up to 8,192.
As such, for one-shot apps like these there's a strict limit to how much you can get done purely through prompting in a single session.
I work on plenty of larger projects with lots of LLM assistance, but for those I'm using LLMs to write individual functions or classes or templates - not for larger chunks of functionality.
> As such, for one-shot apps like these there's a strict limit to how much you can get done purely though prompting in a single session.
That’s an important detail that is (intentionally?) overlooked by the marketing of these tools. With a human collaborator, I don’t have to worry much about keeping collab sessions short—and humans are dramatically better at remembering the context of our previous sessions.
> I work on plenty of larger projects with lots of LLM assistance, but for those I'm using LLMs to write individual functions or classes or templates - not for larger chunks of functionality.
Good to know. For the larger projects where you use the models as an assistant only, do the models “know” about the rest of the project’s code/design through some sort of RAG or do you just ask a model to write a given function and then manually (or through continued prompting in a given session) modify the resulting code to fit correctly within the project?
There are systems that can do RAG for you - GitHub Copilot and Cursor for example - but I mostly just paste exactly what I want the model to know into a prompt.
In my experience most of effective LLM usage comes down to carefully designing the contents of the context.
My experience with Copilot (which is admittedly a few months outdated; I never tried Cursor but will soon) shows that it’s really good at inline completion and producing boilerplate for me but pretty bad at understanding or even recognizing the existence of scaffolding and business logic already present in my projects.
> but I mostly just paste exactly what I want the model to know into a prompt.
Does this include the work you do on your larger projects? Do those larger projects fit entirely within the context window? If not, without RAG, how do you effectively prompt a model to recognize or know about the relevant context of larger projects?
For example, say I have a class file that includes dozens of imports from other parts of the project. If I ask the model to add a method that should rely upon other components of the project, how does the model know what’s important without RAG? Do I just enumerate every possible relevant import and include a summary of their purpose? That seems excessively burdensome given the purported capabilities of these models. It also seems unlikely to result in reasonable code unless I explicitly document each callable method’s signature and purpose.
For what it’s worth, I know I’ve been pretty skeptical during our conversations but I really appreciate your feedback and the work you’ve been doing; it’s helping me recognize both the limitations of my own knowledge and the limitations of what I should reasonably expect from the models. Thank you, again.
Yes, I paste stuff in from larger projects all the time.
I'm very selective about what I give them. For example, if I'm working on a Django project I'll paste in just the Django ORM models for the part of the codebase I'm working on - that's enough for it to spit out forms and views and templates, it doesn't need to know about other parts of the codebase.
Another trick I sometimes use is Claude Projects, which allow you to paste up to 200,000 tokens into persistent context for a model. That's enough to fit a LOT of code, so I've occasionally dumped my entire codebase (using my https://github.com/simonw/files-to-prompt/ tool) in there, or selected pieces that are important like the model and URL definitions.