The FFmpeg/Libav situation (pkh.me)
150 points by ux on June 30, 2012 | 51 comments



I know it's a limited view of reality, but I was debugging ffserver, found the bug, and mentioned it on FFmpeg's IRC channel. Michael added my own one-liner change and listed me as the author of the git commit. This is a GREAT feeling for a newcomer to a project, and it makes me want to keep helping with FFmpeg.

I agree with the author of this article in that FFmpeg will eventually win out, just for little things like this.


That's true. I'm always annoyed when I send a patch for a project and it's integrated, and that's it. No thanks, no mention of what I did, just as if the author had found the problem and fixed it himself. Sometimes it's a one-liner (which, as usual, may have taken hours to debug), sometimes it's a few thousand lines.

This behavior quickly kills the will to contribute.


That's what GitHub does so nicely with pull requests: you maintain authorship of your fixes (it's even cryptographically secured), and the entire process is visible and documented on the project's source home page.


That is not unique to GitHub pull requests, is it? I thought that was common to all of git, and the reason why git has separate "author" and "committer" fields.


Yes, as they have to pull from you for that. GitHub just makes the process easier, and it's more trouble not to attribute via GitHub.

In both cases you can provide a patch (i.e., copy/paste) instead. And in the first case you often do provide a patch, because it might be easier or because they're using something other than git or a git-like VCS.


Yes and no. I mean, one can copy/paste, change a little, and it works. That being said, GitHub makes it harder not to attribute (but some people are like that).

That being said too, not everyone uses github ;-)


I had a similar experience. I forked ffmpeg on GitHub and fixed a few bugs that I was sure no one would care about but us. I figured I'd polish it up over a couple of weeks and send a patch for possible inclusion upstream. But the next morning I had an email waiting from Michael. He had found the fork, found my "experimental" branch, and emailed me to compliment me on the code, suggest a couple of changes, and ask if it would be cool to cherry-pick some of the commits into the mainline. I've been involved in a lot of open-source projects and have _never_ seen something that gracious. He has obviously taken whatever lessons he learned in the schism to heart. Between that and the fact that his code base is (more often than not, but not always) more stable and functional, I have little doubt which way things will flow eventually, perhaps after time has allowed more of the bad feelings to dissipate.


I read the entire article carefully, thinking about various other schisms in the open source world (NetBSD/OpenBSD being the one that came to my mind first), and reflected on how important leadership is in many of those communities. For better or worse, the (in most cases) benevolent dictatorship of people like Theo (OpenBSD), Linus (Linux), and Guido (Python) plays an important role in herding their respective cats.

It might be interesting to see how effectively (if such a thing could even be measured) communities with benevolent dictators perform compared to those that operate by consensus/voting.


It's interesting that you mention Theo, since apparently the reason OpenBSD exists is because he couldn't get along with the other core NetBSD developers.


What it shows is that the characteristic of being a good "benevolent dictator" isn't the same as "getting along" with your peers. Theo was a poor collaborator in a larger project. Linus probably would be too.


Theo's collected e-mail from that time: http://www.theos.com/deraadt/coremail.html


Are there any significant projects other than Debian that have elections?


For some value of significant, Squeak. Turnout has been down for the last few elections; the last one got around a hundred votes, with around 450 people authorized to vote.


Some of the decline is probably linked to the Pharo fork http://www.pharo-project.org/home


The FreeBSD Project has elections, and is in short order going to announce a new Core Team.


Gentoo developers elect its Council, and the Foundation members (large overlap with the developers set) elect the Trustees.


Fedora.


The EU, for some value of 'elections'.


I tried to get involved in FFmpeg development several years ago and felt driven off by the existing community. It wasn't so much any one individual as the absence of any "adult" voice trying to keep things moving forward. There are a lot of smart programmers there, and a lot of loud people on the list, but no real correspondence between the two traits.

The quality of the code base also left something to be desired. It was fast and generally worked, but they certainly didn't subscribe to the camp that believes in treating compiler warnings as errors. I remember using Valgrind, discovering that some memory was being used uninitialized, and submitting a patch. The patch was rejected with a comment like "not needed". I still don't know whether that was correct, but I do know that, being a static initializer, my patch wasn't going to slow anything down.


This really has changed a lot; see http://fate.ffmpeg.org/

We now run all the code through Valgrind; a single memory leak will trigger a complaint. There is also static analysis, along with various other checks.

And BTW, you can't simply treat warnings as errors. Just try to fix the warnings we have and you might understand why it's not easily possible (some fixes might require API changes, there are compiler false positives, deprecation warnings that can just be silenced, etc.). Of course, you might be able to fix some easily, and those patches will likely be accepted (assuming they are correct).
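
For the "can just be silenced" category, here is roughly what that looks like (a sketch, not code from the tree; some_deprecated_call() is a made-up stand-in for a deprecated API):

    /* hypothetical deprecated API */
    void some_deprecated_call(void) __attribute__((deprecated));

    void wrapper(void) {
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wdeprecated-declarations"
        some_deprecated_call();   /* would warn without the pragmas */
    #pragma GCC diagnostic pop
    }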


Glad to hear that more testing is done. "Warnings as errors" is controversial, but it's a practice I like. Once your compile has hundreds of "safe" warnings flashing across the screen, you aren't going to notice the "real" warning caused by your latest change. Whereas if your code compiles cleanly and quietly, you'll definitely notice the new one. Defaulting to -Werror is just a way to enforce this convention.

You are absolutely right that not all warnings are accurate, and that not all code should be changed to satisfy every compiler. But I'm arguing that there is a long-term benefit to choosing a single compiler and making the modifications necessary to allow that compiler to run without warnings, even if the behaviour is provably correct without doing so.


I'm pretty sure a project as widely used as FFmpeg can't just choose a single compiler (and you'd actually have to choose a single compiler version, since, for instance, new versions of GCC can introduce new warnings).

On top of that, sometimes warnings come from parts of the code you don't control, like a scanner generated by flex.

I think it's better to have a strong policy of not introducing warnings and fixing them whenever a particular compiler is found to emit them, but having -Werror in the default Makefile can quickly inundate the project's mailing list/IRC channel with "doesn't compile" complaints from people with exotic compilers.


Mosh has an optional ./configure --enable-compile-warnings=error flag that turns on -Werror. It's off by default for users who compile from source, but the Debian, Ubuntu and Fedora packaging uses it to prepare the binary package. We think this is a good compromise.


We might be agreeing. I'm not arguing for -Werror on by default for a project that is designed to be compiled from source by end users using their own compilers. I am arguing that a clean compile is valuable for developers. Yes, this may come down to choosing specific versions.

I think SQLite has a good attitude: http://www.sqlite.org/faq.html#q17

"Some people say that we should eliminate all warnings because benign warnings mask real warnings that might arise in future changes. This is true enough. But in reply, the developers observe that all warnings have already been fixed in the compilers used for SQLite development (various versions of GCC). Compiler warnings only arise from compilers that the developers do not use on a daily basis (Ex: MSVC)."


Static initializers aren't always free, FYI. They can slow down app startup (this is a serious issue for Firefox).


I think the word "static" was misused in nkurz's post. Static variables are always initialized, at least to 0. Initializing a stack variable shouldn't harm performance.


Correct, I was trying to describe initializing a stack variable to a constant array, and I shouldn't have used the word "static". In retrospect, I think the cost of this is probably about the same as zeroing it, since behind the scenes it has to be copied from a constant. But the rejection wasn't due to the performance cost; it was based on a philosophy that one shouldn't change "provably" correct code just to make it more compliant.
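
To make the distinction concrete, a sketch (not the actual patch):

    static const int table[4] = {1, 2, 3, 4};  /* in .rodata; no per-call cost */

    int f(int i) {
        int local[4] = {1, 2, 3, 4};  /* re-initialized on every call, roughly
                                         the cost of a small memcpy */
        return table[i] + local[i];
    }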


I may be using incorrect terminology, or at least I think you are presuming a more object-oriented interpretation. I don't recall the exact circumstances, but it was a case where one would normally call calloc() rather than malloc() for a buffer. The proposal to switch to calloc() was rejected due to the expense of the implied bzero(). I think I then proposed making it an array with an initializer rather than a pointer, taking a couple-KB size penalty in return for quieting a warning and removing some code smell.

The code wasn't merely copying an uninitialized variable; it was branching on the random data that happened to be in the buffer returned by malloc(). This confused Valgrind, but was thought to be safe due to some later check. To me this was fragile code and unsafe practice. To the decision makers, it was a good way to save a couple of cycles and a few bytes. The culture (at least at the time) felt that efficiency trumped clarity and maintainability. This is a blessing and a curse.
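
A rough reconstruction of the pattern, with invented names; the first version is the kind of thing that makes Valgrind complain:

    #include <stdlib.h>

    void use(unsigned char *p);

    void fragile(size_t n) {
        unsigned char *buf = malloc(n);   /* contents indeterminate */
        if (buf && (buf[0] & 1))          /* Valgrind: "Conditional jump or move
                                             depends on uninitialised value(s)" */
            use(buf);
        free(buf);
    }

    void quiet(size_t n) {
        unsigned char *buf = calloc(1, n);  /* zeroed up front: clean under
                                               Valgrind, at the cost of the
                                               implied bzero() */
        if (buf && (buf[0] & 1))
            use(buf);
        free(buf);
    }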


I was curious as to which distros default to ffmpeg and which default to libav. Looking at my distro of choice (Arch Linux), it's ffmpeg: there's no libav in the official repos as far as I can tell, and programs like VLC, Blender, and MPlayer2 all link against ffmpeg. MPlayer, on the other hand, links directly against the core codec libraries, unless I'm mistaken.

The article states that Debian (and Ubuntu?) uses libav (and yes, the BS about ffmpeg being deprecated is obviously a douchebag move). So does anyone know what other distros are defaulting to: Gentoo, Fedora, openSUSE, etc.?


Gentoo offers both libav and ffmpeg in Portage, though they cannot currently be installed at the same time. However, I would not be surprised if they are eventually changed to be installable at the same time and use the eselect system to switch between them (similar to the "alternatives" system in Debian).


My only direct interaction with ffmpeg and libavcodec/libavformat has been from a user and developer standpoint, and most of it has been very poor. The code works well, but the API, both the C interface and the CLI, could be a lot better.


Couldn't agree more on the C API. The time I had to use it was a nightmare. There's almost no documentation, and the interface changes so often it's nearly impossible to find working sample code.


We did some work to improve this a little, see for example: http://git.videolan.org/?p=ffmpeg.git;a=tree;f=doc/examples;...

These should be deployed with any fresh FFmpeg install.

The filtering code is still unstable due to what's explained in the blog post...
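
For a sense of the API being discussed, a minimal demuxing sketch against the current libavformat interface (signatures have shifted between versions, which is much of the complaint; error handling kept minimal):

    #include <libavformat/avformat.h>

    int main(int argc, char **argv) {
        AVFormatContext *fmt = NULL;

        if (argc < 2)
            return 1;
        av_register_all();                  /* still required in this era's API */
        if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
            return 1;                       /* could not open/probe the input */
        if (avformat_find_stream_info(fmt, NULL) < 0) {
            avformat_close_input(&fmt);
            return 1;                       /* could not read stream parameters */
        }
        av_dump_format(fmt, 0, argv[1], 0); /* print streams/codecs found */
        avformat_close_input(&fmt);
        return 0;
    }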


It's a start, but missing the point nonetheless. The grand idea is to have all the codecs and formats unified into one API, but the reality is that said API has become a gigantic dumping ground: redundant options and variables, with one codec using one set and a different codec another. Not to mention having stuff fail because a particular codec doesn't like a specific configuration.


It's good to see things improving.

To be honest, the biggest problem I had was that searches would return too many out-of-date code snippets, and sample code on other sites didn't match the exact version I was using, so it wouldn't work for me.

Other than that, once I got it working, it was awesome.


I agree. I bet that 99% of "movie players" are direct clones of ffplay.


This all sounds very Hatfield/McCoy to me. Whether or not libav is a PITA/POS, I don't know, but of the two, I had only heard of and used ffmpeg until now. That said, I really don't give a crap how either of them operates, and neither should anyone else but the maintainers.

This reminds me so much of the volunteer work that happens at our school. So many personalities involved, and people get so upset because they've poured their heart and soul into it. You don't find that as often on the business/corporate side of things; even though real money is on the line there, it is just a job.

I think the same thing applies here. These people are getting so upset, but I doubt they realize that 95-98% of users don't give a crap. We are just glad that you are providing great libraries and utilities. So... take a chill pill and get over it. And thanks for your work.


"This reminds me so much of the volunteer work that happens at our school. So many personalities involved and people get so upset because they've poured their heart and soul into it."

It's the 'meaning' thing that makes your volunteers give up their own time and do free work. Open source projects probably have similar loyalty/emotional investments.


Has Hendrik Leppkes ever discussed why he chose FFmpeg for his popular LAV Filters project?


Well, this is fitting. I lost ~4 hours yesterday trying to compile ffmpeg properly to work with newly enabled codecs and kdenlive. Errors on errors on errors, with very little in the way of resources to help out. What a mess.


The Debian maintainers' stance isn't really an issue, since Christian Marillat is the de facto multimedia maintainer: http://www.deb-multimedia.org/


For me the link is an unreadable, low-contrast website. Come on, folks; make your websites accessible.


While it certainly isn't a full-contrast color scheme, if you're having trouble reading it, you probably have a messed up color profile or gamma setting. I'm no big fan of low-contrast color schemes, and I'm partially color-blind, but I had no trouble with that site.


I'm not sure what your definition of contrast is, sir. That was a fairly bright grey on a fairly dark grey. Returning to HN from this article was an unpleasant retinal shock.

Confusing a bright, backlit monitor with a piece of paper is some pretty nasty 1990s logic.


Note the use of the word "accessible" in the original post. This is not a synonym for "pretty" or some way of demanding a particular aesthetic style for its own sake. The reality is that some degree of visual impairment is not at all uncommon, and that difficulty with contrast can result from any number of conditions.

I'm not sure about the original poster's claim that the site isn't accessible. I haven't tried applying a stylesheet from my browser to it, and I don't use anything to enhance the contrast of text in web pages, as my vision is not impaired in that way. There's an argument to be made, though, that modern websites use coloring, layers, and a bunch of other things to achieve their layouts and looks to the extent that, if you want to see content as it is presented, or even to find it readable or usable at all, you'll have a hard time applying your own styles in the browser.

Believing that every departure from black type on white paper is virtuous, just because the web can be used in a wide variety of ways, is pretty mid-2000s Web 2.0 logic. Contrast is useful, and its proven track record for rendering text is no mark of antediluvian shame, irrelevance, or datedness.


Would you call that low-contrast, though? My comment suggests that both colors are quite some distance apart, so despite the inversion, the contrast is quite reasonable. It just has the added bonus of, you know, not burning the crap out of my eyes.

You seem to have read something in my comment that wasn't there.


Same for me, but I do have my screen's brightness turned down so that white backgrounds don't glare.

FWIW, I discovered the Firefox plug-in Tranquility recently. It's sort of a lightweight Readability: although it has far fewer tweaks available, it compensates by using far fewer resources. Readability was pretty much unusable on my low-spec laptop.


Give Opera a try. It has an inbuilt "accessibility layout". Black text on light green background, big font, fixed width.


https://chrome.google.com/webstore/detail/ppelffpjgkifjfgnba...

iReader for Chrome is a good option; I tend to read most articles through it. Other browsers have their alternatives.


It seems your comment's contrast is getting lower by the minute. I wonder why... :)


Feel free to use plugins like Stylish :)

Do you have any suggestion for the CSS?



