
Many years ago I was using fossil for OpenBSD development to manage my patches.

Around that time I tried to import the entire OpenBSD src repository into fossil, by importing the CVS-to-git conversion of src, as published on Github. I was following the official git->fossil migration guide. I left this running for a week (or two?) at which point the fossil git loader was loading OpenBSD commits from somewhere around the 2000s. At that point I stopped the process. Performance might be better today, I don't know. And perhaps post-conversion run-time performance is much better, but I never got that far. Anyone can try to reproduce these results by running the same conversion today.
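For anyone who wants to try, the conversion boils down to roughly this (a rough sketch with placeholder names, not a record of the exact commands I ran back then):

    # clone the CVS-to-git conversion of OpenBSD src, then feed it to fossil
    git clone https://github.com/openbsd/src.git
    cd src
    git fast-export --all | fossil import --git ../openbsd-src.fossil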

I don't think I ever talked about my attempts with fossil to anyone at the time. But I recall the topic coming up somewhere when the Game of Trees project became public, and someone suggested I should be using fossil instead.

I am now using Game of Trees for all my OpenBSD development work and I am happy with it.


Git is not insufficient. For various reasons, Git is not a good match for what OpenBSD needs. OpenBSD needs an implementation that uses privsep, pledge, and unveil, fits the mindset needed for Theo to accept running it on his own infrastructure, doesn't carry more baggage than necessary, and is easy and enjoyable for OpenBSD developers to work on independently of third parties. So the options were forking Git or writing something else, and I chose to do the latter.

See the goals page for more: https://gameoftrees.org/goals.html



Hi, I am the Game of Trees project founder and main author of the code.

If there is anyone here who would be interested in seeing this project advance faster and has funding available, please talk to me. I am a freelancer with an EU VAT ID.

Progress since the beginning in 2017 has been steady but slower than I would like. I have occasionally applied to various open source funds (the Prototype Fund, NGI Zero, and the like) but was never lucky enough to get funds allocated (which is fair: many other great projects are being funded instead, so I am not bitter about this).

And I don't want to bother the OpenBSD Foundation, since they are already partly funding unrelated work I am doing in the OpenBSD wifi drivers and 802.11 stack. I also believe that the ability to run this alternative Git client on any Unix-like system, and the alternative Git server on OpenBSD (though there are plans to port the server to other Unix-like systems as well), can be useful for many communities and organizations beyond OpenBSD.

Some things I would like to work on in particular are:

- SHA256 object ID support, enabled by default, with repositories using either SHA256 or SHA1 but without the ability to mix different hashes in the same repository. The server could offer read-only copies of repositories converted to SHA1 for legacy clients which do not support SHA256. Git itself already supports SHA256, so this won't break compatibility with regular Git clients, though it might not be possible (yet?) to push SHA256 repositories to many hosting sites; that is not Git's fault. (See the sketch after this list.)

- Server-side "trivial-rebasing" of changes, such that clients could push changes to servers without having to fetch first, provided the pushed changes can be merged tree-wise, i.e. without any file content clashes or unclean additions/deletions of files.

- Performance improvements: Got currently spawns one privsep child process per pack file on disk, cycling children in and out as needed when there are too many pack files. This can cause a lot of forking during random access across the entire history, which occurs when computing deltas while packing. Small pack files should be stored in memory instead, and each child process should be able to handle multiple packs to reduce the amount of forking.
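To give a sense of the SHA256 item above, plain Git can already create such repositories today; a minimal sketch (the directory name is just an example):

    # create a repository whose object IDs are SHA256 hashes (requires Git >= 2.29)
    git init --object-format=sha256 sha256-repo
    cd sha256-repo
    git rev-parse --show-object-format   # prints "sha256"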

You can skim the man pages (https://gameoftrees.org/manual.html) to see all the work that has already been done. And of course you can read the source code; see the web site for details.


As with OpenBSD, there has been discussion for ages about moving NetBSD from CVS to something else. However, nobody has slimmed any of the VCSes under consideration down to something small enough to add to NetBSD's base. I think Got is worth considering. I'm looking forward to reading more and trying it out myself. Thanks!


I will be following your project closely. I’m sure plenty more will too! Lots of room for improvement on git, and sure to garner users!!


Absolutely not. OpenBSD decided to reinvent the wheel for their own bubble; let them pay for it. You say yourself in the post that the server is OpenBSD-only. Maybe not you, but other OpenBSD devs don't give back either (despite project docs and policy), and even changed the OpenBSD docs to no longer say contributors should give back to upstream (this is documented). Why should anyone trust that this will care about others and that the money is for this specific development?

tl;dr: prove it's not another OpenBSD NIH project and that people outside OpenBSD care, or f off


OpenBSD regularly produces "NIH" projects that the rest of the industry adopts, takes from, and (usually) never gives back. Even where not commercially successful, projects like GoT or LibreSSL ensure we're not living in a monoculture.

If you don't have anything kind to say to them (even "thanks" would be more than most people can be bothered with), then it's better not to say anything.


Imagine telling a FOSS developer to “f off”? This language and sentiment has no place on HN.

Edit: Read the commenter's other comments. They have been almost exclusively negative posts targeting OpenBSD since 2021.


There are lots of GPL zealots who hate anything BSD.


Hi, I am the person you are accusing of mischief.

I didn't break any agreement. I agreed with Mathy on what to do, and that's what I did.

The fact that Mathy decided to get CERT involved and subsequently had to extend the embargo has nothing to do with me.

(edit: typo)


To be clear, I accuse you of nothing less than playing a rational response to the researcher's apparent "always coöperate" strategy. "Defect" in a prisoner's dilemma context does not mean "breach" in a legal one. (For example, an OPEC member defecting has zero legal consequences. It does, however, affect their standing in the next round of negotiations.)


'Defect' doesn't mean 'breach' in a legal situation; it also doesn't mean 'sociopath and/or economics professor' in a psychological one, but people form connotations, so be careful what you accuse people of. Anyway, I think you're stretching the PD analogy too far... But I'll play a bit too. Construct a payoff matrix. What would real defection look like? It would be patching in mid-July, when the patch was received, instead of waiting until the agreed-upon end-of-August date. I see no defection here. There could only be one if, after CERT got involved and set a new date, Mathy had asked OpenBSD to postpone past the prior agreed date, and instead of cooperating they had patched immediately for the biggest gain to their users. There is no mention of such a request, so it probably never came.


I support your decision.

If Mathy was concerned, why did he wait to notify CERT? Should that not have been the first priority?


OpenBSD wifi maintainer here.

I was informed on July 15.

The first embargo period was already quite long, until end of August. Then CERT got involved, and the embargo was extended until today.

You can connect the dots.

I doubt that I knew something the NSA/CIA weren't aware of.


In other words, it's malfeasance by the security community to hold out.

There are only a few courses of action. One is to sit quietly and let everyone eventually implement the fix. And that doesn't work: with no fire under people's asses, the work gets delayed.

The other is to release it promptly. Then at least we can decide to triage by turning off the affected service (even if that means wifi), requiring another factor like tunnel-login, or what have you.

But truthfully, defecting in the Prisoner's Dilemma played out here was the best choice; the rest of the community chose "agree".


No one should care about a community that agrees that releasing silent patches is a good idea. This is exactly the same behavior that created the need for full disclosure in the first place. And no, there aren't just two options, nor are processes binary. It's rather mind-boggling how "the community" has managed to come full circle in such a short time and itself become the opinionated people it was supposed to be the alternative to.


Really makes me wish you'd told the world. I know all the arguments against that, but this sort of thing is no good either.


Yes, but that would result in them not getting notified for any other vulnerability.


Your impression that nothing ever happened does not align with the facts.

The project applied 2 years in a row, and mentored several students. Some developers mentored more than one student.

https://www.google-melange.com/archive/gsoc/2014/orgs/openbs... https://www.google-melange.com/archive/gsoc/2015/orgs/openbs...


For this particular open source project GSoC brings no advantage to the table (yes, it may be great for other projects).

OpenBSD does not need GSoC to attract contributors. The project gets a good amount of new contributors on a regular basis, and they get onboarded quickly without causing much distraction, if any.

The mentor/student relationship is atypical for open source projects, which are used to operating as a community of equal peers. Mentoring students who expect to be mentored takes a lot of time, and the vast majority of them don't come back. In my experience money is a key incentive for students in GSoC and that makes it hard to keep them as volunteers. Unless you are very lucky as a mentor and pick a student who turns out to be an open source enthusiast, they won't actually care about your project in the long term. And there is no way of knowing that during the application process, except in special cases where you already know the student, as I did in one instance; but that's an exception.

(Speaking as an OpenBSD dev, and as a former mentor of several GSoC students, over several years, at the Apache Software Foundation).


As a former GSoC mentor, I think it's important to have an onboarding pipeline in your project, and disagree with the notion that the mentor/student relationship is somehow atypical - such relationships pop up all the time in the natural course of projects, and are key to the health of most of them. It's up to you to select the students who you think will stick around. Given that, taking the time to onboard junior people is a really rewarding investment in the project.

(My student ended up going on to work for Red Hat. I don't presume I had a lot to do with it, but I think the culture did.)


What I think is unnatural is the situation where the student is being paid, and where the mentor has a formal responsibility for the student and acts as the person who ranks the student and thus decides upon their salary (fail the student -> no money).

In a normal situation, new contributors show up and are self-motivated, and receive guidance from others so that over time they become equals. The mentor's role is spread among several people, and it is informal and temporary. There is no money involved.

Many (not all!) GSoC students do not experience what the normal situation in open source feels like.

I am happy that your student is an open source enthusiast and got a job in open source. That is great.

I have seen this kind of good experience, but also more disappointing ones. In one case, a student simply disappeared after the first payment (in the middle of the summer) had been issued.


>The mentor/student relationship is atypical for open source projects

This is exactly one of the flaws in most open source projects, which projects like GSoC and Outreachy aim to improve. Mentor relationships are one of the keys to building a more inclusive community, and reaching underrepresented groups.


I'm interested to hear more. Could you elaborate?


As a former student I would like to emphasize that it is not about the money. I would have worked on the same project over the summer even if I was not getting paid. There can be various reasons why students don't return or become permanent contributors. For example, in my case, being a double major in two very distant disciplines, I do not have the time to contribute while school is on. This summer I will be working on my thesis at another university, which again will leave me little time to make any significant contributions. I am still trying my best to help new people by reviewing requests and making small changes that don't take too much of my time. I plan (hopefully) to get back to contributing on weekends once I am done with school.

For a few friends, internships were another reason why they did not go back to their orgs. $5k seems like a big amount, but it really isn't (even in India!)

Finally, one last reason I can think of is terrible mentors. I have been really lucky to have amazing mentors but I have heard a few horror stories from others.


Having been on the receiving side of GSoC students, I'd say 90% of them come from poor countries and are looking for the money. I've seen some that weren't students at all and seemed to be working for "consulting" companies already.


Can't really comment on this as I know no such people. The "consulting" companies part is hard to believe given the rules for GSoC. The new payment adjustment taking PPP into account is a welcome move in this regard, even though I don't agree with the numbers they have decided on.


> I would have worked on the same project in the summer even if I was not getting paid

Yes, this is exactly what GSoC can be good for. Ideally, it allows people like you to spend time doing what they love doing instead of working for crappy startups.

The good (and fun!) experiences I had as a mentor all shared this element.


> In my experience money is a key incentive for students in GSoC and that makes it hard to keep them as volunteers

I think this nails it.


>In my experience money is a key incentive for students in GSoC and that makes it hard to keep them as volunteers.

I think that's a bit pessimistic. The bottom line is that during the summer a lot of students take on jobs to improve their finances (at least back in my day). When I was a student I would have loved to work on projects I love or find interesting and get paid for it, rather than, as in my case, pretty much wasting my summer working in computer/video game retail to earn some bucks.

So yes, money is an incentive, but you could probably make that money in a regular summer job; the big boon, as I see it, is working on something you find interesting during the summer, which in turn increases the chance that you will want to continue working on it once GSoC is over.


For a working dev who occasionally uses OpenBSD, is passionate about the project & its goals and wants to contribute, how would you suggest I start?

I tried keeping up with the mailing lists for a while, but I found it difficult.


Read the porting handbook and work on updating ports of software you're interested in. That's an easy way to get your hands dirty quickly. You can learn a lot about how OpenBSD works and you can work with upstream projects to make their software port more easily to OpenBSD.
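For example, the update loop for an existing port looks roughly like this (a rough sketch; the port path is a placeholder and the porter's handbook describes the real workflow):

    # bump the version in the port's Makefile, then:
    cd /usr/ports/www/example-port   # placeholder path
    make makesum                     # fetch the new distfile and regenerate distinfo
    make patch                       # apply existing patches; fix any that no longer apply
    make update-patches              # regenerate the patch files from your fixes
    make package                     # build, then test the resulting package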


Thanks. This is solid advice and also how I used to contribute to GoboLinux!


+1 -- This is the one and only problem I have to regularly help my non-technical Ubuntu friends (and their friends) with. Every few months they cannot install updates anymore because their /boot fills up and apt fails to install a new kernel package.

The simplest fix would probably be to make /boot large enough by default (in the order of 10GB or 20GB or so -- the current size is 512MB IIRC).

A better fix would be to purge old unused kernels automatically but as far as I understand there were some difficult edge cases around that.


> The simplest fix would probably be to make /boot large enough by default (in the order of 10GB or 20GB or so -- the current size is 512MB IIRC).

Sure, I'll just use 1/6th of my SSD to store 60 megabytes.

  $ du -hs /boot/
  56M	/boot/
If 512M is not enough space for /boot you're doing something wrong.


>If 512M is not enough space for /boot you're doing something wrong.

I don't know what planet you're living on but it's certainly not this one. Between an Ubuntu desktop, a laptop, and a personal server with multiple Ubuntu VMs on it, all of which are kept rigorously up to date, I fix this problem at least three times a year, every year.

The command line process to fix it[1] is a multi-stage mess of dense bash-fu that comes with a 140-word, two-paragraph explanation so that /Ubuntu veterans/ can figure out what is going on without resorting to scouring the man page for flags. The friendly GUI process to fix it relies on a third-party tool that is no longer maintained[2].

It is not possible to explain to non-technical users what is happening here, which means the only thing they can do when they see this is call their technical friend and cry for help. This is exactly the kind of user experience that makes people think Linux is not ready for widespread desktop use.

This is definitely something the OS should take care of itself. I'm ignorant of the challenges that caused it to be this way in the first place, but in my ignorance I would advocate that:

a) the partition be made larger by default
b) the OS auto-purge any kernel package more than three revisions old

[1] https://askubuntu.com/questions/89710/how-do-i-free-up-more-... [2] https://launchpad.net/ubuntu-tweak/
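For the curious, the kind of thing those answers boil down to is roughly this (my own simplified sketch, not the exact command from the link; it keeps only the currently running kernel, so use with care):

    # purge every installed versioned kernel image except the one currently running
    dpkg -l 'linux-image-[0-9]*' | awk '/^ii/ {print $2}' \
      | grep -v "$(uname -r)" | xargs -r sudo apt-get -y purge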


Here is my old-timey one-liner personal solution[0] for it that has worked flawlessly so far, obscure theoretical edge cases be damned, because the non-edge-case situation is just far worse and has real practical impact.

(warning, rant inside)

[0] https://gist.github.com/lloeki/520acee8ba3b44c532c7


Um, isn't the fix `sudo apt auto-remove --purge`, which autodetects unused kernels? What am I missing?


If you do not run that command before /boot fills up, and you have a full /boot with a partially installed kernel, then that command fails. So this works fine if you remember to call it regularly, but it does not solve the problem once it occurs.
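When it does occur, the usual way out (a hedged sketch; the version string is a placeholder for one of your old, non-running kernels) is to free space with dpkg directly and then let apt finish:

    # placeholder version: pick an old kernel that is NOT the one you are running
    sudo dpkg --purge linux-image-extra-4.4.0-31-generic linux-image-4.4.0-31-generic
    sudo apt-get -f install            # let apt finish the interrupted kernel install
    sudo apt-get autoremove --purge    # the normal cleanup works again now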


Interesting. I haven't encountered that edge case. I've many times filled /boot and resolved by doing an auto remove.


It seems silly to me that I need to manage this myself. Why do I need to be worrying about different kernel versions? I just want to make websites.



Following the chain of links and answers and explanations, we come to the conf file whose comments say that it commonly results in two (2) kernels being saved, but can sometimes result in three (3) being saved.

IOW, it does automatically remove old kernels; it just keeps the last 2-3.

So, yes, run "apt-get autoremove", that's it.


I think it has solved the problem for me, but it still is not a good solution for anyone who would answer "What's a terminal?"

I love having a terminal with bash and use it constantly, but I don't think it should be needed for the system to just go on working.


I've been using Ubuntu either part or full time since 2007. I've literally never encountered this.

Which is not to say you're lying, I'm just sort of flabbergasted that this is an issue for so many people. Do you run autoremove much? Maybe that would solve it for you?


I run Ubuntu 16.04 on a laptop, a desktop, and a TV streamer, and I get this all the time. My boot partition on the desktop is 15 GB and it gets plugged up every now and then.


I've hit this before, but honestly do not think it's a big deal. Sure the installer could default to a larger boot, but it's manually configurable during install. And cleaning it up once in a while is just good sys admin practice.

sudo bash -c "apt autoremove --purge; apt update; apt upgrade" is what I usually run.

I'd prefer they focus engineering cycles on actual engineering problems.


Sorry, but Ubuntu is doing something wrong here, not me. This should be handled automatically. Ubuntu wants to be the system for everybody, but you can't expect people to open the terminal and fix this manually. Making boot 20GB is ridiculous, but 1GB should be no problem, and for me 2GB would be OK if that means that this problem will disappear forever.

And I believe my boot partition is only 256MB, and I didn't set it to that. That was a system default.


Ubuntu is absolutely doing something wrong here and we'll get that fixed. Thanks!


Yup! That "something wrong" is installing every single kernel update for two, three, four years and not deleting any of the old kernels.

Super common in enterprise deployments. I ran into this a bunch on my $EMPLOYER-issued workstation.


The installer should handle this. When you apt-get upgrade anything besides the kernel, does it leave the old version lying around?

I understand that it may be wise to keep the old kernel around so the system can be booted in case there is a hardware incompatibility or breakage in the new release, but that justifies only one additional kernel. Ubuntu keeps those kernels sitting there until you `apt-get autoremove`, and that means that unless you're running that command routinely, the boot partition is going to fill up at some point, no matter how big you make it.

This is especially a problem for people who use the unattended-upgrades package. I've autoremoved and had it clean up almost a gig of old kernel images before.


If you're running updates weekly it will fill up on Ubuntu. This is a recent problem, and I've only experienced it on my laptop with full-disk encryption.

The update process generates on the order of 100mb/month.


It's not new. It's been happening to me since I started using Ubuntu in the 8.x range.


Doesn't `apt-get autoremove` remove those old kernels? Not that it's a solution; it should of course be done automatically! Here's what I get when using it:

    > apt-get autoremove
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages will be REMOVED:
      linux-headers-3.19.0-79 linux-headers-3.19.0-79-generic
      linux-image-3.19.0-78-generic linux-image-3.19.0-79-generic
      linux-image-extra-3.19.0-78-generic linux-image-extra-3.19.0-79-generic
    0 upgraded, 0 newly installed, 24 to remove and 39 not upgraded.
    After this operation, 1,732 MB disk space will be freed.
    Do you want to continue? [Y/n]


Whenever a kernel is updated, autoremove should be called immediately afterwards. It should be called before update-notifier's restart now / restart later dialog box appears.

Currently, Ubuntu installs a new kernel and update-notifier tells the user a reboot is needed. The autoremove notification only appears when using the terminal, which explains why users are running into this issue. Also, update-notifier informs the user another reboot is needed after autoremove is run.

To avoid this mess I've commented out the lines in /etc/apt/apt.conf.d/99update-notifier and wrote my own updater using bash and zenity, incorporating needrestart. It's not pretty but it works.


Absolutely not; automatically running autoremove may lead to bad things. On occasion autoremove flags other, more useful packages for removal.

For example, I'm using LVM on my Ubuntu laptop, and after updating the kernel and running "apt autoremove" it removed the LVM package, leaving me scratching my head on reboot as to why it wouldn't find my root filesystem (frankly, I have no idea how it became "unneeded").

A more sensible approach is how Red Hat does it with YUM/DNF: allow only a certain number of installs of the same package, via "installonly_limit" in yum.conf. That way, when a new kernel gets installed the oldest is removed to keep the system at the specified limit.

On my RHEL/CentOS machines I tend to provision /boot narrowly, around 250-500MB. Set "installonly_limit" to 2 and the system will keep the most recent kernel and one back. It works for me.
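Concretely, that is a one-line setting (shown with the limit of 2 from above; on newer Fedora/RHEL the same key lives in /etc/dnf/dnf.conf):

    # /etc/yum.conf
    [main]
    installonly_limit=2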


I see your point, though I too use LVM and haven't seen that happen... weird. I could have been more exact in my response, as autoremove does more than just remove old kernels. Anyway, it would be nice to see Canonical resolve this.


Care to share it? Maybe it could help others...


I thought about sharing it but like I said it’s not pretty. It involves editing sudoers and holding back config updates for sudoers and update-notifier-common which might cause problems in the future if you’re not aware. I’d much rather see Canonical address it properly.


>Doesn't `apt-get autoremove` remove those old kernels?

Of course it doesn't! Why would you assume such a silly thing? /s https://askubuntu.com/questions/563483/why-doesnt-apt-get-au...


I confirm that it doesn't autoremove. I had to empty /boot on some servers lately.

Anyway, sometimes one wants to keep old kernels. I have an old laptop that runs OK with a 3.something kernel and has weird video sync problems with any newer ones. Ubuntu 16.04 keeps running with that old kernel, so I keep booting from it, maybe once or twice per year.

However, the proper solution would be to pin that package and autoremove the others.


Yes, it does remove old kernels. Read the very link you posted.


> It's better to err on the side of saving too many kernels than saving too few

But Muh Freedoms! I hate to be subject to one man's opinion of things /s


It's also kind of a garbage argument.

People who know they have broken kernels don't keep upgrading them, they stop and fix them.

People who don't know they have broken kernels also don't know they can boot with an older kernel, so they get nothing from the "backup".

We want to leave some time for people to realize their kernel is broken, so keeping three is probably just fine. Honestly, it would probably be adequate to just bump the oldest one off the queue whenever a newer one is requested. If you've got a tiny boot partition, maybe that means only two revisions. If you've got a huge boot partition it could be 20.

But just keeping them all and making people manually uninstall them gains you nothing, it's user-hostile for no reason.


Except, it would be nice to keep a few of the (recent) older kernels, in case things go awry with the new update.


This already happens: apt autoremove won't remove the package for the running kernel. It'll clean up "old" (N-1 and lower) kernels, but installing kernel N+1 won't allow kernel N to be deleted as long as kernel N is still executing.

Once you reboot/kexec into the N+1 kernel, it'll let you remove the N (now N-1) kernel, bringing you down to one. But at that point you've proven the new kernel works—at least well enough to get to a shell you can run apt autoremove from.

This is why autoremove isn't so auto: if it happened automatically after reboot, it might be running on a now-wedged system (e.g. one that can't bring up the display manager), removing the last-known-good kernel and leaving you with only the broken one.

I think the right middle-ground solution would just be for installing kernel updates to touch a file, and for Desktop Environments to notice that file and trigger a dialog prompt of "you've just rebooted into a new kernel. Everything good?"—where answering "yes" runs apt autoremove. On a wedged system, you can't answer the prompt, so the system won't drop the old kernel. (In other words, just copy the "your display settings were changed. Can you read this?" prompt. It's a great design!)


Fedora/RHEL yum has a much better solution: installonly_limit, defaulting to 3. Kernels which have been updated will only be kept up to this depth. The excess are automatically trimmed during update.


Wouldn't a good solution then be to run autoremove before installing a new kernel?

That way, you have kernel N running, first autoremove wipes kernels N-1 and older, then it installs kernel N+1, so that when you reboot into N+1, you'll always have known-good kernel N if it doesn't work.

It's a very similar solution to how a good programmer solves an off-by-one error, doing a shift/rotate shuffle on a for/while loop.


What happens when you have a high-uptime system where you repeatedly "apt dist-upgrade" and end up installing packages for kernels N+1, N+2, N+3, etc., all without rebooting into any of them?

I agree that if the user manually runs an apt [dist-]upgrade—or really any manual apt command—then that's a good time to do apt maintenance work. (Homebrew does maintenance work whenever you invoke it and there haven't been any complaints so far.) But kernels usually get installed automatically, so it can't just run then.

Now, if there was a specific concept of a "last-known good kernel" (imagine, say, the grub package generating+installing a virtual package when you run grub-install, that depends on whatever kernel you specified as your recovery kernel, ensuring it remains around), then your approach could work—you'd always have two kernels, the LKG for a recovery boot, and the newest for a regular boot.


Exactly what happens on Fedora.


I agree.

I'm running Ubuntu 16.10 currently. A kernel upgrade hosed my setup yesterday, and having an older kernel available saved my butt. I was able to do another `apt-get update` and things eventually worked with the latest kernel.


For Ubuntu Desktop, it may make sense for the package manager to keep only the latest 2 or 3 kernels, and automatically purge the rest.


I had the /boot filling up problem but thought it had been fixed; I'm on 16.04+. I'm pretty sure the last two kernel updates I did removed older kernels, leaving me with the current one and the previous one...?


You can configure apt unattended-upgrades to autoremove by default; perhaps you did that?
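If I remember the knobs correctly (worth double-checking against your unattended-upgrades version), it's a matter of something like this in /etc/apt/apt.conf.d/50unattended-upgrades:

    // remove no-longer-needed kernels and dependencies after an unattended upgrade run
    Unattended-Upgrade::Remove-Unused-Kernel-Packages "true";
    Unattended-Upgrade::Remove-Unused-Dependencies "true";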


Nope, still doesn't do it without manually invoking autoremove.


This is the main problem that keeps me from wanting to set up less technical family members on Ubuntu. It's possible to get in a spot where even a simple command won't solve this.


Solus uses https://github.com/ikeydoherty/clr-boot-manager now, which purges old kernels and modules, but keeps the modules for the currently running system so HW still works


> The simplest fix would probably be to make /boot large enough by default (in the order of 10GB or 20GB or so -- the current size is 512MB IIRC).

What? This is ridiculous and unacceptable. I don't use Ubuntu anymore; can someone tell me what is filling up the boot partition?

I'm currently on ArchLinux and mine is 200MB and it's 14% full! I can't fathom what could occupy so much space.


It's the way kernel updates come in apt. Each kernel update is a new package, not an upgrade of a previous kernel package. Thus the old kernels are left in place and the new ones are installed alongside them. After about 3 kernels have accumulated in /boot, the previously recommended size for /boot is full and an attempted update to a new kernel fails.

It can be manually fixed by removing older kernels ("sudo apt purge ...").

Perhaps I'm mistaken, but I thought a fix was in place for this. Maybe it was something third-party, but apt definitely offered to remove unused kernel packages for me recently.


These look nice but they focus on the physical layer.

Do you happen to know any useful English literature that covers the MAC layer of modern wifi standards (n, ac, ax)? Apart from the 802.11 standards, of course.



Thanks!

