I feel like an SCM would have to be WAY better than git to make it worth using. The entire developer community is on git, so you are still going to have to know how to use git in order to access other open source projects. If you are already going to have to be using git for many things, why add a second SCM you have to learn to use?
There are significant pockets of non-git users in the developer community ;)
I find hg much more pleasant to use than git, and can generally still work in git based teams while using hg.
I find pijul's approach much more interesting than git's, and look forward to pijul (and others :) pushing forward how we do revision control.
I don't believe git to be the end of the evolution of revision control systems (which for me has looked like cp -> rcs -> cvs -> svn -> git|hg), and find git lacking enough that I look forward to what the next generation does.
Would I argue that you personally should branch outside of your git centric world? Of course not! Just as I've worked with many devs who live their lives gainfully employed only knowing Java, or sys admins who only know Microsoft, it is up to you as to if you wish to explore different approaches and styles to achieve your end goal.
Sure, there are developer communities around the other SCMs, and many are passionate about those alternatives.
My point is that even if you do get into one of those other communities, you are still going to have to learn how to use git because so much of the world uses it. You will need it either for your job or because the open source project you want to use is on it. You can't skip learning git.
If you don't mind that, then go for it. For me, I don't want to use my limited capacity for learning things on learning a second (or third, since I still have SVN usage somewhere in my brain) SCM.
That’s not true. I’m a gamedev and have never had to learn git. The closest would be downloading a zip from GitHub. Almost all game dev is done on Perforce due to the enormous asset sizes we work with.
Also in gamedev. Git doesn't work for larger AAA projects, as it often chokes on the amount of data stored by production teams, even when LFS is used. Perforce can handle huge binary files relatively easily in comparison.
so you're saying contribute to absolutely no 3rd party code? The person is saying that if you spend time outside of your VCS bubble, then you will end up using git, even if you don't like it.
Correct. The dev teams working on the ‘client’ part of the game (ie, the game) don’t have time to be contributing code to open source projects, but we don’t use much open source code anyway. But other teams within the company will be using Git for server code, and other non-client side work.
I’m not saying that Git is never used in game dev (it absolutely is), I’m saying that not everyone will have to learn it, or ever touch it.
The thing about Linux in the early 00s was that to a significant portion of developers, it was already obvious where things were headed. Anyone not seeing the potential wasn't really paying attention. And one shouldn't care what people who aren't paying attention think.
How likely is it that Fossil will displace Git? If you believe it is likely, then learning it, and using it for projects, may be time well spent. But even then, it'll take so long that you will still have plenty of time to adapt. So perhaps it is something you should keep an eye on, but not worth investing in now. If you find Git to be a pain, then a better investment right now, which pays off right now, would be to figure out what you can do to get along with Git, and thus most developers.
You have to treat learning as an investment. Which means, you have to think about returns on investment. And just like regular investments, it doesn't matter what people who don't pay attention think.
I think it kinda doesn't matter in this context, though. You can pick up enough in an afternoon to be productive in a new VCS. If I worked at a Mercurial shop, I wouldn't disqualify a candidate that had only used git.
But sysadmin'ing Windows vs. Linux is basically two different jobs. You certainly wouldn't hire someone to be a Linux admin who'd only worked on Windows servers before.
Eventually? Sure, nothing is forever. But the replacement won't be by something older like p4 or hg. The real question is: has the replacement been written yet?
The bar is a lot lower; sure, you might want to be able to use git, but if you're only using it enough to interoperate you don't need to know how to do fancy stuff and can get by with a subset of common commands (if you only clone, pull, commit, and push, then you can just about s/hg/git/g) and ugly fixes (why learn to rebase when you can make a fresh clone and hand-pick changes?) without feeling bad about it.
The next big thing will be semantic code versioning. I.e. a tool that understands language syntax and can track when a symbol is changed.
There are already experiments in this area[1], but I don't think any are ready for mass adoption yet.
The fact that it's 2022 and we're still working with red/green line-based diffs is absurd. Our tools should be smarter and we shouldn't have to handhold them as much as we do with Git. It wouldn't surprise me if the next big VCS used ML to track code changes and resolve conflicts. I personally can't wait.
> The next big thing will be semantic code versioning. I.e. a tool that understands language syntax and can track when a symbol is changed.
Hmm maybe. I think this will come, but it may be more broken up through build tools that include this functionality, not a singular VCS. I think aggressive linting is starting to help with whitespace and formatting adding noise to diffs. We're seeing progressively more integrated build tools (Cargo, NPM, vs anything C/Java).
Personally, my bets are on the next VCS being more "workspace" centric as a next-step evolution. Any big change is going to come as we already change how we work. We're starting to see a lot of various tools that are basically VM/Container workspaces that you work out of. Cheap to spin up for each feature, instead of branching/pushing/pulling on one local repo. I think "thin client" reproducible workspaces are the next evolution.
What will VCS look like when everything is always-connected and (maybe?) cloud hosted? Maybe it'll allow "sharing" of workspaces, so you can build big features as a team (instead of feature branches being pushed/pulled). You'd certainly not store the "fat" history of changes locally, and big assets/binaries/etc would be better supported. Maybe it'll be built into one of these virtual workspaces as an overlay-fs, so its transparently auto-saved to a central store instead of manipulating the existing file system like git.
If they ever fix the performance issues and get a little polish, the patch-based DVCSs (pijul et al.) will be a great improvement in usability. If I were guessing, I bet on that as the next step (given the understanding that predicting the future like that is a fool's game).
Sorry, I didn't use SVN enough to answer that. Basically, they're distributed VCSs like git, but where git stores commits, that is, snapshots of what the files look like at a specific commit, patched-based VCSs in the darcs "lineage" store diffs. Obviously the end result looks similar - git gives you diffs by comparing commits, darcs replays diffs to instantiate what the files actually look like at any given time - but it (allegedly) makes it much easier to handle arbitrary merges and do things like maintain a branch that's just like another branch but with a single commit on top.
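A toy illustration of the difference, using Python's difflib (nothing like the real storage formats of git or darcs, just the idea that snapshots and patches are two representations of the same history):

```python
from difflib import ndiff, restore

base   = ["def greet():", "    print('hi')"]
edited = ["def greet():", "    print('hello')"]

# Snapshot model (git-style): each "commit" stores the full content.
snapshot_history = [base, edited]

# Patch model (darcs/pijul-style, much simplified): store only the delta.
patch = list(ndiff(base, edited))

# Replaying the patch reconstructs either version of the file, which is
# why the end result looks similar from the user's point of view.
assert list(restore(patch, 1)) == base
assert list(restore(patch, 2)) == edited
```

The interesting differences only show up once you start merging: a patch carries "what changed" independently of where it lands, which is what darcs and pijul exploit.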
Yeah the initial intro to git takes about 10 minutes. Git add ., git commit, git push. Done. You learn new features as they become required. There is no need to read the entire manual and understand how git bisect works when you are working on your personal blog.
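For a brand-new project, that ten-minute intro amounts to roughly this (repository, file, and identity names are placeholders):

```shell
# One-time setup for a new repository
# (the config lines are only needed if no global identity is set)
git init myblog && cd myblog
git config user.name "Your Name"
git config user.email "you@example.com"

# The everyday loop: edit, stage, commit
echo "Hello" > post.md
git add .                         # stage everything that changed
git commit -m "Add first post"    # record a snapshot locally

# git push origin main            # only needed once a remote exists
```

Everything else — branching, bisect, rebase — can wait until a situation actually demands it.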
Rebase isn’t dangerous in safety terms, the main danger is it can cause frustration when people use it without understanding it first. And as long as people know to not rebase public history, the frustration is usually limited and contained and easily resolved. It might be dangerous if frustration leads to spastic repo nuking, which can happen.
Btw, a lot of people who say rebase is dangerous use git stash and have no idea that stash is one of the actually dangerous operations: much more dangerous than rebase, and not any easier than branching.
“If you mistakenly drop or clear stash entries, they cannot be recovered through the normal safety mechanisms.”
Rebase is the opposite of dangerous, it brings everything back to a specific state and forces the dev to fix conflicts themselves rather than make it someone else’s problem.
What’s dangerous is using things one doesn’t understand and not asking how to use it or reading up on it.
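For what it's worth, even a dropped stash is usually recoverable for a while, because the underlying commit object survives until garbage collection. This sketch shows the standard trick (repo name, identity, and stash message are made up for the demo):

```shell
# Throwaway repo with one commit and one stashed-then-dropped change
git init -q stash-demo && cd stash-demo
git config user.name "Demo" && git config user.email "demo@example.com"
echo base > file.txt && git add . && git commit -qm "base"

echo work-in-progress >> file.txt
git stash push -m "wip"
git stash drop            # past the "normal safety mechanisms" now...

# ...but the commit objects still exist until gc; find and re-apply them.
# (stash creates more than one commit, so try each unreachable candidate)
for sha in $(git fsck --unreachable --no-reflogs | awk '/commit/ {print $3}'); do
  git stash apply "$sha" >/dev/null 2>&1 && break
done
```

So "dangerous" here really means "recoverable only if you know the escape hatch", which is arguably the same category rebase falls into.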
I was thinking about this the other day. The learning curve for git isn’t steep at all, because it’s possible to learn git so slowly. It’s possible to use git with any level of proficiency. If you had to sit down and “learn git” before you could use it, that would be brutal and we would be using something else.
I stick to Atlassian Sourcetree git GUI for most git stuff. I drop down to git command-line for cherry-picking, squash-merging, updating a not-currently-checked-out branch, etc. Sourcetree works well with sub-repos. I use integrated kdiff3 for visual diff (when I need to go beyond Sourcetree's already-excellent diff viewing). Sourcetree's killer feature is easy staging/unstaging different "hunks" of files, or even individual selected lines.
I use mostly Jetbrains IDEs, I'm sure their git/VCS integration is pretty good, but I stick to Sourcetree plus occasional command-line for most stuff.
I'd probably fail a git-oriented interview section because I'd fumble at the command line for some common things for which I just use Sourcetree. In any case, with my approach I've found I generally get a much better overview of code changes and branching strategies, and a more flexible and convenient experience than many other coworkers over the years.
I haven't used Sourcetree in several years, how's it hold up today? When I was using graphical clients I hopped between Sourcetree, GitKraken[0] and Sublime Merge[1].
I loved SourceTree but abandoned it as soon as Atlassian made it impossible to use offline. I don't want an Atlassian or bitbucket or whatever the fuck account. Sublime Merge is pretty good IMO, but not good enough I'd want to pay for it.
I’ve used SourceTree maybe a month or so ago, and it was completely offline for a repo that’s not pushed to or connected with Bitbucket or Atlassian. Maybe initially there was a dialog box to create an account with or sign up for Bitbucket, but that was easily dismissed.
SourceTree was originally shareware – I'm pretty sure I expensed a license at whatever job I was at. When Atlassian bought SourceTree in 2011 their grand plan was to pimp it out in order to drive customers to Bitbucket. Eventually they were going to charge for it (so the worst of both worlds really). I noped out because it was clear that Atlassian wanted SourceTree to become a bitbucket client instead of remaining a git+subversion client.
I don't want a bitbucket account and never bothered to check how often it would phone home to ensure you had an up-to-date bitbucket account. I think the Sublime folks have a better track record here and I fully expect Sublime Merge to get to the point where I'd want to buy it if I wanted a git GUI.
Unfortunately for Atlassian, by 2011 Bitbucket had already lost out to Github and they hadn't realized it yet.
I think Atlassian lifted those restrictions for Sourcetree at some point. I remember having to sign up for Bitbucket, etc., at one point, but my last few installs (on different companies' work computers) haven't put up any such roadblocks lately.
I certainly struggled to grok git early on, but I feel this is largely because the documentation and explanations were convoluted. I think you can grasp it quickly by just understanding a few key concepts, namely: a commit is a snapshot of your files, branches/tags are basically references to a specific commit, commits point to their parent commit(s), and remotes track separate copies of the commit graph. Then you just need to know which commit you are interested in, how to navigate the graph of commits, and how to merge diverged sets of changes. If you understand those things, you know everything you'll need 90% of the time.
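Those concepts are easy to poke at directly, since a branch really is just a file containing a commit id. A quick sketch (assumes a reasonably recent git with `init -b`; repo name is a placeholder):

```shell
git init -q -b main refs-demo && cd refs-demo
git config user.name "Demo" && git config user.email "demo@example.com"
echo hi > a.txt && git add . && git commit -qm "first"

git rev-parse main                # the commit the branch points at...
cat .git/refs/heads/main          # ...which is literally just this file
cat .git/HEAD                     # HEAD is a reference to a reference
git log --oneline --graph --all   # navigating the commit graph
```

Once you've seen that, "branching is cheap" stops being a slogan and becomes obvious: creating a branch writes one 41-byte file.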
I always find this to be a weird statement, primarily because I think there are like 50 other things that are waaay more difficult when it comes to software development, but I hear people time and again say git has a big learning curve.
I mean, once I understood the major concepts of git, and understood how its model was fundamentally different from something like svn or cvs, it wasn't that hard to use it. Sure, it took time to learn, but given that it's a tool so integral to my working life, I'm fine with spending a little more time learning it if it makes me more productive in the long run.
Is there a VCS that doesn't have a steep learning curve? I've used subversion, mercurial, darcs, and bazaar to varying degrees. None were particularly easy to learn.
Right, but my point is you are STILL going to have to learn git anyway, since so many companies and open source projects use it. Your choice is not to learn git OR the other SCM, the choice is to learn just git OR git and the other SCM. You aren't saving any effort.
If you're focused on reducing effort, it may help to look at it as an investment: you spend effort, time, or money up front to learn something new, and get back more later.
I've used git for a decade and I still haven't learned half of it. It's got to be one of the most byzantine pieces of software I have ever encountered.
I've heard that a large part of the game industry still uses Perforce because Git still doesn't work with binary files that well. (The game industry doesn't really have much of an open dev community compared to other places, so that kinda explains why they don't feel the urge to switch to Git.) PlasticSCM has risen in recent years as a direct competitor, but Unity has bought the company and I'm not clear what the future for it would be.
If Fossil had good support for binary files I would have considered switching from Git (since Git LFS is just yuck), but from what I've heard it doesn't look too good.
It’s not binary files per se, it’s size. Checkout sizes are in the 50 GB and up range. Also, unmergeable files are common, so there’s a strong desire for exclusive checkout. This is why game teams either split source and data (awkward to impossible depending on the game), or use Perforce.
This is pretty much why I switched from Mercurial to Git, and in retrospect, it was absolutely the right decision. Not only did that choice relieve me of having to master two different version control systems, it meant I could build on top of the huge amount of Git tooling and (more importantly) training material out there.
Teaching colleagues and using tools they can put on their résumés is pretty important. Iconoclasm isn't always a good way to live (or do business).
Yeah, I was that guy. We used Mercurial for way too long.
I thought I'd learned my lesson, switched to Git, etc. Then I started rolling out SaltStack and turned myself back into that guy. I finally gave up when I realized I was the bottleneck on so many people's work. Went back to plain old written deployment guidance about three years ago. Now I'm looking at containerization and orchestration tooling and worrying if I'm making the same freaking mistake yet again.
It's such a difficult balancing act, especially as one moves into management. I could, by fiat, mandate tooling (and do). But will I regret my possibly poor choice in 5 years? I don't know—let's find out!
Feature for feature, Fossil is more like a competitor to Github or Gitlab. Basically it is an SCM based on SQLite that offers a lightweight version of what those websites provide.
To add some detail here. The features it offers (tickets, wiki, chat, forums, etc) are stored within the repo, unlike github or gitlab where those things may be separate.
Basically, if you have the repo and the fossil binary you have everything. Is this better? I never saw a lot of advantages in my style of work, but i could see this being huge for someone who has intermittent connectivity. Or maybe someone who wants to use the ticket tracking, etc without setting up a separate server or public instance.
I just love that fossil exists and there is a diversity of opinion on how things can work.
Fossil came out in a time when your options were basically CVS, SVN, or something commercial like Perforce. Git is an innovation that came out around 2005 (A year before Fossil), and it took a few years to get some traction, but holy moly are either of these better than what came before. It's just way easier to mess things up in the old school systems.
I think it's pretty interesting in that it's a hugely replete web application written in C that does all sorts of stuff... bug tracking, wiki, etc.
Is this import and export loseless (at least from the git side)? From the documentation I read last time I tried something like this, it wasn't and required workflow workarounds on both sides.
What we need first and foremost is version control; git comes second. In my personal opinion, if you only learn one, it should be git. If you want to learn two, make them git and Fossil. Fossil is better suited to individual private projects than git: it cuts out some collaboration complexity, and it is very high-quality software. As for the comparison with git, I'd rather say they are two pieces of software optimized for different scenarios.
As for why I make that judgment, just try it yourself for an hour at most. I was shocked by the quality of Fossil's diffs. No amount of debate can substitute for first-hand experience.
Personally, I learned hg and then git, and I would 100% do that again if I were doing it over; my experience was that hg was much easier to learn, but that once I'd learned it git was pretty easy to pick up.
I wonder what the server feature gives you. I've been using fossil for a decade or so, always over ssh. Definitely don't need to do that setup part.
Fossil doesn't do rewriting history. I think that rules it out for large team efforts. As an immutable distributed log of everything I write by myself it's essentially perfect.
You can definitely fake it. It’s probably not worth it though. Fossil’s workflow might prove lovely without the need to rewrite, but if not just use a tool like git that allows it.
What I like about this question, however, is what it reveals about our assumptions, and what it says about what the words “commit” and “history” mean, and what it is we want out of a version control system.
The truth is that a commit is what I say it is, whether I’m using git or Fossil or anything else. This means that it’s absolutely always guaranteed to be subjective, and has only the meaning I give it. Preventing rewrites might have some workflow benefits, I’m quite open to the idea that there are advantages to the way Fossil does things. But the idea that rewrites should be prevented in principle because history should be preserved is really very funny to me. It’s only possible to preserve the history of the things I said are history in the first place.
You can definitely fake it if determined, it's sqlite under the hood. Your way would work but the branch will live forever (possibly closed) if you push it from the local repo.
Easier is to copy the repo, do a bunch of feature work there, then rsync over to the shared repo and commit. I don't bother though, local history has the mistakes and backtracking enshrined.
Private rebasing to clean up a series of commits before pushing anywhere, for one. While there’s a genuine distinction to be made between private pre-merge history and published post-merge history, Fossil’s stance is that no amount of pre-push cleanup should ever be allowed, that code commits should be immutable history from the moment of inception.
Right, you’ll definitely think twice about committing early and often if commits are suddenly etched in stone. Feels like the wrong incentive/message for a VCS. That said, I believe Fossil may be better equipped to explore a presentation layer that users have control over, and separately examine commit history when needed. Git wasn’t designed that way. I don’t agree with Fossil’s dogmatic stance on git, but Fossil might be workable in the sense that your history of noisy throwaway work can be mostly muted. I think?
I take it the smiley means you know this, but to be clear, that’s not the kind of rewrite we’re talking about. Fossil is against providing multiple/alternate sequences of commits that can represent the same timeline. Breaking the Merkle tree isn’t even in consideration (because a determined adversary can just as easily break Fossil). The discussion is really over workflows and perceptions, and has very little to do with the technicalities of preserving history.
Squashing literally requires `git rebase`. Perhaps you think of squashing as a GitHub button and not a series of rewriting commands? Technically, any rewrite is equivalent to some arbitrary construction of history from some point in time, but I think a reasonable definition of rewriting history is if you need to rebase or cherry-pick when using the command-line.
We don’t need a reasonable definition of rewriting history, we need a better conversation about what our tools are even for. Git isn’t a tool for preserving history, it’s a tool for keeping track of code changes.
Git rebase provides a new, second, altered sequence of commits. It doesn’t replace the original. We often choose to preserve the rebased sequence, but this distinction is not academic, it’s critically important because as a part of a version control system, rebase can be undone, precisely because it does not write over the old history.
Git never promised a “history” per se, not in the form of an immutable record of events. The framing of git rebase as a history rewrite, the very idea that rewriting history is bad, came from the Fossil team in an attempt to convince people to try Fossil and cast shade on git. I’m in favor of better VCSs, but the claim that rebase is a history rewrite is hyperbolic, and the judgement on top of that is silly and misguided. Rebase is a tool designed to reorder a sequence of commits, mostly for the purposes of making local changes presentable before pushing them, and has other legitimate uses too.
> rebase can be undone, precisely because it does not write over the old history
Good point. Perhaps "rewrite history" is not the best metaphor, and does not cover all possible git workflows. I only considered the workflows where a rebased branch is merged back onto master instead of its non-rebased variant, which loses its name, its remote copy/copies and is eventually hit by the GC.
> the very idea that rewriting history is bad, came from the Fossil team in an attempt to convince people to try Fossil and cast shade on git.
The developers I've encountered who resist adopting rebasing workflows with git have no knowledge of Fossil.
My impression is that the main objection to such workflows is that it exceeds some developers' novelty budget [0] and that those developers generally don't read git messages and don't value the history of code to understand its current shape.
Sorry maybe I got a little ranty there. Rewrite history is an okay metaphor, not great but nothing super wrong with it as an approximation. And you're totally right, there is definitely resistance to rebase outside of Fossil, and even within the git community with people who've never heard of Fossil. Git does have a steep learning curve and a bumpy interface that most people (including yours truly) never get all the way through.
FWIW, my intro to rebase came after using Perforce for years. Because git is strict about commit ordering and P4 is not, I was struggling to figure out how to manage multiple in-flight changes like I would in P4, and how to clean up and push something that wasn't a hot mess. Rebasing my local workspace, it turned out, is the answer to that. It is the key to working around git's strict commit parentage, and makes it easy to sculpt changes so they're palatable for me and my team.
Above I was referring to some of the anti-rebase messaging Fossil has put out in the past. Some is there still, for example https://www.fossil-scm.org/home/doc/trunk/www/rebaseharm.md I'm quite pleased to see that a lot of the most hyperbolic verbiage is gone now, it used to carry on quite a bit. But that page still says rebase is "dishonest", which is a deeply ironic claim, if you catch my drift. That kind of mis-representation ruffles my feathers a little bit, but I've had discussions with the Fossil team right here on HN about it, and they have in fact been open to listening, responsive, and willing to change and improve the messaging.
Technically, they are: you erased commits from your history. You created a single new commit with equivalent contents, but the timeline has been altered, and if you previously pushed to your remote you now have to force push because the remote remembers a version of history that no longer happened.
Fossil heavily discourages (outright forbids?) this kind of stuff.
Technically, commits are not erased after squash/rebase. They remain there and available to use unless they are not reachable from a named branch for like 90 days. Technically, the timeline is not altered, but the commit sequence is altered. Git does not offer an immutable timeline, and thus squash & rebase cannot break it. Fossil does try to discourage rewrites, but does offer ‘purge’ which is a rewrite, because of course, forbidding all rewrites is impractical.
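The "they remain there and available" part is easy to demonstrate: the reflog keeps every previous position of HEAD, so a rewrite can be backed out. A quick sketch, using `commit --amend` as a small stand-in for rebase (repo name and identity are placeholders):

```shell
git init -q undo-demo && cd undo-demo
git config user.name "Demo" && git config user.email "demo@example.com"
echo one > f.txt && git add . && git commit -qm "original message"

git commit --amend -qm "rewritten message"   # a small history rewrite
git log -1 --format=%s                       # prints: rewritten message

git reset --hard "HEAD@{1}"                  # jump back to the pre-rewrite commit
git log -1 --format=%s                       # prints: original message
```

The "rewritten" commit is likewise still reachable via the reflog after the reset, until its reflog entries expire and gc collects it.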
I saw this was downvoted and tried to help by upvoting, but it is worth talking about what “history” means though. Squash commits rewrite the source branch history, but they don’t rewrite the destination branch history.
I would argue the team-wide destination branch history is what should be called “history”, and the only thing that matters. But Fossil’s distinction is more rigid and it considers anything ever committed to be history.
Squashing a local branch does, in a way, rewrite local commit history. But this shouldn’t be called “rewriting history” in my book. Rebasing the main branch really is fully rewriting history (and this isn’t a squash commit). Squashing a feature branch that multiple people touch is somewhere in between. It’s a little rewriting. It’s often desirable. Should squashing small feature branch PRs be called “rewriting history” in the sense that Fossil claims that it’s intended to misrepresent the project? I don’t really think so, but since “rewriting history” wasn’t clearly defined, it’s open for debate.
You don't even need to rewrite the source branch history. You can leave the source branch as is, and create a single commit on the main branch that is the squashed content of the source branch. It isn't considered a merge in the normal way by git, which can cause issues, but there is no lost history and GitHub and gitlab are able to keep track of the source of the squash commit.
Yeah true! It still goes to clarifying what people mean by the words “history” and “rewrite”, etc. The primary public history of a feature branch might only be preserved in the merge commit, so in some sense the preserved history is different than the branch history. It’s somewhat fair to call that rewritten. I prefer to focus on the fact the git never promised a “history” at all. Git doesn’t offer to capture what happened accurately, it only promised to keep the content of your changes and its relationship to the preceding state of the content. It only promised a sequence of dependent commits, and it allows reordering commits by design.
ahh, right, thanks for the responses. I was confusing not being able to rewrite history on remote branches with not being able to rewrite history at all. I rewrite it locally all the time but almost never remotely.
fossil has the concept of shunning an artifact; this has the interesting side effect that the artifact will never again be allowed in the repository (without additional work to un-shun it).
"Purge" is really only intended for use with fossil's "bundle" feature, bundles being essentially feature-rich patches which record all of the historical state of the bundle. (Yes, it's long been flagged as experimental, but it's also been unchanged since it was added so i'll look into getting that notice removed or amended.)
The idea of fossil's bundles is that a person not associated with a project can submit a patch in the form of a bundle; a developer imports that bundle into their repo and either retains it or "purges" it, leaving the developer's repo clone back in a clean pre-bundle state. That feature can, however, be used (and sometimes is) for "popping" the top-most commit from a repository (so long as it has not yet been pushed to a remote). So long as a local repo has not been synced with any remotes, and so long as there are no branch points to interfere with it, repeated applications of the bundle/purge combo can be used to pop the top-most checkin multiple times, effectively wiping out as-yet-unsynced changes even if they span multiple checkins.
I recently had a play with Fossil to see if I thought we might give it a try on a small project and honestly I found it a bit disappointing.
The biggest surprise was the web UI didn't seem to provide a way to review changed files in the working directory and perform a commit.
With Mercurial, before a commit I'll usually use Tortoisehg to review what I've changed, running diff views (with Meld) on my code changes, etc.
There didn't seem to be a non-command-line way to do this. I guess I expected a nice commit interface that would also easily let me add cross-references to related issues and such, given it is all integrated.
I'm both a Github user and a Fossil user. I use Fossil for all of my own projects, because the common operations have less friction and everything runs very fast locally. I'm including wiki, ticketing, etc. in the "common operations" even though I'm the only user of those things.
I would say if you do all of your development in a corporate or institutional setting where you are a contributor on a distributed team, Github is the best choice and the choice will often already have been made for you anyway. If on the other hand you do a lot of development on your own projects, it could be beneficial to spend a half-day and try out Fossil.
A corporate setting once required perforce so I did everything in a local fossil repo and begrudgingly created the changes in perforce to ship already completed things.
It's a feature to have it right there, a 'fossil ui' away from launching in a browser instead of whatever extra installed stuff git offers. Also choosing the right one from many takes more patience than I have.
Notably 'fossil' is a single static binary that can be anywhere on the $PATH too.
It sounds like clear text is still supported, but new passwords are written as a SHA-1 hash of the password concatenated with user+database-specific strings (since 2010):
Ouch. A single round of SHA-1 means you can easily make over a million attempts per second on a single general-purpose processor, considerably more on GPUs or specialised hardware. Ignoring any of the other weaknesses of SHA-1, this means that if someone gets read-only access to the database, anything but a long random password will fall very quickly (like, probably within a few seconds).
> Ouch. A single round of SHA-1 means you can easily make over a million attempts per second on a single general-purpose processor...
That's FUD: you cannot connect to a remote anywhere nearly that fast. Fossil's passwords are hashed in such a way that they are useless for any repository clone other than the one they were established on. Even if i manage to get your password hash, it doesn't do me any good unless i have physical access to the exact copy of the repository on which that password was initially stored/hashed, because only that copy has the secret key needed to reverse-engineer your password. Local physical access trumps any and all security measures.
As I said later in the comment: if someone gets read-only access to the database. Certainly you won’t be able to guess that quickly through Fossil’s web interface. If you could, this would be a catastrophe. As it is, it’s only very bad.
These sorts of things are all about defence in depth. Some have said: “What’s wrong with storing passwords in cleartext? No one else can access them anyway!” But then a bug in another part of your system allows exfiltration of the database. It’s that kind of thing. Weak password storage is also as much a problem for other systems as it is for one’s own system, due to password reuse.
I gather from a little more research that current CPUs can probably do well over ten million SHA-1 hashes per second per core (some extrapolation of old figures there), and consumer-grade GPUs tens of billions. This will crack even moderately strong passwords in a very short time.
For comparison, Argon2 is designed as a proper password hash (resistant against acceleration) and its recommendation is to make passwords take a few hundred milliseconds to check (increasing the number of iterations as hardware gets faster). So maybe three per second, which basically renders any cracking whatsoever infeasible.
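The gap is easy to feel even from Python. The sketch below uses PBKDF2 as a stdlib stand-in for Argon2 (which isn't in the standard library), and the salt scheme is invented for illustration, not Fossil's actual one:

```python
import hashlib
import timeit

def fast_hash(pw: bytes) -> str:
    # One round of SHA-1 over salt+password: cheap for you, cheap for attackers.
    return hashlib.sha1(b"user+db-salt" + pw).hexdigest()

def slow_hash(pw: bytes) -> str:
    # Deliberately expensive iterated hash; Argon2 would add memory-hardness too.
    return hashlib.pbkdf2_hmac("sha256", pw, b"user+db-salt", 600_000).hex()

# Time 1000 cheap guesses against a single expensive one: on typical
# hardware they land in the same ballpark, i.e. roughly a million-fold
# slowdown per guess for the attacker.
t_fast = timeit.timeit(lambda: fast_hash(b"hunter2"), number=1000)
t_slow = timeit.timeit(lambda: slow_hash(b"hunter2"), number=1)
print(f"1000 SHA-1 guesses: {t_fast:.4f}s, one PBKDF2 guess: {t_slow:.4f}s")
```

The legitimate user pays that cost once per login; the attacker pays it once per guess, which is the whole trick.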
> The article is from 2016. I doubt passwords are still stored in plaintext.
Fossil has not stored a plaintext password in many years (2008-ish). Only one of the small handful of "first-generation" repositories has any chance whatsoever of having such a password (and only if the user has never updated their password since then).
I like a lot of things about fossil, but it is very opinionated, which is fine if those opinions match your workflow, but if not, it's a problem. And it was designed primarily for small teams or individual developers, so I'm not sure how well it scales to larger projects and teams.
And then there are some other little things, like the absence of a good free hosting service similar to github or gitlab, the ignore system isn't as flexible as git, etc.
Would the wiki part of fossil be a good static site generator?
As in, if I wanted to publish a markdown lake with lots of interlinking with [[wikilinks]] would this work?
It might be faster than many SSGs and perhaps less complicated.
> Would the wiki part of fossil be a good static site generator?
Not in the general case, no. Fossil-generated wiki/markdown output expect to be browsed within the context of a fossil repository, and will generate links accordingly. There might be very limited use cases where it would work reasonably well, but not generically. e.g. adding a link to a local file or another wiki page will not work if the output is browsed from outside of a fossil repo instance.