Hacker News

I use Mercurial, but have been exploring Git a little because of what you noted -- the community momentum seems to be headed that way.



Another vote for Mercurial. Whenever a project gets that "KoolAid" feel (Git, Rails, even Haskell at times) I tend to gravitate towards its competitors, so take that with a grain of salt, but I can say that Mercurial's been great for the small teams I've used it with. On larger projects, I've unfortunately only used Subversion, but I can say that it works, at least.


I'll third the recommendation for Mercurial. There is no question that git has more community momentum--something which I hope will begin to change--but Mercurial is nevertheless an outstanding distributed version control system.

Mercurial's default mode of operation has the benefit of being extremely Subversion-like--in a good way. Indeed, if you never interact with anyone else, most of the normal Subversion commands--ci, mv, rm, add, log--"just work." There is no git-style index to worry about, no rebasing, and the command set is small and regular. That means that the learning curve is far, far shallower than git. Yet it still provides the same distributed branching and merging as git, and, though it is slightly slower, the difference is truly negligible in my experience--maybe a couple percent difference at most.

What about the incredible power of git, though? What if you actually want to be rebasing and rewriting your history all the time? When you want unfettered power in Mercurial, you've still got it. The `mq` extension, which can be enabled by adding a single line to your configuration file, allows you to do all the crazy patch rewriting, merging, splitting, and rebasing that git does.[1] But you can ignore that functionality if you want, and still have a very powerful and fast distributed change system.
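For anyone curious, enabling mq really is a one-line edit. A minimal sketch of the hgrc change (the extension name is real; whether you put it in ~/.hgrc or a repository's .hg/hgrc is up to you):

```ini
# ~/.hgrc -- enable the mq (Mercurial Queues) extension
[extensions]
mq =
```

After that, `hg help mq` lists the q* commands (qnew, qrefresh, qpop, qpush, and so on) used for patch rewriting.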

When Fog Creek looked at going to a distributed source control system last year, I advocated Mercurial over the competitors. Though the transition wasn't seamless, I've been extremely happy with the result. The Unix experience is extremely pleasant, and TortoiseHg (http://tortoisehg.sourceforge.net/) provides a surprisingly solid Windows experience out-of-the-box.

If you can look past the fanboyism, I'd strongly encourage you to give Mercurial a try. I think it strikes a much better power-vs.-usability balance than git does.

[1] Except microbranching. Mercurial doesn't currently support local named branches. You can achieve similar things by using mq with qguards if you really need them, but in practice I find it's usually easier to just clone a second repository. Although I do miss microbranches sometimes, I've found I greatly prefer the streamlined workflow of Mercurial.


Oh, right, I forgot. TortoiseHg is a clincher if you want a distributed VCS for Windows users. It makes things very easy.


The momentum here seems to be pretty heavily for Mercurial... maybe I should consider git to support the underdog ;)

All kidding aside, Hg might be what we need. I'm looking into it now. Is it easy to import p4 history into Hg?


Don't take our advice like that! Try it first: You may hate the way that one VCS works compared to another. Technical merits have less to do with this choice than your own preferences.


Hehe, yeah I'm trying to try it.

Unfortunately it doesn't like any of the large files (anything over 10 MB), and the Windows shell plugin causes Explorer to run like a dog.

I've done some research on it and I want to like Hg. Any advice on having it play nice with large files?


Maybe this is too far outside of the question, but particularly if you aren't going to be merging the large files, have you considered mirroring them with rsync (http://www.samba.org/rsync/) instead? I'm not sure tracking really large binary files is best handled by a VCS. I'm trying to read between the lines in your question, but would e.g. periodically making dated snapshots of the binaries and otherwise automatically mirroring around the newest version suffice?

The Mercurial page on binary files (http://www.selenic.com/mercurial/wiki/index.cgi/BinaryFiles) doesn't say anything about especially large ones.

Also, has anybody had good experience importing from p4 to hg on Windows? I've tried using tailor and some scripts from the mercurial wiki, but no success yet. One of these days I might write my own importer script (mostly because I need to import from five or six major branches, about 50k commits), but haven't had the time yet. (I'm working on Windows for similar reasons. Mercurial has been great for typical VC usage.)


Our import from p4 doesn't have to work on Windows. We're comfortable in Linux; we just can't develop in it.

You are correct, the binary files will not be merged. Some of them are large encrypted databases and can't be merged. The database is built based on source that may be merged, but the merge will happen in the source, and when the source merge is complete we'd rebuild the databases and check in the result. The largest number of binary files are compiled programs and very rarely change.

You're suggesting something a lot of other people in this topic have suggested. I would love to implement some kind of binary file management system; it just isn't going to happen. These binary files don't need to be merged, but they will be changing semi-frequently and are likely to be different between branches. We don't have the time and manpower to implement a system that would work for us. Either the VCS needs to handle these files or we can't use it.

I think I found a config setting deep in the dark heart of git that can make a repository friendly to large binary files. I'll check it out, and if it works we could create a set of repositories for these binary files. I'll try this and see how it goes. ( http://www.gelato.unsw.edu.au/archives/git/0607/24058.html )
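For the archives: a hedged sketch of the kind of config fragment involved. The keys below are real git settings (bigFileThreshold arrived in later git releases than the linked thread), but the thresholds are guesses you'd tune for your repository:

```ini
# .git/config sketch -- settings commonly suggested for large binaries
[core]
    # files above this size are stored whole, skipping delta compression
    bigFileThreshold = 50m
[pack]
    # cap memory used per thread during delta search
    windowMemory = 100m
    # split packs so no single packfile grows unboundedly
    packSizeLimit = 500m
```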

I'm really trying to find a way to use Hg, but if it can't handle our use-case we can't use it. That doesn't mean it's a bad VCS. I really like some of its features and the fact that it is far simpler than git. I also really like that it has file explorer integration. It just needs to handle projects with an obnoxiously large code base and large binary files.


Did you look at rsync? You don't really need to implement anything.

My point, though, was that in some sense you're trying to find a way to bend a VCS into doing something well outside the strengths of VCSs, so it is worth looking into categories of tools better suited to the problem.


Yes, I'm familiar with rsync. We've talked with Perforce Support a fair amount regarding large files, and they remind us that we are abusing their system. It's like using Harley Davidsons on a worksite to move around loads of dirt. It may work, but it's not the intended use.

We might be able to get that to work for some of the binary files relatively easily. The problem is that the majority of these files are build artifacts for our test programs, and the builds happen in the same directory as the source (yes, this is a stupid way to do things, but we need to do what customers do, and our customers do this). It would be very hard to distinguish between build sources and build artifacts. This leaves the issue of branches: each branch would have to 'know' which of the binaries to grab in the shared space, and update other branches at integration time.

This could be a valid way of doing things, but it would take a fair amount of effort because our environment is like the real world: dirty and complicated. For the past couple of years it's been on the backlog to clean and simplify our environment, but something more important always comes up.


Good analogy.

How do you distinguish between the build source and artifacts now? It's not difficult to specify (via filename regexes) what should and shouldn't be examined by Mercurial as potential VC files, beyond whether or not you explicitly add them: look at the .hgignore file. (Not sure that's directly helpful, but for the archives.)
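A sketch of what that looks like -- .hgignore supports both glob and regexp syntax, and the patterns below are invented examples, not your real artifact names:

```
# .hgignore -- illustrative patterns only
syntax: glob
*.o
*.obj
build/**

syntax: regexp
^test-programs/.*\.bin$
```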


We don't distinguish between them. The test programs were built and all the resulting files were checked in. Unfortunately some of the build artifacts share extensions and directories with build source, and some of the build artifacts have no extension. The only way we could add the build artifacts to .hgignore (or .gitignore) would be to manually add them one at a time, and that would be a huge task.


You can also add * to the ignore file and explicitly specify what to track, either by hand or by "hg add [fname]" in some sort of script.
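Concretely, the ignore-everything approach is a two-line file (a sketch; .hgignore only affects untracked files, so an explicit `hg add` overrides it):

```
# .hgignore -- ignore everything by default
syntax: glob
*
```

Then `hg add path/to/source.c` tracks just the files you name, and `hg status` stays quiet about everything else.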

I track/sync my home directory with mercurial, and did that to keep it from scanning most of my drive for updates. (You can probably do the same with git.)

(For the archives as much as you, though I hope it's useful.)


Unfortunately, that's probably relatively complicated. The internal Mercurial convert utility doesn't speak Perforce, and while tailor (http://progetti.arstecnica.it/tailor) does, I've had mixed results with it. You can always give it a shot. The worst case is simply that it doesn't work.


"I tend to gravitate towards its competitors"

Me too - which is one of the paths that led me to Bazaar. Like your experience with Mercurial, Bazaar has done the trick on the smallish (<10 person) teams I've used it with, and the bzr-svn plug-in means you can still go out to play with your SVN-using friends...


It's not like Mercurial is all that unpopular. Sun has switched its open source projects to hg, and Mozilla is using it as well. All the Subversion committers I know prefer hg over git, too.


True popularity is different from hype. After all, Perl has always been more popular than Ruby.


I'm a big mercurial fan. After using it, I hate using SVN.



