I worked with someone whose approach was very interesting: he committed the .git directory of a newly initialized repo to a separate, newly initialized repo. And then watched what changed when he added a file, changed things, branched, etc, in the committed .git directory.
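A sketch of that setup, if you want to try it (directory names are arbitrary; note that git refuses to track paths under a directory literally named `.git`, so the copy has to be renamed):

    git init subject                       # the repo whose internals we'll watch
    git init observer
    cp -r subject/.git observer/subject-git
    git -C observer add -A
    git -C observer commit -m "baseline .git snapshot"
    # now add a file / commit / branch in subject, then re-copy and diff:
    cp -r subject/.git/. observer/subject-git/
    git -C observer add -A
    git -C observer diff --cached          # exactly which internal files changed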
It's always seemed worthwhile to me to dig into git's model more, but if you're already comfortable and productive with stuff like detached head/rebasing/basic workflows, it's hard to justify when you're already trying to find time to learn new languages, frameworks and devops tools ...
but I've finally found some time to start digging into how to really use it.
The issue I have with it is that if you step outside the basics it's so easy to get yourself into a thorn bush, and the way git is explained in most places is really not intuitive at all.
Try using a tool like 'git cola' that allows you to selectively commit specific lines, instead of just by file.
Start a project and attempt to maintain a clean history: break changes up logically into separate commits, to the point where 90%+ of them don't need more than the subject line to describe the change and the history can be read like a story.
Use 'gitk --all' to view the tree of changes. Get comfortable with feature branches, with interactive rebasing to clean up a messy history of changes, etc.
The CLI works great for most things but visual tools make it much easier to write a clean history and reason about the changes over time.
Adding on to your helpful comment, you can also get a more concise, gitk-like graph of commits with `git log --graph`. My standard alias for viewing history builds on that and also adds colors and tag/branch names; something along these lines:
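    git config --global alias.lg "log --graph --oneline --decorate --color"

Then `git lg` shows the current branch, and `git lg --all` shows every branch and tag in one compact graph (`--decorate` is what prints the branch/tag names).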
I was excited when I first heard about git add's interactive mode, but I find the interface quite unintuitive and idiosyncratic. I just stick to "git add -p", which stages hunks selectively in a much more straightforward way.
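For anyone who hasn't tried it, a typical `git add -p` session is just a prompt per hunk:

    git add -p
    # for each hunk, git asks: Stage this hunk [y,n,q,a,d,s,e,?]?
    #   y = stage it, n = skip it, s = split into smaller hunks,
    #   e = edit the hunk by hand, q = stop here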
Having said that, I stage all my commits on the command line, and only rarely go to a GUI for certain visualization tasks.
* Using stash to store stuff when I want to pull a remote in that will overwrite things I'm not ready to commit
* git add -p and git add -i as nicer ways to stage changes
* git grep
* git reset, revert, and checking out old commits
- these commands I currently find tough to get right
- this is mostly because I don't really get the HEAD~2 / ^ notation and what the syntax is for reaching older commits (see the examples right after this list)
* git fetch followed by an explicit merge instead of pull. I got burned by using pull a few times.
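On the HEAD~/^ point from above: the notation is more regular than it looks. `~N` walks back N commits along first parents, and `^N` picks the Nth parent of a merge commit. Some safe, read-only examples:

    git show HEAD~2                  # the commit two steps back from HEAD
    git show HEAD^                   # first parent of HEAD (same as HEAD~1)
    git show HEAD^2                  # second parent, when HEAD is a merge
    git diff HEAD~3..HEAD            # what changed over the last three commits
    git log --oneline HEAD~5..HEAD   # list the last five commits, one per line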
Most of this is stuff I've known about since I started using git, but I was afraid to use it because I didn't really know how it worked and didn't want to "mess up". Since the previous comment describes 90% of what I need to do, there wasn't really any point in doing it differently.
The biggest problem I have with git that I have yet to solve is that I will be working on something on my laptop and then want to switch to my desktop and pick up where I left off. This leaves the obnoxious necessity of committing just to sync things, rather than because a feature is actually finished. I don't want to rebase, because I don't want to lose history. This is the main reason I don't think git is currently an ideal solution for me, but since I have nothing better I'm stuck with it.
It needs a simple semantic interface and it needs the ability to "sync" in-between commits.
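One workaround: make a throwaway WIP commit on the laptop, push it, then peel it off again on the desktop. Roughly (branch and remote names here are placeholders):

    # on the laptop
    git add -A
    git commit -m "WIP: sync to desktop"
    git push origin mybranch

    # on the desktop
    git pull origin mybranch
    git reset HEAD~1   # keep all the changes in the working tree, drop the WIP commit

When you eventually push the real commit you'll need `git push --force-with-lease`, since the WIP commit is gone from the branch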
Of course, this "rewrites history", but only in a very localized way.
In general, you're going to be fighting against git if you take an absolutist stance against rewriting history. Which is fine! But a little bit of controlled rewriting can open up a lot of options.
Edit: and I'm typing this from memory on my phone so please don't copy and paste the commands without verifying that they work correctly first!
> The biggest problem I have with git that I have yet to solve is that I will be working on something on my laptop and then want to switch to my desktop and pick up where I left off.
Could you use something like rsync or unison to sync the working directory (including the .git directory) between your desktop and laptop? I'm new to git myself, but after reading through the OP article I imagine this would work.
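Something like this, assuming ssh access between the two machines (paths and hostname are placeholders):

    # push the whole working tree, .git included, from laptop to desktop
    rsync -az --delete ~/code/myproject/ desktop:~/code/myproject/

The trailing slashes matter (they sync directory contents rather than nesting the directory), and `--delete` keeps the two copies truly identical.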
Yeah, I've thought about rsync. It just seems like a half-solution, and I'm not really sure how well it would work when I'm off my home network. Sometimes I ssh into the desktop because my laptop is old and I run into limitations with front-end build tools.
a couple things helped me get into a comfortable flow w/ git:
realizing git stash creates a commit (accessible via `git reflog show stash`). Very helpful for managing interrupts, and for gaining confidence that you're not going to lose any work.
also, learning to be quick to create (and dispose of) branches, as they're just names.
IIRC, standard git clients don't show you the commit sha with 'git stash list', hence the extra few chars (easily skipped with an alias) are worthwhile. shrug
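To make that concrete:

    git stash                # tuck away uncommitted work in progress
    git stash list           # stash@{0}: WIP on master: ... (no stash-commit sha)
    git reflog show stash    # the same entries, but each line starts with the
                             # commit sha, which you can show, diff, or cherry-pick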
> It is a great way to have a clean linear history.
Why is this considered by so many people to be a Good Thing? Engineering is an inherently messy human process, and the repository history should reflect what actually happened. To that end, I've been advocating a merge-based workflow instead:
- The fundamental unit of code review is a branch.
- Review feedback is incorporated as additional commits to the branch under review.
- The verb used to commit to the trunk or other release series is 'merge --no-ff'.
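Concretely, with hypothetical branch names, landing a reviewed branch looks like:

    git checkout trunk
    git merge --no-ff feature/search   # --no-ff forces a merge commit even
                                       # when a fast-forward would be possible

so the branch's existence is permanently recorded in the graph.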
Under that model, merges are very common, particularly merges from the trunk to the feature being developed. But that's OK, because it's what actually happened. When most people perform a 'rebase', they are actually performing a merge, while dropping the metadata for that merge.
Before reading more about rebasing I didn't have an opinion here, but like most things in programming I think it's a matter of philosophy.
Do we want the history to be a "record of what actually happened" or "a story of how your project was made"? [0]
I see merits in both approaches: Rebase seems to be good when you want to focus on the project minus the process, while merging seems to be good when you want to know the process behind the project. For larger projects with multiple contributors, I think the merging approach is better because of the process visibility. For smaller projects with one or two developers, a rebase approach could be "cleaner" when looking through the logs later on.
I'm interested to hear others' opinions on the topic as well.
It is an interesting analysis. I think you're right in saying that it's a matter of philosophy after all.
In my experience, a clean linear history can be important when you build a product that is going to be certified, since the development process is key to obtaining the certification.
Also, I like the fact that you can always reorganize your commits while rebasing, making them more atomic / cleaner.
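For example, to reorder, squash, or reword the last few commits before they go anywhere shared:

    git rebase -i HEAD~4
    # opens an editor listing the four commits; change 'pick' to 'squash',
    # 'reword', or 'fixup', or reorder the lines, then save to rewrite them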
I once had a filesystem watcher watching a git repository. It doesn't give you detailed information about what changed in the files, but it shows in real time which files are changing while you do your regular work.
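On Linux you can get a minimal version of this with inotify-tools, assuming it's installed:

    # print every file event under the repository, .git included, as it happens
    inotifywait -m -r .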