Hacker News

I worked with someone whose approach was very interesting: he committed the .git directory of a newly initialized repo to a separate, newly initialized repo. And then watched what changed when he added a file, changed things, branched, etc, in the committed .git directory.
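That trick can be sketched in a few commands. This is a guess at the mechanics, not the colleague's actual setup; all paths, directory names, and messages here are made up:

```shell
set -e
inner=$(mktemp -d) && outer=$(mktemp -d)      # throwaway repos
git -C "$inner" init -q
git -C "$outer" init -q
git -C "$outer" config user.email you@example.com
git -C "$outer" config user.name you
git -C "$inner" config user.email you@example.com
git -C "$inner" config user.name you
cp -r "$inner/.git" "$outer/inner-git"        # snapshot the freshly initialized .git
git -C "$outer" add -A && git -C "$outer" commit -qm "baseline .git"
# do something in the inner repo...
echo hello > "$inner/file.txt"
git -C "$inner" add file.txt && git -C "$inner" commit -qm "add file"
# ...then re-snapshot and see exactly what changed inside .git
rm -rf "$outer/inner-git" && cp -r "$inner/.git" "$outer/inner-git"
git -C "$outer" add -A
git -C "$outer" status --short    # new loose objects, refs/heads, logs, index
```

Diffing successive snapshots like this makes it obvious that commits are just new files under objects/ plus an updated ref.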

It's always seemed worthwhile to me to dig into git's model more, but if you're already comfortable and productive with stuff like detached HEAD/rebasing/basic workflows, it's hard to justify when you're also trying to find time to learn new languages, frameworks and devops tools ...




for about six months my git workflow was this:

git add -A

git commit -am "fixed some stuff"

but I've finally found some time to start digging into how to really use it.

The issue I have with it is that if you step outside the basics, it's easy to get yourself into a thorn bush, and the way git is explained in most places is really not intuitive at all.


Try using a tool like 'git cola' that allows you to selectively commit specific lines, instead of just by file.

Start a project and attempt to maintain a clean history. Break up changes logically into separate commits, to the point where 90%+ of them don't need more than the subject line to describe the change and the history can be read like a story.

Use 'gitk --all' to view the tree of changes. Get comfortable with using feature branches, using interactive rebasing to clean up a messy history of changes, etc.

The CLI works great for most things but visual tools make it much easier to write a clean history and reason about the changes over time.


You can also commit specific lines using `git add -p`


Adding on to your helpful comment, you can also get a more concise, gitk-like graph of commits with `git log --graph`. This is my standard alias for viewing history, which also adds colors and tag/branch names:

    lg = log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %C(magenta)(%cn)%Creset %Cgreen(%cr)%Creset' --abbrev-commit --date=relative --left-right
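For reference, an alias like that can be registered with `git config`; here's a simplified variant of the format (written to a throwaway `$HOME` here purely so the real `~/.gitconfig` is untouched):

```shell
set -e
export HOME=$(mktemp -d)   # throwaway HOME; drop this line to set it for real
git config --global alias.lg "log --graph --oneline --decorate --all"
git config --global --get alias.lg
```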


It's much easier when you can see all of the staged/unstaged files and drill down to staging/unstaging individual lines within those files.

It also highlights whitespace nastiness (i.e. trailing spaces, missing newline at end of file, inconsistent newline chars, etc.).

The CLI is the ideal tool for a lot of things. Preparing commits is not one of them.


You can see all the files and drill down with "git add -i". Git also highlights whitespace problems.


I was excited when I first heard about git add's interactive mode, but I find the interface quite unintuitive and idiosyncratic. I just stick to "git add -p" to stage hunks selectively in a much more straightforward manner.

Having said that, I stage all my commits on command line, and generally go to a GUI rarely for certain visualization tasks.


I use GitX (forked/updated version) so I can stage/commit line-by-line and see a nice branch/merge visualization: https://rowanj.github.io/gitx/


Ah yes, the "subversion" method of using git.

...

I also do this. :|


Also the method that gets logging, debuggers and temp files committed by accident.


Or authentication tokens. How many ssh keys or database passwords have been lost like this?


Been there. You stop doing it when you start collaborating with others. It just doesn't work then. ;-)


And I don't think it's an issue if you are using continuous delivery.


This is still my git workflow. Aside from the off times I have to rebase or revert a commit.

I'm curious as to what git commands you've found the most valuable, or have used the most, since digging deeper into git.


* Using stash to store stuff when I want to pull a remote in that will overwrite things I'm not ready to commit

* git add -p, git add -i are nicer ways to add files

* git grep

* git reset, revert, and checking out old commits

     - these commands I currently find tough to get right

     - this is mostly because I don't really get the HEAD~2 / ^ notation and what the syntax is for accessing older commits

* git fetch and merge -i instead of pull. I got burned by using pull a few times.
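For what it's worth, the ~ and ^ notation is mechanical once spelled out: `~N` walks N first-parent steps back, and `^` means "first parent" (so `^` is the same as `~1`, and `^^` the same as `~2`). A throwaway-repo sketch (commit messages are made up):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email you@example.com && git config user.name you
for n in 1 2 3; do echo "$n" > f; git add f; git commit -qm "commit $n"; done
# ~N walks N first-parent steps back; ^ is "first parent", i.e. ^ == ~1
test "$(git rev-parse HEAD~2)" = "$(git rev-parse HEAD^^)"
test "$(git rev-parse HEAD^)" = "$(git rev-parse HEAD~1)"
git log --oneline HEAD~2   # shows only "commit 1"
```

They only differ on merge commits, where `^2` selects the second parent, something `~` can never do.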

Most of this is stuff that I've known about since I started using git, but I was afraid to use it because I didn't really know how it worked and didn't want to "mess up". Since the previous comment describes 90% of what I need to do, there wasn't really any point in doing it any differently.

The biggest problem I have with git that I have yet to solve is that I will be working on something on my laptop and then want to switch to my desktop and pick up where I left off. This leaves the obnoxious necessity to commit for just syncing things instead of for actually finishing a feature. I don't want to do rebasing because I don't want to lose history. This is the main reason I don't think git is currently an ideal solution for me but since I have nothing better I'm stuck with it.

It needs a simple semantic interface and it needs the ability to "sync" in-between commits.


Try

    git checkout -b wip-syncing
    git commit -m "wip means work in progress"
    git push <whatevs> wip-syncing
on your laptop, then

    git fetch --all
    git checkout <whatevs>/wip-syncing -- .
    git push <whatevs> :wip-syncing
on the desktop. Of course, this "rewrites history", but only in a very localized way.

In general, you're going to be fighting against git if you take an absolutist stance against rewriting history. Which is fine! But a little bit of controlled rewriting can open up a lot of options.

Edit: and I'm typing this from memory on my phone so please don't copy and paste the commands without verifying that they work correctly first!


not a bad idea to keep a separate branch for doing that. I might try that out.


> The biggest problem I have with git that I have yet to solve is that I will be working on something on my laptop and then want to switch to my desktop and pick up where I left off.

Could you use something like rsync or unison to sync the working directory (including the .git directory) between your desktop and laptop? I'm new to git myself, but after reading through the OP article I imagine this would work.


Yeah, I've thought about rsync. It just seems like a half solution, and I'm not really sure how well it would work when I'm off my home network. Sometimes I ssh into the desktop because my laptop is old and I run into limitations with front-end build tools.


a couple things helped me get into a comfortable flow w/ git: realizing git stash creates a commit (accessible via git reflog show stash). v helpful for managing interrupts, and gaining confidence you're not going to lose any work.

also, learning to be quick to create (and dispose of) branches, as they're just names.


> accessible via git reflog show stash

You can just do "git stash list".


IIRC, standard git clients don't show you the commit sha with 'git stash list', hence the extra few chars (easily skipped w an alias) are worthwhile. shrug
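A sketch, assuming `git stash list` passes `git log` formatting options through (it does in recent Git versions); repo contents and the stash message are made up:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email you@example.com && git config user.name you
echo base > f && git add f && git commit -qm base
echo change > f
git stash push -q -m "my wip"
git stash list                           # default format: no sha shown
git stash list --format='%h %gd: %gs'   # abbreviated sha included
```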


For some time now, we have been using the rebase workflow (create your branch, do some work, rebase on master, push).

It is a great way to have a clean linear history.

But it makes git pull 'illegal' because it does a merge implicitly.

That's typically something I didn't think about the first few times I used git.
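A self-contained sketch of that flow in a throwaway repo (branch and file names are made up; in real use the rebase target would be origin/master after a fetch):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email you@example.com && git config user.name you
trunk=$(git symbolic-ref --short HEAD)   # master or main, depending on config
echo a > a && git add a && git commit -qm "trunk 1"
git checkout -q -b feature
echo f > f && git add f && git commit -qm "feature work"
git checkout -q "$trunk"
echo b > b && git add b && git commit -qm "trunk 2"
git checkout -q feature
git rebase -q "$trunk"       # replay the feature commits on top of trunk
git log --oneline            # linear: feature work, trunk 2, trunk 1
```

After the rebase the trunk is an ancestor of the feature branch, so the eventual merge is a fast-forward and history stays linear.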


> It is a great way to have a clean linear history.

Why is this considered by so many people to be a Good Thing? Engineering is an inherently messy human process, and the repository history should reflect what actually happened. To that end, I've been advocating a merge-based workflow instead:

- The fundamental unit of code review is a branch.

- Review feedback is incorporated as additional commits to the branch under review.

- The verb used to commit to the trunk or other release series is 'merge --no-ff'.

Under that model, merges are very common, particularly merges from the trunk to the feature being developed. But that's OK, because it's what actually happened. When most people perform a 'rebase', they are actually performing a merge, while dropping the metadata for that merge.
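A minimal sketch of the 'merge --no-ff' step in a throwaway repo (branch names and messages are made up):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email you@example.com && git config user.name you
trunk=$(git symbolic-ref --short HEAD)   # master or main, depending on config
echo a > a && git add a && git commit -qm "trunk 1"
git checkout -q -b review-branch
echo f > f && git add f && git commit -qm "reviewed change"
git checkout -q "$trunk"
# --no-ff forces a merge commit even though a fast-forward is possible,
# preserving the fact that the branch existed:
git merge -q --no-ff -m "merge review-branch" review-branch
git log --oneline --graph
```

The resulting merge commit has two parents, so the branch remains visible in the graph forever.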


Before reading more about rebasing, I wouldn't have an opinion here, but like most things in programming I think it's a matter of philosophy. Do we want the history to be "record of what actually happened" or "story of how your project was made." [0]

I see merits in both approaches: Rebase seems to be good when you want to focus on the project minus the process, while merging seems to be good when you want to know the process behind the project. For larger projects with multiple contributors, I think the merging approach is better because of the process visibility. For smaller projects with one or two developers, a rebase approach could be "cleaner" when looking through the logs later on.

I'm interested to hear others' opinions on the topic as well.

[0] - https://git-scm.com/book/en/v2/Git-Branching-Rebasing#Rebase...


It is an interesting analysis. I think you're right that it's a matter of philosophy after all.

In my experience, a clean linear history can be important when you build a product that is going to be certified, since the development process is key to obtaining the certification.

Also, I like the fact that you can always reorganize your commits before rebasing, making them more atomic / cleaner.


    git pull --rebase
doesn't merge implicitly and

    git config --global pull.rebase true
will set that as the default `pull` behavior.


Note that

  git config --global pull.rebase true
was added in v1.7.9 - if you're using an earlier version of Git for whatever reason, the config you should be setting is

  git config --global branch.autosetuprebase always


Didn't know that. Thanks


I once had a filesystem watcher watching a git repository. It doesn't give you detailed information about what changed inside the files, but it shows in real time which files are changing while you do your regular work.



