This may just be cynicism, but since you assign point values to the work, it seems like there would be a general downward trend as you get further into the project. I know that at the start of any project I significantly underestimate the difficulty of implementing a feature or fixing a bug.
Similarly, it seems there's a large spike in point values right after showing the project to someone else. Again, in my experience, after showing my projects off I tend to far overvalue the difficulty of new features. Typically that's because I end up spending 75% of the time leading up to the demo just fixing bugs and making the project stable. After I've spent that long fixing bugs, I start adding the difficulty of fixing the bugs a feature will introduce to the difficulty of implementing the feature itself. After a while I stop doing that, and the estimate for the same feature goes down.
Not saying anything about the post, or the quality of work done. I was just pointing out my observations on some of the statistical foibles.
General downward trend: That's been the case with us. If I went back and corrected the first few weeks of points, I would probably assign more points to the individual tasks, moving the first few weeks' values up. That said, I tend to correct the number of points assigned to each item before I mark it as "finished", which somewhat normalizes the value of each point.
Large spike in point values: Nah, what actually happened is that we filed many more small (1-point) bugs, rather than overestimating the value of new bugs and features. So after showing off the new versions and handing out alphas, we'd have more work items at smaller valuations.
Did you assign points to bugs and chores as well? Just curious, because Pivotal suggests not counting bugs toward the project's velocity, since bugs don't directly deliver business value. What was your approach?
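To make the distinction concrete, here's a minimal sketch of what that suggestion amounts to. The WorkItem type and velocity function are made up for illustration and are not Pivotal's actual API:

```python
from dataclasses import dataclass

@dataclass
class WorkItem:
    points: int
    kind: str        # "feature", "bug", or "chore"
    iteration: int   # the week/iteration in which it was finished

def velocity(items, iteration, count_bugs_and_chores=False):
    # Pivotal-style velocity counts only features toward the total;
    # pass count_bugs_and_chores=True to count everything instead.
    return sum(
        item.points for item in items
        if item.iteration == iteration
        and (count_bugs_and_chores or item.kind == "feature")
    )

items = [WorkItem(3, "feature", 1), WorkItem(1, "bug", 1), WorkItem(2, "chore", 1)]
print(velocity(items, 1))        # 3  (features only)
print(velocity(items, 1, True))  # 6  (everything)
```

The practical consequence is that a week spent mostly on bug-fixing shows up as low velocity even though real work got done, which is exactly the effect being discussed above.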
I am confused as to what the graph represents. Just looking at it, it seems like it would be the total remaining story points. As the deadline gets closer, you finish more tasks. After the deadline, you add more stories.
The impression I get from the article is that the graph represents rate. So before the demos, they do less work, and after them, they do more? That just doesn't make sense to me.
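For what it's worth, the two readings come from the same underlying data, which may be where the confusion lies. Here's a toy sketch (all numbers made up) showing how a "rate" chart and a "remaining points" chart can move very differently over the same weeks:

```python
# Hypothetical weekly data: points finished and points newly added.
completed = {1: 5, 2: 8, 3: 12, 4: 3}    # week -> points finished
added     = {1: 30, 2: 0, 3: 0, 4: 15}   # week -> points added to the backlog

remaining = 0
for week in sorted(completed):
    remaining += added[week] - completed[week]
    # "rate" is the velocity reading; "remaining" is the burndown reading.
    print(f"week {week}: rate = {completed[week]:>2} pts/week, remaining = {remaining:>2} pts")
```

Note how "remaining" can jump up after a deadline (week 4) simply because new stories were filed, even while the rate drops.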
Yes, after demos we were typically more productive, because we'd receive a lot of feedback about what needed to improve, and we'd start implementing it.
Around Demo Day, we didn't get a lot of product stuff done because (a) we were rehearsing and (b) we had some meetings afterwards. But once that was over, productivity shot up again.