You're assuming you can do work without modifying goals. I have preferences, but my goals change based on new information. Suppose Bob won the lottery but ignored that and kept working 80 hours a week to get promoted to shift manager until the prize expired. Is that intelligent behavior?
Try to name some of your terminal goals. Continuing to live seems like a great one, except there are many situations where people will choose to die, and you can't list them all ahead of time.
At best you end up with something like maximizing your personal utility function. But de facto your utility function changes over time, so it's a goal in name only, which means it's not actually a fixed goal.
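To make that concrete, here's a toy sketch (my own illustration, not from the linked page), reusing the Bob example; the weights and outcomes are made up:

```python
# Toy illustration: if the utility function an agent is "maximizing" gets
# replaced over time, then "maximize my utility" names a different objective
# at each time, i.e. it is not one fixed goal.

def utility_before(outcome):
    # hypothetical earlier preferences: cares mostly about career status
    return 3 * outcome["promotion"] + 1 * outcome["money"]

def utility_after(outcome):
    # hypothetical later preferences: the weights have shifted
    return 1 * outcome["promotion"] + 3 * outcome["money"]

outcome_a = {"promotion": 1, "money": 0}   # grind for shift manager
outcome_b = {"promotion": 0, "money": 1}   # cash the lottery ticket

# The "same" goal (maximize utility) ranks the outcomes differently at
# different times, so the goal-in-name picks out no single fixed objective.
print(utility_before(outcome_a) > utility_before(outcome_b))  # True
print(utility_after(outcome_a) > utility_after(outcome_b))    # False
```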
Edit: from the page: "It is not known whether humans have terminal values that are clearly distinct from another set of instrumental values."
That's true. Many behaviors (including human behaviors) are better understood outside of the context of goals [1].
But I don't think that affects whether it makes sense to modify your terminal goals (to the extent that you have them). It affects whether it makes sense to describe us in terms of terminal goals at all. With an AI, we can get a much better approximation of terminal goals, and I'd be really surprised if we wanted it to toy around with those.