How does that work out? Looking at the per-hour figures for AWS and GCE, they're similar for a 64-core machine with a quarter terabyte of RAM ($3.20 AWS, $3.40 GCE). Both vendors have ways to get your bill discounted (with Google's being easier). AWS's instance pricing has been fairly comparable to other vendors for a couple of years now; it's their bandwidth that's costly.
What are most people running anyway? Using this kind of behemoth server as a 'typical' example seems out of line.
Google's base price is slightly lower, and it gives you 30% off automatically on instances that run 24/7 for the month.
That's typical of on-premises deployments. You get big servers because they are so annoying to deploy that you might as well pack as much as possible into the box.
In the cloud, you would create a group of VMs per service, each sized appropriately for that service. It's significantly more efficient.
Isn't that $1,710 already including the 30% discount? It works out to roughly $3.40/hr x ~720 hours in a "30ish day" month x 70%, or about $1,714. You get similar discounts on AWS (reserved instances), but you have to commit for a year; given the OP's intent of leasing a Dell server for three years, that shouldn't be dismissed.
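For anyone checking that arithmetic, here's a minimal sketch using the hourly rates and the 30% sustained-use discount quoted in this thread; the hours-per-month and discount figures are assumptions from the comments above, not current rate cards:

```python
# Rough sanity check of the monthly figures quoted in the thread.
# Rates and the 30% discount come from the comments above; actual pricing
# varies by region, machine type, and whatever the current rate card says.

HOURS_PER_MONTH = 24 * 30          # ~720 hours in a "30ish day" month

aws_hourly = 3.20                  # 64-core, ~256 GB RAM on-demand rate quoted above
gce_hourly = 3.40
gce_sustained_discount = 0.30      # full-month sustained-use discount claimed above

aws_on_demand = aws_hourly * HOURS_PER_MONTH
gce_on_demand = gce_hourly * HOURS_PER_MONTH
gce_discounted = gce_on_demand * (1 - gce_sustained_discount)

print(f"AWS on-demand:  ${aws_on_demand:,.0f}/month")   # ~$2,304
print(f"GCE on-demand:  ${gce_on_demand:,.0f}/month")   # ~$2,448
print(f"GCE discounted: ${gce_discounted:,.0f}/month")  # ~$1,714, close to the $1,710 figure
```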
It's no secret that AWS is a rip-off on instance pricing. Use the competitors if you're that worried about pricing.