Hacker News | ddkto's comments

The short answer is no, it is much more than that!

The long answer involves several hundreds of years of history…


> Maple syrup is a minor luxury, easily substituted by other things like jam or honey.

Speaking as a Canadian, I suppose this is maybe technically true from an economic standpoint, but…we don’t even accept table syrup as a substitute, much less honey or jam.

(and I’ll bet IHOP serves table, not maple, syrup…)


As a New Englander, I would agree that jam is not a substitute for maple syrup, but it’s a valid choice if you’re feeling a fruity vibe.


It might be true for non-Canadian consumers of syrup. From my brief search, it looks like perhaps 70% of Canadian maple syrup is exported, so the tastes of non-Canadian consumers are quite important in this context.


Correct, except in Vermont.


As gwern said, maple syrup is a minor luxury. My household grew up with Aunt Jemima because the extra expense could not be justified. People here saying maple syrup cannot be substituted sound as pompous as saying they only drink XXO Cognac because regular brandy just won't do.


According to one random source online, a typical serving of syrup would be about 15mL per pancake, so perhaps 50mL for an average breakfast serving of three pancakes for an adult.

My local grocery store sells maple syrup for $1.90/100mL and table syrup for $0.60/100mL. This means that a serving of maple syrup would cost $0.95 and table syrup would cost $0.30, or a difference of $0.65 per person per meal.

As 'luxuries' go, we're not talking about a large amount of money here, even for a low-income family. I grew up in a low-income family, and we still used actual maple syrup because the difference in quality is worth it.
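The per-serving numbers above are easy to sanity-check; here is the arithmetic, using the prices and serving size from the comment:

```python
# Per-serving cost of maple vs. table syrup, using the figures above.
serving_ml = 50                      # ~15 mL/pancake x 3 pancakes
maple_per_100ml = 1.90               # $/100 mL
table_per_100ml = 0.60               # $/100 mL

maple_cost = serving_ml / 100 * maple_per_100ml
table_cost = serving_ml / 100 * table_per_100ml

print(f"maple: ${maple_cost:.2f}  table: ${table_cost:.2f}  "
      f"difference: ${maple_cost - table_cost:.2f}")
# maple: $0.95  table: $0.30  difference: $0.65
```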


An extra $0.50 per meal (less than what you are saying and not just for a topping) equates to an extra $500/year per person. Either you did not grow up as low income as you think you did or your family made sacrifices to keep maple syrup on the table. I knew many families that could not afford a new Xbox for their kids. You are literally saying that an Xbox per person per year is not a large amount of money for a low income family. I don't think we'll agree on that.


$500 per person per year divided by $0.50 per meal equals 1000 meals per year, which is almost every meal all year. Most people only eat pancakes for breakfast, and not anywhere near every day. At a moderate rate of 2 maple syrup meals per week, it's only $52 per person per year.
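The implied meal counts can be checked directly:

```python
# How many meals/year does "$0.50/meal = $500/year" imply?
extra_per_meal = 0.50
print(500 / extra_per_meal)        # 1000.0 meals, vs. 3 x 365 = 1095 meals in a year

# A moderate habit of two syrup meals a week instead:
print(2 * 52 * extra_per_meal)     # 52.0 dollars per person per year
```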


There's a lot more happiness in going from $1.00/meal to $1.50/meal at every meal across an entire year than in buying an Xbox at the end of the year and not being able to afford games. Poor people know what it's like to have little money for food; that first jump is often a very high priority.

Where exactly people's breaking points are varies, but you can be quite poor and still have some wiggle room. $1.50/meal vs. buying used clothes is an easy call, etc.

So sure, it might be real maple syrup on the cheapest pancakes it's possible to make, but it's well worth it.


In order to make that argument you have to assume that someone eats pancakes every day. I assure you we did not.


I mean, by your own numbers, maple syrup is more than 3 times the price of its competitor. I would also imagine the type of syrup is not high on the priority list when comparison shopping.

For what it's worth, my SO grew up in Texas and absolutely prefers the taste & consistency of table syrup over maple syrup for our use cases (pancakes & waffles). I suppose it's what he grew up with and that's what "syrup" is supposed to taste like to him.


My point is that the quantities that are consumed per serving are small and so it's the absolute costs that are more important than the relative difference.


Sure, but I think you have to consider how most people shop: if they see the bottle of maple syrup as 3 times the cost of its competitor, that's the comparison they see, especially if they have no strong emotional connection to maple syrup itself. Could they make the stretch if they wanted to? I'm sure they could, but I imagine there's a hundred other competing groceries that win out when the primary caretaker is grocery shopping. Should they opt for the name brand cereal or the store brand? Detergent? Juice? I don't think most of those grocery item comparisons are as stark as a three-fold difference, so I can appreciate that maple syrup may not make the cut for many people - even if the difference is not all that much in dollar terms.


Your analogy is flawed. Maple syrup is not to jam or honey as XXO Cognac is to regular brandy.

A better analogy is that maple syrup is to jam or honey as brandy is to beer.

Yeah — they have the same sort of fundamental purpose but if your recipe or drink calls for wine you probably don’t want to substitute beer or vice versa.


It can be somewhat substituted by “regular” (aka caramel) syrup although the taste of that is darker and has less depth due to the missing maple flavor.

Honey and especially jam are ridiculous examples for substitutes.


Simon Wardley would like a word… In his model this is the natural order of things. As technology matures and standardizes, a new generation of tools is built on top of new abstractions, and the details of that tech no longer need to be understood in order to use it.

Subjects and skills that were requisite basics a generation* ago become advanced, under-the-hood topics for specialists. The next generation of people need different skills in the day to day.

This post is a great account of what that feels like from the inside, from the perspective of the newer generation learning these (now) ‘advanced’ topics.

(Funnily enough, I don’t (yet) see anyone commenting "real men write assembler" - a skill that has long ago moved from required by all developers to super-specialized and not particularly useful to most people.)

*I am using the word generation in the broadest sense as it relates to cycles of technology


Whether or not this state of affairs is "natural", I do not think it is "good".

Civil engineers still need to understand calculus and how to analyze structural integrity even though they can rely on modern computer modeling to do the heavy lifting.

All engineers are expected to have some requisite level of knowledge and skill. Only in software do we accept engineers having the absolute bare minimum knowledge and skill to complete their specific job.

Not that we shouldn't use modern tools, but having a generation of developers unable to do anything outside their chosen layer of abstraction is a sad state of affairs.


> Only in software do we accept engineers having the absolute bare minimum knowledge and skill to complete their specific job.

You can require that your frontend engineer absolutely must have good assembly knowledge but you'll pay more for them and fall behind your competitors. You can require that your DBA knows how to centre text with CSS, but you'll pay more for them and fall behind your competitors. You can require that the people managing the data centre understand the internals of the transformer architecture or that the data scientists fine tuning it understand the power requirements and layout of the nodes and how that applies to the specific data centre, you'll just pay more for someone who understands both.

Everyone requires the bare minimum knowledge to accomplish their job; that's pretty much the definition of "require" and "minimum", limited by your definition of someone's job.

"software" is such a ludicrously broad topic that you may as well bemoan that the person who specifies the complex mix of your concrete doesn't understand how the HVAC system works because it's all "physical stuff".

> but having a generation of developers unable to do anything outside their chosen layer of abstraction is a sad state of affairs.

Whether it's sad depends if they're better in their narrower field, surely. It's great if we can have a system where the genius at mixing concrete to the required specs doesn't need to know the airflow requirements of the highrise because someone else does, compared to requiring a large group of people who all know everything.


Yeah, the flip side of there being 'less skilled' developers who operate at a higher level of abstraction is that it is easier to train more of them.

In absolute numbers, there are probably more people today who understand the fundamentals of computer hardware than there were 40 years ago, but it's a much smaller percentage of all computing professionals.


> but having a generation of developers unable to do anything outside their chosen layer of abstraction is a sad state of affairs.

This is the normal state of affairs, and is really the only reason we can build meaningful software systems. Software is much too complicated; understanding even one layer of abstraction can be a multi-decade journey. The important thing, though, is that when the abstractions are leaky (which they always are), the leakiness follows a good learning curve. This is not true for the cloud, though.


Where do you draw the line? Should a civil engineer also be an expert on materials science?

Likewise, how much must a software engineer understand about how hardware works before they are able to do a good job?

At some point there are diminishing returns to being truly "full stack".


> All engineers are expected to have some requisite level of knowledge and skill. Only in software do we accept engineers having the absolute bare minimum knowledge and skill to complete their specific job.

Most software engineers just produce websites and nothing that impacts the safety of other humans. Other types of engineers have to ensure people do not die.


> All engineers are expected to have some requisite level of knowledge and skill. Only in software do we accept engineers having the absolute bare minimum knowledge and skill to complete their specific job.

If that were true, then there would be opportunities for entry into professional software engineering careers. But the only opportunities there are for software engineering jobs are for "senior" software engineers, which entails much more than the absolute bare minimum knowledge and skill.

So there's some inconsistency going on within the mindset of people who measure competence and fitness in engineering, in the broadest sense of the concept of engineering.

Maybe engineering itself, then, isn't even remotely the noble profession it is widely believed to be? Maybe engineers and even scientists aren't that really intelligent? Or intelligent at all? Maybe science and mathematics should be abandoned in favor of more promising pursuits?


Engineering as applied to software is completely watered down in practice compared to Professional Engineering as implemented by many states.

If a software engineer "signs off" on software design, they have no personal or professional liability in the eyes of the law, or anywhere near the same expectations and professional/ethical oversight that comes with the territory of being a PE.

Until a "Software Engineer" can basically look a company in the face and deny a permit to implement or operate a particular stack/implementation, this will not change.

And yes, I am fully aware that this software engineer would basically become an "approver of valid automated business process implementations". This would also essentially be a social engineering exploitable position for implementing nepotistic dominion over a business jurisdiction. Hence why I'm not sure it is even a desirable path to go down.


> Until a "Software Engineer" can basically look a company in the face and deny a permit to implement or operate a particular stack/implementation, this will not change.

The possibility of a business not earning revenue or income as a result of its software development attempt is a form of software authorization that prefers "good" coding over "bad" coding. Whatever the global industrialist landscape decides is good and bad.

And, interestingly, earning income with software development is a much harder hazing ritual than the paths of traditional academia.


There are plenty of entry level software roles out there. They are often listed as senior and may not align with your particular definition of entry level, but there are definitely people that are getting those jobs who have limited prior professional experience.


> Not that we shouldn't use modern tools, but having a generation of developers unable to do anything outside their chosen layer of abstraction is a sad state of affairs.

Funnily enough my day job is writing software for structural engineers (and I am a licensed engineer). Your comments are absolutely on point. One of the most important discussions I have with senior engineers is "how will we train tomorrow’s engineers, now that the computer does so much work?"

40 years ago, the junior engineers were the calculators, using methods like moment distribution, portal frame, etc… today the computer does the calculation using the finite element method. Engineers coming straight out of school are plunged right into higher level work right away - the type of work that junior engineers a couple of generations ago might not have seen for 5-10 years.

My first career development discussion with a senior engineer was "Just work for 10-15 years, then you'll know what you need to be a good engineer."

I have discussed this under the theme of Generation Gap (https://www.youtube.com/watch?v=5gqz2AeqkaQ&t=147s, 2:27 - 8:58), and have a similar conclusion to yours: what at first appear as different generational approaches are actually different facets of a well-rounded, senior technical skill set. Maybe the kids are just learning things in a different order than we did?

Pat Gelsinger et al's discussion of the demise of the tall, thin designer is another interesting perspective (https://www.researchgate.net/profile/Avinoam-Kolodny/publica...)


Lots of HN commenters are younger generation folks, and lots of them have poor fundamentals. They will certainly deny the need for wider scope of knowledge, as they do not have it themselves.


While I mostly agree, I think one thing to keep in mind is that we still need people somewhere who know how to do that. e.g. FAANG might have data center people and sysadmins that know the hardware... we (they? not sure) just need to ensure that in the future, we still have _some_ people that possess that knowledge.

I do not think it is requisite that _all_ developers have that knowledge.


Yes, absolutely - skills move from mainstream to niche, but are still required! For example, a much smaller proportion of the population knows how to farm today than 100 years ago, but it's still important :)

(And sometimes these mainstream, practical, everyday skills stick around in funny ways: https://www.hillelwayne.com/post/linked-lists/)


It's a problem either way.

Innovation stalls and improvements take longer to materialize. Machines become obsolete, but there's nothing to replace them yet.

Fewer people in fundamental roles is a risk, a danger to our economic chain. We could become quite vulnerable, even crash.


I disagree. I think it's about pivot time, not having a warmed up stable of skilled workers just in case. Nature never optimizes for that and it shouldn't. We should lazy-load that skillset if and when it's necessary. We have writing to carry knowledge forward. Also, video and other media. People are smart and I'm sure a large cohort could be assembled with the right amount of money in fairly short order. As long as that's cheaper than keeping a battalion ready just in case, then I'd argue it's the "correct" way to approach it.


What are you doing to help solve this problem?


"What can one do in the face of a relatively shrinking population?" is the more interesting question to me.

As someone who's managed a team before, there is a minimum population of people practically required to sustain a particular corpus of actionable information without suffering severe degradation in terms of said information's application.

Once one ends up below that point, things tend to go the way of "from-scratch rediscovery required", until such time as the population of people capable of acting on it is restored.

Whether that actually happens is a prioritization decision balancing against everything else that still has to be done.


Have three boys! All successful engineers. One even knows hardware and assembler!


It appears that the two controls are physically linked, but have a 'fuse' mechanism. At lower forces, force is transferred between the controls and the pilots 'fight' each other to move the sticks. Past a certain threshold (50 lbs?), the two controls move independently.

In all cases, the flight computer works off the average of the two sticks. When they are in sync, this works great. In a situation like this one, where the pilots are pushing in opposite directions, the average will all of a sudden be quite different from both controls.
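A toy sketch of that averaging behaviour (the function name and values are made up for illustration; this is not actual avionics code):

```python
def commanded_input(stick_a: float, stick_b: float) -> float:
    """Average the two sidestick deflections, as described above.

    Deflections are in arbitrary normalized units, positive = nose up.
    """
    return (stick_a + stick_b) / 2

# Pilots in sync: the average matches what both are commanding.
print(commanded_input(0.8, 0.8))    # 0.8

# Pilots pushing in opposite directions: the average lands near zero,
# quite different from what either pilot is individually commanding.
print(commanded_input(0.9, -0.7))   # ~0.1
```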


Circular economics: The Phoenix bridge: Improving circularity of 3D-concrete-printed unreinforced masonry structures - https://block.arch.ethz.ch/brg/publications/1310

(Edit : link format)


> the project owner is always, in the end, responsible for the success of a project

This is very relevant to large government construction projects in the public-private partnership model (aka Alternative Financing and Procurement). These go best when the project sponsor (the gov't) thinks clearly about which risks the private partner is best placed to manage and transfers those risks to them (e.g. managing lots of construction sub-trades), while retaining the risks that the sponsor is best placed to manage.

It becomes very expensive when the sponsor just tries to throw all the risk over the fence. As the author says, this gets expensive, either through change orders or sometimes even in the upfront cost. If you pitch a risk to someone who is badly placed to manage it or who cannot quantify it upfront, they will cover their risk with a big fee.

You can shuffle the risk, but you can’t make it go away.


Professional business people are familiar with different business models, and when to apply them.

For instance, hourly billing makes a lot of sense if the scope is vague - the client carries the scope risk.

Fixed price is amazing when the client has a specific, measurable problem that they don’t know how to fix (but you do). You can solve it cheaply, get paid waaay more than hourly and have a happy client.

Being a professional means having multiple tools in your toolbox and knowing how and when to use them. Drafting contracts is all about deciding how the risk will be shared - you need the right risk-sharing model for each situation.

(edit: spelling)


On the flat rate model: I like to tell the story that the highest hourly rate I've ever earned to date was as a college student when I took on a fixed rate project to fix some department's "we have a web form that is critical to our work and backed by this perl cgi script that used to work but doesn't do anything anymore" problem.

Turned out someone had uploaded the perl script to a Linux server using dreamweaver on a Mac, and the line endings were wrong. So I ran dos2unix on it and added a blurb to some documentation on what not to do and how to fix it again if necessary.

Made $1000 for less than 15 minutes of work!

But of course I also could have spent two weeks bashing my head against an awful perl script, never figured it out, and made nothing. It's a risk/reward trade-off.
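For anyone who hasn't hit this failure mode: a shebang line ending in CRLF makes the kernel look for an interpreter literally named `perl\r`. What dos2unix does is, in essence, this (a sketch; the script body is made up):

```python
# A CGI script saved with Windows (CRLF) line endings: the shebang ends
# in '\r', so exec() looks for an interpreter named "perl\r" and fails.
broken = b'#!/usr/bin/perl\r\nprint "hello";\r\n'

# dos2unix, in essence: rewrite every CRLF line ending as a bare LF.
fixed = broken.replace(b"\r\n", b"\n")

assert b"\r" not in fixed
print(fixed.decode())
```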


Game development adds another dimension. A musician I know helps small indie artists with music as a side business. His contract states that a fixed number of hours producing the music and effects is free until the game earns some $100k in revenue per month, at which point it starts costing 0.1% of revenue in royalties per month. He has fun with it, it's usually free, but if someone makes the new Minecraft with his music, he will get his share.


Good luck ever getting that share of revenue


Can't you just change the music in a silent update if the thing took off?


Sure you can. At $100k/month revenue, you have $100/month, or $1,200/year, available for another artist, which may or may not be worth the risk of messing up the "feel" of the SFX and music. Even at 10x that, it's a paltry sum, especially if the game's identity is tied to it.


Honestly with it being such a small percentage you have to really hit it big to make such a dick move even worthwhile anyways


> Fixed price is amazing when the client has a specific, measurable problem that they don’t know how to fix (but you do). You can solve it cheaply, get paid waaay more than hourly and have a happy client.

What about fixed price makes the client happy in this situation? It's definitely not the fact that the contractor billed "waaay more than hourly".

The upside of fixed price is that it's more predictable and it aligns incentives. With hourly, the hidden incentive is to take as long as you can get away with because every additional hour you take is an additional hour you can bill.

With fixed price, the contractor has an incentive to finish earlier.

That's the part that I find many contractors miss: They treat fixed price as an opportunity to extract more money from the customer. As someone who has been on both sides of this (I've been a contractor and I've also hired a lot of contractors) I lose trust quickly when I spot contractors inflating fixed price bids because they think I won't be able to recognize what they're doing.


The client is happy because you solved their problem. Now they can take that solution and earn money on it, or reduce their ongoing costs, etc.

If your bid is more than the problem is costing them, they won't accept the bid.

If your business model is "inflating fixed price bids" rather than "solving problems for clients" then sure, you won't do very well in the long term.


They got their problem solved for an agreed-upon price. What's not to like?

It's actually the parent that was taking the risk. As they said, it could have easily been some hairy problem that took forever to solve.


Is it "inflating the bid" or "charging a premium for fixed cost"?

You are welcome to haggle on price or not hire them. As a freelancer or business owner, not every prospective business deal goes through. That is not a failure of their model, just the realities of commerce.


What you mean to say here is that time & materials project structure makes sense when there's a lot of scope risk. Hourly billing never makes sense in our field, unless your bill rate is so high that all of your projects are denominated in small numbers of hours.


> when the client has a specific, measurable problem that they don’t know how to fix (but you do)

Then the client tells you that wasn't their problem in the first place.

In 10 years of contracting I've never done fixed rate, for the simple reason that the client never knows exactly what they want. It's not like installing a tiled floor or a drop ceiling.

Being a professional means a lot of things, but fixed-rate contracting is truly amateur hour. Every software dev learns this eventually.


Which is why you have clear boxes on that fixed-rate, and stepping over those bounds means you're back into T&M billable. The legalese on these contracts needs to be quite tight, but it's a one-time expense to set that up, and it's best done properly (and updated as new edge cases show up).


Yeah I'm honestly amazed at how many software people don't get that the optimal strategy is cheap fixed price and incredibly high margin change requests.

That being said you need a good spec to do this, and that definitely isn't normally the case in software.


There is so much work out there that is cut & dried. It's not what you'd call software engineering stuff - more the bread & butter of consultants. But you really need to know the domain space to ensure you have those cutoffs (some of which might sound very limiting). About 3-4 times after you've deployed a specific solution for a specific vertical, you should be able to productize it (on 2nd and 3rd times through you're essentially spending extra effort to determine and test those boundaries for future implementations).

Often the goal of a cheap fixed price is to show how limiting that is so they know why you can't productize a fully customized solution.


In software, all problems are research and development problems. (If they weren't, they would already be solved by now.)


You would be amazed to know how many problems have been solved in software, but the client has no idea what the solution is or how to implement it if they did.


Property taxes are usually set as a fixed amount to be collected, and the tax % is back-calculated. In practice, this means that the only thing that matters is the value of a property relative to the rest of the city.
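A sketch of that back-calculation, with entirely made-up numbers:

```python
# The city fixes the levy first, then back-calculates the rate.
levy_needed = 50_000_000                # budget the city must collect ($)
total_assessed_value = 10_000_000_000   # sum of all assessments ($)

rate = levy_needed / total_assessed_value
print(f"tax rate: {rate:.2%}")          # tax rate: 0.50%

# Only relative value matters: double every assessment and the
# back-calculated rate halves, leaving each individual bill unchanged.
my_value = 400_000
print(f"my bill: ${my_value * rate:,.2f}")                      # my bill: $2,000.00

doubled_rate = levy_needed / (total_assessed_value * 2)
print(f"after doubling: ${my_value * 2 * doubled_rate:,.2f}")   # after doubling: $2,000.00
```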


Does the US use the same rates for all property types? At least in Canada (and in Sim City 2000…), the tax % is different for residential, commercial, industrial and often subcategorized further.


In my state it varies by town so nothing that covers the entire US.


That may be what "usually" happens, but in California, which is a major commercial real estate market, we have this thing called Proposition 13, which screws up everything.


It is interesting to consider this from a Stage-Gate vs Lean Startup perspective. These two product development processes can be seen as antithetical, but if you read the original sources (Winning at New Products and The Lean Startup), you find that they are both trying to achieve the exact same thing: delivering successful new products to market at minimal cost.

The differences between the two processes are a function of their environment. If you are working in a company with separate functional departments and a strong existing brand, you need a step-by-step process to align everyone and get customer insight before launch. Stage-Gate answers this need, and products that get killed die before they are formally launched (thus protecting the brand).

If you are a small team doing everything with an unknown brand, the most reliable way to get market feedback earlier is to just put something on the market. The Lean Startup comes from this perspective - if there isn’t enough traction, kill the product and invent a new brand.

(Of course, these processes can be adapted to other environments, but these are their native soil, so to speak.)

Google seems to be managing its product initiatives like startups: they incentivize new product launches, and don't hesitate to kill products that are already on the market. Perhaps we are better off adjusting our expectations of the Google brand: it's just a VC brand (like a16z), not a product brand.


I think that is a reasonable analogy, but an important difference is that a startup has a strong incentive to be committed to their product. A startup may not stay in business for long, but we have an expectation that they'll support their product for as long as they can. After all, this product is a big bet for them and they really need it to succeed.

I don't see that those incentives even exist at Google. A well managed company would introduce new products that match the rest of the company's portfolio. When a product fails to gain interest, they would revise it and try to figure out how to provide customer value. There's nothing about Google's behavior in the past ten years that suggests that's what is going on there. From an outsider's perspective, there's nothing to indicate that Google is making any effort to follow through to make sure new products are successful.


exactly - Google is acting like a VC, not a company. A VC isn't too worried if a large percentage of their investments go under, so long as a few make it big.

It's like Google has the worst of both worlds: the upper management has the detachment of a VC firm, and the product teams have the detachment of being incentivized to start something new. At least in an actual startup, the team is committed to success.


As a Quebecer, all I can say is, join the club! For everyone shocked by this, it is much less extensive and onerous than the Quebec language laws.

Very roughly speaking, similar rules apply to all companies operating in Quebec with 25 or more employees, not just public officials.


There are also laws/government initiatives about the use of Welsh in Wales, so the theory that it's some sort of neo-fascism doesn't fit well there (Wales is run by Labour).

