
"In Judaism, on the other hand, Tuesday is considered a particularly lucky day, because in Bereshit (parashah), known in the Christian tradition as the first chapters of Genesis, the paragraph about this day contains the phrase "it was good" twice."

https://en.wikipedia.org/wiki/Tuesday


So he gets to take other people's money, invest it and take a cut without actually, like, having to do any actual work, but he's getting ignored at parties?

Damn it, where the hell do I sign up?



How is this a tangent, or flamebait? I'm just pointing out that he's literally whining about a job I'd love to have and not seeing how lucky he is.


It's a generic tangent because it changes the subject to VCs in general. It's flamebait because it's snarky, inflammatory, and arguably also a personal attack. Everything I said in the other comment applies to your post as well, which is why I linked to it.

Posts like that aren't acceptable on HN, so if you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules from now on, we'd be grateful.


I find that internal motivation is mostly a myth, that is, the part where people expect to just have infinite drive out of nowhere.

The cooking analogy is good but here's an IMO better one - would you make a movie if you knew for certain nobody would ever watch it? I wouldn't.

You want to get a PhD - why? Is the PhD a mountain to climb or is it a pair of boots that will let you scale a mountain? Both answers are legit, but I think you do need to agree with yourself on one.

Motivation ex nihilo doesn't exist. Humans are goal-driven and averse to spending time on teleologically neutral things (enjoyment of the activity itself is, of course, a legit end on its own).


Internal motivation feels a little like fascination.

I do a lot of things that nobody will notice, that give me nothing, because there's an element of experimentation to it. To use the cooking analogy, it's like trying to replace butter in a recipe with margarine, making a french omelet as soft as possible, or pulling a perfect espresso.

I wouldn't say these things are enjoyable in themselves, but they scratch a curiosity.

One trick is to have concrete steps to follow. For example, I was very motivated to do a startup, because there were step-by-step instructions on what to do, and following those steps consistently would result in becoming a billionaire.

It's an absurd idea and it didn't work, but it went much, much further than I expected, and it was fascinating to see how far it could go. Same with blackjack/poker - it's grindy and repetitive, but it's motivating to see whether the theory checks out.


Having a defined approach or steps to follow is something that actually demotivates me. I think it's why I don't cook a lot even though I can. I could follow defined steps and get an expected outcome, but I'm just not motivated to do that.

I lost interest in chess when I learned that all the best players have most moves memorized.


Then you were motivated by curiosity, which is great. But what if you weren't curious about the perfect espresso and knew that nobody would drink it? In that case, why make an espresso? You'd probably prefer to spend your time doing something else, right?


I think that many artists serve as a counterexample to this sentiment. Many of them, even ones who eventually become famous, pursue activities without a care for who pays attention and without an end goal. They simply explore and seek out expression, where it almost seems to come pouring out of them by necessity, obsession, or some combination thereof.

Some artists are so prolific, it can be mind-boggling. Artists are not given enough credit for their work ethic.

There are plenty of artists who would make something, e.g., a movie, even knowing no one would ever see it.


There are some artists like that, sure, but plenty? I doubt it...


Absolutely correct


Can't see why they shouldn't be allowed to raise funds by selling stock. People are free to buy or not to buy, at their discretion. Govt/courts shouldn't be picking winners and they shouldn't be picking losers either.


There are plenty of stocks you can buy over-the-counter "freely".

Stock exchanges are regulated marketplaces. Regulated marketplaces have existed since the dawn of civilization because they are valuable for buyers and sellers. Trading inside a city marketplace had different rules than trading outside it.

Companies go there and voluntarily submit to strict rules and regulations to get access to more investors. Investors want to invest in regulated markets because regulators work for them.


> Investors want to invest in regulated markets because regulators work for them.

No, that's not true. Regulators ensure transparency and correctness of information. That works to everybody's advantage, not just investors'.

Investors are free to choose bad investments, provided that the investments are made with full and transparent information. Regulators aren't supposed to be there to "protect" investors from making bad choices (what is a bad choice? Who gets to decide that?).


I don't see you complaining about, for example, brokerages, dealers and exchanges providing liquidity by fulfilling your orders even though the market won't at a given point in time. This can prevent a security from falling or rising in value where it otherwise would have, and directly works against an investor who was hoping for a rise/fall in price.

Regulators don't work "for investors", they work for lobbyists, they want to keep the markets running, and for any given policy, someone will win and someone will lose.

I don't see how the Hertz decision is any different, except I do see how taking the opposite position can hurt the markets, either by creating a chilling effect on low-mktcap companies listing or by creating uncertainty in the legal environment, which investors really don't like.

Just look at the number of Israeli companies that won't list on their home turf, the TASE, or that listed and later pulled out. A lot of it has to do with the TASE imposing unreasonable requirements; otherwise raising money domestically would be a no-brainer.


I’m an investor on the other side who is just minding my own business buying market indices, and then all of a sudden, less-than-ethical actors show up and bid up the price. Now I am effectively being taxed by buying portions of a soon-to-be-worthless company. Who/what are we going to optimize this situation for?

In this case the NYSE jumped on delisting Hertz stock to prevent this kind of thing from happening. But courts and government have to step in constantly, and a ton of law gets created, because we live in an incredibly complex system and it's almost never clear-cut how to optimize for the greatest freedom/happiness.


Less-than-ethical according to who?

If you're buying indices then you have explicitly given up control of a portion of your portfolio to them, and they can lose money as well as gain. Sounds to me like you're just complaining because you lost money because of this move, but symmetrically speaking, you could have gained.

And to be clear, I'm not long or short Hertz and have never been (unless some index or fund I'm holding happened to buy their stock).


Hertz's lawyer, for one. And I will lose some nominal amount, or maybe I made money; it depends on when the index rebalances. But the NYSE racing to delist in the face of this move, and Hertz's own lawyers' comments, are enough to show that no, in objective reality, there are many reasons why this isn't just some simple issue.


I'm very curious to know this objective reality and how to discover it.


One is free to buy the index and to short the individual names that one does not want.

(One's compliance department permitting.)


Shorting isn't free.


In this case, I think they should be able to sell stock as well. But I think there is a point where selling a security you know to be worthless amounts to fraud.


You don't know it to be worthless. Hertz proved that, because their stock went up (hence not worthless).


"Bad" and "good" are meaningless judgements in this context. You're about to graduate, so you wouldn't be expected to have any real experience, but you do have experience building and running an actual product. Seems to me like you've got the time ahead of you to try running your startup and either succeed or fail, in which case you'd have enough time to restart your career as an employee.

My only advice would be: the founder mindset is very different from the employee mindset, and you'd have to take that into account. Also, don't quit your CS degree if you're within a year of graduating.


At this rate, looks like I'm about to be pursuing a Replica of Science degree...


Not sure I consider "huh" to be an actual word. If it is, I'm pretty sure blowing a raspberry also exists in all human languages, and some non-human ones.


For all its flaws, Docker does solve a lot of the pain the author mentions. Deploying Postgres, for example, with a config file and keeping it running is really very easy with Docker Compose.
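As a minimal sketch of what that looks like (the image tag, password, and volume name here are illustrative, not a production setup):

```yaml
# docker-compose.yml — keep a Postgres instance running across reboots
services:
  db:
    image: postgres:16
    restart: unless-stopped        # Docker restarts it if it crashes
    environment:
      POSTGRES_PASSWORD: example   # placeholder; use a secret in practice
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data   # data survives container restarts
volumes:
  pgdata:
```

One `docker compose up -d` and the database is deployed, restarted on failure, and persisting its data.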

Not affiliated with Docker except for using it, and while I do have some thoughts about design choices made by Docker, it's still a very good tool.


I don't see Docker as an alternative to a configuration manager. For one, turning to containers represents a distinct architectural choice, with a clear set of tradeoffs, that a developer may or may not want to introduce to a project.


It sure does, but even if you choose not to take advantage of container tech, it's still a very easy way to deploy stuff on a single machine. Not sure why the downvotes.


I didn't downvote. I see your point, but it's a bit of a digression, tbf.


I do use Docker, but it's a tool for a completely different purpose. I need something else for configuring all my machines: virtual servers, physical servers, several notebooks, and so on.


I agree, but that's still a form of estimation: you're just estimating that you'll be able to deliver your list of features in the time you have. You could be wrong, and then end up with fewer features, or be forced to spend more time.


I don't think the claim was that we're no longer estimating. The author was attempting to articulate the process of estimation, where initially one promises the world in N amount of time. As N draws near, you reduce features/scope as needed, and perhaps when the deadline hits and all you've got is int main(void) { printf("hello world"); return 0; }, you figure out what you can make do with where you're at. I like this approach and it resonates with a lot of other great comments in this post.


TFA uses a lot of words to say very little.

I don't care if your estimate is drawn from the hip or projected using a state-of-the-art Monte Carlo or machine-learning model. It's still an estimate. Any number of things that weren't in your model could shift the deadline: people getting sick for a long time, people quitting, people getting promoted out of their critical role, organizational dependencies not delivering on time (other dev teams, legal, etc.). All of these things have happened to me, and when they do, they can throw off your project for months.

No model can take these things into account, and if it does, it will yield an estimate like "three weeks up to a year" which is useless, and I didn't need your SOTA model to get that answer. Unless you're really only doing cookie cutter stuff, the best form of estimate I've seen used is continuous estimation + being willing to cut features to make it to deadlines with something usable, even if incomplete (build a bike, not half a car). This isn't always possible, but when it is, it saves a lot of headache and makes everything run smoother. But it starts with accepting the fact that you don't know everything from the start.


The article is about forecasting, not estimation. That's the point: don't estimate, measure.

It usually goes without saying that most forecasts do not include provisions for black swan events. It's generally assumed that going bankrupt or other project externalities will have an impact on the delivery.


Author of the article here.

I agree with this response. We are normally not asked to predict for situations where something big changes in the team. But I of course acknowledge that these things do happen. When you have a stable team, the numbers that this method yields are also very stable.


I agree with the top comment. The "method" is basically:

Instead you can look at the team’s historical data and apply statistical techniques.

Except that is what every experienced developer is already doing, albeit intuitively.

Intuition is superior here, because statistical models don't work for creative domains, and anyone who says so has something to sell.


My experience is that most project managers take a non-probabilistic approach.

Say you have your usual list of broken-down tasks and assign a time/budget estimate to each in terms of "low", "most likely", and "high". The intuitive answer is to sum up the "most likely" values for your total estimate. However, this ignores the probability that a delay in one task affects others.

Instead, if you take into account the covariance relationship between tasks (using historic or simulated data), you often find that the "most likely" summation has quite a low probability of being met. For the org that applied this, there was a less than 20% chance we'd meet or beat the intuitive estimate. No wonder we were chronically over budget and over schedule!
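This effect is easy to demonstrate with a small Monte Carlo simulation. A sketch, with made-up three-point estimates and a crude shared factor standing in for real covariance data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical three-point task estimates (low, most likely, high), in days.
tasks = [(2, 4, 10), (3, 5, 14), (1, 2, 6), (5, 8, 20)]
most_likely_total = sum(ml for _, ml, _ in tasks)

n = 100_000
# A shared project-wide factor induces positive correlation between tasks:
# when one task runs long, the others tend to run long too. Real historic
# data would supply the actual covariance; this term is a stand-in.
shared = rng.normal(0.0, 1.0, n)
samples = []
for lo, ml, hi in tasks:
    base = rng.triangular(lo, ml, hi, n)
    samples.append(np.clip(base + shared * 0.25 * (hi - lo), lo, None))
totals = np.sum(samples, axis=0)

p_meet = float(np.mean(totals <= most_likely_total))
print(f"sum of most-likely estimates: {most_likely_total} days")
print(f"probability of meeting that total: {p_meet:.0%}")
```

Because each task's distribution is right-skewed and the delays are correlated, the chance of meeting the naive sum comes out far below 50%.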


I've been reading "Software Estimation: Demystifying the Black Art" by Steve McConnell.

He introduces a distinction that, at least for me, has been instrumental: estimations and plans are different things.

Estimations are honest, based on past performance data, and probabilistic by their very nature.

Plans are, on the other hand, built with a target date in mind, taking into account the estimate previously made, desired delivery dates from customers and everything we are so used to.

By planning task completion dates close to the estimates, you decrease the risk of the plan failing. You can build a shorter schedule and assume that staff will work overtime, assume more optimistic estimates, and so on, but then the risk of failure will be higher. That risk will, of course, never be zero.

It's a simple distinction, but it has important implications. We no longer feel the pressure to make pessimistic, and therefore dishonest, estimates out of fear of being pressed to cut the schedule. It also gave us a better argumentative tool for negotiating schedules with our clients.

I think it's also useful for making all the probabilities a bit clearer to project managers. It's like, "OK, I know you need me to commit to a delivery date, but I'm also going to make clear that there are some risks involved, and I want everybody to be aware of them."


That’s an important distinction. The way we handled it was by letting managers define their acceptable level of risk and then use the model to define the estimates in that context.

For example, if they were OK with a 60% chance of making or beating a cost estimate, the forecast could be much more aggressive than under, say, a management expectation of a 90% chance of being on budget.
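In practice, these risk levels are just different percentiles of the same forecast distribution. A sketch, with a made-up simulated duration distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical simulated project durations in days, e.g. from a
# Monte Carlo run or resampled historical data.
durations = rng.lognormal(mean=3.0, sigma=0.3, size=50_000)

# An "aggressive" vs. a "conservative" commitment are just different
# percentiles of the same distribution.
p60 = float(np.percentile(durations, 60))  # ~60% chance of on-time delivery
p90 = float(np.percentile(durations, 90))  # ~90% chance of on-time delivery
print(f"commit at 60% confidence: {p60:.1f} days; at 90%: {p90:.1f} days")
```

The manager picks the acceptable risk; the model turns that directly into a date.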


Thanks for sharing this. I think I'll experiment with presenting the situation to a customer using such a model as soon as I have an opportunity. Sounds good.


This might be helpful:

https://www.nasa.gov/pdf/741989main_Analytic%20Method%20for%...

It’s a straightforward enough primer that it can be done in Excel, including simulating the data if necessary.

Even if this type of model is too simple for actual estimation, it’s a useful (and sobering) tool to help managers understand why their intuitive estimates can so often be incorrect.
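For context, the kind of simple model such primers build on is the classic PERT approximation, which collapses a three-point estimate into a mean and standard deviation via a beta-distribution assumption (the task numbers below are illustrative):

```python
def pert(low: float, most_likely: float, high: float) -> tuple[float, float]:
    """Classic PERT approximation: mean and standard deviation of a
    task duration from a (low, most likely, high) three-point estimate."""
    mean = (low + 4 * most_likely + high) / 6
    std = (high - low) / 6
    return mean, std

# For independent tasks, means add and variances (std**2) add.
estimates = [(2, 4, 10), (3, 5, 14)]
means, stds = zip(*(pert(*e) for e in estimates))
total_mean = sum(means)
total_std = sum(s ** 2 for s in stds) ** 0.5
print(f"total: {total_mean:.1f} +/- {total_std:.1f} days")
```

Even this back-of-the-envelope version makes the point: because the distributions are right-skewed, the mean sits above the "most likely" value, so intuitive sums are systematically optimistic.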


You can intuit how much “active time” it will take you, personally, to do something. How can you intuit how long a task is going to spend in a queue waiting to be worked on because your team doesn’t have capacity, or another team “down the chain” doesn’t have capacity?

We have queuing theory because people are bad at intuiting the latter, and I don’t even think we're anywhere close to good, as an industry, at intuiting the former.


You can talk about BS (in the context of software) like queuing theory, or you can actually write software. I suggest The Mythical Man-Month.

Sometimes I think humans developed language only to be able to pretend to be doing something:

The best hunter of the tribe kills a mammoth. But he is not verbally talented. Now an army of bureaucrats appears and tells everyone that they were instrumental in slaying the prey by applying some BS methodology. The tribe is gaslit, and the bureaucrats gain importance, influence, and economic wealth.


Queuing theory is a branch of mathematics. It is useful, in a software context, for things like predicting server capacity and predicting response times of programs. It is also regularly used to predict things like hospital wait times.
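For instance, the simplest textbook model, M/M/1 (one server, Poisson arrivals, exponential service times), has closed-form answers; a sketch with illustrative numbers:

```python
def mm1(lam: float, mu: float) -> tuple[float, float, float]:
    """Closed-form M/M/1 results: single server, Poisson arrivals at
    rate lam, exponential service at rate mu (requires lam < mu)."""
    assert lam < mu, "unstable: arrivals outpace service"
    rho = lam / mu                 # server utilization
    w = 1 / (mu - lam)             # mean time in system (wait + service)
    lq = rho ** 2 / (1 - rho)      # mean number waiting in queue
    return rho, w, lq

# A server that completes 10 req/s, receiving 8 req/s:
rho, w, lq = mm1(8, 10)
# Raising arrivals to just 9 req/s doubles the mean time in system:
rho2, w2, lq2 = mm1(9, 10)
print(rho, w, lq, w2)
```

The nonlinearity is the point: going from 80% to 90% utilization doubles response time, which is exactly the kind of result intuition misses.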

Here is a very good introduction, I hope you can learn something new from it (:

https://github.com/ndvanforeest/queueing_book/blob/master/qu...


There's no actual use of queuing theory in the article, though; it's just mentioned as some sort of irrelevant justification. It's not even a Monte Carlo simulation, it's a bootstrap, and you definitely don't need queuing theory to run a bootstrap.
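For what it's worth, a bootstrap forecast really is just a few lines: resample historical cycle times with replacement and read percentiles off the resulting totals (the history here is made up):

```python
import random

random.seed(0)

# Hypothetical historical cycle times of completed stories, in days.
history = [1, 2, 2, 3, 3, 3, 5, 8, 13, 2, 4, 6]

# Bootstrap: resample the history with replacement many times to
# forecast the total duration of the next batch of stories.
n_stories, n_trials = 20, 10_000
totals = sorted(
    sum(random.choices(history, k=n_stories)) for _ in range(n_trials)
)
p50 = totals[n_trials // 2]          # median forecast
p85 = totals[int(n_trials * 0.85)]   # 85th-percentile forecast
print(f"next {n_stories} stories: ~{p50} days (median), {p85} days (85th pct)")
```

No queuing theory required, just past data and resampling.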


If you build a general ledger application for the 10th time, sure, forecasting is fairly straightforward. Nothing I do at my (very large, non-tech but highly software-driven) employer has ever been done here before. All estimates are officially treated as if they are accurate down to the date and time, but changes happen during the lifetime of the project so often that you may as well use a random number. I call it a "nod nod wink wink" estimate: everyone wants it to be accurate, but no one really expects it to mean anything, other than the budget people.


One of my favorite managers required I give him estimates.

I hated it because we both knew the number was bullshit.

On the other hand, having to think about the estimate and give him something, even if at times it was a guess, I still found beneficial. It meant I focused better, stayed on task, and often delivered on time anyway.

I'm not saying everyone needs the accountability rails, but some people excel with this particular helper.


"Plans are worthless, but planning is everything."


Indeed!


I think that attitude is the right one. Make decisions as if you cannot predict the future and you will be on a better path than one where you think you can.

The way I look at it, you have the most unknowns and risk at the start of a project and the least at the end. As you work on it, you learn about the domain of your problem space and the possibilities and impossibilities of what you can do technically; as a result, those unknowns and risks go down. With that in mind, structure the work so as to front-load as much learning as possible, to help reduce risk and get a better bearing on how long it will all take.

And yes, be able to cut features if that target date is more important than the ideal product goal you set out with.


I like and have been using Basecamp’s Shape Up process.

https://basecamp.com/shapeup


Came here to say this. If we can't forecast the economy, at the scale of millions of people and trillions of data points, you're not going to be able to forecast a company. We should focus on building organisations that are robust enough to withstand the challenges thrown at them.


We CAN forecast a lot of those things. The problem is that people either don't think in terms of confidence intervals at all, or they neglect the 2.5-5% in the tails.

