Let maintainers be maintainers (graydon2.dreamwidth.org)
187 points by chalst on Aug 26, 2023 | 58 comments


Excellent thoughts by Graydon here.

One concern I have is that, every time he talks about the maintenance roles, I'm automatically thinking it's unglamorous, and marking a career plateau (perhaps decline).

I also instantly visualized a suited executive reluctantly deciding it's a necessary role they have to staff, and the exec will feed the trolls in the mine, but the focus (and rewards) will consciously be on the stars who are making new things happen.

Even though Graydon just explained that kind of thinking is a problem, I'm still thinking it.

If I'm still thinking that (with background that includes very serious software engineering, as well as FOSS), then my guess is that a lot of other people will be thinking that, as well.

BTW, I'm not saying that I'd personally devalue maintenance roles. If I was managing something important that needed to be maintained, and I lucked upon a stellar maintainer, I'd do everything I could to retain them and keep them happy, including making a case for why their comp should track with some of the new-product-star people.

I'd also try to make sure that, if their position declines/disappears (e.g., they no longer have someone advocating successfully for the role) that they'll be marketable elsewhere. (I don't want them ever walking into an interview and hearing, "I see from your resume that you're more of a maintenance programmer, but we really need people who can hit the ground running, banging out new huge kernel modules. Maybe you could assist them, by writing unit tests, and fetching their coffee, so they can focus on the challenging new stuff?")

One sign of hope is that the best-known worst-offender, at rewarding only new things, at least takes some aspects of reliability seriously.


"I see from your resume that you're more of a maintenance programmer, but we really need people who can hit the ground running, banging out new huge kernel modules. Maybe you could assist them, by writing unit tests, and fetching their coffee, so they can focus on the challenging new stuff?"

Is "new stuff" easier work than maintaining old stuff? I always thought it was the other way around. I always give greenfield stuff to the juniors. They fuck it up I fix it. It's basically like using chatgpt.


I agree. Maintenance work is hard. Writing new code from scratch is easy.

At work we put the smartest most experienced people on maintenance work. Because it is the lifeblood and cash flow of the company. It directly pays the salaries of people working on new software that might never make a profit!


> I'm automatically thinking it's unglamorous, and marking a career plateau (perhaps decline).

This may be a bug in your thinking, although the lack of organizational glamor checks out. The new shiny often gets the most attention, often because it's trying to cross the gap from non-existence and out-of-mind to mind share and adoption, so it gets the marketing budget (literal, but also emotional and attentional). However, the start of a project is usually far simpler, less constrained, and easier. Most of the truly hard trade-offs and the impacts of decisions don't come due until later. Not only that, but the deep learning comes from observing those outcomes and starting to understand how the decisions made at the beginning come together.


Imo the number 1 thing that helps a maintainer on a resume is to say they brought in some amount of revenue, and that all their bug fixes (especially putting out the fires of the other engineers doing features) saved the company a dollar amount or guaranteed a dollar amount of ARR.


The trick is that in many cases the value delivered is invisible and unmeasurable. How do you quantify “time saved by not having bugs”? But that is what great maintenance does. Or, the same for “time saved by a really well-designed API that makes it easy to do the right thing and hard or impossible to do the wrong thing”? Again: not measurable! “Just put a number on it” is the kind of facile response I consistently get from too many folks in management when trying to have these kinds of discussions, and the annoying-but-inescapable reality is that it is not always possible to provide a monetary number on the value of this sort of work. Despite that value often very likely netting out in the millions or more every year!


> ... it is not always possible to provide a monetary number on the value of this sort of work. Despite that value often very likely netting out in the millions or more every year!

Hm. You first state that it is not possible to provide a monetary number, then you state it is very likely netting out in the millions -- which is providing a monetary estimate.


If the value of one thing is somewhere in the range of $1-$100, then its value is hard to quantify.

But if you have one million of those things, you can still say "it's very likely I have value in the millions of dollars or more here".

The same logic applies here. All that has to be true for "we have millions here" to be plausible is that (1) the value of each individual, unquantified contribution is positive and probably >$1 (2) there are probably millions of such contributions. You don't have to be able to quantify with any precision any individual contribution.


Then put an imprecise number on it? If you are looking for precision in estimates for the impact of projects you worked on, the vast majority of hiring managers reading resumes won’t care. They’re already going to be mentally sorting impact into broad buckets.


This is a totally reasonable response! So let me elaborate a little on how these things can be true at the same time.

1. Imagine a scenario where there are two versions of an API: one is bug-prone, the other is “correct by construction”—you literally cannot call it the wrong way.

2. Assume that when some percentage of the “invalid” invocations of the bug-prone API happen, the result is something that ends up going wrong in production and takes 3 developers an hour to resolve. (This kind of level-of-effort is not at all unusual in my experience dealing with on-call at both a mid-sized startup and at the scale of LinkedIn!) Let’s call it 10% to pick a reasonably small number: only 1 out of 10 bad invocations of this API puts us here.[1]

3. Assume the API is fundamental to some key library (a JS framework you use, for example), so the calls are proportional to the size of the code base. Again, pick a fairly low number: 1 mistaken call every 10,000 lines of code. If we are looking at LinkedIn’s front-end, that puts us on the order of well over 10 of these that actively cause this problem (over a million lines of code with a 1-in-10,000 “hit” rate and a 10% “blows up” rate).

4. Further take an average developer compensation of $150,000/year. (This is low for big tech, but again, it gives us a useful baseline.) This is ~$75/hour.

Put those together, and you’re talking about 100 incidents × 3 developers × 1 hour/incident × ~$75/hour/developer = $22,500. That’s one repeated bug over the lifetime of the program in question.[2] That excludes the other potential business costs there: what happens if that also impacts revenue in some way—say, because it prevents sales, or means lost ad revenue, or results in an SLA violation?

Add that up across the whole surface area of a codebase—dozens and dozens of bugs, across however many users and lines of code—and you’re talking real money. A million dollars is only about 45 of those kinds of bugs with similar “blast radius” and occurrence rate. This is the kind of rough mental math that leads me to talk about “netting out in the millions” benefit-wise. Thus far you could imagine “putting a number on it”.
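If it helps to play with the assumptions, here is a minimal sketch of that back-of-envelope math in TypeScript. Every number is one of the illustrative guesses above, except incidentsPerSite, which I am adding purely to reconcile the ~10 bad call sites with the ~100 incidents:

    // Rough, illustrative cost model for one class of repeated, avoidable bug.
    // All numbers are assumptions, not measurements.
    const linesOfCode = 1_000_000;      // size of the front-end codebase
    const misusesPerLine = 1 / 10_000;  // 1 mistaken call per 10,000 lines
    const blowsUpRate = 0.10;           // fraction of misuses that actually cause incidents
    const incidentsPerSite = 10;        // assumed recurrences per bad call site over the code's lifetime
    const devsPerIncident = 3;
    const hoursPerIncident = 1;
    const hourlyRate = 75;              // ~$150k/year, low for big tech

    const badCallSites = linesOfCode * misusesPerLine * blowsUpRate; // ≈ 10
    const incidents = badCallSites * incidentsPerSite;               // ≈ 100
    const costPerBug = incidents * devsPerIncident * hoursPerIncident * hourlyRate;

    console.log(badCallSites, incidents, costPerBug);  // 10, 100, 22500
    console.log(Math.round(1_000_000 / costPerBug));   // ≈ 45 such bugs to reach $1M

The point of writing it out is only that every line is a judgement call; none of these constants comes from a measurement you could put on a dashboard.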

Where it goes wrong is: with the good version of that API, the bug never happens. There is nothing to measure, because our reasoning has to deal entirely in counterfactuals: “What would it have cost us if we had a bug in this particular part of the framework?” But you can do that ad infinitum.

More or less every part of a library can be more or less buggy, more or less easy to maintain, more or less amenable to scaling up to meet the needs of an application which uses it, more or less capable of adding new capabilities without requiring you to rewrite it, etc. The part that is impossible to measure is the benefit of all the “right” decisions along the way: the bugs you never saw, indeed never even had to think about because the API just made them impossible in the first place.

Nor can you measure “this API is easy to use and never breaks my flow” vs. “I spend at least a minute looking up the details every time I have to use it… and whoops, now I’m on Reddit because I switched to my browser from my code editor”. Nor can you measure the impact of “This API makes me angry” vs. “This API makes me actively happy” on velocity. The closest you get are proxy measures like NSAT surveys which tell you how developers feel overall, and interviews where you can ask them what their papercuts are; but neither can be translated into dollar values in a meaningful way. And “putting an imprecise number on it” (as a sibling comment down the thread suggests) is impossible for these kinds of things: there is no number.
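To make the “correct by construction” idea concrete, here is a toy TypeScript sketch (an invented example, not any of the real APIs I have in mind). The bug-prone version takes two raw strings and will happily accept them in the wrong order; the safer version uses branded types so the swapped call simply does not compile:

    // Toy example only.

    // Bug-prone: nothing stops a caller from swapping the arguments.
    function trackEventV1(userId: string, eventName: string): void { /* ... */ }
    trackEventV1("page_view", "user-123"); // compiles, silently wrong

    // Correct by construction: distinct branded types make the bad call a type error.
    type UserId = string & { readonly __brand: "UserId" };
    type EventName = string & { readonly __brand: "EventName" };
    const userId = (s: string): UserId => s as UserId;
    const eventName = (s: string): EventName => s as EventName;

    function trackEventV2(user: UserId, event: EventName): void { /* ... */ }
    trackEventV2(userId("user-123"), eventName("page_view"));    // ok
    // trackEventV2(eventName("page_view"), userId("user-123")); // does not compile

The benefit of the second version is exactly the thing you cannot measure: the incident that never happens.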

[1]: Lest you think I am gaming this, I have real APIs we really deal with in mind which are so error prone that we deal with bugs like this from that specific API at least once a month.

[2]: Off the top of my head, I can think of half a dozen APIs we use very actively in production which have these kinds of problems. I have eliminated a fair number of them in my tenure, but demonstrating the impact is… well, see above.


I am still not convinced that your examples show that it would be unreasonable to estimate their monetary impact.

Since a company is an organic whole, every functional part of it would net out in the millions if considered in isolation. However, such a perspective is usually without practical significance, as it is not linked to concrete business-relevant scenarios. If there is no risk[1] that something that functions properly could fail, then the costs associated with such a failure never occurring are irrelevant.

I would also concede that many things cannot be estimated accurately or might be very hard to estimate. But in my experience, the really difficult decisions are those that relate to new big and complex things, such as what technology to use for an innovative product. Evaluating whether it is worth improving a specific detail of an existing application is most of the time far less difficult.

Let me give you an example from my current work: It is a business application to process customer enquiries that result in an offer for a tailored product. In a specific scenario we know that we can process 5 enquiries per hour. The goal is to process 6, an increase of 20%. There are about 6,000 enquiries per month, which means saving 200 staff-hours per month. The hourly costs for software development are about 4 times the costs for the staff using the application. That means that for every 50 hours it would take me to reach that goal, the break-even point would move by one month. I estimated that I could reach the goal by putting between 100 and 150 hours into it. This precision was enough to get the green light from the management.[2] And management does not really care how I reach that goal in detail (by improving the performance of the database, by reworking the user interface, by using better templates as a basis for the tailored products, ...). And even if my estimates were off by a factor of 2 or 3, it would still be worthwhile to attempt the improvement.
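In code, the break-even arithmetic looks like this (rounded example numbers from above; the cost figures are placeholders rather than our real rates):

    // Break-even sketch for the enquiry-processing improvement.
    const enquiriesPerMonth = 6_000;
    const staffHoursBefore = enquiriesPerMonth / 5;  // 5 enquiries/hour -> 1200 staff-hours
    const staffHoursAfter = enquiriesPerMonth / 6;   // 6 enquiries/hour -> 1000 staff-hours
    const savedStaffHoursPerMonth = staffHoursBefore - staffHoursAfter; // 200

    const devHourCostFactor = 4;  // one development hour costs about as much as 4 staff hours
    const devHoursPerBreakEvenMonth = savedStaffHoursPerMonth / devHourCostFactor; // 50

    const estimatedDevHours = 125; // midpoint of the 100-150 hour estimate
    const breakEvenMonths = estimatedDevHours / devHoursPerBreakEvenMonth; // 2.5 months

    console.log(savedStaffHoursPerMonth, devHoursPerBreakEvenMonth, breakEvenMonths);

Even doubling or tripling estimatedDevHours keeps the break-even point well within a year, which is why the imprecision of the estimate did not matter much to management.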

Regarding your case about the quality of an API, I cannot see a fundamental difference from the case I just described. Set some time and/or quality goals for the improvement of the API and attach a reasonable price point to everything. Then see whether it makes economic sense at all, whether something else promises a better return on investment, or whether this is the best thing to do now.

Finally, I would like to emphasise that the correct thing to do is never purely a question of technology. Notice, for example, how the assessment of the case I described above changes with the number of enquiries per month. Were there only 1,000 enquiries per month, the break-even point would be 6 times further away, which means there would probably be many other areas more worthwhile for the company to invest its money in (and not all in IT).

[1] More precisely, the risk is seen as marginal or irrelevantly small, or if it occurs there is no way to manage it anyway (a meteorite hits the factory), or circumstances are so fundamentally changed that the entire business model is called into question.

[2] Actually it was the other way round: The management came up with the idea to improve the process by 20% and already had some suggestions how to do it. Then I looked at it and gave my rough estimates and own suggestions what could be done.


> I'm automatically thinking it's unglamorous, and marking a career plateau (perhaps decline).

I think that to do serious maintenance work one would have to spend a lot of years in an industry/company/project etc. If these people are motivated mainly by glamorous work and ever-increasing career growth, they don't seem like great maintainer material to me.

I apply this to myself in a narrow way: I support a long-running in-house enterprise product which I have been maintaining for many years. I just do not feel motivated by a glamorous role or by climbing the greasy corporate pole.


> One concern I have is that, every time he talks about the maintenance roles, I'm automatically thinking it's unglamorous, and marking a career plateau (perhaps decline).

For the rare and lucky few, it could be a stepping stone to product development. However, I've never seen someone jump from product development to a maintenance role. I think most developers would view it as a demotion and be insulted by it.

> including making a case for why their comp should track with some of the new-product-star people.

Never. The development team knows far more about the product and the business aspect of it than the maintenance team. If there is an issue that maintenance cannot handle, they reach out to the development team for a reason.


Companies aren't huge on paying open source developers in the first place. From their perspective, the primary benefit from open source software is free labor.

I can't imagine they'll be enthusiastic about paying open source maintainers.


> companies often have an incentive structure that rewards novelty, especially in the form of features if not entire products.

Unfortunately, this is true. Every developer has to fit into the everyone-does-everything mold, and if you don't you will not get good rewards. In reality developers are diverse: some are highly creative, some are very good at tracking down hard bugs, some are very good at devops, and so on. Not allowing for this diversity, and not having different tracks for developers to grow is tragic along multiple dimensions: People who are good at devops and enjoy doing devops don't see a growth path, so they leave. People who are creative and would prefer to spend most of their time doing creative work, can't because they are expected to do devops as well. Allowing for diversity of talent, and having growth paths for everyone would make for a stronger team.


I've always worked on teams where managers try to allow folks to play to their strengths, but ultimately folks can't entirely focus on just doing what they like to do. Otherwise you end up with unfair division of effort or some important things no one wants to do not getting done. Diversity of experience also allows you to better understand things which may make you more efficient overall. For instance, being forced to do some performance work will help you make better tradeoffs when doing design later.

I think the most important things are to voice your preferences to your management and try to pick projects that are in a stage of their lifecycle that has needs for your strengths. If you prefer creative work, find a project early in its lifecycle with less baggage to weigh you down. If you love hardcore debugging, find something which is growing aggressively. If you like maintenance, find a mature product to help steward.


> ultimately folks can't entirely focus on just doing what they like to do

True, but companies can allow that when there is enough diversity in the team, instead of insisting that everyone fit into the exact same mold.


I suppose it varies a lot from one organization and industry to another. My experience is that managers don't like it when people rock the boat, they prefer their subordinates to just quietly execute the tasks given to them. Growth-oriented people like myself are sometimes seen as a problem because they cause things to happen that are not in the road map.

In my field (fin tech) managers often do not have the background to be able to assess the value of spontaneous technical contributions. So they assume that if something was not planned and requested by management it did not need to be solved.


> Growth-oriented people like myself are sometimes seen as a problem because they cause things to happen that are not in the road map.

Creating new work that wasn’t in the roadmap (excluding tech debt and other necessities to get roadmap work done) is a problem.

The right way to grow is to learn how to work with the company to get important work into the roadmap.

I’ve worked with some peers who had good ideas and good intentions, but they’d unintentionally try to blow up the roadmap and reset planning by prioritizing their work over the things we needed to get done.

Working with the business to get things prioritized is a necessary skill. A lot of engineers just want to work on whatever they want to work on most, but that’s a problem in the context of an organization trying to coordinate.


> Creating new work

I am not talking about creating additional work, I am talking about solving problems not on management's radar screen. Some problems are only visible from the floor.

> The right way to grow is to learn how to work with the company to get important work into the roadmap.

That is not always possible because valuable things sometimes have to be demonstrated to be understood. Not all things can be explained in the abstract, sometimes you have to build the thing first before people understand how useful it is.


I agree with this sentiment. A good (software) engineer needs to have the discernment to know when to ask for permission, and when to ask for forgiveness.


A software engineer only needs to make those judgement calls in a bad environment. In a good environment, such situations can be discussed in the open.


Well, if in 80% of enterprises engineers do not talk in the open, then whether the environment is good or bad hardly matters.


Sounds to me like you're getting micromanaged.


I fail to understand how you manage to extrapolate my working conditions from a generalization of what soft skills are required to be a good engineer.


It depends on the business. Many places have no idea wtf they want, and presented with something interesting they’ll ditch what they are doing to do it, because they don’t know why they are doing whatever they planned anyway.

The existence of this thread belies this. Running everything like a product is the fad today. It’s a fad because running a “product” means understanding its lifecycle and resourcing it as appropriate. But the mandate in BigCorp is to run printer ink fulfillment with the same methodology as an actual product, so lots of leadership time is spent thinking about toner or whatever.

It’s inefficiency created in the pursuit of efficiency via control.


It's not a problem, that's how you create software. You can put some of those initiatives on a "road map" if you want, but there must be space for them. 50% "slack time" is a good standard for software engineering. Your software engineers likely know better than your middle-managers how to spend that time.


You can't get important work into the roadmap while also complaining that people are trying to blow up the roadmap when they try to get important work into the roadmap.

Kind of a big problem here, as you're defining the right way as also the way that frustrates you (and presumably others) the most.


> A lot of engineers just want to work on whatever they want to work on most, but that’s a problem in the context of an organization trying to coordinate.

Maybe it's actually a management challenge to turn this enthusiasm into money?

What's a problem for one person makes an opportunity for another person.


> My experience is that managers don't like it when people rock the boat

That's what's missing from Graydon's analysis: risk-aversion is also a strong incentive in many corporations. I would argue it's the rule for middle managers, with growth being the exception.

Also missing is telemetry and coordination, where companies use FOSS to find out what other companies are doing, or to coordinate policy, esp. when they fund a leading contributor from whom other companies need buy-in.

Put another way, a FOSS contributor is not an individual, they are a company representative, and their opinion has weight proportional to their company's influence.

The contributor's influence also depends on the composition of the other contributors. Alternate influences become impossible when a company dominates the contributors; hence e.g., the CNCF tracks metrics for ensuring that it takes a plurality to dominate.

But really this posting isn't about FOSS contribution at all; it's about the under-valuation of avoided future costs. But that's a much harder problem, because you get all sorts of illusory accounting when people project potential costs they're avoiding.

E.g., I um heard that at (big firm with ~1000 developers), the QA team was successful arguing for additional funding because they were finding more bugs. So the kernel team started tracking edits as fixing potential bugs, to restore the balance of funding. They hated the game, but had to play it.


The problem is they can’t justify your cost if you go outside the planned work load. They could probably still quantify it, but it’s difficult to assess your impact when your effort doesn’t count towards velocity and delivery of business outcomes. If your work does impact those things, it should be a ticket/story/task so that the impact of the work can be measured (seen…). I would suggest, in the future, adding these things to the backlog as you come up with them and bringing them up during planning.


> The problem is they can’t justify your cost if you go outside the planned work load

Cost? It's a freebie. I'm still doing my tasks, in addition to saving them tons of money with better tools.


The measure is called an accepted pull request. You don't need a ticket to submit a patch to the Linux kernel. If you're in a dysfunctional agile micro-management environment with "stories" and "backlogs" then look for a real job.


"managers" -- if there are managers on top of the maintainers, then, yes, it won't work.


[flagged]


> but it’s one you made up.

You just made a bunch of things up.


You need to discuss the priority of the tasks with your manager. This also helps you with the bookkeeping.

When things break you can say that you have already had this discussion and it was not your responsibility to fix it at the time.


Biggest problem I think is really staffing the infrastructure tasks sufficiently.

You will get bled down to practically no headcount being allocated to infrastructure, with all the headcount assigned to "big bets" on the non open-source products and the "skunkworks" projects trying to pivot the company into something new.

Even when we had a team come in and get assigned to pick up a neglected piece of old technology, instead of focusing on getting the maintenance solid and fixing all the shit in the backlog, it was all "big bet" features, and when those fizzled the team got slowly cut down until the project failed.


Isn't this somehow a modern, self-inflicted disease? FOSS used to be developed to very high standards by individuals, without relying on expensive CI pipelines.


I'm using "infrastructure" in the same sense that the author described:

> ... as infrastructure -- triage and fix bugs from the backlog, optimize performance, increase security and reliability, pay down tech debt, simplify and automate ongoing maintenance

And CI pipelines don't need to be particularly expensive, and they're pretty critical really if you're building "infrastructure" in that sense. Otherwise you're just shipping code off to your customers to be the CI pipeline.


The fundamental problem of management is that you would like to have an incentive structure tied to the impact of an individual, BUT the most impactful people usually aren’t out highlighting their accomplishments; they’re in the background helping everyone else be successful.


But if they're helping everyone else be successful, surely a manager who cares about that will know that's what they are doing and be able to incentivize them to continue?


Even the good managers have to build cases for promotion, raises, and recognition...


Totally. If managers really believe this "glue" role is valuable, they should be able to advocate for that belief. But it's definitely true that they may well fail, if the broader culture doesn't agree.


A request: when commenting, please try to share as much context as you feel comfortable sharing.

1. Organizational context: Are you sharing observations based on an enterprise environment? A Silicon Valley big tech setting? A startup? Something else?

2. Role context: What you see depends on where you sit.

3. Experience and trust level: Contributors in high-trust environments tend to have more leeway. Tech people from known companies might get a lot of credibility for free. A lot depends on the technological version of the Overton Window, by which I mean "the range of ideas politically / organizationally acceptable to the mainstream population at a given time / the window of discourse."

4. Whatever captures the context best: whether it be risk, personalities, regulatory constraints, funding pressures, legal issues, whatever.

Such context is very beneficial for situating and synthesizing. Thanks.

Personally, I've seen a range. Sometimes I push too quickly or lack political capital, leading to conflict with existing priorities and plans. Sometimes I recommend more discipline and mindfulness about process, which some interpret as being constraining. It is very situational.


Also titled “Curse of the CEMBI”, where the acronym is for ‘Corporate-Employed Maintainers with Bad Incentives’.


Unfortunately missing in the article is any plan for this to actually happen: for stability and quality to be valued by the CEMBI when their boss demands proof of “impact”.


The word impact has become a trigger for me. It is both sufficiently abstract and seemingly concrete, which allows managers to read into it pretty much whatever they want.

I remember one telling me "I'm all about impact", and then praising people who built arguably impressive but totally useless and uncalled-for shit, while others who sacrificed their souls keeping entire rotten stacks from falling got a "meh, they had a huge opportunity and didn't seize it".

An impact is what's happening when something hits something else. I prefer not to have impacts.


A step further: The most spectacular impacts are when something crashes and burns.


I have previously worked at a place where some of the engineering teams would build incredibly poor products and features, which would be on fire all the time (because it would fall over if you looked at it wrong), and they got constant praise from management because they “put in the extra mile to fix an issue in prod”.

Issues they caused.

It’s not like they were operating at a velocity greater than other teams. They spent so much time and headcount putting out fires that, had they instead spent 10 minutes thinking about how to build it better, they probably would have 1. finished it sooner and 2. ended up delivering more features and value as a result.


A good counter to this is having strong technical IC leadership roles (i.e. Staff IC or similar) that organizational leadership regularly gets input from on what is impactful in areas like this. The key piece is that the work is low-visibility when successful and high-visibility when it goes all wrong. A strong technical organization will recognize that and leverage high-judgement individuals close to the technical work so that it can be accounted for properly.

Mekka talks about this in terms of underrepresented groups[1], but the same principles apply to any role that is low-vis when successful and high-vis when it goes wrong (i.e. IT, Ops, etc.). It's really up to technical leadership in an organization to make sure that they're noting high-impact/low-vis work and surfacing it in a way that the "new shiny" doesn't drown out. I've been in both domains (high-vis graphics/shiny!, low-vis tooling/devx that had huge impact) and you really do need to account for it properly, otherwise you'll have exactly the situation described in the article. Your company/org will falter and stumble as those infrastructure pieces slow everything down if you don't retain/reward the people doing that work.

[1] https://mekka-tech.com/posts/2018-08-09-the-difficulty-ancho...


It's often important to help management define what impact is rather than to let them define it for you. If you want quality and stability to be included in impact, define a way to measure it and then get management to sign off on it being impact-worthy. Our infra team has dashboards for keeping CI times low, false positive and false negative rates low, etc. Simply not regressing on those metrics is a daunting task, and they have managed to convince management of that fact. As a result, their impact is measured by those dashboards which they defined. You can do this on any team by defining quality KPIs such as the rate of customer-reported bugs, customer sentiment, number and length of outages over some time period, etc.
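For example (illustrative field names only, not our actual dashboards), here is the kind of thing such a CI dashboard might compute, sketched in TypeScript:

    // Sketch of turning CI quality into numbers management can sign off on.
    interface CiRun {
      failed: boolean;      // did the pipeline report a failure?
      realDefect: boolean;  // after triage, was there an actual defect?
      minutes: number;      // wall-clock duration of the run
    }

    function ciMetrics(runs: CiRun[]) {
      const failures = runs.filter(r => r.failed);
      const realDefects = runs.filter(r => r.realDefect);
      const falsePositives = failures.filter(r => !r.realDefect).length; // flaky red builds
      const falseNegatives = realDefects.filter(r => !r.failed).length;  // escaped defects
      const avgMinutes = runs.reduce((sum, r) => sum + r.minutes, 0) / Math.max(runs.length, 1);
      return {
        falsePositiveRate: falsePositives / Math.max(failures.length, 1),
        falseNegativeRate: falseNegatives / Math.max(realDefects.length, 1),
        avgMinutes,
      };
    }

Not regressing on those three numbers over a quarter is a perfectly legible goal to put in front of management.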


All those metrics can be gamed to death and none of them sound impressive to the C-suite. Even if they accept that those are your KPIs, they will not think of you as an "A" player if all you want to do is maintain a number.


In some more mature fields of engineering most of the practitioners are maintainers.

Think of chemical engineering: each existing chemical plant is run by a team of engineers. Their job role is akin to “maintenance”, but they are still viewed as essential. They probably bring in more profit than the few who build/design new plants.

With the trend now towards extremely expensive compute systems, like large language models, will we also see the trend in ML where most of the engineers are working on “maintenance” rather than designing/building new from scratch?


Great article. Maintenance work is what keeps things working day after day. It is way more important than novelty.


This reminds me of the episode of Last Week Tonight where John Oliver points out the problems America has on infrastructure maintenance: https://youtube.com/watch?v=Wpzvaqypav8&si=W1TxMMu26rQNM6PC


I wanted to like this, but:

1. it didn't make a great case for infrastructure work (they were on one angle with disasters, but ended up with one middle-aged bicyclist killed by a pothole, and some partying UCLA students enjoying a little wading pool water);

2. didn't suggest a plan of action;

3. was mostly poorly-executed gags, and a few potshots at politicians, diverting from any kind of critical thinking or action beyond impotent tweeting.

I strongly suspect that Daily Show style news-tainment has unintentionally been dumbing down what should be a very active left (while Rupert Murdoch and talk radio cynically did something analogous to what would become the right). Now people intuitively feel powerless, except to Tweet zingers at the imagined enemy.

How about: infrastructure is important because (off the top of my head)... disaster threats (cite some real-world examples, which exist), safety (e.g., drinking water), economic benefits from functioning infra (e.g., transportation efficiency), quality of life, social justice (cite real-world examples of poor areas, and how that marginalizes them), national sustainability (tie it into restoring can-do know-how, and manufacturing capability), with the side benefit of creating worthwhile jobs that should already exist.

And don't drop the ball by just complaining "oh, those politicians being politicians" and leaving it at that, when a politician says they haven't yet found money for it. Nor try to use the kind of people who'd call in to a TV news program (and get selected to be put on air) as representative of anything other than people who'd call in to a TV news program. The citizen is left with a muddle-headed sense of same-ol'-same-ol', and not informed enough to do anything about it, other than make bitter jokes about the perceived adversary.

Graydon referenced West Wing, so I'll try: (context: backstage of Presidential election debate, incumbent meeting opponent GWB character): https://www.youtube.com/watch?v=wvr1T1sFvEg


I mean, that's the whole schtick of JO. This is serious in the same sense as a few of my friends who think that by listening to a 5-minute book summary they have gained serious knowledge and are being effective with time management.


"just let AI do it bro"



