Hacker News | seasoup's comments

Languages aren’t worse; you’re just better than you were. You’re aware of all the warts that you didn’t know about before.


Great topic, long topic. I could write a book on it. Wait! I did!

https://www.amazon.com/Single-Page-Applications-end-end/dp/1...


A lovely knot to unravel!

First, get everything in source control!

Next, make it possible to spin service up locally, pointing at production DB.

Then, get the DB running locally.

Then get another server and set up CD (continuous deployment) to that server, including creating the DB, schema, and sample data.

Then add tests, run them on PRs, then code review, then auto-deploy to the new server.

This should stop the bleeding… no more index-new_2021-test-john_v2.php

Add tests and start deleting code.

Spin up a production server, load balance to it. When confident it works, blow away the old one and redeploy to it. Use the new server for blue/green deployments.

Write more tests for pages, clean up more code.

Pick a framework and use it for new pages; rewrite old pages only when major functionality changes. Don’t worry about multiple jQuery versions on a page, lack of MVC, or lack of a framework, unless you’re overhauling that page.


I largely agree with this approach, but with 2 important changes:

1) "Next, make it possible to spin service up locally, pointing at production DB."

Do this, but NOT pointing at the production DB. Why? You don't know if just spinning up the service causes updates to the database. And if it does, you don't want to risk corrupting the production DB. This is too risky. Instead, make a COPY of the production DB and spin up locally against the COPY.

2) OP mentions team of 3 with all of them being junior. Given the huge mess of things, get at least 1 more experienced engineer on the team (even if it's from another team on loan). If not, hire an experienced consultant with a proven track record on your technology stack. What? No budget? How would things look when your house of cards comes crashing down and production goes offline? OP needs to communicate how dire the risk is to upper management and get their backing to start fixing it immediately.


Yeah, having the experimental code base pointing at the production data base sounds like fun. I did that. We had a backup. I'm still alive.


This is the right way to think about it. My only disagreement is that I'd do the local DB before the local service. A bunch of local versions of the service pointing at the production DB sounds like a time bomb.

And it's definitely worth emphasizing that having no framework, MVC, or templating library is not a real problem. Those things are nice if you're familiar with them, but if the team is familiar with 2003 vintage PHP, you should meet them there. That's still a thing you can write a website in.


> if the team is familiar with 2003 vintage PHP, you should meet them there. That's still a thing you can write a website in.

You can write a website in it, but you cannot test it for shit.


If this is true, OP can consider writing tests for the website using a frontend testing framework like Cypress, especially with access to local instances connected to local databases.
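
A rough sketch of what such a test could look like (page paths, selectors, and data here are made up; it assumes a local instance backed by a local DB):

    // cypress/e2e/signup.cy.js: illustrative only, routes and fields are invented
    describe('signup flow on the legacy app', () => {
      it('creates a user and shows it in the admin list', () => {
        cy.visit('/signup.php');
        cy.get('input[name="email"]').type('test@example.com');
        cy.get('input[name="password"]').type('correct-horse-battery');
        cy.get('form').submit();

        // Round-trips through the real (local) database: the new row should show up in the UI.
        cy.visit('/admin/users.php');
        cy.contains('test@example.com');
      });
    });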


There's no value to retroactive unit testing. Retroactive tests should be end-to-end or integration level, which you certainly can do without a framework.


Frameworks are not needed to test. I've been testing and validating my code since way back, in C. Not because I was an early adopter (I'm still not), but because I needed to debug it, and tests made that faster.


Good strategy. I would suggest not hooking it up to the prod DB at the start. Rather, script out something to restore prod DB backups nightly to a staging env. That way you can hook up non-prod instances to it and keep testing as the other engineers continue with what they do, until you can do a flip-over as suggested. The key here is always having a somewhat up-to-date DB that matches prod but isn't prod, so you don't step on toes and have time to figure this out.
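
As a sketch of that nightly restore (hostnames, DB engine, and paths are all assumptions; shown as a cron-driven Node script, but a plain shell script works just as well):

    // restore-staging.js: hypothetical; assumes MySQL and credentials stored in ~/.my.cnf
    const { execSync } = require('child_process');

    const dumpFile = `/var/backups/app-${new Date().toISOString().slice(0, 10)}.sql`;

    // Dump production with a read-only account, then load it into the staging database.
    execSync(`mysqldump -h prod-db.internal -u readonly app > ${dumpFile}`);
    execSync(`mysql -h staging-db.internal -u staging app_staging < ${dumpFile}`);

    console.log(`Refreshed staging from ${dumpFile}`);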

Note that going from no source control to a first CD instance in prod is going to take time... so assume you need a rollout strategy that won't block the other engineers.

Considering what sounds like reluctance to change, the switch to source control is also going to be hard. You might want to consider scripting something that takes the prod code and dumps it into source control automatically until you have prod CD going... after that, the engineers switch over to your garden-variety commit-based reviews and manually triggered prod deploys.
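
One way that automatic dump-into-source-control could look (paths, branch name, and schedule are assumptions; run from cron on the prod box inside a git checkout):

    // snapshot-prod.js: hypothetical sketch of auto-committing whatever is live in prod
    const { execSync } = require('child_process');

    const repo = '/var/www/app';
    const run = (cmd) => execSync(cmd, { cwd: repo, stdio: 'inherit' });

    // Only commit when something actually changed since the last snapshot.
    const dirty = execSync('git status --porcelain', { cwd: repo }).toString().trim();
    if (dirty) {
      run('git add -A');
      run(`git commit -m "prod snapshot ${new Date().toISOString()}"`);
      run('git push origin prod-snapshots');
    }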

Good luck! It sounds like an interesting problem.


> Next, make it possible to spin service up locally, pointing at production DB.

I think this is bad advice, just skip it.

I would make a fresh copy of the production DB, remove PII if/where necessary and then work from a local DB. Make sure your DB server version is the same as on prod, same env etc.

You never know what type of routines you trigger when testing out things - and you do not want to hit the prod DB with this.


I am inclined to agree. The other advice was excellent, but pointing local instances to production databases is a footgun.


I've kind of reconsidered this a bit. Right now, the only way to test that the database and frontend interact properly is to visit the website and enter data and see it reflected either in the database or in the frontend.

It's less terrible to have a local instance that does the same thing. As long as the immediate next step is setting up and running a local database.


But the thing is, you have no idea whether even a single GET request fires off an internal batch job to do X on the DB.

I mean, there are plenty of systems in place that somehow do this (WordPress cron, I think), so it's not unheard of.

For me, still a nope: do not run against the prod DB, especially if the live system accounts for 20M in yearly revenue.


Agree with this approach. You have nginx in front of it already so you can replace one page at a time without replacing everything.
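
As a sketch, that per-page cutover can be as simple as carving out one nginx location at a time (the upstream names and ports below are invented):

    upstream legacy_app { server 127.0.0.1:8080; }   # the existing PHP app
    upstream new_app    { server 127.0.0.1:3000; }   # wherever rewritten pages live

    server {
        listen 80;

        # Pages that have been rewritten get peeled off one at a time...
        location /reports { proxy_pass http://new_app; }

        # ...while everything else keeps hitting the old code untouched.
        location / { proxy_pass http://legacy_app; }
    }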

One thing I haven’t seen mentioned here is introducing SSO on top of the existing stack, if it’s not there. SSO gives you heaps of flexibility in terms of where and how new pages can be developed. If you can get the old system to speak the new SSO, that can make it much easier to start writing new pages.

Ultimately, a complete rewrite is a huge risk; you can spend a year or 2 or more on it, and have it fail on launch, or just never finish. Smaller changes are less exciting, but (a) you find out quickly if it isn’t going to work out, and (b) once it’s started, the whole team knows how to do it; success doesn’t require you to stick around for 5 years. An evolutionary change is harder to kick off, but much more likely to succeed, since all the risk is up front.

Good luck.


I think "SSO" here maybe doesn't mean "Single-sign on"? Something else?


No, I meant single sign on.

In my experience, if you can get SSO working for (or at least in parallel with) the old codebase, it makes it much easier to introduce a new codebase because you can bounce the user outside of the legacy nginx context for new functionality, which lets the new code become a lot more independent of the old infra.

I mean there are obviously ways to continue using the old auth infra/session, but if the point is to replace the old system from the outside (strangler fig pattern) then the auth layer is pretty fundamental.

When I faced a similar situation, I needed to come up with ways to ensure the new code was legacy-free, and SSO turned out to be a big one. But of course YMMV.


I'd add putting a static code analysis tool in there, because that will give you a number for how bad it is (the total number of issues at level 1 will do). That number can be given to upper management, and then, while doing all of the above, you can show that the number is going down.


There is significant danger that management will use these metrics to micromanage your efforts. They will refuse changes that temporarily drive that number up, and force you to drive it down just to satisfy the tool.

For example, it is easy to see that low code coverage is a problem. The correct takeaway from that is to identify spots where coverage is weakest, rank them by business impact and actual risk (judged by code quality and expected or past changes) and add tests there. Iterate until satisfied.

The wrong approach would be to set something above 80% coverage as a strict goal, and force inconsequential and laborious test suites on to old code.


Many tools allow you to set the existing output as a baseline. That's your 0 or 100 or whatever. You can track new changes from that, and only look for changes that bring your number over some threshold. You can't necessarily fix all the existing issues, but you can track whether you introduce new ones.
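
For the PHP codebase in question, this could look like PHPStan's baseline feature, roughly (a sketch: the tool choice and paths are assumptions, and the generated baseline file also has to be included from phpstan.neon):

    # One-time: record every existing issue so the build starts green
    vendor/bin/phpstan analyse src --level=1 --generate-baseline

    # From then on, CI fails only on issues introduced after the baseline
    vendor/bin/phpstan analyse src --level=1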


The results might also be overwhelming in the beginning.


Solid advice. I did 2 full rewrites with great success. To add to this list, I would also make sure you are communicating with executives (possibly gently at first, depending on the situation), really learning about the domain and the requirements (it takes time to understand the application), and investing in your team (or changing resources; caution: not right away and not all at once, since there is a knowledge gap here). The rewrite will basically have massive benefits to the business; in our case: stability (fewer bugs), the ability to add new features faster and cheaper, scalability, better user experience, etc. This can get exciting to executives, depending on the lifecycle of the company. Getting them excited and behind it is one of the core tasks. Don't embark on this right away, as you need more information, but this will matter.


Among the things I'd prioritize is to make a map of all services/APIs/site structure and how everything falls into place. This would help you make informed decisions when adding new features or assessing which part of the monolith is most prone to failure.


Best advice so far.


This is the way.


This hacker agiles.


It’s a common misperception that a 10x engineer is 10x the average engineer. The original quote said that the best engineers are 10x the worst engineers, which is easily true. 10x the average engineer is only sustainable for a limited time before either burnout or outside life intervenes.

The truly best engineers have the experience to develop intuition around what code not to write; they have 10x impact but are not 10x better coders.


What this study shows is that when infection rates are high, people wear masks, and conversely, that when people wear masks, infection rates are high. Or put another way, it either shows that high infection rates cause people to wear masks, that wearing masks causes high infection rates, or that neither causes the other but some third factor outside the study causes both to rise and fall in tandem.

It does not effectively demonstrate the amount that wearing a mask raises, or reduces, infection rates.


Finish your degree, then pursue computer science. Understanding statistics and data is a superpower for a software engineer. There are a lot of data-adjacent software engineering jobs: telemetry, data pipelines, data visualization applications, maps. Apply your CS and DS skills together for an awesome combination.



This article doesn't take into account that maybe the reason the kids didn't all get sick is that closing the schools helped stop the spread of COVID in kids. It's like saying, "I exercised and lost weight, so clearly the exercise was a waste of time; I would have lost weight anyway."


41% of school-aged US kids are estimated to have gotten COVID. Not such a great success.

See Table 2 of https://www.cdc.gov/coronavirus/2019-ncov/cases-updates/burd...


We had a person running for government office in Oregon loudly proclaim, over and over, "Why do we have to have these mask mandates and rules? Oregon has one of the lowest infection rates."

Some people just seem to get cause and effect mixed up.

Or the threads all over Twitter this winter alleging 'funny numbers' because flu cases were so low last year... Imagine that: when everyone is social distancing, we have our lowest amount of influenza ever... it must be a great conspiracy to manipulate the numbers.


> Imagine that, when everyone is social distancing, we have our lowest amount of influenza ever..

And yet the explanation for Covid spreading so much during the winter is popularly held to be that people weren't social distancing and were selfishly refusing to follow the guidelines. I mean, how is it that these measures were simultaneously almost 100% effective against influenza and clearly much less so against Covid?

I think viral interference is the real explanation here: https://www.insighttherapeutics.com/publications/insights/20...


You need to look at other countries. Sweden kept their schools open, and not only did not a single child die, but the prevalence among teachers was no greater than in the general population.


Isn't the Covid rate in kids exceptionally low? Closing offices only barely slowed the spread among adults, but closing schools helped children, who are less at risk, a lot?


> Isn't the Covid rate in kids exceptionally low?

No. The rate for ages 5-17 was actually slightly higher than that of adults 18-49, which was higher than the rate for 50-64, which was higher than the rate for 65+.

What was much lower for kids was the severity of their illness. For every 100k kids that got it, about 600 were hospitalized. For 18-49 it was about 2400. For 50-64, 7200. For 65+, 21000.

See the CDC data on the page mulvya linked in a parallel comment.


This is a plan that is much worse for employees, being presented as if it were better. At least be honest about feeling like you are paying employees too much equity up front and want to pay them less.


Basically this lets the employer retain more of the upside in stock appreciation - your equity bonus is now recomputed every year at the current stock price, and presumably expressed in dollars (not shares). So they'll say "oh you're getting 50k this year in stock", when you got 40k last year, but meanwhile the share price has doubled and your 40k in equity _would_ be worth 80k if they had given you all 4 years up front.
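
A toy calculation (all numbers hypothetical) makes the difference concrete:

    // "Old style": $160k of stock granted up front at a $10 share price, vesting 1/4 per year.
    // "New style": a fresh $40k grant each year, priced at that year's share price.
    const prices = [10, 20, 20, 20];            // share price at the start of each year
    const shares = 160_000 / prices[0];         // 16,000 shares locked in at the year-1 price

    const upfront = prices.reduce((sum, p) => sum + (shares / 4) * p, 0);   // $280k
    const yearly  = prices.reduce((sum, p) => sum + (40_000 / p) * p, 0);   // always $160k

    console.log({ upfront, yearly });  // appreciation on not-yet-granted years stays with the company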

They say it protects against downside, but odds are if the company isn't doing well they'll cut the dollar value of annual bonuses as well.


Not only that, they also pay them less. It turns out that for the average person, these four-year grants, when fully vested, are sometimes enough to retire or to switch to a job that pays less but has better WLB.

They are paying people less because they want to keep them in the "golden handcuffs" for longer. The way it's painted by this particular VC is particularly gross.

Also, executives usually keep the upside potential. So as an engineer, if the company skyrockets because of your work and becomes 8x more valuable over the year, you share 0% of that upside, while executives cash in. On the downside, as an engineer you can just leave because the market is very fluid; the truth is you don't need job security from your employer if you are a good software engineer. If/when that changes, compensation will probably go down enough that these mega-grants are not going to be a problem anymore. They are just counting pennies.


To me this all depends on how it's implemented but you're right to be suspicious.

If all they do is give you 1/4 of the equity they were going to give you previously, then yes it drastically reduces employee upside to the benefit of others (execs, investors).

But they probably can't do that, because it would be harder for them to attract talent against a 4-year-vest company. Instead, they'll probably have to bump up that initial grant so that when employees do the math there is still big upside if the company improves.


I'm suspicious of this as well, but would this be better for a lot of employees? Go to Coinbase, get your 25% stock in 25% of the time, then "just" go somewhere else and get more. It's not unheard of to return to a company later (Coinbase in this case) and get a better title, an additional grant, etc.


I had the same thing. I switched which hand used the mouse and developed it in the other hand. I switched to a trackball; it persisted. I switched to a Wacom tablet and it went away. I used a pen and tablet for over a decade afterwards and it never came back.

