Hacker News | kayman's comments

If you're an Emacs user and you code in Emacs, Magit works really well with your workflow.


I feel like development has come full circle. Initially, it was all server side.

Then we had to use JavaScript to do some frontend work.

That JavaScript grew into frameworks which templated common patterns to make frontend development easier (led by React and Angular).

Now frameworks like Blazor and Phoenix (Elixir) are bringing frontend development back to the server side.


There's a bit more to frontend than knowledge of a framework. I can find my way around a Vue or React codebase, and I'm reasonably competent with CSS and associated frameworks, but there's a deep knowledge of the DOM and vanilla APIs, W3C standards, accessibility, device and browser differences and so on that really makes a good frontend developer stand out. Often it's the frontend dev in the room - not the UI/UX experts - who calls out accessibility and usability concerns, as they've spent years working in the nuts and bolts of UI/UX.

If you moved to full server-side tomorrow and used Phoenix/Hotwire/HTMX/whatever, these skills would still be invaluable.


Frontend/backend is a speed-of-light issue. As long as the speed of light remains constant (and I've not heard any plans to change it), there will be a need to have some code on the client and some code on the server.


Can you expound on this idea a bit? I don't get what you mean, but it sounds possibly interesting.


The latency of a communication travelling at the speed of light from, say, Japan to New York is noticeable to humans. If you've ever tried to SSH into a machine on another continent, you know the feeling.

Even if the server is nearby, there are still many things that can add latency, most of which are not controllable (e.g. wifi interference, bad routers, bad ISPs, train tunnels).

So in general you cannot move all functionality to the server, because it will be terrible for UX - e.g. Google Stadia.
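For a rough sense of scale, the physics can be sketched like this (a back-of-the-envelope estimate: ~10,900 km is an approximate Tokyo-New York great-circle distance, and signals in optical fiber propagate at roughly two-thirds of c):

```python
# Rough lower bound on network round-trip time between Tokyo and New York.
# Assumptions: ~10,900 km great-circle distance, and light in optical fiber
# travelling at roughly 2/3 of its vacuum speed. Real routes are longer and
# add switching/queueing delay, so actual RTTs are higher still.
C_VACUUM_KM_S = 299_792          # speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3             # typical propagation speed in fiber
DISTANCE_KM = 10_900             # approximate Tokyo-New York great circle

one_way_ms = DISTANCE_KM / (C_VACUUM_KM_S * FIBER_FACTOR) * 1000
rtt_ms = 2 * one_way_ms
print(f"one-way: {one_way_ms:.0f} ms, round trip: {rtt_ms:.0f} ms")
```

A floor of roughly 110 ms per round trip is already noticeable for interactive echo, which is why some logic has to live on the client.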


This has helped me form my view that PFOF is not as bad as it sounds.

https://a16z.com/2021/02/17/payment-for-order-flow/


I used to do limit orders - trying to save some money by buying a stock when it's at its lowest during the day.

Then I read Philip Fisher's "Common Stocks and Uncommon Profits", which suggested that when you have a long investment horizon, say 5-10 years, and you have done the research to have the confidence, a limit order gives you no real benefit.

Say a stock is $100 today and you expect it to be $500 in 5-7 years; the benefit of using a limit order is insignificant.
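The arithmetic (with these hypothetical numbers) makes the point concrete - shaving a dollar off the entry price barely changes the outcome, while risking never getting a fill:

```python
# Hypothetical illustration: how much does a slightly better entry price
# matter when the expected upside is large?
target = 500.0
for entry in (100.0, 99.0):      # market fill vs. a limit order $1 lower
    ret = (target - entry) / entry * 100
    print(f"entry ${entry:.0f}: return {ret:.1f}%")
```

The $1 saved moves the expected return from 400% to about 405% - a rounding error next to the risk that the limit order never fills and the position is missed entirely.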


There are also costs associated with placing non-marketable limit orders[1], and Nasdaq has found that after factoring in the costs and benefits, the overall cost is pretty much the same. In that case you might as well place a market order and save yourself the time.

>the “benefits” of waiting increase roughly in line with the risks of not getting a fill. That seems to indicate the market is especially efficient at pricing liquidity.

[1] see: "chart 4" https://www.nasdaq.com/articles/an-interns-guide-to-trading-...


For someone buying ETFs monthly, or something similar, this is mostly correct. Your 100-share order is unlikely to move the market, and the time you spend passively setting up the order and then babysitting it is unlikely to be worth the investment.

In crypto markets the difference between maker and taker fees can sometimes make it worth waiting, but keep in mind that every minute you spend without having made the trade is price risk you are exposed to. Saving 5 bps on fees might lead to you losing 50 bps in price movement.
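In concrete terms (1 bp = 0.01%), on a hypothetical $10,000 order:

```python
# Basis-point arithmetic on a hypothetical $10,000 order: a 5 bp fee
# saving vs. a 50 bp adverse price move while the order sits unfilled.
BP = 0.0001                      # one basis point = 0.01%
notional = 10_000.0

fee_saved = notional * 5 * BP    # maker vs. taker fee difference
price_risk = notional * 50 * BP  # adverse move while the order rests
print(f"fee saved: ${fee_saved:.2f}, potential slippage: ${price_risk:.2f}")
```

Five dollars saved in fees against fifty dollars of potential slippage - the asymmetry is the whole argument.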


For bootstrapped projects without product-market fit - which is what my side projects are - I self-host.

Postgres, hot-hot, with 2 servers.
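A minimal sketch of what a two-server streaming-replication setup might look like (parameter names are from stock PostgreSQL; the hostname and user are placeholders, and "hot-hot" specifics such as failover tooling are out of scope here):

```
# postgresql.conf on the primary
wal_level = replica
max_wal_senders = 3

# On the standby: create an empty standby.signal file in the data
# directory, and set the connection back to the primary:
primary_conninfo = 'host=db1.example.internal user=replicator'
```

With hot_standby enabled (the default), the standby can serve read-only queries while replicating.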

For clients I always recommend a PaaS.


Try the Steelcase Leap v2.

The Herman Miller Aeron was the second choice, but I ended up going with the Steelcase.

I've had it for almost a year and would still recommend it.


Similar to "Bourne shell" and "Bourne-again shell", aka bash.

I can see the evolution now. "nushell". Next iteration: "nuer shell" (newer) or "renushell" (renew)


It’s because the average person has so many passwords, in various formats, that they forget them. But that person most likely has access to their email. Instead of taking the user on a password-reset journey, just shortcut to login. The attack vector is restricted to email no matter what.


I suspected that was the case, but I really wish they'd give me an option.


I’ve answered this previously here.

https://news.ycombinator.com/item?id=24238783

In a nutshell it’s to handle federated identities.


The reason your username and password are on different pages is to handle federated identities. Take a typical SaaS product. Initially you build your own username-and-password login. As you grow, your users ask to log in using Gmail, LinkedIn or Microsoft so they don't have to remember multiple usernames and passwords. Enabling third-party login means you have to redirect the user to the third party's login page to authenticate.

To accommodate that, you design your page so the user first enters their username. In your system you check, based on the email, who the identity provider is, and redirect to that login journey.

E.g. if it's Microsoft, you redirect to the Microsoft login page to authenticate.

If successful, the third-party login provider sends the user back to your app with a JWT. In your app you check whether the JWT is valid - if so, allow access.

If, when the email is first entered on login, the identity provider turns out to be your own app, you redirect to your own password page.
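The routing step described above (sometimes called "home realm discovery") can be sketched like this - a toy illustration, with the domain table and URLs entirely made up:

```python
# Toy sketch of "home realm discovery": map the user's email domain to an
# identity provider, then redirect to that provider's login page.
# All domains and URLs here are illustrative placeholders.
IDP_BY_DOMAIN = {
    "contoso.com": "https://idp.contoso.example/authorize",
    "gmail.com": "https://idp.google.example/authorize",
}
LOCAL_LOGIN = "/login/password"  # fall back to our own password page

def login_redirect(email: str) -> str:
    domain = email.rsplit("@", 1)[-1].lower()
    return IDP_BY_DOMAIN.get(domain, LOCAL_LOGIN)

print(login_redirect("alice@gmail.com"))   # third-party provider
print(login_redirect("bob@example.org"))   # our own password page
```

Real systems usually key on a per-tenant configuration rather than a hardcoded domain table, but the shape is the same: email in, login journey out.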


JWT == JSON Web Token

I had to look it up.

https://en.m.wikipedia.org/wiki/JSON_Web_Token
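For anyone else looking it up: a JWT is just three base64url-encoded segments joined by dots (header.payload.signature). A quick sketch of taking one apart - the token is built inline with alg "none" so there is no real signature to verify:

```python
import base64
import json

def b64url(data: dict) -> str:
    """JSON-encode a dict and base64url it with the padding stripped."""
    raw = json.dumps(data, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# Build an illustrative (unsigned) token: header.payload.signature
token = ".".join([b64url({"alg": "none", "typ": "JWT"}),
                  b64url({"sub": "user-123"}),
                  ""])  # empty signature segment for alg "none"

def decode_segment(seg: str) -> dict:
    pad = "=" * (-len(seg) % 4)          # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(seg + pad))

header, payload, _sig = token.split(".")
print(decode_segment(header), decode_segment(payload))
```

In real use the signature segment is what makes the token trustworthy; a verification library checks it against the provider's key before the claims are believed.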


That seems like a strange flow: the user first has to input their email on your app, then you redirect to Microsoft, where they have to input their Microsoft email and password, and then they're redirected back to your app.

This means the user now has to remember which email they used on your app, which is not very different from remembering which third-party provider they used before.

Maybe I'm missing something, but how would you explain why Google does this two step login process?


You often don't have to put in the email again, thanks to e.g. the login_hint passed along to the provider.

And then, if you're already logged in according to the auth provider, you don't have to type your password either.

A good thing about this is that the providers can require different kinds of MFA at their discretion, though.

But, what would happen to that poor app if I have a Live account associated with my Gmail address and a Google account associated with my O365 mail? ....

Come to think of it, I have an email account to which I have associated an MS Live account AND an O365 corporate account, and a Google account ... Very confusing ...
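The hint mechanism mentioned above is typically OpenID Connect's login_hint parameter on the authorization request. A sketch of building such a URL - the client ID, redirect URI, and provider hostname are all placeholders, not a real registration:

```python
from urllib.parse import urlencode

# Sketch of an OIDC authorization request carrying login_hint so the
# provider can pre-fill its account chooser. client_id / redirect_uri /
# the idp.example host are placeholders for illustration only.
params = {
    "client_id": "my-app-client-id",
    "redirect_uri": "https://app.example/callback",
    "response_type": "code",
    "scope": "openid email",
    "login_hint": "alice@example.com",   # the email the user already typed
}
url = "https://idp.example/authorize?" + urlencode(params)
print(url)
```

The provider is free to ignore the hint, which is also why the multiple-accounts confusion above can still surface an account picker.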


Typically you don't even ask a user for an e-mail for an OAuth-based login. I think you're talking about OpenID Connect, where you indeed need the e-mail to know which login provider is used. I haven't seen that in the wild for a long time though; most sites that offer "Login via X" use an OAuth 2.0-based login flow, either with server-side tokens (e.g. GitHub) or a JWT (e.g. Google).


Nitpick: OIDC is built on top of OAuth.


There are many websites without the option of federated identities that still do it - the two main supermarkets here in the UK, for example (tesco.com, sainsburys.co.uk).

Maybe they're just joining a trend that they don't realise has benefits to others but not themselves?

