
I don't know, it reminds me of LiveScript plus goroutines. http://livescript.net/


> Once you get used to biting the bullet and hitting escape instead of jj ... it just works, everywhere.

Well, you can also invest a small amount of time and type in:

:imap jj <Esc>


> An attacker controlling another terminal

How about controlling not another terminal, but the same root terminal, by sending keys from another Xorg terminal window, no tmux required?


Wayland fixes that and is rapidly phasing out Xorg. It has long been understood that X is not secure in this kind of scenario; it has NOT long been understood that tmux isn't (or at least, I certainly had never heard this).


That threat model essentially prohibits "tmux attach", which allows an attacker running as your user to connect to your terminal session, so I don't think it's a particularly useful threat model here. That's basically exactly what we signed up for by using tmux.


This is definitely a useful threat model because people are running tmux on servers and almost certainly do not realize that this can happen.

You do appear to be correct that it's exploitable via other, also trivial, means. That does not make the situation any less bad.


Running on a server doesn't change anything: you'd need to be running on a server where you routinely give people who shouldn't have root access an account with password-based sudo privileges. And you'd be relying on your attacker not to, say, simply put aliases into your shell, replace your shell, modify your PATH, add an LD_PRELOAD, ptrace your processes, etc.

That should be absolutely no one.


I agree, GHC has the best type inference that I know of.


How about used ones from eBay?


How?


If you push to keep your code organized into modules with simple function APIs, and make sure that code outside your modules doesn't use the package internals directly, then your function APIs can easily be extracted into REST/gRPC calls, and the only thing that changes (to a first approximation) is that your internal service (function) calls become external service (API) calls.

Obviously you now need to add an API client layer, but in terms of code organization, if your packages are cleanly separated then you've done a lot of the work already. (Transactions are the obvious piece you'll often have to rethink when crossing a service boundary.)

The advantage of this approach is that it's much easier to refactor your internal "services" when they are just packages, than it is to refactor microservices after you've extracted them and set them free (since upgrading microservice APIs requires more coordination and you can't upgrade clients and servers all in one go, as you often can inside the same codebase).
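A minimal Go sketch of that shape (all names invented, not from any particular codebase): callers depend on a small interface, so the in-process implementation can later be swapped for an API client without touching call sites.

    package main

    import (
        "context"
        "fmt"
    )

    // The "function API" callers depend on.
    type UserService interface {
        GetUser(ctx context.Context, id string) (string, error)
    }

    // Monolith version: a plain in-process package.
    type localUsers struct{}

    func (localUsers) GetUser(_ context.Context, id string) (string, error) {
        return "user-" + id, nil
    }

    // Extracted version: same interface, but the body would issue a
    // REST/gRPC request instead of a local call (client code elided).
    type remoteUsers struct{ baseURL string }

    func (r remoteUsers) GetUser(_ context.Context, id string) (string, error) {
        // e.g. GET r.baseURL + "/users/" + id
        return "user-" + id, nil
    }

    func main() {
        var svc UserService = localUsers{} // swap in remoteUsers{...} after extraction
        name, _ := svc.GetUser(context.Background(), "42")
        fmt.Println(name)
    }

The transaction caveat above is exactly what this interface hides: two local calls inside one database transaction can't be split across a remoteUsers-style boundary without rethinking.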


I've been there. This only seems to 'work'--until you try to raise throughput/reliability and lower errors/latency. What ends up happening is that the module boundaries that made sense in a monolith don't make sense as microservices when communications are allowed to fail. Typically the modules are the source-of-truth for some concern with consumers layered above it. This is the worst pattern with microservices where a synchronous request has to go through several layers to a bottom-level service. With microservices you want to serve requests from self-contained slices of information that are updated asynchronously. The boundaries are then not central models but rather one part of a workflow.
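To illustrate the "self-contained slice" point, a hedged Go sketch (invented names): the service maintains its own read model, updated asynchronously from events, and serves requests from that local copy instead of synchronously fanning out through layers.

    package main

    import (
        "fmt"
        "sync"
    )

    // An event this service subscribes to (in production, via a message bus).
    type PriceUpdated struct {
        SKU   string
        Price int
    }

    // Local read model: the only data this service needs, kept in-process.
    type Catalog struct {
        mu     sync.RWMutex
        prices map[string]int
    }

    // Apply folds an event into the read model; called by the consumer loop.
    func (c *Catalog) Apply(e PriceUpdated) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.prices[e.SKU] = e.Price
    }

    // Price serves a request without leaving the process — no synchronous
    // call chain down to a bottom-level service.
    func (c *Catalog) Price(sku string) (int, bool) {
        c.mu.RLock()
        defer c.mu.RUnlock()
        p, ok := c.prices[sku]
        return p, ok
    }

    func main() {
        c := &Catalog{prices: map[string]int{}}
        // In production this event arrives asynchronously; applied by hand here.
        c.Apply(PriceUpdated{SKU: "sku-1", Price: 999})
        if p, ok := c.Price("sku-1"); ok {
            fmt.Println("price:", p)
        }
    }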


Or you can do poor man's microservices and use the same monolith with different production flags to load balance it.

Keep all your code in one repo, deploy that codebase to multiple servers, but have it acting in different capacities.

1 email server, 5 app servers dishing out HTML, 2 API servers

Etc

It works very well and was able to handle spikes of traffic during Super Bowl ads without any problems.
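A hedged sketch of that setup in Go; the ROLE environment variable is an invented stand-in for whatever production flag selects the capacity.

    package main

    import (
        "log"
        "net/http"
        "os"
    )

    func main() {
        // Same binary everywhere; the deployment flag picks the role.
        switch role := os.Getenv("ROLE"); role {
        case "api":
            http.HandleFunc("/api/", func(w http.ResponseWriter, r *http.Request) {
                w.Write([]byte(`{"ok":true}`))
            })
        case "web":
            http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
                w.Write([]byte("<html>hello</html>"))
            })
        default:
            log.Fatalf("unknown ROLE %q", role)
        }
        log.Fatal(http.ListenAndServe(":8080", nil))
    }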


This is the biggest favor you can do for yourself. The developer experience is as easy as production, without descending into container-induced madness.


Testing is a breeze too because you are using the same tools across the board.

I don't know why it fell out of fashion, but for your average web app it is the gold standard imo.


This is exactly it.


Elixir and Phoenix. The contexts pattern used in Phoenix is the most modular, easily microserviced way of structuring apps I’ve ever used. I slapped myself on the forehead when I first saw it. Duh. It’s really fantastic. Highly recommend


Do you have a good reference link to learn more about what you're talking about?



Service Oriented Monolith. I.e. you can organise a monolith in pretty much the same way you would organise a microservice architecture.


"Service Oriented Monolith", I'm loving it ! ;)


Bingo.

A properly organized codebase scales very well when partitioned into services.

It is my default approach for all new projects for the last 10 years or so.


https://medium.com/@dan_manges/the-modular-monolith-rails-ar...

His talk was pretty cool and helpful if you want to split a monolith!


Agreed, I'm not sure whether one-letter (immutable) Greek variables are better than properly named variables.

Another consideration: a formal, rule-based math syntax could be checked by a computer (a theorem prover).
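For instance, here is a statement a proof assistant such as Lean 4 can check mechanically (Nat.add_comm is from Lean's standard library; the theorem name is invented):

    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b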


Where widely understood, I consider a Greek variable to be vastly superior. The problem is how well it's understood. Delta is an example of a succinct variable with a well-defined meaning; encapsulating it in a named variable requires either a long name or leaving out some nuance (delta isn't 'change', which is ambiguous in English, but the difference between two measurable things).

Keep in mind we use special single-letter symbols all the time, sometimes multiple for the same concept: multiply, divide, add, subtract. You learned them all in early education along with everyone else, and now you have a universal set of symbols to express arithmetic concepts.

Would we be better off if every occurrence of '+' were replaced with 'sum'? I don't think so, but that doesn't necessarily mean that loading up on single-letter variables that people aren't familiar with is better. It's entirely down to how familiar the target audience is with them. A library targeted at scientists might benefit from more of them, while one aiming to be accessible to many people might not.


Single-letter variables (including Greek ones) are great when they stand for something, or are well known enough that they don't need to stand for anything.

Stick to convention and use stuff like A for array, i for index, r for root, δ for a small change (a delta), ε for error, Σ (Greek S) for sum, Π (Greek P) for product, etc., and you're fine.

It's only when you start using these letters as if they were free variables, when they really aren't, that you get into trouble with comprehension, especially from people not in your field.
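As a small illustration (a sketch, not from the thread): languages with Unicode identifiers let you use those conventional letters directly, e.g. ε as a tolerance in Go.

    package main

    import (
        "fmt"
        "math"
    )

    // approxEqual reports whether a and b differ by less than ε.
    func approxEqual(a, b, ε float64) bool {
        return math.Abs(a-b) < ε
    }

    func main() {
        fmt.Println(approxEqual(0.1+0.2, 0.3, 1e-9)) // true
    }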


I think this is somewhat similar to many Chinese characters vs. ASCII: neither is the superior notation. I prefer to write words by concatenating multiple ASCII letters instead of using single-letter symbols, but that's due to my cultural background.

