
I don't think you should be intimidated just by reading the article itself. It uses several domain-specific terms, but you would encounter that in many other contexts - for instance, a group discussing an intricate board game you are seeing for the first time.

However, unlike board games, where the concepts can be explained to you in a few minutes (usually it becomes clear only when you play), a lot of mathematics, especially algebraic geometry/number theory, can have many layers of dependencies which take years to absorb.

It would be interesting to compare it to understanding a large open source project like the Linux kernel well enough to contribute. I would say that is not as conceptually deep as the mathematics of the article (while still having a few great ideas). But understanding the source would require familiarization with 'tedious details', which, incidentally, is what this article is also about.

So the issue, stated this way, is not so much raw talent as time and effort. This leads to the topic of motivation - finding something great in an idea can lead to investment in the subject. For those more talented, the journey might be easier.

Alan Kay's maxim is crucial - a change of perspective is worth 80 IQ points. A long sequence of technical steps can be compressed into a single good intuitive definition/idea, the way the complexity of navigating a path becomes clear from a higher viewpoint.

Grothendieck, considered by many to be the best mathematician of the past century, made the point that several of his colleagues were more proficient at symbolic manipulation, but he was able to look at things from a more foundational perspective and could make great contributions.

Here's a good essay by Thurston, "Proof and Progress in Mathematics": https://arxiv.org/pdf/math/9404236

He discusses this problem of being intimidated by jargon.


One of the motivations of copyleft licenses like the GPL, which are a part of open source, was the freedom for users to see and modify code. The fact that the user works with the software in a browser rather than inside a native app is a technicality which shouldn't mean that the principle becomes invalid.

Of course, this makes it harder for developers to monetize their work. But instead of framing the discussion in terms of these conflicting interests and finding a balance, the term 'open source' becomes a debate target (even though the OSI definition includes the AGPL, which is also radioactive if one wants to monetize the work).

So we have three parties: 1) Users, 2) App Developers, who write commercial closed-source code for the user-facing app, and 3) Dependency Developers, who write code used by App Developers.

(There is a simplification here, as 2 can be a startup writing and selling a closed-source dependency used by other developers.)

Just as App Developers would like to monetize via user payments, some licenses allow the same option for Dependency Developers while simultaneously allowing the source code to be available and modifiable.

The basic idea behind such a license is 'free of cost and inspectable/modifiable code for almost all users, but commercial for large companies making significant revenue from the software'.

There needs to be some work done to make the license predictable - which users it requires to pay, and the price involved.


It is absurd to call this license change stealing when the previous work is still available under the original license. This is more like someone who has been giving to the community, still continuing to give, but under a slightly stricter license. Do you expect someone who does philanthropic work, gets contributions from others, but later becomes less philanthropic, to change their name?

> NEVER NEVER NEVER have had the traction

There is plenty of even closed-source software which has traction in many domains, let alone software released under a license which, as antirez points out, allows users to freely run their websites on Redis, modify and redistribute the code, etc. (with the exception of running hosting services like Amazon does).

For instance, it would be amazing, and a great improvement, if there were a top-quality CAD program with a license similar to Redis's.


In this discussion of a specific point in the post, the promise of the Hylo language and mutable value semantics can be overlooked.

Namely, we get a lot of the convenience of functional programming (mutating one variable doesn't change any other variable) with the performance of imperative languages (purely functional data structures have higher costs relative to in-place mutation and are more GC-intensive).

https://docs.hylo-lang.org/language-tour/bindings
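To make the contrast concrete, a minimal OCaml sketch (OCaml standing in for a generic language here, not Hylo): mutable structures alias, while immutable updates leave the original untouched. Hylo's pitch is the second behaviour without giving up in-place performance.

    (* Reference semantics: a and b alias, so mutating b changes a. *)
    let a = [| 1; 2; 3 |]
    let b = a
    let () = b.(0) <- 99        (* a.(0) is now 99 too *)

    (* Immutable update: ys is a fresh list; xs is untouched. *)
    let xs = [ 1; 2; 3 ]
    let ys = 99 :: List.tl xs   (* xs is still [1; 2; 3] *)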


OCaml is not an unsophisticated language. It inherits the features of ML and has first-class modules, which are not present by default in Haskell (only via Backpack). Not having first-class modules leads to a lot of issues.

Also, there is a better story for compilation to the web.
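For the unfamiliar, a minimal sketch of what first-class modules buy you in OCaml: a module becomes an ordinary value that can be passed to functions and chosen at runtime (DEVICE and console are made-up names for illustration).

    module type DEVICE = sig
      val write : string -> unit
    end

    (* A module packed as an ordinary value... *)
    let console : (module DEVICE) =
      (module struct let write s = print_endline s end)

    (* ...so it can be passed around like any other value. *)
    let log (dev : (module DEVICE)) msg =
      let module D = (val dev) in
      D.write msg

    let () = log console "hello"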


OCaml's type system is quite janky and simplistic compared to Haskell's. The first class module system is fairly nice, although it leads to an annoying problem where now you kind of have two "levels" to the language (module level and normal level). This is arguably analogous to Haskell having a "term level language" and a "type level language", where the type system is more prolog-y than the term language. Also, Haskell's type system is powerful enough to do most of the things you'd want the OCaml module system for, and more. I do occasionally miss the OCaml module system, but not most of the time.
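To illustrate the "two levels" point, a small hypothetical OCaml sketch: ordinary functions operate on values, while functors operate on modules, each with its own application syntax and typing rules.

    (* Term level: an ordinary function on values. *)
    let twice f x = f (f x)

    (* Module level: a functor, a "function" from modules to modules. *)
    module type ORDERED = sig
      type t
      val compare : t -> t -> int
    end

    module MakePair (O : ORDERED) = struct
      type t = O.t * O.t
      let compare (a, b) (c, d) =
        match O.compare a c with 0 -> O.compare b d | n -> n
    end

    module IntPair = MakePair (struct type t = int let compare = compare end)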


Conversely, the OCaml module system is powerful enough to do all the things you'd want to do with Haskell, except the OCaml module system is nice to use.

Anyway, the issue has nothing to do with relative power. The issue is that the Haskell community encourages practices which lead to unreadable code: lots of new operators, point-free style, fancy abstractions. Meanwhile, the OCaml community was always very different, with a general dislike of overly fancy things unless they were unavoidable.


> except the OCaml module system is nice to use

This comment doesn't lead me to believe you've ever worked in an OCaml shop. It's only "nice to use" for trivial use cases, but it quickly devolves into a "functorial" mess in practice.

> the OCaml community was always very different, with a general dislike of overly fancy things unless they were unavoidable

This is the exact thing that people always say when they are coping about their language being underpowered.


If by "encourages" you mean "has features", then yes. The typical haskell shop doesn't really encourage complex feature use, it's the people learning/online who don't actually need to work within their solutions, do. That's what seems to draw (some) people to haskell.


"Make illegal states unrepresentable" can be done by encapsulating the variables inside a single data object(struct/class/module) and only exporting constraint respecting functions. Also, Algebraic Data Types can be present in FP/non-FP languages.

The Result monad can be implemented in any static language with generics (you just have to write two functions), and in a dynamic language this is easy too (though return will have to be written like T.return, as there is no implicit inference).
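Those "two functions" are return and bind; a minimal OCaml sketch using the built-in result type (safe_div is a made-up example):

    (* return wraps a value; bind sequences steps, short-circuiting on Error. *)
    let return x = Ok x

    let bind r f = match r with
      | Ok x -> f x
      | Error e -> Error e

    let safe_div a b = if b = 0 then Error "divide by zero" else Ok (a / b)

    let result =
      bind (safe_div 10 2) (fun x ->
      bind (safe_div x 0) (fun y ->
      return (x + y)))
    (* result = Error "divide by zero" *)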

I didn't get the relation between FCore/IShell and DSLs; the main requirement for FCore is a good immutable library. Macros help DSLs, though that is orthogonal.

But really, my main point is that OOP vs FP is a red herring, as 3 of the 4 aspects which characterize OOP can be done in both OOP and FP, with different syntax. We shouldn't conflate the first 3 with the 4th aspect - mutability.

An OOP language with a better extension mechanism for classes plus immutable data structure libraries, and an FP language with first-class modules, would converge. (Ref: the Racket page below and the comment on Reason/OCaml down the page.)

See the Racket page on the inter-implementability of lambda, class, and unit (i.e. a first-class module): https://docs.racket-lang.org/guide/unit_versus_module.html. Racket has first-class 'class' expressions, so a mixin is a regular function.


FP nerd: The pure core is nice and composable, with the imperative shell at the boundary.

State Skeptic: Yes, But! How do you compose the 'pure core + impure shell' pieces?

FPN: Obviously, you compose the pure pieces separately. Your app can be built using libraries built from libraries... And then you build the imperative shell separately.

My take is that the above solution is not so easy (at least to me!), and not easy for both FP and non-FP systems.

Take an example like GUI components. Ideally, you should be able to compose several components into a single component (culminating in the app), and not have a custom implementation of a giant state store kept in something like Redux, with the views and modifiers defined against that store.

Say you have a bunch of UI components, each given as a view computed as a function from a value, plus possible UI events which can either modify the value, remain unhandled, or be configurable as either. Ex: a dialog box which handles text events but leaves the 'OK' submission to the container.
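As a rough OCaml sketch of such a component (all names here are hypothetical, and a string stands in for a real view type): returning None from the handler delegates the event upward.

    (* A component bundles an initial value, a view, and a default handler. *)
    type ('model, 'event) component = {
      init   : 'model;
      view   : 'model -> string;                  (* stand-in for a view type *)
      update : 'event -> 'model -> 'model option; (* None = unhandled, bubble up *)
    }

    type dialog_event = Set_text of string | Ok_pressed

    let dialog : (string, dialog_event) component = {
      init = "";
      view = (fun text -> "[dialog: " ^ text ^ "]");
      update = (fun ev _text ->
        match ev with
        | Set_text s -> Some s   (* text events handled locally *)
        | Ok_pressed -> None);   (* 'OK' submission left to the container *)
    }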

There are at least two different kinds of composability (cue the Locke quote in SICP Ch. 1) - aggregation and abstraction. Ex: having a sequence of text inputs in the document (aggregation), and then abstracting to a list of distances between cities. This abstraction puts constraints on the values of the parts, both individually (positive number) and across parts (triangle inequality). There is also extension/enrichment, the dual of abstraction.

This larger abstracted component is itself now a view dependent on a value and more abstract events. But composing recursively leads to state being held in multiple layers and computations repeated across layers. This is somewhat ameliorated by sharing of immutable parts and React-like reconciliation. But you have to express your top-down functions incrementally, which is not trivial.


FP is not a silver bullet. GUI is the classic OOP showcase.

> Ideally, you should be able to compose several of them into a single app and not have a custom implementation of a giant state

If you are suggesting that components store their state, I'm not sure about "ideal" there. That works well for small GUI applications. In GUI applications of modest size, you do want a separate, well-organized and non-redundant data layer you can make sense of, at least in my experience. Qt, specifically, allows you to do both things.


This is a digression, but regarding OOP, my somewhat provocative view is that it is not a natural thing; in most languages, it is at least 4 different concepts: 1. Encapsulation/Namespacing, 2. Polymorphism, 3. Extensibility (inheritance is a special case), 4. Mutability.

These four concepts are forced/complected into a 'class' construct, but they need not be.

In particular, FP only varies on 4, and languages like ML and Clojure do 1, 2, and 3 even better than OOP languages: modules for encapsulation, dispatch on the first or even all arguments for polymorphism, and first-class modules, ML-style, for extensibility.

Aside: there was a recent post (https://osa1.net/posts/2024-10-09-oop-good.html) (by someone who worked on GHC, no less) favorably comparing how OOP does extensibility to Haskell typeclasses, which are not first-class. But modules in ML languages can do what he wants, and in a much more flexible way than inheritance!

There is also the dynamic aspect of original OOP - message passing instead of method invocation - but this is about dynamic vs static rather than OOP vs FP.

What OOP languages have managed to do, which static FP hasn't done yet, is the amazing live, inspectable environments which lead to iterative development, like we see in Smalltalk. The challenge is to do this in a more FP way while staying modular.


Interesting link, thanks.


This page (https://reasonml.github.io/docs/en/module) is useful to see how an FP language can do what he wants. Because we have functors, which are functions from a group of modules/classes to another module/class, we can have composition, inheritance (single/multiple), mixins, etc.
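For instance, a mixin as a functor, in plain OCaml syntax (Reason is the same language with a different surface syntax; AddOrdering is a made-up example): it takes any module matching COMPARABLE and extends it with derived operations, with include playing the role of inheritance.

    module type COMPARABLE = sig
      type t
      val compare : t -> t -> int
    end

    (* The "mixin": extend any comparable module with derived ops. *)
    module AddOrdering (C : COMPARABLE) = struct
      include C
      let less a b = compare a b < 0
      let max a b = if compare a b >= 0 then a else b
    end

    module IntOrd = AddOrdering (struct type t = int let compare = compare end)
    (* IntOrd.less 2 3 = true *)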


To your main point: I wouldn't say exactly that the component stores the state, but rather that every component provides an initial value, possible events, and a default event handler, which is a function from value to value. In effect, this is partially 'storing local state', but the above pieces can be composed to create a container component.

Note that there is no real option here - the app won't be reimplementing how a key is handled in a text box. But composability means that the same principle should hold not just for OS/browser components but also for higher-level components (a custom map, or a tree view with restrictions on the types and number of nodes - these should also have default handling and delegation to upper levels).

The global store choice makes it harder to have component libraries. But the composable alternative has its problems too - redundancy, and communication which skips layers (which requires 'context' in React).


> But, composing recursively leads to state being held in multiple layers and computations repeated across layers.

True, which is why re-frame has a dependency graph and subscriptions that avoid re-computation, i.e. the data dependencies are outside any view tree.

If data changes, only active nodes (ones that have been subscribed to) will re-compute. If nothing changed in a node, any dependent nodes will not re-compute.

It's a beauty.


Doesn't skipping view layers mean that constraints held by intermediate layers can be violated?

Say a city stats (location, weather) component is held inside a region component, which in turn is in charge of a product-route-generating component (which also contains a separate 'list of products' component).

You can't update the city coordinates safely from the top, as the region component enforces that the cities are within a maximum distance of each other. The intermediate constraint would have to be lifted to the higher level and checked there.

Edit: There is also a more basic problem. When your app has multiple types of data (product, city), the top-level store effectively becomes a database (https://www.hytradboi.com/2022/your-frontend-needs-a-databas...). This means that for every update, you have to figure out which views change, and more specifically, which rows in a view change. This isn't trivial unless you do wholesale updates (which is slow), as effects in a database can be non-local. Your views are queries, and queries on streaming data are hard.

The whole update logic could become a core part of your system modelling, which creates an NxM problem (store update, registered view -> does the view update?). This function requires factoring into local functions for an efficient implementation, which is basically the data dependency graph.


Reasonable observations on office behaviour are being processed into glib stereotypes about culture. Employees routinely show discontent in India (just one instance: https://www.reuters.com/world/india/workers-apple-supplier-f...) - so much so that labour disputes are considered a major obstacle to corporate investment. Like Europe, India has labour laws which make people hard to fire.

There are other factors involved - if an H1B employee, whose job security is tied to the employer, risks a 10x or greater salary cut by going back home, then fear for job security leading to such behaviour is a given.


Sure, files are a detail. But the concept of a namespace doesn't need to be attached to a file; it can just be a convenient convention to attach a namespace to a file. In live environments like Smalltalk images, there are no files.

I think what /u/dominicrose is trying to get at is that OOP bundles together things which need not be bundled, and in doing so, one loses flexibility. OOP is Encapsulation (can also be done via namespaces/modules), Polymorphism (can be done by functions with dispatch based on the first argument), and Reuse/Extensibility (inheritance is a special case; there are many other ways to build a new class/data type from older ones, composition being one).

Often this is not recognized in discourse, and we end up with an 'OOP vs FP' discussion [1] even though the issue is not im/mutability. In fact, the discussion in the article of [1] is actually about what is being discussed in this article: should one do polymorphism via named implementations, as in ML, or anonymous ones, as in Haskell/Rust? Inheritance in standard OOP languages counts as named implementations, since the subclass has a name. Named implementations require more annotation, but they also have certain advantages, like more clarity about which implementation is used, and they don't expose type parameters to the user of the library (which happens with typeclasses).

[1] https://news.ycombinator.com/item?id=41901577
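A standard illustration of named implementations, using OCaml's stdlib Set.Make: two orderings for int coexist, and each use site names the one it wants, whereas a typeclass allows one canonical instance per type.

    module IntAsc = struct type t = int let compare = compare end
    module IntDesc = struct type t = int let compare a b = compare b a end

    module AscSet = Set.Make (IntAsc)
    module DescSet = Set.Make (IntDesc)

    (* The choice of ordering is explicit and visible in the code. *)
    let asc = AscSet.elements (AscSet.of_list [3; 1; 2])    (* [1; 2; 3] *)
    let desc = DescSet.elements (DescSet.of_list [3; 1; 2]) (* [3; 2; 1] *)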


> In live environments like Smalltalk images, there are no files.

Of course there are: image file, sources file, changes file.

And, of course, fileOuts to share code with others and to archive code -- so we can have a reproducible build process.


OK, but files are external to the system. Within the Smalltalk environment, everything is an object, and files are required only because the ambient OS works with files. You can say that some objects within the environment, containing program source, play the same role as source files in usual programming. Even there, one can have a richer interface than text/binary files.


Long ago, Smalltalk's lack of explicit support for important aspects of programming was recognised:

"Programs consist of modules. Modules provide the units to divide the functional and organizational responsibility within a program."

"An Overview of Modular Smalltalk"

https://dl.acm.org/doi/pdf/10.1145/62083.62095


Yes, I do agree with you - live image programming has to be composable/comprehensible/reproducible, and crucial state shouldn't be in anonymous objects. (I've even been thinking of replacing mutable objects with pure functions modifying a tree of data.) Types are another direction, and the work on Strongtalk has proved influential for popular VMs.

But we don't need to go back from objects to files, except for the purpose of interacting with the OS. Richer structures actually help comprehensibility - for instance, revision control operating at a structural level. UNIX would have been much nicer if something like nushell had been adopted from the beginning, and the 'little pieces' used to build the system worked on structured data.


One interesting result implies that numbers like 3^(sqrt(3)) will be transcendental (i.e. no polynomial will evaluate them to 0).

https://en.wikipedia.org/wiki/Gelfond%E2%80%93Schneider_theo...
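Spelled out, the theorem in symbols, with 3^(sqrt(3)) as an instance:

    % Gelfond-Schneider theorem:
    a, b \in \overline{\mathbb{Q}}, \quad a \neq 0, 1, \quad b \notin \mathbb{Q}
      \;\Longrightarrow\; a^b \text{ is transcendental}
    % Instance: a = 3 (a root of x - 3) and b = \sqrt{3} (a root of
    % x^2 - 3, and irrational), hence 3^{\sqrt{3}} is transcendental.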


Small but important correction: no polynomial with integer coefficients (equivalently, rational coefficients). p(x) = (x - 3^(sqrt(3))) is a perfectly fine polynomial with real coefficients.


Yes, I should have mentioned polynomials with rational coefficients (or indeed with any algebraic numbers as coefficients, due to the transitivity of being algebraic).


No polynomial with rational coefficients. Of course x-y evaluates to 0 when x=y, even if y is a transcendental number.

