There is a tendency here to pattern-match the advice in this article to frustrating experiences with office politics. But I don't see this as a post about politics.
Rather, what I got was that the spec is an incomplete description, and there needs to be more inquiry into how the software is going to be used. This can require a little 'going up the stack' to see how the business need relates to the software requirement (a single enterprise customer with a precise requirement, acquisition of a new customer, software for some internal vision of the CEO, etc.). Compare with a startup, where one doesn't even have a spec and is trying to find product-market fit, or with developers taking sales calls and finding new insights into how the software is actually being used.
A completely broken spec is indeed a failure of the management process. But, in general, it helps a library developer to think beyond the library spec and see how the library is actually used.
Hyperfiddle and Quadratic implement some of the items on your wishlist (richer data types, modern programming languages, working with arrays, better connectivity to data tools).
We can think of the compiler as a function from a string to a string: high-level code (HLC) to low-level code (LLC). The LLC can include the garbage-collection code (if it is run as a standalone executable, instead of garbage collection being done by a separate runtime).
The compiler executable itself runs in a compilation process P, which uses memory and has its own garbage collection. (The compiler executable was itself generated by a compilation, using a compiler written in Go itself (self-hosting) or, initially, in another language.)
But the compilation process P is unrelated to the process Q in which the generated code, LLC, will run when it is eventually executed. The OS that runs LLC doesn't even know about the compiler; LLC is just another binary file. Garbage collection in P doesn't affect garbage collection in Q.
Indeed, it should be easy for the compiler to generate an assembly program that keeps allocating memory until the system runs out, e.g. when compiling a loop that allocates a struct on each of a billion iterations. Unless, of course, you explicitly also generate a garbage collector as part of the low-level code.
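As a toy sketch of the point (all names hypothetical, TypeScript standing in for a real compiler): the compiler is just a function over text, and the leak lives in the emitted program, not in the compiling process.

```typescript
// A "compiler" is just a function from source text to target text.
// The memory this function uses while running (process P) is unrelated
// to the memory behavior of the program it emits (process Q).
type Compiler = (source: string) => string;

// Emits pseudo-assembly that allocates on every iteration and never frees.
// The *emitted* program leaks; the compiler process itself does not.
const compileLoop: Compiler = (_source) => {
  return [
    "loop_start:",
    "  call malloc       ; allocate a struct each iteration",
    "  ; no call to free ; leaks unless a GC was also emitted",
    "  dec r1",
    "  jnz loop_start    ; repeat a billion times",
  ].join("\n");
};

console.log(compileLoop("for i in 1..1e9 { s := new Struct{} }"));
```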
Your question does become very interesting in the realm of security. There is a famous paper, Ken Thompson's "Reflections on Trusting Trust", showing that a compiled compiler can still contain backdoors even if the compiled code is trustworthy and the compiler's source code is trustworthy, because the binary that compiled the compiler had backdoors.
The overtyping seems not about static vs dynamic but about nominal typing (functions written against the abstract type) vs structural typing / row polymorphism (functions written against the specific fields used). Go's interfaces are structurally typed, though this is less used. Do you have the same problem with overtyping when using structural typing?
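TypeScript, being structurally typed, makes the contrast easy to demonstrate (a minimal sketch with hypothetical types):

```typescript
// Structural: the function cares only about the fields it actually uses.
function describe(x: { name: string; age: number }): string {
  return `${x.name} (${x.age})`;
}

// Any value with those fields qualifies; no declared relationship needed.
const rex = { name: "Rex", age: 4, breed: "collie" };
describe(rex); // extra fields are fine

// Nominal style, simulated with a private brand: only values constructed
// as Person are accepted, even if another type has identical fields.
class Person {
  private readonly brand = "Person";
  constructor(public name: string, public age: number) {}
}
function describeNominal(p: Person): string {
  return `${p.name} (${p.age})`;
}
describeNominal(new Person("Ada", 36));
// describeNominal(rex); // error: rex lacks Person's private brand
```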
There was a very similar complaint [1] ("OOP is not that bad") favorably comparing the flexibility of OOP classes in Dart vs typeclasses in Haskell. But as pointed out by /u/mutantmell [2], this is again about the lack of 'structural typing' at the level of modules, with exported names playing the role of fields. First-class modules in functional languages like ML (the Backpack proposal in Haskell, or units in Racket) allow extensible interfaces and are even more expressive than classes in most usual OOP languages. First-class modules are equivalent to first-class classes [3], i.e. a class expression is a value, so a mixin is a regular function which takes a class and returns a class. Both are present in Racket.
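The comment names Racket, but TypeScript class expressions are also first-class values, so the mixin-as-a-function idea can be sketched there (hypothetical names):

```typescript
// A class is a value, so a mixin is an ordinary function
// from a class to a class.
type Constructor<T = {}> = new (...args: any[]) => T;

// Mixin: takes any class, returns a subclass with timestamping added.
function Timestamped<TBase extends Constructor>(Base: TBase) {
  return class extends Base {
    createdAt = new Date();
  };
}

class Note {
  constructor(public text: string) {}
}

// The class is passed around like any other value.
const TimestampedNote = Timestamped(Note);
const n = new TimestampedNote("hello");
console.log(n.text, n.createdAt);
```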
There seems to be an analogy to servers being capable enough for usual demand (income > spending) but failing due to downtime on some servers (losing a job) or peak loads (a medical emergency). This can be amortized by having data centers serve multiple apps (social security, insurance), but those don't always exist.
One main fault in the analogy is that in an economic crisis there is a vicious cycle: income loss lowers demand, which leads to more lost jobs. This coordination failure can be handled by fiscal/monetary policy. Server failure, even when widespread due to a virus, doesn't cascade recursively like that.
For sandboxing in JS, we can use sandboxed iframes or web workers. Both communicate with the host code via postMessage, which serializes an object, and this can be used as a base for implementing function calls.
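A minimal sketch of such a call layer over a Worker (message shape and names are illustrative, not a standard protocol):

```typescript
// Host side: correlate requests and responses by id, since postMessage
// only ships structured-cloned data, not live function references.
const worker = new Worker("plugin.js");
let nextId = 0;
const pending = new Map<number, (result: unknown) => void>();

function callPlugin(fn: string, args: unknown[]): Promise<unknown> {
  return new Promise((resolve) => {
    const id = nextId++;
    pending.set(id, resolve);
    worker.postMessage({ id, fn, args }); // serialized via structured clone
  });
}

worker.onmessage = (e) => {
  const { id, result } = e.data;
  pending.get(id)?.(result);
  pending.delete(id);
};

// Plugin side (plugin.js), sketched:
// self.onmessage = (e) => {
//   const { id, fn, args } = e.data;
//   const result = handlers[fn](...args); // dispatch to exposed functions
//   self.postMessage({ id, result });
// };
```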
Whereas, if I understand correctly, WASM can be provided with host-approved JS functions to call directly via importObject. This seems more convenient and faster.
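Roughly like this (module/field names such as "env", "log", and "run" must match what the .wasm file declares; they are assumptions here):

```typescript
async function loadPlugin() {
  // The host decides exactly which functions the module may call.
  const importObject = {
    env: {
      log: (x: number) => console.log("plugin says:", x),
      now: () => Date.now(),
    },
  };

  const { instance } = await WebAssembly.instantiateStreaming(
    fetch("plugin.wasm"),
    importObject
  );

  // Exported entry point; the plugin calls env.log/env.now directly,
  // with no serialization round-trip.
  (instance.exports.run as () => void)();
}
```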
But for a plugin system, many people would prefer to write plugins in JS itself, so for WASM plugins they might have to be compiled to WASM first. I don't know if there is a mature implementation of JS -> WASM.
> But for a plugin system, many people would prefer to write plugins in JS itself, so for WASM plugins they might have to be compiled to WASM first. I don't know if there is a mature implementation of JS -> WASM.
Not everyone, though, which is kind of the point of abstracting away your plugin language. People like Python, Lua, Go, Rust, etc. Some do like JS of course, but not everybody.
Re: mature implementations, I would guess that, worst case, QuickJS can probably be compiled to WASM, and it's small enough that your runtime will probably offset the extra blob size.
Could this ever function as a less resource-intensive substitute for Electron apps? WASM doesn't have DOM access, but could it be added in these extensions?
Edit: Maybe this question doesn't make sense as the OS would need to have the runtime installed, and if that can be assumed, we would have lightweight apps already.
Take a look at Electrobun. It's a Tauri alternative where you write TypeScript instead of Rust. App bundles are as small as 16MB, and it creates update patches for your app as small as 4KB.
It's written in Zig and Bun, and uses the system webview by default.
Have a look at Tauri for a less resource-intensive Electron alternative. It's written in Rust and supports any framework on the frontend, with strong access control and a lightweight backend.
You can often search for a big-name services vendor or big-name software vendor + your target customer and see if there are press releases. Many large companies work with specific partners for IT services, and these providers are the ones with the MSAs that you can attach to.
Examples: Cognizant, Tata, Accenture, etc.
It won't be easy unless you know someone on those teams already. Sometimes if you find a customer that loves your product but cannot procure through a startup, you can ask them if they can refer you to one of their partners and see if you can work something out with them (most of the time, those partners will want an arm and a leg).
This is something I think about, and there are some underlying technical issues as well in coordinating all the plugins with common interfaces. We do not have workflow-centric software even in the open-source world, which has managed to build a large family of apps in the usual mode.
Or maybe we have done it in the past (interactive software in Smalltalk) but have forgotten about it.
Also, the business reasons are not prohibitive: if a lot of users use the workflow model, there can be a store where they can request, and raise funds for, plugins with specific functionality. Developers won't ignore it, even if the moat is weak, as there is potential revenue. It would be like consultants providing solutions rather than selling a product.
In the 1980s and 1990s there was a ton of very interesting work done on deeply thought out user-centric software designed to augment human intelligence and give people maximum control over what they were doing while still being approachable. The Smalltalk stuff was some of it, but there was some pretty spectacular stuff back in the old Windows 3.x, macOS classic, and even MS-DOS days where apps would interact richly and you had document-centric customizable work flows. You even had things like (gasp) composability of applications in GUIs.
All of this was completely abandoned and forgotten because there's no money in it. Make software like that and there's no moat, and make it local and people will pirate it. Lock it down in the cloud and lock down the data and people will pay you.
That which gets funded gets built. We get shit because we pay for shit. People won't pay for good software because the flexibility and user-centrism of good software allows them not to.
This is a very cool video. I am interested in making a dynamic notes app with user-defined types and customizable flows. This is partly done already in Roam, AnyType, etc., if you include the plugin system. But it seems like there is a lot of history.
Nevertheless, I am more optimistic about such a product. The key issue is building composable interfaces that plugins can glue into.
Piracy is not such a big problem, as one can give the product free of charge to users and charge for hosting, or charge commercial businesses, for whom piracy is not a risk worth taking.
If anything, the business implication of a ubiquitous workflow interaction is its threat to the ads model (users don't visit social media sites but read in data via APIs). The ad model might need to be replaced by something like micropayments to content creators, but that is a distant issue.
This post doesn't seem like a criticism of FP so much as of the module system in Haskell; ML has a more powerful module system.
On r/haskell, user mutantmell gave an implementation of the code (assuming such a module system) closely following the Dart code given in the original post.
>(Java-style) OOP offers a solution for this: you can code against the abstract type for code that doesn't care about the particular instance, and you can use instance-specific methods for code that does. Importantly, you can mix and match this single instance of the datatype in both cases, which I believe to be a superior coding experience than having two separate datatypes used in separate parts of your code.
>"Proper" module systems (ocaml, backpack, etc) offer a better solution that either of these: when you write a module that depends on a signature, you can only use things provided by that signature. When you import that module (and therefore provide a concrete instance of the signature), the types become fully specified and you can freely mix (Logger -> Logger) and (SpecificLogger -> SpecificLogger) functions. This has the advantage of working very well with immutable, strongly-typed functional code, unlike the OOP solutions.
>This is in essence the same argument for row-polymorphism, just for modules rather than records. It can be better to code against abstract structure in part of your code, and a particular concrete instance that adheres to that structure in other parts.
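Not the ML module machinery the quote describes, but a rough TypeScript analogy of "code against the signature, instantiate with a concrete instance" (names illustrative):

```typescript
// The "signature": generic code can only use what it declares.
interface Logger {
  log(msg: string): void;
}

// "Functor body": sees only the signature, but the concrete type
// parameter L flows through unchanged.
function greet<L extends Logger>(logger: L): L {
  logger.log("hello");
  return logger;
}

// A concrete "instance" with extra, instance-specific operations.
class FileLogger implements Logger {
  constructor(private path: string) {}
  log(msg: string) {
    console.log(`[${this.path}] ${msg}`);
  }
  flush() {
    /* instance-specific operation */
  }
}

// At the instantiation site the type is fully concrete, so
// signature-level and FileLogger-specific calls mix freely.
const l = greet(new FileLogger("/tmp/app.log"));
l.flush();
```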