You seem to have a very specific use case in mind and I am not really sure whether Prolog is going to be a good fit, but there was recently a discussion about the language: https://news.ycombinator.com/item?id=40994552
Specifically, this online book (mentioned in that discussion) may be a good resource; I've used the author's content as a reference several times: https://www.metalevel.at/prolog
In a sense, doesn't speaking naturally in front of a camera require you to be more fake?
It's not the easiest thing to do: trying to address an audience without any feedback, looking at a black object sitting in a room, across multiple takes, with random interruptions to fix technical issues or change the shot, all while pretending it's one normal, continuous discussion.
People watch him because he is genuinely good at what he does, not because of his presentation or editing skills.
Hi Andy! Deno basically restored my hope of being able to understand (T|J)S tooling - thanks for working on this amazing project.
2 questions:
Support for TS in notebooks in VSCode(ium) is currently broken; are there plans for resolving issues with definitions used across different cells?
Future plans for the bundler sound interesting - is it something that could potentially replace e.g. esbuild for bundling frontend applications? I'm specifically curious because that would imply Deno's (static) import resolution for browser-run apps - one can use @luca's esbuild plugin, which is nice, but it interacts poorly with other, custom plugins and has some rough edges.
I don't think it's really about Latin vs Cyrillic. Cyrillic is indeed very natural even for Slavic languages that only use Latin (modulo language-specific tweaks), but the same results can be achieved with diacritics in Latin - take a look at the Czech, Slovak and Serbo-Croatian alphabets. On the other hand, the Polish alphabet really feels painful to an outsider, but that seems to be more about how it uses Latin, not about Latin itself.
> I can also visualize images in my head but they too are typically accompanied by some language like "I am now visualizing a Hot Dog".
I can certainly have thoughts that aren't accompanied by language, for example visualizing graph-like or higher-dimensional operations from math/CS more quickly than I could come up with a description of them. Or "simulating" physical objects, or even whole visual scenarios resembling real life.
But it makes me wonder whether this isn't once again about training, or about being "wired" for different types of thought. And if it's training, then specific language features may well force people to exercise and improve specific ways of thinking about problems. It's just that it doesn't have to be limited to language.
> Speakers of different languages express laugh differently in English too.
>
> Native English speakers write "hahaha" or "ha-ha-ha", but speakers of many Slavic languages write "ahahaha" or "a-ha-ha-ha", with leading "a".
>
> Native English speakers write ":)", but speakers of many Slavic languages write just ")", because ":" is reused on keyboard for additional Cyrillic letters (like Ж) and they don't use it even when typing in English.
Of course there's a difference between Eastern and Western Slavic languages, because the Western ones use Latin. In those, I've mostly seen "haha", both when talking in English and in $SLAVIC. At the same time, they can easily write ":)".
> Monads need to wrap each other, effects are more composable
It's really trickier than algebraic effects make it seem, though. Haskell-ish "monad transformers" as a stack of wrappers pick a concrete ordering of effects in advance (e.g. there's a difference between `State<S, Result<E, T>>` and `Result<E, State<S, T>>`, using Rust syntax), but effect systems like the one in Koka either have to make the same decision by running handlers in a specific order, or stick to a single possible ordering, e.g. by using one, more powerful monad. And then there are questions around higher-order effects - that is, effects whose operations take effectful arguments - because they have to "weave" other effects through themselves while preserving their behaviour, and that weaving seems to depend on the concrete choice of effects, so it doesn't compose easily. In a sense, languages like Koka or Unison have to be restricted in some way, giving up on some kinds of effects. I'm not saying that's a bad thing though; it's still an improvement over having a single effect (IO) or no effects at all.
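To make the ordering point concrete, here's a minimal Haskell sketch using the plain `transformers` package (my own toy example, not something from the thread): `StateT Int (Either String)` plays roughly the role of `State<S, Result<E, T>>`, and `ExceptT String (State Int)` the role of `Result<E, State<S, T>>`. The same two-line program behaves differently on failure, and the choice is baked into its type up front.

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)
import Control.Monad.Trans.State (State, StateT, put, runState, runStateT)

-- State wrapped around failure: a failure throws away the state update.
progA :: StateT Int (Either String) ()
progA = do
  put 1
  lift (Left "boom")

-- Failure wrapped around state: the state update survives the failure.
progB :: ExceptT String (State Int) ()
progB = do
  lift (put 1)
  throwE "boom"

main :: IO ()
main = do
  print (runStateT progA 0)              -- Left "boom"       (state is gone)
  print (runState (runExceptT progB) 0)  -- (Left "boom",1)   (state is kept)
```

Moving from one behaviour to the other means changing the type of the computation itself, and everything built on top of it.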
Being able to change the ordering of effects on the fly is a benefit of algebraic-effect systems. As you mentioned, `State<S, Result<E, T>>` and `Result<E, State<S, T>>` behave very differently. Algebraic effects let you switch between the two behaviours when you run the effects, whereas with monad transformers you have to refactor all your code to use `State<S, Result<E, T>>` instead of `Result<E, State<S, T>>`, or vice versa.
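For contrast, a sketch of that idea in an algebraic-effects style, here using Haskell's polysemy library purely as an example of the approach (signatures written from memory, so treat the details as approximate): the program is written once against abstract effects, and the ordering is only chosen when you run the handlers.

```haskell
{-# LANGUAGE DataKinds, FlexibleContexts #-}

import Polysemy (Member, Sem, run)
import Polysemy.Error (Error, runError, throw)
import Polysemy.State (State, put, runState)

-- One program against abstract effects; no ordering is committed to here.
prog :: (Member (State Int) r, Member (Error String) r) => Sem r ()
prog = do
  put (1 :: Int)
  throw "boom"

-- State handled outside Error: the update survives the throw.
kept :: (Int, Either String ())
kept = run . runState 0 . runError $ prog        -- (1, Left "boom")

-- Error handled outside State: the throw rolls the state back.
rolledBack :: Either String (Int, ())
rolledBack = run . runError . runState 0 $ prog  -- Left "boom"
```

Same program, two behaviours, chosen at the point where the handlers are run rather than in the type of `prog`.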
You can recover the ability to reorder effects by using MTL-style type classes, so you could write that as `M<T> where M: MonadState<S> + MonadError<E>`, in Rust-ish syntax. But that makes the number of trait/typeclass implementations explode (given a trait and a type for each transformer, it's O(N^2)), whereas algebraic effect systems don't really have that issue. I also have a hunch that algebraic effects (or, well, delimited continuations in general) are probably easier to optimize than monad transformers, too.
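For completeness, a rough sketch of the MTL-style version in Haskell (standard `mtl` classes, again my own toy example): the program is written against `MonadState`/`MonadError`, and the concrete stack, and hence the ordering, is picked at the use site.

```haskell
{-# LANGUAGE FlexibleContexts #-}

import Control.Monad.Except (MonadError, runExceptT, throwError)
import Control.Monad.State (MonadState, put, runState, runStateT)

-- Roughly `M<T> where M: MonadState<Int> + MonadError<String>`.
prog :: (MonadState Int m, MonadError String m) => m ()
prog = do
  put 1
  throwError "boom"

-- Ordering chosen at the use site by picking the concrete stack:
lost :: Either String ((), Int)
lost = runStateT prog 0               -- via StateT Int (Either String)

kept :: (Either String (), Int)
kept = runState (runExceptT prog) 0   -- via ExceptT String (State Int)
```

The O(N^2) comes from exactly these instances: every transformer (StateT, ExceptT, ReaderT, ...) has to provide a lifting instance for every class (MonadState, MonadError, MonadReader, ...).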