I think this is a fabrication by the journalist. Overall, it seems to me that there was an ideological agenda behind this story. In the USSR, no topic could cause a stir in major media outlets without an ideological directive.
The family would have heard about airplanes; they were from Perm Oblast, which isn't exactly remote. It isn't a big leap to think that what they were seeing was the lights of airplanes. Ships carry lights for safe navigation, and if you fly at night you also carry lights for safety.
So, it's impossible to transfer a value from one machine to another if there's no network connection between them? How did this extremely trivial observation become a well-known theorem?
So, when it was originally phrased, the primary thing you would have learned about databases was that they enable ACID transactions. (And if you were a practitioner, you would have learned about the various isolation levels and dirty reads and dirty writes.)
But if you wanted to go from this level to implementation, you could typically get a prototype working, but it would be slow. When things are slow, the WD-40 of the programming world is to add a cache. And this is where you get the quip that there are only two hard problems in computing: cache invalidation and naming things. (The “and off-by-one errors” is a later addition.) The problem is that cache invalidation shows up as consistency bugs in your database. Someone does a rare combination of commits and rollbacks, and some cache doesn't get wiped quite as fast as it needs to be, or is wiped overoptimistically, causing a pull of uncommitted data, and your isolation level has silently dropped to READ UNCOMMITTED.
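A toy sketch of that failure mode (the db and cache here are hypothetical in-memory stand-ins, not any real client library):

    // Toy sketch: a cache refreshed from uncommitted data, then a rollback.
    const db = { committed: { balance: 100 }, pending: null };
    const cache = new Map();

    function beginTx(update) { db.pending = { ...db.committed, ...update }; }
    function rollback() { db.pending = null; }

    function readBalance() {
      if (cache.has('balance')) return cache.get('balance');
      const v = db.committed.balance;
      cache.set('balance', v);
      return v;
    }

    beginTx({ balance: 0 });
    cache.set('balance', db.pending.balance); // overoptimistic refresh
    rollback();
    console.log(readBalance()); // 0, not 100 -- a dirty read: READ UNCOMMITTED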
The CAP theorem was originally raised as a conjecture, something like: “once you shard the database, I don't think there's any way to solve these cache problems without one of the replicas just going silent for arbitrarily long pauses while it tries to at least partially synchronize its cache with the other shards.” Phrased that way, you can understand why it was a conjecture: it relies on nobody at MIT having a super clever way to deal with caches and to route the synchronization around the sharding.
BUT, you can make that statement for many different reasons, and this was not for Pedagogical Reasons; its point was rather Evangelism! The author was attempting to introduce the idea of Eventual Consistency, and to gain adoption by ditching all of the received wisdom about ACID transactions. The antagonism was deliberate: eventual consistency became the E in the rival acronym BASE. And so the argument was that we could explore a new corner of the design space.
It was only later that someone decided they could prove it by coming up with a universal subgraph: “whatever connection you've got, it has to contain this: two nodes fighting over one value, with a network connection possibly passing through other nodes, which we can abstract away.” Then you have a proof, and then you have a bunch of people comparing the proof to the stated claims of various database vendors, finding over and over that they claim both to be shardable with high availability among the shards and to support ACID transactions that keep everything consistent. It turns out those statements are usually made assuming a happy path!
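A toy rendering of that subgraph, just to make the trade-off concrete (hypothetical objects, not any real system):

    // Two replicas of one value; `linkUp` models the network between them.
    let linkUp = true;
    const a = { value: 1 }, b = { value: 1 };

    function write(node, other, value) {
      node.value = value;
      if (linkUp) other.value = value; // replication only happens while connected
    }

    function read(node, availableMode) {
      if (linkUp || availableMode) return node.value; // A: always answer, maybe stale
      throw new Error('unavailable');                 // C: refuse rather than lie
    }

    linkUp = false;                      // the partition
    write(a, b, 2);                      // b never hears about this
    console.log(read(b, true));          // 1 -- available but inconsistent
    try { read(b, false); } catch (e) {  // consistent but unavailable
      console.log(e.message);
    }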
(You also get Paxos and Raft, “here is how to get consistency without arbitrary latency on two-phase commit, via majority vote”, and the Jepsen blog, “you said this had consistency level X; let's fuzz it and see if we can generate a counterexample”, and some interesting exceptions like Datomic saying “this one part is not scalable and is a single point of failure, sacrificing P for the CAP theorem's sake, but in exchange we can simplify our C and A guarantees so that you can scale the reads of the system consistently.”)
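A toy sketch of the quorum-overlap idea those protocols build on (not Paxos or Raft themselves, which also have to agree on leadership and ordering):

    // With N=3 replicas, a write that reaches 2 replicas and a read that
    // asks 2 replicas must overlap in at least one of them.
    const replicas = [{ v: 0, t: 0 }, { v: 0, t: 0 }, { v: 0, t: 0 }];

    function write(value, t, reachable) {
      for (const i of reachable) replicas[i] = { v: value, t };
      return reachable.length >= 2; // committed only with a majority of 3
    }

    function read(reachable) {
      if (reachable.length < 2) throw new Error('no quorum');
      return reachable.map(i => replicas[i]).sort((x, y) => y.t - x.t)[0].v;
    }

    write(42, 1, [0, 1]);      // replica 2 was unreachable during the write
    console.log(read([1, 2])); // 42 -- the quorums overlap at replica 1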
I am developing an online IDE capable of visualizing a program's call tree. Within this IDE, users can select a node in the call tree to observe detailed information, including function arguments, return values, local variables, and intermediate expressions. Additionally, the IDE features a time-travel engine that displays the values of mutable objects at the specific moment when a particular function and statement were executed.
>> you can use print statements and step debugging, together, at any point in time in the recording
I am working on something similar. I am building an IDE that lets you jump from a line printed by a console.log call to the corresponding code location, and observe variables and intermediate expressions.
It also displays a dynamic call tree of the program, letting you navigate it in a time-travel manner.
Currently it only supports a pure functional subset of JavaScript, but I am working on support for imperative programming (mutating data in place).
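To give a sense of what “pure functional subset” means in practice, here's a hypothetical snippet (not from the docs) of the style that works today:

    // Pure-functional style: no in-place mutation, every step returns new data.
    const addTodo = (todos, text) => [...todos, { text, done: false }];
    const toggle = (todos, i) =>
      todos.map((t, j) => (j === i ? { ...t, done: !t.done } : t));

    const state0 = [];
    const state1 = addTodo(state0, 'write docs');
    const state2 = toggle(state1, 0);
    console.log(state0, state2); // earlier states are still intact, which is
                                 // what makes stepping backward in time cheap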
There was a similar tool developed by Oracle called mod_plsql, which served as an Apache web server module. As far as I remember, it allowed you to configure a mapping from a URL to a stored procedure. These stored procedures could receive HTTP request parameters and return HTML content.
I've always felt that this approach is the right way to build applications. Application servers seemed like an unnecessary layer to me, essentially intermediaries that merely pass data from the database to the browser. In the past they played a more critical role in generating HTML, but nowadays application servers are primarily used for handling APIs. Consequently, they often lack meaningful tasks to justify their existence.
Having your code closely integrated with the data also improves performance, since each request no longer pays a network round trip between the application server and the database.
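For flavor, here's roughly the same pattern sketched in Node rather than PL/SQL (the db object below is a hypothetical stub; mod_plsql did this dispatch inside Apache):

    // Sketch of the mod_plsql idea: map a URL path straight to a stored
    // procedure and return whatever HTML it produces.
    const http = require('http');

    // Hypothetical stub standing in for the database and its stored procs.
    const db = {
      call: async (proc, params) =>
        `<h1>${proc}</h1><pre>${JSON.stringify(params)}</pre>`,
    };

    http.createServer(async (req, res) => {
      const [proc, query] = req.url.slice(1).split('?'); // /my_report?dept=10
      const params = Object.fromEntries(new URLSearchParams(query));
      const html = await db.call(proc, params); // the proc renders the HTML
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(html);
    }).listen(8080);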
Yes, I believe this is often used as part of Oracle's APEX (Application Express) tool, which has similar goals to Omnigres. It's used to put together internal business CRUD and reporting apps very quickly at some orgs I've worked for.
Leporello.js is an interactive functional programming environment designed for a pure functional subset of JavaScript. It executes code instantly as you type and displays results next to it. Leporello.js also features an omnipresent debugger: just position your cursor on any line or select any expression, and you immediately see its value. Leporello.js visualizes a dynamic call tree of your program. Thanks to the data immutability of functional programming, it allows you to navigate the call tree both forward and backward, offering a time-travel-like experience.
>> How does it deal with patterns from popular frameworks or libraries that are borderline frameworks. React, Vue, Express, NestJS etc.
React is a great fit for Leporello.js. There is an example React TODO app that you can write and debug in Leporello; it is showcased in the video. You can play with it yourself if you follow the link https://app.leporello.tech/?example=todos-preact
Speaking of backend frameworks: basically, your code is a function Request -> Response. How you organize that code is up to you; you can write it in a functional manner. What is great about Leporello is that it remembers all the calls your app made to databases, other microservices, and external APIs, and allows you to debug them in a time-travel manner, seeing requests and responses. You can run your code once, then debug and navigate it forward and backward, seeing the runtime values that were generated when the code executed. It's a huge time saver, especially when external resources are slow or require complex setup and teardown around each call.
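The general record/replay trick (a sketch of the idea, not Leporello's actual internals) is to log each effect on the first run and read the log back on later runs:

    // Toy record/replay of external calls: the first run records responses,
    // later "time-travel" runs read them back instead of hitting the network.
    const log = [];
    let cursor = 0;
    let replaying = false;

    async function effect(label, fn) {
      if (replaying) return log[cursor++].response; // replay from the recording
      const response = await fn();
      log.push({ label, response });                // record the live result
      return response;
    }

    // First run: await effect('GET /user', () => fetch(url).then(r => r.json()))
    // Replay:    replaying = true; cursor = 0; re-run the same code, instantly.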
>> It feels like leporello has to store a lot of possible branch-outs of the state
Could you please clarify what you mean by branch-outs of the state?