
I don't believe visual programming will ever replace code for general programming. However, many examples exist where visual programming works for narrow domain-specific applications. It works because such applications allow exposing only high-level, domain-specific abstractions, which reduces complexity enough to make a visual representation a good fit.


Nah, I wouldn't be too sure. Each generation introduces its own new abstractions on top of whatever existed before. Any code written before its era is outdated, legacy, unreadable stuff, and it can only be fixed by writing new shiny modern blazing-fast rocketemoji stuff. It doesn't matter that existing code already works; it must be changed anyway, and we all know that change == progress.

Maybe it won't happen during the next decades. Most likely it won't happen until all currently-alive maintainers of Linux "retire", and most programmers alive are from generations that grew up being pressured to use either Rust or React.

So I can easily imagine a future where there's two main camps: low level code in Rust (where "low level" now means "anything up to, and including, the web browser"); and high level code in some graphical thingy that compiles down to WASM.

And the path I see where we'll reach that point, is by the Rust community continuing to do their thing, and by SaaS companies continuing to do their thing (VM->Docker->"Serverless"[1]->"Codeless"[2]->"Textless"[3]).

With all that, I think it's possible that visual programming might become the dominant way of doing general programming. Not the only way (after all, COBOL is still a thing today, so text-based code would linger like that), but I can see what's "normal" shifting up one level of abstraction, so that "code" in that future is seen the way "assembly" is today, and "assembly" in that future is seen the way "punch cards" are today.

Those old timers think their Rust language is good enough, with their memory safety and stuff, and prefer to ignore decades of progress on programming. Separating the code into text files? Yeah, no wonder they keep having incidents like the DeseCRATE or Cargottem supply chain attacks (years 2038 and 2060 respectively), if they install dependencies willy-nilly without even looking at the code they're bringing in (nobody wants to look at dependencies with that primitive tooling).

If they used modern tooling instead, they would be able to tell at a glance which "nodes"[4] are trying to do suspicious stuff just by their location or relationships with other nodes, or even restrict network or filesystem access to a region of the canvas.

They say "well, just use capabilities", but then the code looks like an unreadable mess and is far too error-prone, when modern tooling lets you just draw a square on a canvas and declare "any node[4] inside the square can read the filesystem, everything outside it can't", without needing to modify any of the nodes[4] themselves. It eliminates whole classes of vulnerabilities and whole classes of human errors.

(/s, mostly)

[1]: Still needs servers, but you don't worry about it because servers are too low level and you only want to focus on the important stuff (just code).

[2]: Still needs code, but you don't worry about it because code is too low level and you only want to focus on the important stuff (just English).

[3]: Still needs text, but you don't worry about it because text is too low level and you only want to focus on the important stuff (just visual concepts).

[4]: Module, function, macro, etc.


Do you know about the Temporal startup program? It gives enough credits to offset support fees for 2 years. https://temporal.io/startup


I know it's gonna sound entitled, but even though we are a small company we still process a lot of events from third parties. Temporal Cloud pricing is based on the number of actions; $2,400 would only cover a few months in our case.


If you are expecting to still be small after 2 years, doesn't that just delay the expense until you are locked in?


temporal.io just released a .NET SDK. The observability and scalability of the platform are really good.

Disclaimer: I'm one of the founders of the project.


Nice to see you dropping in Maxim!

For the GP poster - I agree with Maxim here. We've been evaluating workflow orchestration and durable function systems for a while and finally whittled down to where we think we're going to pull the trigger on either Azure Durable Functions or Temporal. Temporal is really nice - the fact that you are "just writing code" is such a huge bonus over some other stuff like AWS Step Functions, Cadence, and Conductor.

As an aside, the engineering/sales engineering team over there seems top notch.


What I meant specifically is that the current state of a workflow is stored in a format that’s opaque to any component other than the workflow itself.

E.g. if I have a “shopping cart checkout” workflow and the user is not making progress, how can I tell which step of the workflow the user is stuck at?


Every step of the workflow is durably recorded, so you have full information about the exact state of each workflow. To troubleshoot, you can even download the event history and replay the workflow in a debugger as many times as needed.

The ease of troubleshooting is one of the frequently cited benefits of the approach.

Check the UI screenshot at https://www.temporal.io/how-it-works.
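
The replay idea can be sketched in plain Python (a hedged illustration of the general event-history approach, not Temporal's actual API; the event shapes and activity names here are made up):

```python
def replay(history):
    """Fold a recorded event history into the workflow's current step.

    Because every step is durably recorded, replaying the same history
    deterministically reconstructs where the workflow stopped.
    """
    state = {"step": "started", "completed": []}
    for event in history:
        if event["type"] == "activity_completed":
            state["completed"].append(event["name"])
            state["step"] = event["name"]
        elif event["type"] == "activity_failed":
            state["step"] = "stuck_at:" + event["name"]
            break
    return state

# A hypothetical "shopping cart checkout" history: inventory was reserved,
# then the payment activity failed.
history = [
    {"type": "activity_completed", "name": "reserve_inventory"},
    {"type": "activity_failed", "name": "charge_card"},
]
print(replay(history)["step"])  # the checkout is stuck at the payment step
```

Since the replay is deterministic, you can run it as many times as needed under a debugger, which is the troubleshooting benefit described above.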


The function's event data and current state is all stored in table storage, so you could query that - I'd expect you'd need to query an event-store-based solution in a similar way?


Check out temporal.io. It has support for schedules as well.


Check out temporal.io that fully abstract this. Disclaimer, I'm one of the founders.


hah, I was thinking about temporal as I was writing this. I have played with temporal pretty extensively.


State machines are useful when the same input/event requires different handling based on the current state. There are not that many applications where this is true. Most of the time only two handlers per state are needed, success and failure, which are much better modeled through normal code than an explicit state machine.

At the framework level they might be pretty useful, but they rarely appear in the first version; they usually emerge as a result of refactoring.
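
A quick sketch of the contrast (all names here are made up for illustration). When every state only branches on success/failure, the explicit transition table is just noise compared to sequential code with exception handling:

```python
# Explicit state machine: states and transitions spelled out by hand.
TRANSITIONS = {
    ("pending", "success"): "paid",
    ("pending", "failure"): "failed",
    ("paid", "success"): "shipped",
    ("paid", "failure"): "refunded",
}

def step(state, event):
    """Advance the hand-written state machine by one event."""
    return TRANSITIONS[(state, event)]

# Normal-code version of the same flow: the "states" are just program
# points, and failure handling is ordinary exception handling.
def checkout(charge, ship, refund):
    try:
        charge()          # "pending" -> "paid" on success
    except Exception:
        return "failed"   # "pending" -> "failed" on failure
    try:
        ship()            # "paid" -> "shipped" on success
    except Exception:
        refund()
        return "refunded" # "paid" -> "refunded" on failure
    return "shipped"
```

The second version carries the same states, but control flow expresses them directly, and it keeps working if a step later needs a loop or a local variable, which the transition table handles poorly.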


Yes, Temporal workflows are as dynamic as needed.

The other useful pattern is always-running workflows, which can be used to model the lifecycle of various entities. For example, you can have an always-running workflow per customer which manages their service subscription and other customer-related features.
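
The shape of that pattern, sketched in plain Python rather than the real Temporal SDK (signal names and states are invented for illustration): the per-customer workflow is an ordinary loop that owns one customer's state and reacts to incoming signals.

```python
def customer_workflow(signals):
    """Entity workflow for a single customer.

    In a real durable-execution system this loop would run indefinitely,
    blocking on incoming signals; here the signal stream is a finite list
    so the sketch terminates.
    """
    subscription = "none"
    for signal in signals:
        if signal == "subscribe":
            subscription = "active"
        elif signal == "payment_failed":
            subscription = "suspended"
        elif signal == "cancel":
            subscription = "cancelled"
            break  # end of this customer's lifecycle
    return subscription

print(customer_workflow(["subscribe", "payment_failed"]))  # suspended
```

Because the loop's local variables are the customer's state, there is no separate database record to keep in sync with the process logic.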


The main difference is that workflows are written as code in a general-purpose programming language. Java, Go, JavaScript/TypeScript, and PHP are already supported; Python and .NET are under development. AWS Step Functions use JSON to specify workflow logic. JSON is OK for very simple scenarios but is not adequate for the majority of real business use cases. The fun fact is that Step Functions are a thin layer on top of AWS SWF, which is based on the same idea as Temporal.

Here is a more detailed answer from Temporal forum: https://community.temporal.io/t/why-use-temporal-over-a-comb...


Look at temporal.io. It is essentially a BPM system that uses Go (as well as Java/PHP/TypeScript) to specify business processes. And a queue + DB is never simpler for such scenarios.

Disclaimer: I'm one of the founders of the project.


I think queues are the wrong abstraction for modeling business processes. That's why a trivial issue like a non-recoverable failure while processing a message becomes such a headache. The same goes for ordering. An orchestrator like temporal.io allows modeling your business use case with higher-level abstractions that hide all this low-level complexity.

Disclaimer: I'm the tech lead of the temporal.io open source project and the CEO of the affiliated company.


It is a problem only if you are mixing up application layers.

If you keep your queueing system and business process as separate layers, with the queueing system serving only as a means of transporting business events, then you can make it all work correctly.

Think in terms of the IP protocol (as in TCP/IP). It is unsuitable for transmitting financial transactions. Yet financial transactions can be made to work on top of it if you separate the layers and treat IP only as a mechanism for getting data from A to B.


I think we are in agreement here. Temporal does exactly what you described: it uses queues to transport tasks to processes, but it completely hides them from the business process code.

The issue is that 99.9% of developers use queues directly in their business applications.


Hiding this complexity is useful if it also means handling it. What are the key patterns you apply in temporal to hide it? I’ve had a look at temporal and find it really interesting.


Instead of directly using queues, a Temporal Workflow (which is written in plain code) schedules an Activity that the system is responsible for. Behind the scenes, the Activity is just an item put on a queue. Activities have retry policies, which are also handled by the system. If an Activity attempt fails and should not be retried according to the policy, an exception is thrown in the Workflow, where it can be handled in code.

Using the TypeScript SDK, you can catch that exception here: https://github.com/temporalio/samples-typescript/blob/9d9108...
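
The mechanism reduces to something like this in plain Python (an illustrative sketch, not the Temporal SDK; names and the retry policy are made up): the system retries the activity per a policy and only surfaces an exception to the workflow code once the policy gives up.

```python
class ActivityFailed(Exception):
    """Raised into the workflow after the retry policy is exhausted."""

def execute_activity(fn, max_attempts=3):
    """Run fn, retrying up to max_attempts times per the 'policy'.

    The queueing and retry bookkeeping live here, hidden from the
    workflow code that calls this function.
    """
    last_error = None
    for _ in range(max_attempts):
        try:
            return fn()
        except Exception as e:
            last_error = e
    raise ActivityFailed(str(last_error))

def workflow(charge_card):
    """Workflow code: plain control flow, with failures as exceptions."""
    try:
        execute_activity(charge_card)
        return "charged"
    except ActivityFailed:
        return "compensated"  # handled with ordinary code, as in the sample
```

The point is that the workflow body never touches a queue or a retry counter; it just calls the activity and catches one exception type.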

