The longer I’m in this industry, the more I find that there are two types of programmers: those who default to writing every program procedurally and those who default to doing so declaratively.
The former like to brag about how quickly they can go from zero to a working solution. The latter brag about how their solutions have fewer bugs, need less maintenance, and are easier to refactor.
I am squarely in the latter camp. I like strong and capable type systems that constrain the space so much that—like you say—the implementation is usually rote. I like DSLs that allow you to describe the solution and have the details implemented for you.
I personally think it’s crazy how much of the industry tends toward the former. Yes, there are some domains where the time from zero to a working product is critical. And there are domains where the most important thing is being able to respond to wildly changing requirements. But so much more of our time and energy is spent maintaining code than writing it in the first place that upfront work like defining and relating types rapidly pays dividends.
I have multiple products in production at $JOB that have survived nearly a decade without requiring active maintenance other than updating dependencies for vulnerabilities. They have had a new version deployed maybe 3-5 times in their service lives and will likely stay around for another five years to come. Being able to build something once and not having to constantly fix it is a superpower.
> Yes, there are some domains where the time from zero to a working product is critical. And there are domains where the most important thing is being able to respond to wildly changing requirements
I agree with your observations, but I'd suggest it's not so much about domain (though I see where you're coming from and don't disagree), but about volatility and the business lifecycle in your particular codebase.
Early on in a startup you definitely need to optimize for speed of finding product-market fit. But if you are successful then you are saddled with maintenance, and when that happens you want a more constrained code base that is easier to reason about. The code base has to survive across that transition, so what do you do?
Personally, I think overly restrictive approaches will kill you before you have traction. The scrappy shoot-from-the-hip startup on Rails will beat the Haskell code craftsmen 99 out of 100 times. What happens next though? If you go from 10 to 100 to 1000 engineers with the same approach, legibility and development velocity will fall off a cliff really quickly. At some point (pretty quickly) stability and maintainability become critical factors that impact speed of delivery.

This is where maturity comes in—it's not about some ideal engineering approach, it's about recognizing that software exists to serve a real-world goal, and how you optimize for that depends not only on the state of your code base but also on the state of your customers and the business conditions you are operating in. A lot of us became software engineers because we appreciate the concreteness of technical concerns and wanted to avoid the messiness of human considerations and social dynamics, but ultimately those are where the value is delivered, and we can't justify our paychecks without recognizing that.
Sure it’s important for startups to find market traction. But startups aren’t the majority of software, and even startups frequently have to build supporting services that have pretty well-known requirements by the time they’re being built.
We way overindex on the first month or even week of development and pay the cost of it for years and years thereafter.
I'm not convinced that this argument holds at all. Writing good code doesn't take much more time than writing crap code; it might not take any more time at all when you account for debugging and such. It might be flat-out faster.
If you always maintain a high standard you get better and faster at doing it right and it stops making sense to think of doing it differently as a worthwhile tradeoff.
Is it worth spending a bit more time up-front, hoping to prevent refactoring later, or is it better to build a buggy version then improve it?
I like thinking with pen-and-paper diagrams; I don't enjoy the mechanics of code editing. So I lean toward upfront planning.
I think you're right but it's hard to know for sure. Has anyone studied software methodologies for time taken to build $X? That seems like a beast of an experimental design, but I'd love to see.
I personally don't actually see it as a project management issue so much as a developer issue. Maybe I'm lucky, but in the projects I've worked on, a project manager generally doesn't get involved in how I do my job. Maybe a tech lead or something lays down some ground rules like test requirements, but at the end of the day it's a team effort: we review each other's code and help each other maintain a high quality.
I think you'd be hard-pressed to find a team that lacks this kind of cooperation yet maintains consistently high quality, regardless of what some nontechnical project manager says or does.
It's also an individual effort to build the knowledge and skill required to produce quality code, especially when nobody else takes responsibility for the architectural structure of a codebase, as is often the case in my experience.
I think that in order to keep a codebase clean you have to have a person who takes ownership of the code as a whole and has plans for how it should evolve - API surfaces as well as lower-level implementation details. You either have a head chef or you have too many cooks; there's not a lot of middle ground in my opinion.
I hear you, and agree there’s not much overhead in basic quality, but it’s a bit of a strawman rebuttal to my point. The fact is that the best code is code that is fit for purpose and requirements. But what happens when requirements change? If you can anticipate those changes then you can make implementation decisions that make those changes easier, but if you guess wrong then you may actually make things worse by over-engineering.
To make things more complicated, programmers need practice to become fluent and efficient with any particular best practice. So you need investment in those practices in order for the cost to be acceptable. But some of those things are context dependent. You wouldn’t want to run consumer app development the way you run NASA rover development because in the former case the customer feedback loop is far more important than being completely bug free.
I always try to design for current requirements. When requirements change I refactor if necessary. I don't try to predict future requirements but if I know them in advance I'll design for them where necessary.
I try to design the code in a modular way. Instead of trying to predict future requirements I just try to keep everything decoupled and clean so I can easily make arbitrary changes in the future. Sometimes a new requirement might force me to make large changes to existing code, but most often it just means adding some new stuff or replacing something existing that I've already made easy to replace.
For example I almost always make an adapter or similar for third-party dependencies. I will have one class where I interact with the api/client library/whatever, I will avoid taking dependencies on that library anywhere else in my code so if I ever need to change it I'll just update/replace that one class and the rest of my code remains the same.
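To make the adapter idea concrete, here's a minimal TypeScript sketch; "widget-sdk" and its client are hypothetical stand-ins for whatever third-party library you depend on:

```typescript
// The third-party library is only ever imported in this one file.
// "widget-sdk" and WidgetClient are hypothetical stand-ins.
import { WidgetClient } from "widget-sdk";

// The rest of the codebase depends only on this interface.
export interface WidgetService {
  fetchWidget(id: string): Promise<{ id: string; name: string }>;
}

// The one class that knows about the library's types and quirks.
export class WidgetSdkAdapter implements WidgetService {
  private client = new WidgetClient();

  async fetchWidget(id: string): Promise<{ id: string; name: string }> {
    // Library-specific calls and workarounds stay contained here.
    const raw = await this.client.get(id);
    // Map the library's shape to our own domain shape.
    return { id: raw.id, name: raw.displayName };
  }
}
```

If the library ever needs replacing, only WidgetSdkAdapter changes; everything else keeps talking to WidgetService.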
I've had issues in codebases where someone else doesn't do that: they'll use some third-party library in multiple different components, practically make the data classes of that library part of their own domain, and have workarounds for the library's shortcomings all over the place, so when we need to replace it, or an update contains breaking changes, it's a big deal.
There are a lot of things like this you can do that don't really take much extra time but make your code a lot simpler to work with in general and a lot easier to change later. It has lots of benefits even if the library never gets breaking changes or needs to be replaced.
Same thing for databases, I'll have a repository that exposes actions like create, update, delete etc and if we ever need to use a different db or whatever it's easy. Just make a new repository implementation, hook it up and you're done. No SQL statements anywhere else, no dependency on ORMs anywhere else, I have one place for that stuff.
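A rough sketch of that repository shape in TypeScript (the User type and SQL details are illustrative, not from any particular project):

```typescript
export interface User {
  id: string;
  email: string;
}

// The only contract the rest of the code sees.
export interface UserRepository {
  create(user: User): Promise<void>;
  update(user: User): Promise<void>;
  delete(id: string): Promise<void>;
  findById(id: string): Promise<User | null>;
}

// One SQL-backed implementation; switching databases means writing
// another class against the same interface and hooking it up.
export class SqlUserRepository implements UserRepository {
  constructor(
    private db: { query(sql: string, params: unknown[]): Promise<any[]> }
  ) {}

  async create(user: User): Promise<void> {
    await this.db.query(
      "INSERT INTO users (id, email) VALUES ($1, $2)",
      [user.id, user.email]
    );
  }

  async update(user: User): Promise<void> {
    await this.db.query(
      "UPDATE users SET email = $2 WHERE id = $1",
      [user.id, user.email]
    );
  }

  async delete(id: string): Promise<void> {
    await this.db.query("DELETE FROM users WHERE id = $1", [id]);
  }

  async findById(id: string): Promise<User | null> {
    const rows = await this.db.query(
      "SELECT id, email FROM users WHERE id = $1",
      [id]
    );
    return rows[0] ?? null;
  }
}
```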
When I organize a project this way I find that nearly every future change I need to make is fairly trivial. It's mostly just adding new things, and I have a place for everything already, so I don't even need to spend energy thinking about where it belongs or whatever - I already made that decision.
Well said. This summarizes my experience quite succinctly. Many an engineer fails to understand the importance of distinguishing between different tempos and between immediate and long-term goals.
A strong type system is your knowledge about the world, or more precisely, your modeled knowledge about what this world is or contains - the focus is more on data structures and data types, and that's about as declarative as it gets in programming languages. I'd also call it holistic.
A procedural approach focuses more on how this world should be transformed - through conditional branching and algorithms. The focus feels less on the circumstances of this world and more on the temporary conditions of micro-states (if that makes any sense). I'd call it reductionistic.
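As a toy illustration of "modeled knowledge" (the domain here is made up), the types declare what the world contains and how its parts relate, before any transformation is written:

```typescript
// The types record what we know about the world, declaratively.
interface Author {
  name: string;
  born?: number; // optionality encodes what we may not know
}

interface Book {
  isbn: string;
  title: string;
  authors: Author[]; // relationships between types are knowledge too
}
```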
I love strong types. I love for loops. I love stacks.
GP! Try Rust. Imperative programming isn’t orthogonal to types. You can go hard in Rust. (I loved experimenting with it but I like GC)
GP! Try data driven design. Imperative programming isn’t orthogonal to declarative.
Real talk, show me any declarative game engine that’s worth looking at. The best ones are all imperative and data driven design is popular. Clearly imperative code has something going for it.
and the advantages aren’t strictly speed of development, but imperative can be clearer. It just depends.
I adore Rust. My point isn’t that you can’t have both, but that the two types of programmers have different default approaches to problem solving. One prefers to model the boundaries of the domain as best they can (define what it should look like before implementing how it works); the other prefers to do things procedurally (implement how it works and let “what it looks like” emerge as a natural result).
Neither is strongly wrong or right, better or worse. They have different strengths in different problem areas, though I do think we’ve swung far too hard toward the procedural approach in the last decade.
It's the difference between "how?" and "what?". A procedural approach describes the steps you take to do something, but not what problem you want to solve or why those steps solve it. A declarative approach, on the other hand, describes the goal and intended solution first, then tries to derive a proper procedure to achieve the goal.
The two approaches have their own pros and cons, but they aren't mutually exclusive. Sometimes the goal and solution aren't that clear, so you work procedurally until you find a POC (proof of concept) that may actually solve the problem, then refine it in a declarative way.
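A toy TypeScript contrast of the two (the data is made up):

```typescript
const orders = [
  { id: 1, total: 120 },
  { id: 2, total: 45 },
  { id: 3, total: 300 },
];

// Procedural: spell out the steps, one at a time.
const bigOrderIds: number[] = [];
for (const order of orders) {
  if (order.total > 100) {
    bigOrderIds.push(order.id);
  }
}

// Declarative: describe the result you want; the steps are implied.
const bigOrderIds2 = orders.filter((o) => o.total > 100).map((o) => o.id);
```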
I think the GP's point is that they haven't gone from zero to a working solution; they've gone from zero to N% of a working solution and then slowed down everyone else. Maybe for the most trivial programs they can actually reach a solution.
You can't write a program without knowing that x is a string or a number, your only choice is whether you document that or not.
Yes you can: you handle every case equally. You don’t even need the reflection mechanisms to be visible to the user with a good type system. A good type system participates in codegen.
For a really simple example: languages that allow narrowing a numeric to a float, but also let you interpolate either into a string without knowing which you have.
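In TypeScript terms, a sketch of that example might look like this:

```typescript
// A union covers both cases, and interpolation treats them equally,
// so the function never needs to know which one it actually has.
function label(value: string | number): string {
  return `value: ${value}`;
}

label(3.14);   // "value: 3.14"
label("3.14"); // "value: 3.14"
```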
A statically typed console.log in JS/TS would be an unnecessary annoyance.
I think TypeScript is part of the problem here. It's a thin layer atop a dynamically typed language with giant escape hatches and holes. I think it's great if you're stuck in JS, it's so much better than JS, but I can't think why anyone would choose it compared to a "real" statically typed language.
It is actually a rather hard question. There is a web page somewhere where the author asks it, lists possible answers, and gets amazed by some of the definitions, such as "declarative is parallelizable". I cannot find it now, unfortunately.
I would say that imperative is the one that does computation in steps so that one can at each step decide what to do next. Declarative normally lacks this step-like quality. There are non-languages that consist solely of steps (e.g. macros in some tools that let you record a sequence of steps), but while this is indeed imperative, it is not programming.
One side cares more about how the solution is implemented. They put a lot of focus on the stuff inside functions: this happens, then that happens, then the next thing happens.
The other side cares more about the outside of functions. The function declarations themselves. The types they invoke and how they relate to one another. The way data flows between parts of the program, and the constraints at each of those phases.
Obviously a program must contain both. Some languages only let you do so much in the type system and everything else needs to be done procedurally. Some languages let you encode so much into the structure of the program that by the time you go to write the implementations they’re trivial.
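For instance (a hypothetical domain, sketched in TypeScript), a discriminated union can constrain the state space so tightly that each implementation branch is nearly rote:

```typescript
// The type system pins down every state the program must handle.
type Payment =
  | { status: "pending"; requestedAt: Date }
  | { status: "settled"; settledAt: Date }
  | { status: "failed"; reason: string };

function describe(p: Payment): string {
  // Under strict settings the compiler checks this switch is
  // exhaustive; forgetting a case (or adding a new state later
  // without handling it) becomes a type error.
  switch (p.status) {
    case "pending":
      return `awaiting settlement since ${p.requestedAt.toISOString()}`;
    case "settled":
      return `settled at ${p.settledAt.toISOString()}`;
    case "failed":
      return `failed: ${p.reason}`;
  }
}
```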
You don't even need a loop: steps, conditions, and a 'goto'. Loops are actually a design mistake. They try to constrain 'goto' by making it structured. They are declarative, by the way. As a special case, or even as a common case, they are fine, but not when they try to completely banish 'goto'. They are strictly secondary.
Similarly declarative programming is strictly secondary to imperative. It is a limited form of imperative that codifies some good patterns and turns them into a structure. But it also makes it hard or impossible not to use these patterns.
I am also squarely declarative, but I currently use a language for work that forces me to be procedural pretty much always, and it kinda sucks. My code always feels bad to me and the cognitive load is always super high.
Is it the language that forces procedural code? In my experience it’s usually the stdlib, but the language itself is capable of declarative constructs outside of existing APIs. If that’s the case, an approach like “functional core, imperative shell” is often a good one. You can treat the stdlib like it’s any other external API, and wall it off as such.
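A tiny sketch of that shape (the example is made up):

```typescript
// Functional core: pure logic, no I/O, easy to test and reason about.
function greeting(name: string, visits: number): string {
  return visits === 0
    ? `Hello, ${name}!`
    : `Welcome back, ${name} (visit #${visits}).`;
}

// Imperative shell: the thin procedural edge that touches the world.
const visits = new Map<string, number>();
for (const name of ["Ada", "Grace", "Ada"]) {
  const n = visits.get(name) ?? 0;
  console.log(greeting(name, n)); // side effects live only out here
  visits.set(name, n + 1);
}
```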
There is no stdlib. It's a very specific, proprietary, purpose-built language that's been around since like the 90s. It has a super limited set of standard functions that operate on an underlying proprietary data structure, and everything else is just a thin veneer over a very limited set of C functions.
> I personally think it’s crazy how much of the industry tends toward the former.
It's because most people who use technology literally don't care how it works. They have real, physical problems in the real world that need to be solved and they only care if the piece of technology they use gives them the right answer. That's it. Literally everything programmers care about means nothing to the average person. They just want the answer. They might care about performance if they have to click the same button enough times, and maybe care about bugs if it's something that is constantly in their face. But just working is enough...
I'm thinking more along the lines of how scripting languages are often used in, say, scientific domains (Python, R, etc...). Or how JavaScript and Ruby are more popular than, say, Rust and Haskell for startups.
"Poorly typed" means different things to different people, in the context of this article and thread it would probably mean weakly typed or dynamically typed? Which has nothing to do at all with the correctness of a formula or what output a program will produce.
Declarative programming is essentially programming through a parameter. The declaration is that parameter that will be passed to some instruction. In small doses declarative programming occurs with every function call. In declarative programming the parameter is essentially the whole program and the instruction is implicit; we know more or less how it works, but generally assume it just exists or even forget about it and take it as the way things work.
Of course declarative programming is simpler and less error prone. But it is also essentially inflexible. The implicit instruction is finite and will inevitably run into a situation when the baked execution logic does not quite fit. It will be either inefficient or require a verbose and repetitive parameter, or just flat out incapable of doing what is desired. In this case declarative programming fails; it is impossible to fix unless we rewrite the underlying instruction.
E.g. 'printf' is a small example of declarative programming. It works rather well, especially when the compiler is smart about type checks, but once you want to vary the text conditionally it fails. (The things that replace 'printf' are template engines, which basically reimplement the same logic and control statements you already have in any language; the engine works as an interpreter of that logic. The logic is rather crude and limited, and the finer details of formatting are left to callbacks that are mostly procedural.)

For example, how do I format a list so that I get "A" for 1, "A and A" for 2, and "A, A, and A" for more? Or how do I format a number so that the thousand separator appears only if the number is greater than 9999? Or what do I do if I have a UTF-8 output, but some strings I need to handle are UTF-16? The existing declarative way did not foresee these cases, and adding them to the current model would complicate it substantially. But if I have a simple writer that writes basically numbers and strings, I can very quickly write procedures for these specific cases.
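To make the first case concrete, here's the kind of small procedural writer I mean, sketched in TypeScript:

```typescript
// A few lines of procedure cover a case that printf's declarative
// mini-language never foresaw.
function joinList(items: string[]): string {
  if (items.length === 0) return "";
  if (items.length === 1) return items[0];
  if (items.length === 2) return `${items[0]} and ${items[1]}`;
  return `${items.slice(0, -1).join(", ")}, and ${items[items.length - 1]}`;
}

joinList(["A"]);           // "A"
joinList(["A", "A"]);      // "A and A"
joinList(["A", "A", "A"]); // "A, A, and A"
```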
Instructions are primary by their nature. A piece of data on its own cannot do anything. It always has an implicit instruction that will handle it. So instructions are the things we have to master.