I don't think this is a very popular opinion, but for those shipping software, consider having at least one programmatic way to "monkey patch" it. These are things like a plugin API, dynamic linking, or using a language with an exposed runtime: things that allow programmatic injection of code, so that users who really care about tweaking the tool can do so. It doesn't even take much effort, either: my text editor has no plugin interface, but one of the reasons I stick with it is that I can hook its calls to libc to customize some of the things I don't like. My preferred mail client happens to lack a certain entitlement that most people don't even know about, and that enables an entire cottage industry of software for it. Browsers let you specify some JavaScript to run on each page, and suddenly the balance of power on the web tilts significantly towards the user. Even (or especially) if your tool is open source, consider supporting such an interface: it's much nicer to add on to a piece of software than to fork it.
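To make the "exposed runtime" idea concrete, here's the shape of it in miniature, sketched in Python (the editor example above is really an LD_PRELOAD shim in C; all names below are made up for illustration):

    # editor.py -- the shipped tool; note there is no official plugin API
    class Editor:
        def save(self, path, text):
            with open(path, "w") as f:
                f.write(text)

    # user_rc.py -- injected at startup; wraps save() instead of forking the tool
    import editor

    _original_save = editor.Editor.save

    def save_with_backup(self, path, text):
        _original_save(self, path + ".bak", text)  # stash a backup copy first
        _original_save(self, path, text)

    editor.Editor.save = save_with_backup

The tweak lives entirely outside the upstream source, which is exactly what makes this cheaper than maintaining a fork.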
The problem is support. Even if you put big caution tape around it, if you break it or change it in a way that isn't backwards compatible, users will complain loudly. And if you have users paying lots of $$$ who rely on it, that might force your hand.
Building a pluggable system in a way that doesn't blow up in your face can be hard. Firefox got a lot of bad press for redesigning its plugin system.
Arguably, it's not just about support: it's often just not a good idea. It complicates the existing product, and still only gives you pre-defined hooks.
Jenkins is one of my favorite examples of why plugin systems are evil. As users accumulate plugins, it all becomes a giant, unmaintainable monolith that ends up unable to perform even the most basic tasks because some unrelated/uninteresting/forgotten plugin somewhere is doing something weird.
Instead of stuffing everything into a monolith with plugins, split it out into individual composable units with minimal features.
Here is a thought experiment for your preference of “individual units with minimal features”: how do you take an artist’s toolbox app (Photoshop, Illustrator, Blender, Painter, Procreate, Clip Studio, etc.) and decompose it into individual units that can still be interacted with via a stylus as the primary UI?
While I was primarily referring to things with a backend, trying to redesign these "everything but the kitchen sink" apps is actually an interesting task.
First of all, Photoshop isn't a toolbox. It's a tool, but with plugins. The real-world analogy would be a KitchenAid kitchen machine, where you can attach awkward tumors like a spaghetti machine or an entire food processor to its power take-off. The kind of accessories that, after purchase, make you look long and hard in the mirror, trying to figure out just when it happened that your life fell apart.
A toolbox, on the other hand, is just a vehicle for making tools easily available. That, and being a blunt weapon in emergencies.
If I were to imagine a universal image toolbox, I'd imagine it as a system that can visualize a stack of buffers. Then, it would be able to launch other applications, providing an API to access and manipulate the buffer stack. That's all.
These applications can provide your usual floating toolbox UIs just fine, and nothing should stop one from having the same UX as Photoshop, but plugin-less. These tools might be usable standalone, in bulk, from the command line, from different toolboxes, whatever. You'd be able to swap out features that are otherwise Photoshop built-ins, as they're just another application here.
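A sketch of what that buffer-stack API could look like (purely illustrative Python; a real version would be an IPC protocol so the tools can be separate processes):

    from dataclasses import dataclass, field

    @dataclass
    class Buffer:
        """One layer: a raw RGBA pixel buffer."""
        width: int
        height: int
        pixels: bytearray = field(default_factory=bytearray)

        def __post_init__(self):
            if not self.pixels:
                self.pixels = bytearray(self.width * self.height * 4)

    class BufferStack:
        """The entire 'toolbox': it owns the stack, tools do everything else."""
        def __init__(self):
            self.layers = []

        def push(self, buf):
            self.layers.append(buf)

        def apply(self, index, tool):
            # a "tool" is any callable handed a buffer to mutate; it could be
            # a GUI app, a command-line filter, or a bulk batch job
            tool(self.layers[index])

    def invert(buf):  # one such external "tool"
        buf.pixels[:] = bytes(255 - b for b in buf.pixels)

    stack = BufferStack()
    stack.push(Buffer(2, 2))
    stack.apply(0, invert)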
This distinction between plugins (i.e. an extendable tool) and a toolbox (isolated tools) is important: instead of tools designed entirely around their ability to attach awkwardly to a KitchenAid, you get tools designed around doing their job right.
(That KitchenAid accessory metaphor works a lot better than I initially anticipated: the PTO is the annoyingly limited set of hooks you have to work around, and the results are as awkward as any plugin system.)
> As users accumulate plugins, it all becomes a giant, unmaintanable monolith that ends up unable to perform even the most basic tasks
I agree that most of the time it's best to give users only one way of doing things, so that they don't get creative and go down a rabbit hole.
Nevertheless, if a product targets a specific type of user, one expected to be knowledgeable and to follow certain best practices while using it, making the product more customizable can be a competitive advantage.
We can't always blame the product for being "too complex" if it's purposely designed that way so that well-read users can get the most out of it.
Sure, I think that's a market to target. But it sounds like it's a slog to develop for and often a slog to use (or at least set up).
Often the opinions baked in when designing the software are part of what you're selling. Otherwise, I'd just ship you a box of 1s and 0s and you could put it together yourself =)
I just attached a desktop PC to mine... so now it’s more complicated and does everything I want. =P
Doesn’t require much complexity from the TV though.
Jokes aside, sometimes simplicity is judged on different criteria. Everyone’s favorite example is Craigslist (ugly, but actually quite simple once you grok it), but I can name a few others where the actual criteria for simplicity can be hard to define.
Bash, for example. It takes a while before you really realize what the scripting can actually do and what Unix pipes can do. Once you see it, it makes a ton of sense, but until you grok the abstractions, it's just a pile of syntactic mush you can't remember. It's simple for what it does, but the problem itself has a lot of complexity.
I believe the same thing about Vim. It requires training because while its modes become simple after you start to get it, it's an irritatingly arduous road to get there. So from the perspective of a lay user... not simple at all. From the perspective of an expert, it's effective to the point of religion.
True. Smart TVs may appear simple, but I am fairly certain that their simplicity crumbles for most users quickly.
Say, when a user just wants to play games on a console but has to wait for the TV to boot and prompt them to install updates, when they want to use Apple TV instead of Android TV, or when they want to upgrade the smarts.
I think the main reason people think this is simpler is that manufacturers force it down consumers' throats, since this setup sells more units due to planned obsolescence.
It's not about making each macro-service into 1000 micro-services. It's about making your macro-services smaller, instead of all-consuming monoliths.
The problem is that monoliths will never be hackable. A LEGO kit is hackable. A monolith just has hooks and connectors, but the product itself will always be static.
You can get things shipped as pre-assembled LEGO kits (metaphorical or not)?
An interesting comparison to Jenkins in the same space is Buildbot, which embraces being written in a scripting language and is "configured" by writing code that describes the builds, using existing primitives and extending or replacing them where necessary.
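For the curious, an abridged master.cfg in that spirit (this follows Buildbot's documented configuration pattern, but check the current docs for exact APIs):

    from buildbot.plugins import steps, util, worker

    # a build is described as ordinary code: a factory plus a list of steps
    factory = util.BuildFactory()
    factory.addStep(steps.Git(repourl="https://example.com/project.git"))
    factory.addStep(steps.ShellCommand(command=["make", "test"]))

    c = BuildmasterConfig = {}
    c["workers"] = [worker.Worker("worker1", "password")]
    c["builders"] = [
        util.BuilderConfig(name="tests", workernames=["worker1"], factory=factory),
    ]

Because the config is a plain Python module, "extending" Buildbot is often just subclassing a step or writing a helper function, not authoring a plugin against a frozen interface.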
This is a great point and something to consider when designing runtime-programmable tools. It means moving slowly and cautiously, with minimal surface area and few opinions about the future, is probably the way to go. Having strong isolation is probably good too. This is something that made me uneasy about highly programmable/hackable platforms. I like when the core maintainers understand _how_ a system wants to be extended and shaped and provide precise hooks for doing so.
Interesting you mention Firefox because, when you think about it, a web browser (just like an operating system) is a massive pluggable system. They changed the plugin system, but they haven't (as far as I know) broken web standards, so that 20 year old websites still work today.
It seems to me the key, then, is to have that programmable interface be used to actually implement the features shipped as part of the project. It will be a lot easier to get the API right if the developers are using it themselves, and it adds a lot more incentive to maintain backwards compatibility.
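In miniature, that principle looks something like this (illustrative Python, made-up names): the built-in features register through the exact hook mechanism a third party would use.

    _hooks = {}

    def on(event):
        """Decorator: register a handler for a named event."""
        def register(fn):
            _hooks.setdefault(event, []).append(fn)
            return fn
        return register

    def emit(event, *args):
        for fn in _hooks.get(event, []):
            fn(*args)

    # a *built-in* feature, registered exactly like a plugin would be;
    # if this API breaks, the core product breaks, so it stays maintained
    @on("file_saved")
    def update_recent_files(path):
        print(f"noting {path} in recent files")

    emit("file_saved", "/tmp/notes.txt")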
Who actually likes DSLs and why? You introduce the complexity of a lexer and parser in the form of an abstraction on top of an API. What does that buy you?
In my experience, it just makes workflows more difficult since you now need to create custom editors, linters, formatters, testing frameworks, etc. You need a whole new ecosystem before you can truly become productive on top of learning an entirely new syntax.
MintData looks cool. What does the imperative layer that drives the declarative layer look like? It'd be nice if this were a platform where engineers could sell the imperative layers.
We've been using our own "Visual SDK" to build large parts of MintData for the past few years.
It's 100% written in TypeScript/React, using decorators (cough: annotations!) and allows us to:
1) add new spreadsheet functions in TypeScript
2) wrap any React component
3) get the auto-magic property panel UI "for free" -- it instantly talks to/from the spreadsheet
For our cloud/backend imperative layer:
4) we use our own real-time stream processor (think: Apache Storm but minus the Clojure stacktraces)
5) the SDK lets us quickly write flow blocks in Java using annotations & connect to all sorts of enterprise-y systems (SQL/NoSQL stores, Queues, File Systems, etc)
@slifin -- in terms of engineers writing & selling those layers, we're open to it -- ping me if you have ideas & we can give you access to the docs & SDK.
--
[1] I'd put a link here, but the flamethrowers have come out on this thread :)
Looks pretty cool; IMHO it's a good idea. (I once participated as a consultant / dev on a project to convert a Byzantine set of Excel spreadsheets into a deployable application built on web technologies, and this might have been a saner approach.)
Before the others come with torches, I’ll point out we turned on public/free access recently, would be happy to hear your thoughts/feedback on the MintData approach.
The user should have the power. Also the interface should be as powerful as possible. One such browser that embraces this philosophy is Next https://github.com/atlas-engineer/next. Source: biased author
Very cool! I'll show this to our team [1]; we have a few keyboard ninjas & I think they'll absolutely love it.
--
[1] team of https://mintdata.com, where we've been known to appreciate a fine keyboard shortcut or 12.
[2] partly an inside joke to our team who reads HN, since we recently shipped keyboard shortcuts for our top menu (Google Sheets, we borrowed what we could to ease the pain :D )
This is why I believe it to be an "unpopular opinion" these days as most computing platforms move towards being more locked down and less extensible; for security or complexity reasons these kinds of things seem to get little love or are even axed completely. Usually, the hard part is not security per se, but identifying user intent. You want to come up with a process that ensures that the user who enables potentially dangerous interfaces knows what they are doing. Preventing the user from doing this entirely is an easy but fairly disappointing "solution" to this problem.
> Preventing the user from doing this entirely is an easy but fairly disappointing "solution" to this problem.
For sure. And if individuals are happy with trading some security for some functionality/flexibility in their software, I'll support their right to make that choice.
It gets more complex when the people affected by that choice aren't the people making the choice. Which is more likely in the case of network programs.
> Usually, the hard part is not security per se, but identifying user intent. You want to come up with a process that ensures that the user who enables potentially dangerous interfaces knows what they are doing.
Indeed. I'd add identifying developer intent to that as well. What functionality am I trying to enable vs. what else am I actually enabling? Coming up with that process is the herculean task there.
Usually in the case of network programs you want users to have a way to control their view of the shared backend/database/whatever, which means providing things like REST APIs and hackable clients.
"Those with nefarious purposes find them extremely useful too"
While I don't disagree with this point, it has all too often been used as an argument against open software, and ultimately leads to more closed and less secure software.
It’s challenging to do, but there is something to be said for having features you don’t advertise.
You don’t have to make them work for everyone. They don’t have to be safe in all possible scenarios. But they are there for Support to offer to users who are caught in a bind and need a solution.
The about: box is a bit like this. It’s there, if you really need it, but they aren’t going to flood their menus for all the 2% cases.
It exists for software that considers itself a platform, in the form of plugins, themes, and such. And it's a very different, much harder software play in terms of support and reputation management.
I think it's a pipe dream. Non-experts making changes to a system an expert wrote will yield either 1) trivial little changes, such as changing the color of an item, or 2) disasters where it's better to throw away the changes and start over doing it "the right way".
Just look at automobiles: they are malleable indeed. We've all probably seen pics of modifications of all types, from the hillbilly-type VW Bug/truck combination to the amazing Cobra replica that few can tell from the real thing. In all cases, the craftsmanship level (and budget) of the person doing the modification directly affects the quality of the result. And how many cars are there whose modifications simply failed and the car never hit the road again?
The same goes for software. Give Annie in accounting the ability to modify the inventory system and you will have a disaster on your hands, especially if she does not know much about the architecture of the system and/or she's not a subject matter expert on inventory methods and procedures. In that case I'd much rather trust my company to a hard-to-change system than one where everyone can mess with it and break it beyond repair.
I think the false premise here is that non-experts can build on the work done by experts. And I'm talking about both experts in programming who built the system in the first place (i.e., adding shitty architecture on top of good architecture), and experts in the domain the system is solving. If you get a smart young developer who is a solid coder to make changes to the aforementioned inventory system, he will probably break the system (at least in the edge cases), albeit with beautiful, readable code.
I just don't see how it's possible unless the ability to expand the system is severely limited (ie, not a "malleable system").
> I think the false premise here is that non-experts can build on the work done by experts.
I'll take a different side of that argument: it is up to the expert to establish the bounds and rules of change to a system. How a system naturally grows from people with less context is one of the architectural traits of the system that can be traded on.
I have often seen systems which can ONLY be worked on by the so-called "expert" who wrote them. Even when you hire a new "expert"-level person to work on it, it takes forever and success is not guaranteed.
On the other end is a system that actively works against doing things the wrong way. It has an established core idea with clear boundaries between the roles of interlocking parts. For the intended use case of the system, it is as extensible as it needs to be, while still remaining within the bounds of that use case. It does not allow a vacuum cleaner to become a microwave oven, but it does let you add custom attachments, modify and swap out drive systems, adjust suction power, and change body style and hoses for the vacuum. Those can absolutely be done by non-experts, and the system is still very malleable.
Think of all the code on github for libraries people use every day that is added by non-experts, or hobbyists, or people who are new.
Emacs makes a pretty good example from my perspective, but if you haven't used it I suppose it makes little sense. Perhaps AutoCAD from what I understand, but I don't have enough experience with extending it to say for sure.
It's not that you have to extend it yourself, the point is to make it possible.
Annie in accounting (can't believe you got away clean with that one :) might ask someone else in the company to help her add a shortcut or whatever, or the company might hire an outside consultant.
There's an entire gradient from non-technical to expert that's effectively shut out from modifying most software.
Now, I'm as guilty as anyone; getting a client to pay for an advantage whose consequences they can't imagine doesn't happen much. But I can certainly see the value.
Jokes aside, imagine if Annie in accounting had the ability to define her own malleable tools, on top of a solid/robust system for centrally managing data (versioned, backed up, secured via IAM, etc.)?
Or will we really be stuck writing procedural code for the next 20, 30, 50 years?
--
[1] Caveat: chief hillbilly of https://mintdata.com here, where we think you really can have your cake (create expressive tools) and eat it too (centrally control, manage, and version your data)
Sure, but that's not "modifying the original system" any more. That's taking the original system (original abstraction) as a fixed/static foundation, and then expressing things in terms of it.
Which, I mean, if the tooling you've made is Turing-complete (Excel, Unix) then you can certainly say that the person working on top of your system is "programming"; but they're not programming your system. They're not writing plugins that interface with it on the same terms that its components interface with one-another (as you would be if you e.g. wrote your own POSIX shell utilities in C); they're trapped "above" that abstraction, in a sandbox, one from which they can only access the narrow subset of the API surface that you explicitly chose to expose to them.
Let me put it this way: you can get pretty far using e.g. Postgres as a custom data platform, by defining custom functions and types. But at some point, you'll need to write a Postgres extension. There's a big difference between a system that makes it easy for someone who's not a professional programmer to work on top of it (PG functions/types) and a system that makes it easy for someone who's not a professional programmer to extend it (PG extensions.) I've not yet seen a constructive proof that the latter is even possible.
I'll give you an example that mid-way [1] disproves the above.
Think of a typical "product tour" in a SaaS product. We're building the MintData product tour, in MintData itself. We then "publish" this "product tour application", and include it with the original blank design that a user gets when they first enter MintData.
So, in a way, our MintData onboarding is built in/on MintData itself, a bit like how you can have a bootstrapping compiler (one that compiles itself).
So, is the above us "modifying the original system"?
I think to some extent yes, although we have special spreadsheet functions that help jump the gap between Onboarding Application and Blank User Design (akin to the Postgres extensions [2] above).
So I think it is possible to build a system that allows you to then customize the system's own behavior (earlier versions of MintData could not build the onboarding experience, similarly to how the first compiler has to be built in a lower level language).
Genuinely curious -- derefr, do you agree or disagree with the above?
[2] As a person who was held at Grade-point and forced to write PostgreSQL C code to "modify the original [Postgres] system", I can only say it's an acquired taste :) Even Prof Franklin at UC Berkeley I think would back me up on this :D :D
Bootstrapping means only one thing: The entirety of a system is implemented within itself. This feat is binary: Only 100% means anything.
Your example is also very clearly not related in any way to "modifying the original system". Instead, it just means that you yourself have used project A (MintData) as a customer to make a distinct project B (an onboarding system that happens to be for MintData itself).
Modification is not assembling provided building blocks. Rather, it is changing the building blocks themselves. Say, you provide a HTTP JSON API, and I want to add support for websockets. Maybe I want to add entirely new data ingress/egress facilities. Or perhaps add a JavaScript interpreter to the backend to allow for rules and cron jobs written in JavaScript.
In your case, I see an ERP system which can be configured by users to do various things. From the perspective of the user, the original system is immutable, and they are no more able to change your product as a user than they are able to change Squarespace, GitHub, or for that matter Netflix and Spotify.
If you have navigated PostgreSQL C, you must surely understand the difference between configuration and modification.
---
General marketing trick: Use the product name less. It makes it stand out more to put it in one, well-placed spot. Use it too much (like above), and it instead becomes noise.
I think I get what you're talking about. Some systems are designed in such a way where the "platform" is formed in two layers: a low level, which exposes a set of primitives; and then a set of abstracting core libraries, implemented in terms of those primitives. Users are expected, idiomatically, to create business logic by making calls into the core libraries; but they're also free to call on the low-level primitives directly. In such a system, the "userland" sits directly on the primitives, with the core libraries as a sibling.
This is the pattern adopted by some, but not all, "runtimes." For example, Erlang has the low-level BEAM VM, and then has the Erlang "kernel" implemented as BEAM bytecode, rather than as native emulator support code. For another example, MOOs (object-oriented MUDs, e.g. LambdaMOO) only had the barest object-graph infrastructure specified in native code; everything else about the foundations of a MOO was defined in terms of objects and classes held in the MOO's state database.
In such systems, you have a sort of "intermediate" level of access to the native API surface, greater than the kind you have from the userland of a traditional VM or OS kernel.
Still, this "intermediate" level of access still doesn't allow you to break through the abstraction layer that the low-level primitives are founded upon. If there are any "complex" primitives implemented entirely natively (e.g. Erlang's `term_to_binary` function), then you can't "break into" that primitive to extend it unless the native runtime has been extended with an explicit "upcall" hook back into the VM userland.
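To make the layering concrete in miniature (a Python illustration, not any particular runtime): the "core library" is written against the same primitives userland gets, so it sits beside user code rather than below it -- but neither can reach beneath the primitives.

    _store = {}

    # --- primitives: imagine these as the native/VM layer ---
    def prim_read(key):
        return _store.get(key)

    def prim_write(key, value):
        _store[key] = value

    # --- "core library": built purely from primitives, a sibling of userland ---
    def increment(key):
        prim_write(key, (prim_read(key) or 0) + 1)

    # userland may call the core library, bypass it, or replace it wholesale,
    # but it still cannot break through prim_read/prim_write themselves
    increment("visits")
    prim_write("visits", 100)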
You've linked to your site on at least 4 comments in this thread, which is really taking away from the comment experience. I am glad you're enthusiastic, but make sure the disclaimers don't take away from your (otherwise great!) comments.
I strongly disagree, and I think your example actually supports this position. Most people don't modify their cars at all, but the people who do find great pleasure in it. We see this with software, too. I think that opposing people who would like to mess with the things they use is limiting and fundamentally somewhat disappointing. It's essentially impossible to make software that caters to everyone's needs, but the best you can do is make something that works for most people out of the box and let the few people who want to tweak things to their liking do so themselves. (To extend your analogy: I don't know the first thing about cars, but I have very few complaints with mine. If I didn't like something about it, locking me out from changing it would prevent me from taking it to my car-enthusiast friend.)
But this is a lot of effort to please a small part of the audience. And if you don't get it exactly right, it'll expose your entire customer base to security concerns.
I'm old enough to remember the rounds of Word viruses spread by email.
90% of the users of a piece of software won't modify it even if they could. 90% of those who modify it will create crap. 90% of those who write something useful will be benign. But the remaining 0.1% will write something evil that means everyone suffers. And suddenly it's your fault for allowing this, and your problem to fix.
>> from the hillbilly-type VW Bug/truck combination to the amazing Cobra replica that few can tell from the real thing.
lol! I'm a hillbilly living in the Ozarks who used to make custom cars for movie stars in Hollywood and race cars for guys like Larry Fullerton and Tony Nancy.
Your comparison to physical systems is spot on. There are very few physical devices that are malleable in this sense. We'd be better off advocating for more open source, better development tools, and new programming paradigms or languages designed to help novices learn. However, we are overdue for a revolution in how computers are programmed. Editing text on a screen is straightforward and fairly universal but we should be trying to take advantage of cheaper and more widely available user interface devices to improve how we translate thoughts into programs.
Yet automobiles are malleable :) And what about farmer's equipment? The farmers, for example, definitely want it malleable (see e.g. ["Nebraska farmers vote overwhelmingly for Right to Repair"][1]).
I would say that the more generalized software is, the larger the audience, the more likely there will be Strong Boundaries between the Practical and the Theoretical.
It's mostly a pipe dream but it's not an absolute zero.
See how Nintendo crafts its games: they layer complexity through symmetry in early levels.
I agree that most users are not in a place to solve their problems, especially with the state of software as it is (too many ad-hoc rules). But when presented with good LEGO bricks, I believe people can progress.
And yet there are tens of thousands of userscripts, even without any "malleability" intended by website authors, just because the platform allows some introspection and extension:
I must admit I laughed out loud reading this. The real world is complex. The problems software solve in the real world are complex. Economics, sales, politics, personalities, random world events, competition etc. has always and will always be a force pushing software development in non-ideal directions. In my experience, software that is twice as malleable takes four times as long to write, making you dog food for the competition. A better choice (again from experience) is to think like an Engineer: find the least cost good enough solution to the problem. Then implement it.
Agreed. The only place where software that the OP wants to create can reside is in the hobbyist community or free, open source community. In the real world this isn’t cost effective or viable.
> The only place where software that the OP wants to create can reside is in the hobbyist community or free, open source community.
That was my instinctive reaction as well. Existing commercial models would not support this model of building software. I'm not sure most existing FOSS models would either.
Whether this is a problem is a different question. Maybe the kind of software that is interesting to a large enough group of people for this idea to be relevant should be created and developed by the community of its users, collaboratively, over a long period of time, without financial gain being their main incentive to contribute.
I think what the community behind this site is looking for is probably a form of social change more than technological change. There are lots of people out there who have the skills to contribute to software, and there are lots more people who might learn those skills given sufficient accessibility of the knowledge and awareness of the possibilities. Right now, it's just too difficult for the potential contributors to make actual contributions. I can think of many factors that play a part on that, but none that seems to be inherently impossible to overcome.
I largely agree with you here, and this entire thread has been an interesting read for me.
I’m actually writing what is turning out to be a research paper (it started as a blog post) about software development today.
What the OP is looking for _is_ a change, but it’s not just a change in society. It’s also a change in how software development is viewed as a discipline and a career.
Very roughly put: software engineering is not a respected discipline, not to the level of other engineering disciplines. To create what the OP is looking for, the idea and viewpoint of software engineering needs to be changed as a whole.
To go into it more would be too much in a single post, and it’s part of the reason why I’m going down the rabbit hole with my paper.
I encourage you to share your paper when it's finished. This is an interesting discussion that I'd like to see more people joining.
For what it's worth, I don't think "software engineering" should be accorded the same respect as true engineering disciplines. What we do has little resemblance to real engineering in terms of either the systematic, tried and tested approach or the ethical foundation. It is something I think we should aspire to, but we are a long way from reaching that standard today.
That said, I don't think everyone needs to become a full-fledged software engineer to reach the goals set out in the linked piece. If we do ever achieve something like that, I think it will be because the core of the systems is carefully constructed by experts, and because it provides methods for "malleability" also carefully constructed by experts so that non-experts can use them. That recipe has proved successful in the past, in a wide range of applications from game mods to Excel macros, and I see no reason in principle that it couldn't be used much more widely for other types of software if there is some incentive for the experts to build the core and the extensibility framework around it.
> Agreed. The only place where software that the OP wants to create can reside is in the hobbyist community or free, open source community. In the real world this isn’t cost effective or viable.
You're talking about a world, though, where only large enterprises are served well by tailored software. Malleable systems don't make sense in large enterprises and global-scale solutions.
But what about all the spaces currently underserved by current "best practices"? Smaller organizations, closer to the ground, where things are inherently less scalable.
Think of all the places it makes sense to use WordPress, but where information/workflow tools beyond having a website are needed.
My New Years resolution this year was to stop taking jobs at places where I spend so much of my time trying to wrangle my coworkers toward my model of what constitutes sane processes. Worked with a lot of people who think like OP and I am just done.
Also I think you’re being very generous with that 2:4 ratio. Malleability isn’t a multiplier, it’s an exponent. The number of interactions is factorial, and the overhead for managing all of that state is more complexity on top.
Ok... I am a self-taught "app maker". Not a CS grad. Not a "software engineer".
I use tools that others here create to make apps. What they do is way over my pay grade, but I can use well-written and documented APIs, especially when some example code is supplied.
I think I'm closer to the target this applies to, compared to the folks who create something like jQuery or PouchDB.js. I'm a laborer working in the fields with the tools (APIs) they create and provide.
What I do is more akin to a craftsman than a "software engineer". I'm sure some of what I've made is pretty crappy from an engineering viewpoint. I struggle sometimes to get stuff to work. But I also make software that some people love using.
It's a pretty low bar to learn how to use HTML, CSS, and even tools like jQuery, Bootstrap, and PouchDB.js. What you can make with them can be very useful, and very specific. And it can be fast and easy to make too.
I guess what I'm trying to say is the bar is now low enough that we can start teaching others how to make apps as a trade skill.
First, big props for shipping stuff that people use.
Second, I've [1] often compared 90% of what we do in software development to being highly trained baristas. Granted, this buys me little love from engineers & even less from Starbucks.
To the point -- I think it's time we rise up & start to use more powerful tooling.
Tooling that:
1) Lowers the barrier for who can author software (really, web-based interfaces that help us get stuff done, the way we want to accomplish a task)
2) Doesn't introduce more chaos (disparate data spread across 10 SaaS products, death by 1,000 spreadsheets by email or in Google drive, no centralized/secure/managed storage, etc)
3) Emphasizes that a superb user experience is table stakes for such tooling.
Thoughts?
--
[1] founder of https://mintdata.com here, so just a tad biased, take the above with a few pounds/kilograms of salt.
[2] oblib -- DM me, we're happy to give you free access if you'd like -- the above wording just warms our collective hearts.
I'm going to disagree here, though sort of understand what you and the original poster are saying.
In short, I've seen the mess that 'unskilled' people make using complex tools. Tools such as databases (my area) are presented as being easy to use by Microsoft, who make a lot of effort to make them easy to use. Too easy. Not because I want to keep people out, far from it, but because if they don't know what they are doing they get only so far, then things go bad and they've no idea why. Drag/drop, point/click only goes so far.
I guess no complex tool can be (or should be?) used without concomitant levels of training. It's not an argument for code gurus to make themselves a comfortable walled garden to preside over and keep others out; it's an argument that tools should come with training, always.
The problem these days is that hirers just want everything (blah blah full stack blah) and don't understand the cost of getting it wrong, because it works -- up to a point. MSSQL, Spark, Kafka, down to failure to understand how CPUs work: they all get treated as black boxes, and that's fine up to a point. Then things break or don't scale. Outside my sphere, I see so many websites that have no basic understanding of usability, or standards, or accessibility, or security, built by web devs that barely understand HTTP.
If it's plain line-of-business, unimaginative gruntwork that keeps a business alive with spreadsheets etc., then that's what's needed and basic understanding is sufficient. I've done plenty of jobs like that; they keep the economy going. But if you want heftier dev work, I don't think that will suffice.
I guess that makes me sound a snob. Not intended that way, just saying complex tools may not be usable to their full capacity without understanding them. I may be wrong too.
But... so am I. The gap between us is education. What I propose is that one could take a class to learn how to make apps, as opposed to a CS class. I don't think a GUI app maker is the right approach to lowering the bar to custom apps. I think learning how to use APIs is.
HTML is a great way to start. Then CSS. Then basic Javascript. From there you start bringing in tools like Bootstrap, jQuery, and PouchDB.js. From there you teach how to integrate most any JS toolset, like Mustache.js and Accounting.js. At that point things like React and Angular are already accessible.
When you focus on the tools and techniques to build apps it becomes more accessible and, yes, there will be some really crappy apps made, but there will also be those who do it very well.
Now... if you add into that mix turnkey open-source apps that already do something well, which anyone can twiddle with, it can really make a difference for all kinds of businesses.
Big businesses already have access to that kind of specialization of software but we're close right now to being able to lower the bar for medium and even small businesses to have affordable access to tailored apps. Local shops that make websites could offer those services to local businesses.
So I see it as a way to raise all the boats in the water. But yeah, there will be some trash and flotsam there too. Same as it ever was, really. Just lowering the bar of entry.
I don't know; if a system exposed an EAVT database like Datomic, where you don't need to normalise tables, you don't have to index manually, and the N+1 query problem isn't an issue, then I think a non-expert could get a lot further with the right UI than someone given a traditional SQL/NoSQL database.
It's easy to make things complex and it's hard to make things simple, but if we can simplify down to a data model then we can do declarative programming and make systems more tenable.
Thinking in declarative data models is hard if you're not used to doing it. I'd recommend looking at the following libraries in Clojure: Garden, Hiccup, HoneySQL; or Drupal's form management; or kind of ReactJS, if you stretch your idea of what data is. They can all handle complex but focused tasks.
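For anyone who hasn't seen the model, here is a toy EAVT store (plain Python, nothing Datomic-specific, names invented for illustration):

    from collections import defaultdict

    class EAVT:
        def __init__(self):
            self.facts = []                   # (entity, attribute, value, tx)
            self.by_attr = defaultdict(list)  # index kept by the store itself
            self.tx = 0

        def assert_fact(self, e, a, v):
            self.tx += 1
            fact = (e, a, v, self.tx)
            self.facts.append(fact)           # history is retained, not updated
            self.by_attr[a].append(fact)

        def q(self, attribute, value):
            # declarative-ish lookup: the caller never manages indices or joins
            return [e for (e, a, v, _) in self.by_attr[attribute] if v == value]

    db = EAVT()
    db.assert_fact("person/1", "person/email", "annie@example.com")
    print(db.q("person/email", "annie@example.com"))  # -> ['person/1']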
I have to be sceptical about what you say, so understand that this comes from a lack of knowledge of what you're getting at (but I do know my SQL very well, and I do know declarative programming, of which SQL is an instance).
> EAVT database like Datomic where you don't need to normalise tables
That is a strange comment to me. Normalisation is not about databases but about data management. The value of normalisation doesn't go away just because you're using a different DB, because it's solely about relationships in the data.
> and you don't have to index manually
Woo, don't know what to say to that. Could you provide some pointers to both of these (normalisation and indexing) and I'll do some reading, thanks. I find that very hard to believe (no offence!)
Agreed about the 'hard to make things simple' but
> if we can simplify to a data model then we can do declarative programming
That data model is arguably normalisation. SQL is declarative, and I've seen the results of ignorance applied to that, so I can't buy that just using SQL protects you much (although it does to an extent, I suppose).
Declarativeness makes things easier on the programmer up to a point. When things break, there's not much you can do - and things can break easily in SQL. Writing good SQL, like any program, takes skill.
Also I recall many years ago finding out the hard way that you have to understand prolog's (relatively simple) search strategy to get decent results from it. Just 'declaring' your relationship between input and output didn't work at all well (I don't remember the details but I remember the lesson I learned).
And it's sort of an insult to training, since there is very little actual formal training. Over 95% of the processes I use in developing software and systems are ones I've learned on the job, not learned while learning to be on the job.
An apprentice system would work extremely well with software development.
I learned to develop software [1] before the interwebs, where we would just kind-of hack things together in C/Unix. Manuals were our only (& best!) friend.
I then went to uni for a CS degree, and they had a very "holy grail" attitude about the whole affair.
I agree that:
1) an apprentice system would work better
2) we have got to get more powerful tooling out there, into people's hands
[2] I'm still reminded of a world where the bridge between creating & using software was much smaller (not to mention, user interfaces were much snappier!) Here's a virtual toast to hoping we can one day come back to that reality.
I'd agree with this, but I'll also mention that my undergrad degree was actually phenomenally helpful. One or two classes were legitimately beneficial for what they said on the tin (i.e., taught me useful concepts more quickly than I could have learned them on my own, or taught me concepts I never knew I needed, such as agile methodologies), and the rest presented problems in a space I could comparatively safely 'fail' in, that I had to figure out how to solve on my own, both technical and people based ones.
But that said, I also recognize that that may be the exception, and that on the job I might have learned the same things in less time.
That's a pretty sweet-looking tool set you've put together. That looks like something small businesses would love to have someone on board who knows how to use it.
I'm not a spreadsheet power user. I've not really done much at all with them, but I'd love to tinker with MintData!
Well, I think this is one of the "gotchas" in an idea like this. The malleable system you're talking about is only "a bit malleable": you can take its elements and add a bit more.
This is something like a "producer-consumer" model. Building a small or medium-sized system from such standard elements is now reasonably simple, so the original component set seems malleable. But once you've built your medium-sized system, changing it actually becomes hard; the resulting product is no longer malleable.
Interesting to put this on the front page next to the "immutable infrastructure for mutable systems" one.
The thing is, these drives pull in opposite directions. If it's easy to change, it's easy to break, and changes drift entropically downwards, as it's always easiest to make the quick and dirty change.
Malleable systems are great for expert users, but can easily be turned into a ball of mud by a non-expert. Hence the popularity of Chromebooks.
Security also enters into consideration. Anything that can be programmed can be attacked. Any program that reads external data is an attack surface.
Look at the history of e.g. web browser extensions, and how they have become more and more locked down to prevent abuse by bad actors.
On the other hand, malleable systems are responsible for enormous increases in the productivity of computer users. We may all hate kludgy Excel spreadsheets with a nightmare of tangled, hacky VBA code, but you see that sort of thing in a lot of businesses' critical workflows because that kind of malleability by "untrained" users was invaluable in the earlier life of the business.
It's still invaluable in highly locked-down office environments. It lets users automate things that they wouldn't be allowed to automate otherwise (needing a full development environment or the ability to run arbitrary executables or access to Powershell).
I can see immutable infrastructure and this concept working in a copy-on-write model. For instance, if a service is up and running, a modification can consist of duplicating the infrastructure so you have your own copy and then making the modifications. It localizes the change, and in theory keeps the original unmutated.
I have no idea how this would work in practice. This manifesto seems to go against the grain in other ways; I'm not sure how it can coexist with the economic model of SaaS that has emerged. Also, as others have said, the security and UX implications are big concerns that threaten the feasibility of this idea.
And in the intersection of mutability and immutability you have fantastic experiences like live editing and time travel. /Where/ the interface for mutability is located is essential.
There are strong engineering impetuses for control, for provability, for ahead-of-time decisions, for immutability. Engineers get woken up when things go wrong, they get blamed when things go wrong, and they have to deal with other engineers' code doing bad things with their code. So the software world has a wide variety of strategies, tools, & outlooks that all try to build safety and certainty, that restrict & constrain systems to keep anything unexpected from happening.
I feel, though, that characterizing malleable systems as being for expert users alone does them a great injury. At present this may match expectations, but I don't think it's fair to assess how or what computer users want when, at the moment, they don't really have the options. There are too few malleable systems out there to judge. What general malleable systems we do have require advanced expertise to use: AppleScript, DCOP (Desktop COmmunication Protocol), DBus, and WebExtensions all demand at least some programming-style skills. But I think this expertise barrier is owing to a broad lack of investment in building malleable software. Most software strives to give the user a specified & specific experience, not to be a malleable system folks can get the experience they want out of. I don't think we know how much latent pull users have, how strong the tension waiting to be released from engineers' certainty is, because I don't think there are even many testbeds, much less a broader societal view of systems flexibility, systems adaptability, of softer software.
To me, I just think it's clear that, whatever valid concerns we do have, whatever justification we have for building endless arrays of rigid systems, it seems like it's not worth it anymore. We keep making more and more software, the profusion of software applications is a torrent of similar-but-different applications, focusing on shuffling this or that kind of data specifically, yet the generality, the inter-connectedness of the pieces seems so weak, and it seems like we've shut users out from having any creative capabilities & control of their own.
I think WebExtensions are a very interesting case. The killer aspect, to me, is that the extensions work invisibly. We can't rely on everyone becoming an expert extension operator, but I do think that some reporting mechanisms on what extensions do would radically reshape how the community reviews & socializes the extensions they use. In many ways, the extensions are a means of exercising the malleability of hypertext, but they themselves are rigid & unmalleable & opaque. A part of being malleable -- a precondition -- is making yourself visible, providing viewability into what is happening, such that folks can reshape & redefine & redirect those actions that you, the software, are enacting. So the circle needs to grow more, is what it seems to me: rather than lock down extensions, rather than rely on authoritarian centralized powers to moderate extensions, my plea would be to make extensions malleable too, to make their workings visible too, and to try to let consciousness & human adaptability start to take hold in this realm.
You're absolutely correct that there are good reasons why software is so rigid & inflexible & opaque. Security has been nightmarishly cumbersome to the whole industry for the last decade. And I think malleability does indeed open up a lot more surfaces, endlessly opens surfaces. But at some point, we have to develop systems that have a co-relation with their operators, that respect the operators, that let them see what is happening & let them work their own systems. Continuing to try to push computers forwards entirely on the assumption that the user is worthless, does not know anything, and can not understand what is happening has cornered us, has cut off the chance to try & explore other options.
> Every time you provide an option, you’re asking the user to make a decision.
The best of both worlds is when you give users an option and then pick a sensible default for it. Some of the best options are the ones that 90% of people don't even know about, but that the 10% who do can find when they need them. I can't tell you how many hidden "defaults" commands I have run that are not even exposed in the UI but make my life much, much nicer.
You know, there are plenty of drawbacks on the act of offering options to the user, but the article touches on none of them.
It has really the wrong focus. It talks about pop-ups and distracting messages; it talks about surprising interfaces; it talks about actions that are hard to undo. But it doesn't talk about any problem that options cause by themselves.
> It’s true that the first time they realized you could completely remap the keyboard in Word, they changed everything around to be more to their liking, but as soon as they upgraded to Windows 95 those settings got lost, and they weren’t the same at work, and eventually they just stopped reconfiguring things.
Who is "we"? That didn't describe me in 2000 and describes me even less today.
Being able to move the taskbar is bad because his friend shot herself in the foot this one time, but "we all love Winamp skins" -- because the person whose Winamp turned unusable because they double-clicked the wrong file just didn't happen to be friends with Joel. Sure, don't make software for me, but don't try to rationalize it with such gymnastics.
Why not? Maybe I have a lot of monitors and want to dedicate a small one to a taskbar where I see the full pathname of each editor that has a file open.
People who just randomly click things without even realizing they're clicking may just as well delete random files or worse; the taskbar taking up half the screen is about the most harmless consequence I can think of. If it's really such a worry, add a dialog that asks the user "are you sure you want this?" for anything "weird", with a checkbox to "never ask again". If people "accidentally" click that too, that is really their problem.
Knew a girl who complained that her computer was too slow. As it turns out, she had added every add-on toolbar to Internet Explorer she ever came across. Her usable window was about two inches tall. Yet, it was the speed she complained about, not the utterly unusable web browser window.
I do this. Just yesterday, I changed a large piece of software.
I didn't have any source code for it. I think it may have been written in C++, but that didn't matter. I had the *.exe file, which was all I needed.
I used IDA Pro to study the software. You could use Ghidra, Binary Ninja, or Hopper Disassembler. They are pretty similar. I used xxd and joe to modify the software. You could use dd and echo, or a proper hex editor.
I found the function that was annoying me, practically causing the software to be malware. I inserted the bytes 33 DB (a XOR opcode: xor ebx, ebx, which zeroes the register) near the end of a function, then removed the bytes CC CC (alignment padding) from right after the function.
I seem to have hit most of the points in that article. The big miss was the unreasonable dream of "all experience levels". That just won't ever be reality. Mastery of a disassembler is not a beginner skill.
Software engineering is about solving real problems. The only time you need to write malleable software is when the problem needs a malleable solution.
Pushing for everyone to write malleable software sounds like telling people to forget about the particular problems that they are solving, and telling them to solve a general problem instead. Now you’re writing a complicated system for general use, rather than a simple solution for a well-defined problem. Your solution will now take longer to develop, be harder to understand and MUCH harder to test properly (way more use cases).
Obviously some problems need general solutions. But if every problem needed a general solution nobody would get anything done.
> The only time you need to write malleable software is when the problem needs a malleable solution.
The point is that it’s very difficult to account for this beforehand, especially as the author of the software. (“What do you mean we should make this an option? 5 works for me, and I’ll change it in the code if it doesn’t!”) When designing software, it’s always nice to give a thought to “if I were using this and I wanted to change it, how would I do so?” and seeing if there’s anything you can do to improve that.
Complexity has to go somewhere.
Because systems are just piles of abstractions and conventions, "easy to change" is very relative. A compromise will always be required. A simplification here means pushing complexity somewhere else.
What I think is a practical way to see it: aim for a good match between your audience's mental models and the system's conventions.
> A simplification here means pushing complexity somewhere else.
And it is a good idea to concentrate complexity on a few small pieces of software that only experts are contributing to. Think your typical web applications and databases: your app doesn't need to think about how data is laid out on disk and how on-disk indices are implemented; all of those complexities are pushed into the database. It's a good compromise.
In my experience, this line of thinking tends to lead to inefficient and hard-to-manage large projects based on languages which are convenient on a small scale.
I mean, what percentage of users are adept enough to change a program they use, and then what percentage of them would even want to go through the trouble of doing it? Moreover, why should we go through all the hoops needed to encourage novices/non-programmers to modify things they don't really understand anyway?
This seems like a solution/problem in search of a problem.
If you want to talk about text editors in that post, Emacs is wildly, almost obscenely configurable (I'm currently using Emacs as my Linux WM[0]). Vim has its own scripting language that you can use to write custom bindings and configs, and they can get similarly complicated. Sublime has a pretty good Python API. Even Nano can have its syntax highlighting extended.
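For instance, the canonical starter Sublime plugin is a few lines of Python (this mirrors the example in Sublime's own plugin documentation):

    import sublime
    import sublime_plugin

    class HelloWorldCommand(sublime_plugin.TextCommand):
        # invoked from other code or a keybinding as view.run_command("hello_world")
        def run(self, edit):
            self.view.insert(edit, 0, "Hello, World!")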
The extensibility of those systems is why they've stuck around. The composability of Vim's keybindings is why people keep reimplementing Vim modes in different pieces of software.
If you don't like Electron, fine, but what does that have to do with building malleable systems? Can you name me any popular native text editor that's used for serious work and that doesn't support plugins?
> Moreover, why should we go through all the hoops needed to encourage novice/non-programmers to modify things they don't really understand anyway[?]
This is like asking why a programming language should have power-user features in it. I mean, by that logic why would anyone add pointers to a language like C? Pointers are hard, and most programmers aren't very good with them. Why go through the hoops needed to encourage programmers to use a memory model that most novices don't understand very well?
If you design a system or program only for beginners and no one else, then only beginners will use it. Once they've matured, they'll leave so that they can find something else that meets their evolving needs. Good software design considers the entire life-cycle of how a user will interact with a program as they develop and learn new features.
Many users will never get to the point where they'll want to program extensions themselves, but even they will often benefit from having extensions written for them by the surrounding community.
I can think of some examples of software that's like this now. Like Excel, BI tools, workflow tools etc.
They don't cover all the requirements outlined in the article (like the bit about it being fun & easy to use), but we're a long way from this being 'impractical' or a 'pipe dream'.
We are better at making software that is easy to change, than at making software that is easy to use. Look at most FOSS. :)
When programmers are left to control the requirements (no product managers anywhere in sight), the program structure will still be reasonably maintainable and thus easy to change, as a matter of pride before other programmers, but the thing will be hard to use for non-programmers.
"Oh, just give it --help and you will get the expected input language in crystal-clear EBNF ..."
Software is malleable. Who is attributed with that famous "software is infinitely malleable" quote, in fact? Fred Brooks?
Of course, Brooks believed that a program was "incompletely delivered" without source:
"The dependence upon others has a particular case that is especially painful for the system programmer. He depends upon other people's programs. These are often maldesigned, poorly implemented, incompletely delivered (no source code or test cases), and poorly documented. So he must spend hours studying and fixing things that in an ideal world would be complete, available, and usable."
I tend to design software in what I call a "layer cake" fashion.
Lots of layers, with clearly defined domains, strict APIs, and each with its own project lifecycle.
I also tend to "design with SDKs." I treat each module as an SDK, which means that I pretend I don't know exactly how it will be used, and I use my "S.Q.U.I.D." methodology on it.
I see this and then I think of all the success of strongly opinionated things like Rails. Having one way to do things forces you to focus on what's most important, make choices, and get things done. It keeps you away from premature optimization and analysis paralysis. Yes, you could split things into 100 different pure pluggable microservices, but should that really be where you put your resources at the beginning? To me, engineering, and for that matter everything you do, isn't about doing things the perfect way; it's about doing them the best way you can and moving on to the next problem.
Some things work well together in a functional pipeline— Linux commands that do one thing for instance. EMRs are famously malleable too, for better or worse.
But could you imagine being IT managing 1000 PCs all with slightly different versions of word processors or something like that?
In this sort of world, how do you prevent users from making bad queries and hogging all the database resources or even crashing the database? How do you prevent users from wiping out data? (I get that you could back it up and restore it, but depending on the company, this could cause serious downtime and loss of revenue).
2) Competition in established industries is fierce (finance, real estate, education, construction, etc)
--
To the extent possible, I'd:
a) time out bad queries
b) add more hardware where feasible
--
For the "wiping out data" part, I think:
1) Mark things as "removed", delete when {GDPR,CCPA,etc} requires it
2) The UIs designed can/should have human processes in mind -- that is, things like multiple people involved (async) on what to do with a piece of data (flag it, mark it removed, send it for approval, etc) -- what a lot of companies are calling "workflows" and "workflow management platforms" these days.
3) Backup/restore should be used only as a last resort (DR and the like)
4) Versioning of the UI, Google Sheets-style, is paramount, and integration with CI/CD systems is crucial to having the cake (allowing people to create the tools they want) while also eating it too (making sure IT doesn't drive the above-mentioned dune buggy off a cliff in protest)
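To make (a) and (1) above concrete, here is a minimal sketch using Python and SQLite. The table name, column names, and the one-second budget are all invented for illustration; most real databases expose equivalent knobs (statement timeouts, row-level policies) more directly. It just shows the two mechanisms side by side:

import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, body TEXT, deleted_at REAL)")

# (a) Time out bad queries: SQLite calls this hook every N VM steps;
# returning nonzero aborts the running statement.
deadline = {"t": float("inf")}

def watchdog():
    return 1 if time.monotonic() > deadline["t"] else 0

con.set_progress_handler(watchdog, 10000)

def timed_query(sql, params=(), budget=1.0):
    deadline["t"] = time.monotonic() + budget
    try:
        return con.execute(sql, params).fetchall()
    except sqlite3.OperationalError:
        raise RuntimeError("query exceeded %.1fs and was cancelled" % budget)
    finally:
        deadline["t"] = float("inf")

# (1) Soft delete: mark rows as removed; hard-delete only for compliance.
def remove(record_id):
    con.execute("UPDATE records SET deleted_at = ? WHERE id = ?",
                (time.time(), record_id))

def purge_for_compliance(record_id):
    con.execute("DELETE FROM records WHERE id = ?", (record_id,))

# Normal reads only ever see live rows, so "wiped" data stays recoverable.
print(timed_query("SELECT id, body FROM records WHERE deleted_at IS NULL"))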
We store all the data in the database and the applications query the data, process it, and return results. We actually have multiple applications in various languages accessing the data (php, java, perl...).
It's genetic data, so a lot of tables are shared across our applications (gene symbol lookup, for example).
When I started in this field I worked with Lotus Notes. It's actually a shared NoSQL database on which users can create applications that query it as they see fit. It was clunky, but it could be effective in making custom team tools.
This isn't about building software similar to what we currently have and just making it more end user configurable. This is about rethinking software so that end user malleability makes sense.
What does such software even look like? We want some kind of end user modifiable artifacts, still contained within some constrained substrate. Think Minecraft or Excel - I can give you a spreadsheet to calculate some taxes, but you can then dig in, change the formulae, remix with other sheets, and so on. Most software is not like this - typically you get some pre-canned views and switches, not a canvas of composition.
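As a toy sketch of what such a substrate might look like: a "sheet" whose cells are either plain values or user-editable formula strings, so the logic lives in data the user can reach rather than in compiled-in views. Every name here is invented for illustration, and a real system would want sandboxing rather than a bare eval:

# Toy constrained substrate: cells hold values or "=..." formula strings.
class Sheet:
    def __init__(self):
        self.cells = {}

    def __setitem__(self, name, value):
        self.cells[name] = value

    def __getitem__(self, name):
        v = self.cells[name]
        if isinstance(v, str) and v.startswith("="):
            # Evaluate the formula against the other cells on demand.
            return eval(v[1:], {}, _CellView(self))
        return v

# dict subclass so eval's name lookups fall through to the sheet's cells
class _CellView(dict):
    def __init__(self, sheet):
        self.sheet = sheet
    def __missing__(self, name):
        return self.sheet[name]

taxes = Sheet()
taxes["income"] = 50000
taxes["rate"] = 0.2
taxes["owed"] = "= income * rate"
print(taxes["owed"])    # 10000.0

# The user digs in and remixes the formula: no rebuild, no plugin API.
taxes["owed"] = "= max(0, income - 10000) * rate"
print(taxes["owed"])    # 8000.0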
Complexity can only ever be moved, never removed. This means complex problems are impossible to make easy; you can only shift one part of the problem around, at equal or worse cost to the other parts.
This is best seen in abstractions: they make certain functions much easier to do on a routine basis, but they are less flexible than the functions they represent, and thus you end up creating more and more abstractions and special cases for them, and in the end you have the same or more chaos than you had in the beginning.
This results in what I'd call futile work: trying to fight it without even realising. You can't do something complex with less, or it wasn't that complex in the first place.
To make a truly malleable system is to make a fractal, which is by definition impossible to solve. A problem (fractal) always requires a bounded problem area and a chosen accuracy in order to be solved at that accuracy. There is _always_ an equal or worse tradeoff.
Abstractions in software are just that: we write a piece of code to do what we want to do and then we only supply parameters. Once we want to change what we are doing, we either have to go back to basics and come back with a new piece of code or somehow piggyback the existing code to fit the new scenario. The problem is that after we've written the first piece of code, we consider the thing solved. The first piece + parameters becomes the way of doing things and we are very reluctant to go back to basics again. So we start to pile up abstractions. But we don't have to. We can totally go back to basics and reduce the number of abstractions.
We cannot overcome the inherent complexity of the task (The Law of Requisite Variety) but it's not this complexity that bites us: it's the complexity of added abstractions.
This is something I like about Plan 9 (and I like basically everything about Plan 9). The entire OS's source code (the kernel, userspace, everything) is available at /usr/src and can be compiled and installed with these commands:
cd /
. /sys/lib/rootstub
cd /sys/src
mk install
It takes about 5 minutes. Want to cross-compile for another architecture?
cd /
. /sys/lib/rootstub
cd /sys/src
objtype=arm64 mk install
If you haven't tried Plan 9, you really ought to spend your next spare weekend messing around with 9front.
Unfortunately I think plan9 only remains this way by keeping the group of people who use it small, being elitist and somewhat unwelcoming[0]. That's not to say I wouldn't also recommend spending a weekend messing around.
I was recently having a conversation about the unix philosophy on reddit[1]. As *nix and software built for *nix become more and more mainstream, it becomes harder and harder to keep to the unix philosophy and the philosophy of this post.
The reading material is "unwelcoming" to discourage coming into it with any preconceptions. If you go into it trying to make it do $x, you'll have a bad time and send lots of annoying emails to the maintainers. If you come into it wanting to learn about it and derive an $x from your experience, you'll have a better time.
Plan 9 is lucky - by design it is self-contained and has no dependencies.
One of the hardest parts about building many complex open source projects (particularly on Windows and macOS) is getting their dependency stacks in order.
I'm not Drew DeVault, but I think there's value in taking responsibility for your dependencies. You don't need to understand all of it on day 1, but as you use a dependency and run into issues you should gradually come to understand its internals better, gradually sorting out the features you use from the features you don't. That then gives you the option to rip out stuff you're sure you'll never need, or to rip out stuff you may need in future but that is easy to put back (this is analogous to deleting code without fear because of version control).
One "value" of of libraries is that it allows us programmers to spread out responsibility for breakage. What if we didn't do that? What if we exchanged software in units of working computing stacks, to take the idea to an extreme. It seems unquestionably useful to the world. So it seems worth exploring how close to that ideal we can feasibly get.
89 libraries is a lot. Even the most complex project I work on has just 20. The list you gave seems somewhat reasonable, but I'm willing to bet that the long tail of stuff you didn't mention includes a lot of more trivial dependencies. I don't know your project, but I imagine that you ought to redefine your scope to avoid introducing a need for complex dependencies. Simpler software is more robust, more maintainable, and often gets the job done better - but it's more difficult to make.
In any case, I'm mainly criticising the typical JavaScript, Ruby, etc. project, with their giant pile of node_modules or whatever the case may be. It can easily get out of hand with other projects, too, especially in languages which encourage over-engineering and over-abstracting like Rust or C++ (let me guess: yours is the latter?).
As a trivial example, I spent a couple of hours last night pruning stuff I didn't need from one of my text editor plugins (https://github.com/mhinz/vim-signify). I was able to take it down from 1400 lines to sub-300 lines. And in the process it became a lot clearer precisely what the algorithmic core of the library was. A valuable learning experience.
vim-signify was 800 lines large back in 2014 when I first started using it. How much of the stuff added since mattered to me? Almost nothing. These are the costs we all take on -- death by a thousand cuts -- when we blindly import dependencies.
C++ (with a tiny bit of assembler). Most of our dependencies arise from our choice to use GTK, but there's a fair number that do not.
I wouldn't call any of them trivial dependencies.
Edit: on the order of a dozen of the deps are actually requirements for the build system of libs we depend on (e.g. autotools nonsense), but from a practical perspective, they are dependencies that have to be satisfied if you want to build Ardour completely from scratch (by which we mean, alternately, a Linux system with only libc, libstdc++ and X, or a new macOS install).
Build deps aren't free, but have different constraints and effects.
Anyway, if this is your full dependency tree - with transitive dependencies included - this is fairly reasonable. I reckon Boost doesn't need to be there, and you're probably well past the point of diminishing returns with GTK+ and a UI like yours, but otherwise this seems reasonable to me. You should also consider moving on from autotools by now, I recommend Meson as a replacement.
These kinds of questions - is autotools still a good choice? Do we really need Boost? Is GTK+ doing us any favors? Answering these is the "careful keeping of your dependency graph small and conservative" that I am referring to.
I mentioned autotools as part of the build system of some of our dependencies. We abandoned it ourselves in disgust many years ago (in favor of waf; meson may be in our future).
Boost is critical for us because we aim for compatibility (and for macOS that means buildability) on systems only reaching C++98, meaning that boost fills in large parts of the then-missing std:: namespace (such as shared_ptr<>). We will move to C++11 after our 6.0 release, but boost still provides several other very useful bits of functionality.
GTK+ remains useful even as we slowly move away from it, and there remains the big four (text entry, file browsing, treeviews and menus) that will be a huge effort to reimplement. Our goal is to eventually depend only upon GDK (i.e. a thin window system wrapper), but I doubt that we will ever reach that goal.
1998 was 22 years ago. At some point we've gotta let it go. And I'm not going to argue about Boost, but it'll suffice to say that I passionately hate C++ and Boost is a big part of why. That Ardour is a C++ project has a lot to do with its sprawling complexity, these are often correlated (which is why I guessed C++ from your earlier description). Also, I would definitely recommend looking into Meson instead of waf, having used both.
Anyway, I think we have reached something resembling an understanding. Your dependencies are fairly reasonable, but there's room, and reason, to cull them down a bit.
The question to reconcile in my mind: malleable systems with reproducible builds.
It's very easy to get your hands dirty with the bytes and add various assumptions and dependencies. Most "hack solutions" go in that direction. But reproducing the modification in some larger context changes the assumption: What if the dependency isn't there?
What tends to make the system stable over the long term is the thematic assumptions: Certain broad goals that never shift, even as the specification does. I think that's the factor missing from having malleability itself as a goal. You already have it, that's software.
I have thought about this for a long time. For me the meaningful change means it needs to work at the OS/GUI level.
What people miss in this thread is that there are two kinds of complexity we deal with - one that is fundamental to understanding the field, and a second that is understanding the complex tooling required to do things. The latter has been getting much more complex lately (frequently multiple programming languages and multiple software packages), and a good set of fundamental tools that can operate in a loose environment is absolutely a big thing to aspire to.
A good deal of complex software abstracts over existing complex systems that you run it on. Without understanding the underlying system, you can make guesses about what something is doing, but that isn't always ideal.
For example, package management.
Your package manager might verify the files, check for existing configuration, set up permissions and directories, do dependency resolution, and a lot more.
In practice, it is just "package manager, install something".
How will you modify the package manager without understanding what exactly it is abstracting away from you?
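For a feel of what's being hidden, here is a deliberately toy model of those steps in Python. The repository, package names, and checksum scheme are all made up; a real package manager is vastly more involved, which is exactly the point:

import hashlib

# Fake repository: payload bytes plus metadata. All names are invented.
REPO = {
    "libbar": {"payload": b"libbar-1.0", "depends": []},
    "foo":    {"payload": b"foo-2.3",    "depends": ["libbar"]},
}
# The index pins a checksum per package, as a real package index would.
INDEX = {name: {"sha256": hashlib.sha256(meta["payload"]).hexdigest(),
                "depends": meta["depends"]}
         for name, meta in REPO.items()}

installed = {}

def resolve(name, order=None):
    # Dependency resolution: dependencies come before their dependents.
    order = order if order is not None else []
    for dep in INDEX[name]["depends"]:
        resolve(dep, order)
    if name not in order:
        order.append(name)
    return order

def install(name):
    for pkg in resolve(name):
        if pkg in installed:
            continue                        # already present, skip it
        payload = REPO[pkg]["payload"]      # "download" the artifact
        digest = hashlib.sha256(payload).hexdigest()
        assert digest == INDEX[pkg]["sha256"], "verification failed"
        installed[pkg] = payload            # "unpack" onto the system
        print("installed", pkg)

install("foo")    # -> installed libbar, installed foo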
> Atlast is an attempt to make software component technology and open architecture applications commonplace in the mainstream software market. It is both a software component which can be readily integrated into existing applications, providing them a ready-made macro language and facilities for user extension and customisation and, at the same time, it is a foundation upon which new applications can be built in an open, component-oriented manner.
> Atlast is based upon the FORTH-83 language, but has been extended in many ways and modified to better serve its mission as an embedded toolkit for open, programmable applications. Atlast is implemented in a single file, written in portable C.
I think you would want something like Oberon or Smalltalk (Squeak, Pharo) or Jef Raskin's Cat[1], or even Emacs. (Secretaries were extending Emacs using Lisp in the 70's.) The thought occurs to me that OLPC didn't fail due to Sugar[2], eh?
Been in the industry for 20+ years and never once have I seen anything remotely resembling what this manifesto describes. I've seen several attempts to go in this approximate direction, though, each ending in a disaster because what's described there calls for a drastic, "be-all, end-all" over-design that either doesn't do anything useful at all (example: "zero code"), or is so complicated that nobody can figure out how to use it (example: one of the distributed systems frameworks I worked on at Google - very general, reusable and modular, but you'd need a PhD to even understand it conceptually).
This seems to be written by someone who doesn't really know what they're talking about, but would like to "help" by "writing". Reminds me of this scene from "The Fifth Element": https://www.youtube.com/watch?v=CYHO7FRsQAA
It's akin to looking down into a shaft of a coal mine and suggesting that miners should breathe fresh air and stay above ground, because what they currently do is not good for their health.
There is a bit of a learning curve. I guess Hypercard and Lotus Notes would have been the easiest to learn. Maybe throw Excel in also, since it can be used (or abused) for a lot of things.
Emacs never really got the "popular test" that something like Hypercard did at a very important time in the adoption of personal computing, probably because most normal personal computer users didn't have access to it. But what we do see from the Hypercard days is a completely different -- even opposite -- relationship of average people to their computing media than we have today. It's both a compelling demonstration of what's possible and of what's been lost.
There's a difference between "easy" and "simple." Emacs is not easy to use, but learning it may lead to simplicity. Have you ever noticed there's a correlation between being hard to learn and being simple, and vice versa? I.e., something that is easy to pick up very often becomes, over time, hard to extend and not so simple to use anymore. Experts use tools that are not easy to learn, but learning them simplifies things for them later.
Can you imagine if someone tries to re-design a cockpit of a commercial airliner and replace all the instruments with a single touch-based display, a la Tesla, because that's more intuitive and easy?
Web user agents do this the best. You can run code in the console, delete elements, make that persist with an extension. And there you see the best outcomes and the dangers ("This is a browser feature intended for developers. If someone told you to copy and paste something here to enable a Facebook feature or "hack" someone's account, it is a scam and will give them access to your Facebook account.").
Alan Kay: It was kind of like what we have today, but better. It was a completely integrated system that was not made up of applications, but "objects" that you could mix and match anywhere. You've got a work area and you can bring every object in the system and you just start: You've got every tool, you've got every object, and you can make new objects. It was programmable by the end user, and it had the famous GUI.
Larry Tesler: It was the first one that was graphically based–overlapped windows, a mouse, stuff like that.
Adele Goldberg: It was very much a GUI demo.
Dan Ingalls: I was scrolling up some text and Steve said, "I really like this display."
Alan Kay: At that time, when text scrolled, it did so discretely: So jump, jump, jump–like that.
Dan Ingalls: And Jobs said, "Would it be possible to scroll that up smoothly?"
Alan Kay: "Can you do it continuously?" Steve liked to pull people's chains.
Dan Ingalls: Because it did look a little bit jerky, you know?
Alan Kay: So Dan or maybe Larry just opened up a Smalltalk window and–
Bruce Horn: –changed a few lines of code in Smalltalk, and in the blink of an eye it could do a smooth scroll.
Alan Kay: And so it was like, "Bingo!" And so Steve was impressed but he didn't know enough to be _really_ impressed. The other Apple people just shit in their pants when they saw this. It was just the best, best thing I've seen.
Dan Ingalls: I think that sort of blew Steve's mind. Certainly, just about anybody who worked on development systems responded really well to that particular demonstration. It really showed how the Smalltalk system could be changed on the fly and be extremely malleable in terms of stuff you could try out in the user interface.
Alan Kay: On Smalltalk you could change any part of the system in a quarter of a second. These days it's called live coding, but most of the stuff today–and virtually everything back then–used compiled code, so your programming and editing and stuff like that was a completely separate thing. You had to stop the system and rebuild.
Dan Ingalls: So you had a regular sort of programming environment sitting there in front of you with windows and menus and the ability to look at code and stuff. But the neat thing about it was that if you edited the code, it would actually change the code that you were running at that very moment. The object-oriented architecture made it really, really quick to make changes.
Bruce Horn: Even today if you wanted to change something at the level of the operating system in MacOS or Windows, it's simply impossible unless they have decided to give you that option to smooth-scroll or not. It's simply impossible to get to the source code to do that unless you're an employee–and even then it would probably take six months to get that feature in. Whereas within Smalltalk, you pop up the menus, accept the change, and it's working right now. You can't even do that today.
(from Valley of Genius: The Uncensored History of Silicon Valley)
It's interesting how the ideas of the Smalltalk and Lisp systems of the 70s and 80s get reintroduced over time, but they're never fully embraced, even though some of their features make it into the mainstream. Emacs would be an exception. Makes you wonder where Smalltalk-inspired systems would be today if they had the kind of support popular languages have received. Imagine Smalltalk receiving the kind of attention Java, Python, or Javascript have gotten over the past 20 years.
A worthy mission, but if the best examples of it are emacs, HyperCard, and a bunch of academic papers no one wants to read (I say this as a former HCI academic :), the road ahead is long and arduous indeed.
It’d be nice to see the authors of this manifesto make a small program (image editor, chat client, etc) that embodies the values they have in mind.
This is the vision that Seliom, the startup I am working on, has. We are still too early and don’t have documentation to share, but the idea is to give business owners the power to easily change digitized processes in minutes.
My question is: are processes the wrong focus? Given a complex problem domain (for example a payroll system), isn't a physical model of the problem domain itself better than an implementation of processes, which are hard to determine and subjective in nature? Perhaps another way to put it is to question the degree to which a payroll system might be built on the back of a series of state machines that are often associated with business "processes".
Perhaps such a complex problem domain is a bad example and it is intended to expose processes in some other context?
This is a great question. Our approach here is that there are certain processes that can be easily modeled and digitized, and others that cannot. For those that cannot (a complex payroll system), we aim to provide a more "informal" way of managing them. That is, there are a series of steps that need to be made, but you can collaborate on top of these steps in a more informal manner (by commenting, tagging people, uploading documents, etc).
We have found that even in complex processes, there are steps that need to be followed. For the uncertain aspects of that process, it is best to just describe it at a very high level and use the informal collaboration I just described.
When I read the concerns addressed here I cannot help but think of the end user (not a developer per se). There is an entire category of EUSE products that try to accommodate these concerns, yet fall quite short. We need to rethink the entire paradigm of development, define sectors that need specialized product concerns (embedded vs games vs cloud), and build holistic ecosystems around them. There are startups doing this, yet how to categorize these types of tools is still not established, and therefore hard to point at.
Good luck! Are there any examples of this idea in action? I didn't find any references on the page.
I think this idea has potential: take the bash shell as an example - it is a terrible programming language with one redeeming feature, execution tracing (set -x). Whenever I have to work with an unknown bash script I can turn on that option and see what it is doing, as in the example below. I think such a level of transparency is a first step towards the ideal of malleable systems.
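For instance, given a made-up two-line script run by a hypothetical user alice (the exact trace format varies a little between shells; each extra level of expansion gets another leading +):

$ cat greet.sh
name=$(whoami)
echo "hello, $name"
$ bash -x greet.sh
++ whoami
+ name=alice
+ echo 'hello, alice'
hello, alice

Putting set -x at the top of the script itself does the same thing.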
You are possibly doing better than most engineering teams if you can achieve this within your team. This takes a lot of empathy and trust from the writers of the software and the users. The teams that move the fastest are those that can use each other's work. The teams that move the slowest are like government - each one re-writing the progress made by another, as years go by with things mostly unchanged on the whole.
You are right, let's make defibrillator firmware magic numbers into dynamic variables you can set from a numerical input element on a publicly accessible web page.
That'll learn 'em.
...or perhaps consider limiting the scope* of such advice...
disclosure: I know nothing about defibrillator firmware, but I'll wager that it has no malloc nor any embedded javascript engines...
*perhaps software intended to be modified or customized or modular etc.
What is left out of this entirely is that "composability" sometimes comes at the cost of efficiency or user experience. Companies like Google try to make each new product fit within Google infra and then end up with a less intuitive product. The reason many startups succeed in creating a better solution is that they don't have the baggage of making things malleable and composable.
“Software” shouldn’t be anything. Software is so broad a generalisation as to be utterly meaningless.
Should some software be easy to change? Of course. There is immense proven value in that approach especially in the “unix tools” philosophy.
But you cannot reason about embedded medical device drivers the same way as you can about a desktop CAD application or a video game or a massive SAAS platform.
My guess is there's a lot of companies that would do very very well, if we opened up software & made it flexible.
The entire era we live in/just left, the PC era, happened because of Adversarial Interoperability. Massive economic activity was created because "real economic activity" had "reduce[d] its power and influence", as PC-compatible systems emerged & created a robust, fast growing, competitive market place.
Did IBM want to? No. But enough legal space was still there to allow companies to create. While IBM could patent inventions, they could not own the idea of an addition opcode to add two numbers.
I'd contend that the lack of Adversarial Interoperability, this low standard we've sunk to in the software world, is holding back vast seas of economic activity. We cannot effectively build new, better, more interesting systems when each piece of software is a closed, inflexible box. We've got to open up to possibility again, just as PC-compatibles opened the doors of possibility that brought Very Large Scale Integrated circuits & systems to us all.
I feel like I could spend infinite amounts of time coding and recoding to make things easy to change ... and not get anywhere / add a lot of unneeded abstraction.
I've certainly run into plenty of hard-coded, narrow code that is brutal to change, but the changes I make were probably not obvious to the guy who wrote the code in the first place.
The history of software is full of failed projects trying to make users into programmers. It simply doesn't work. Thinking abstractly/systematically about automating processes is a skill that is hard to learn, and customers are busy enough taking care of just running their business.
The idea of Algol was like that: a standard language to describe algorithms. Mostly for publishing, where it served in this role (for ACM) for about 30 years.
I myself like the idea, but looking at how things develop in general I have doubts. Such a language must be austere, but who wants austere?
Open-ended thinking on a framework level is basically R&D. Many companies don't have money for R&D; they just need something to sell and get paid so you can get your next paycheck. But you can try to find a sugar daddy, or live off of food stamps for a few years and really put some thought into it, lol.
Does this article literally mean easy as in hire somebody else to do it for you or does it mean simple as a one step change? By no means is that clear from reading the post.
This subject of easy versus simple is frustrating, angrily so.
I frequently see people asking for things to make various software tasks easier. When I see anybody ask for easier, the only appropriate question is: how easy does it need to be? The question isn't rhetorical, because I doubt (from experience with this issue) that the person asking for easy has any idea about what the end state of that easy should comprise.
Here is a real life example:
At work there is a software component that has jQuery 1.x embedded, and this failed an internal security audit. To resolve the audit failure, version 1 of jQuery must be removed from this application. My first thought was to simply remove jQuery altogether. You don't need it, but jQuery is easy. I was denied permission to remove jQuery, so I have to upgrade it.
I upgraded jQuery and it breaks the application. I tried 3 different versions of jQuery Migrate and multiple versions of jQuery 2 and 3 and no luck. The application is broken.
This quest for easy is wasting an absurd amount of time, so far a week, and it's still ongoing. Knowing this is absurd, I took the initiative to simply remove jQuery. I removed the jQuery library (about 12000 lines of code uncompressed, and about another 100 lines for AJAX prep in our application). I then added about 20 lines of code for standard XMLHttpRequest handling. The application works perfectly, but fails a code review. We must use jQuery.
I tried to explain this to nondevelopers and couldn't do it in a way that empathizes with the need for easy, because at face value it seems irrationally absurd. No matter how I tried, it always sounded stupid. So instead I have learned to explain it like driving a car.
Imagine you are a professional driver. You have been driving professionally for 10 years and have quite the validated resume. You have experience driving sedans, limousines, trucks, and so forth. You can drive safe and fast. You come to the current client highly recommended. A transport service dropped off an extremely expensive super sports car at an import broker and you are required to drive it to the client.
You hop into that driver's seat and are shocked to discover there is this weird stick thing in the middle console between the seats, and 3 foot pedals. Remaining confident, you put the keys into the ignition but the car doesn't start. You sit there for 10 minutes completely lost. You have never seen a car with 3 pedals or a weird stick between the driver and passenger seats. Clearly, the car must be broken. Never mind that somebody just pulled the car up to you, and never mind that it's a standard transmission, which is the default in most super sports cars.
Obviously the car is broken. The best recommendation is to investigate the car for mechanical defects, swap out the manual transmission with an automatic and after $18000 and 3 weeks it will be ready to go. Easy, now any qualified driver can drive the car.
Simple, conversely, is just putting your foot on the clutch while engaging the ignition. The car starts without delay and allows for greater speed and precision at no extra effort.
As a web developer, most of the software I encounter is easy and extremely fragile, but it's certainly not simple. This is especially true when you outgrow your giant framework.
This comment probably will be downvoted since it doesn't represent popular opinion.
I think everyone who has chosen a career of a software developer sooner or later needs to seriously evaluate Lisp and Lisp dialects - Racket, Clojure, CL, Emacs Lisp, Fennel, etc.
Lisp has certain qualities that most other languages don't (perhaps except for Smalltalk).
I have personally learned, tried, and implemented working software in numerous different programming languages. I'm not trying to brag about it, that's not the point. The point I'm trying to make is the following:
I've seen many programs - systems, small, medium, and of significant size, and honestly, only systems written in Lisp can give you this genuine feeling like you are dealing with a garden. Even in systems where things get incredibly messy, you still can organically keep growing them and keep adding more and extend existing stuff. They usually do not crumble under their weight.
Take, for example, Emacs. Arguably it is probably the most extensible piece of software that humanity has ever created. It is wildly malleable. Things you can do with Elisp are borderline insanity; you can change things on the fly without even having to restart Emacs - sometimes, it feels like some sorcery. Last time I checked, I pull in something over 400 different packages in my configuration. And somehow things just work. Yes, from time to time, "abstractions start leaking," and things would break, but never to the point that it becomes completely unusable. Check out GitHub stats. GitHub alone contains an incredible amount of Emacs Lisp. A language with one sole purpose - configuration. Basically, it is a glorified YAML for Emacs. And the people who wrote that code - most of them have never met each other. They have never worked in the same org. They didn't have to follow strict guidelines or standards. It is a messy, wild, and lawless world. The Emacs ecosystem defies any logic. If you think about it, it should not work at all, yet it does. And it's been working for over 40 years.
Many people, some of them brilliant individuals, would claim every once in a while: "Well, Lisp is just old. Give language X a few more years, and it gets the same and even better features..." And that's been going on for over six decades now. Ironically, almost every single language that is on top of TIOBE and RedMonk today has borrowed many ideas from Lisp, except the most fundamental one: the concept of homoiconicity. The benefits of homoiconicity are incredibly underrated.
You may dislike or even hate Lisp, but trust me, years later, when Python is no longer sufficient for our needs; when Java becomes too expensive to maintain; and Javascript transforms into a drastically different language - Lisp still would be used, and systems written in it would be thriving.
Learn Lisp. It will make you a better programmer. That is guaranteed. Don't take my word for it. See what the world known CS experts have to say about it. Not a single one has ever said: "Lisp is a waste of time."
The lisp/smalltalk style of computing was meant for a world in which there was not such a strong divide between users and programmers. We did not get such a world, and our consumer systems have taken on (and actively promote) the opposing view. To me it's not necessarily about the popular language du jour, or whether or not a certain currently embraced language will last the "test of time," but rather how long our current short-sighted computing culture overall can last.
> how long our current short-sighted computing culture overall can last.
Well, we can keep adding new programming languages and keep "extending" javascript (borrowing every single good and bad idea from those new languages) for a very long time. Today the demand requires narrow-focused specialists with a median tenure in a company of 2.5 years. They don't even want "front-end" or "back-end" engineers anymore; they want "React", "Angular", "Flutter", "GraphQL", "Kotlin", "Swift", "Golang", etc. engineers.
Not too many companies see value in generalists. Those who can look at the domain and design business logic, funneling data from back to front, to mobile apps and back to the API. And it's not so simple anymore. Everything is getting unnecessarily complicated.
Yeah, stagnation can last for a long time, even as the structural aspects of that stagnation become overwhelmingly apparent. We are in the "Brezhnev period" of computing in that sense.
Err, I would think the odds of the likes of C/Python/Java still being out there with jobs available in some decades are much higher than for any Lisp language.
In fact there are probably more Cobol jobs than Lisp jobs today. After having played with Clojure (as in, read the book and built some systems), I still don't know what the kool-aid is about.
Maybe it's because I have been doing Python for a long time...
Jobs. You can build a house employing a crew of contractors. But would you let them redesign your garden? Perhaps you wouldn't; they'd destroy it by walking around in their heavy-duty boots. There will always be more jobs for contractors and fewer jobs for gardeners.
Going back to using Emacs as an example: you know that no one, not a single developer involved in Emacs, has ever gotten paid to work on it¹? There are precisely 0 jobs on the market for working on Emacs.
However, I agree with you on one thing - Lisp is a hard-sell if you've never done it before. It took me a few years to finally understand what the "kool-aid" is about.
I don't think I understand what you are trying to say :)
Either you're saying that the real garden designers can only use Lisp as a tool or you're saying that no other popular OSS projects have been mostly developed by volunteers.
Both cases aren't true so if you're trying to say something else then please elaborate.
I'm saying that measuring the success of a software language/methodology/tool in the long run (decades of time) based on the popularity of it at any given point in time is useless.
Also, I'm saying: Lisps allow you to create "gardens", and would therefore always require fewer people to build, grow, and maintain projects. And in the long run, ROI from choosing Lisp would be much higher, even though languages that typically require more people would always be more popular.
It's a known fact - to build and support big Java projects you need an order of magnitude more people than to build similar projects with the same set of features in Clojure. ROI from choosing Clojure is higher, although Java "generates" more jobs.
Disagree with me on what? That Clojure typically requires fewer people for projects of similar size?
There are plenty of companies building awesome, successful projects in Clojure today with teams of fewer than 10 engineers in them:
CircleCI, Pandora, Clubhouse.io, Walmart Labs, Apple, Pitch, Cisco, Grammarly, Ladder - these are just a few that come to mind.
All the other things I mentioned previously are simply facts, except the main point: "I haven't seen systems, built in other languages that are sufficiently malleable." That is an opinion.
I have yet to see a successful, big Java/C# project that withstands the test of time and was built with a team of just a few engineers.
Yes. Clojure is just a tool. A programming language. Just like Emacs Lisp. And Emacs Lisp from a technical point of view is one of the worst programming languages you can think of:
- It has no static types
- Has no namespaces
- Until recently it didn't even have lexical scoping
- Its execution time is painfully slow
- It's a Lisp, the syntax of which many engineers describe as "harder to read"
- The majority of Emacs Lisp packages (like over 95%) don't even have any kind of automated tests
Pick any programming language with these traits and sooner or later any significant project would be buried under its own weight and would have to be re-written in a different, more modern language. Like I said before: "The Emacs ecosystem defies any logic - it shouldn't work at all, yet it does."
Modern programming languages emphasize the need for IDEs with extensive re-factoring/re-structuring tools. Lisps somehow for decades didn't really require them. The only thing they need is structural editing tools and even that is an optional gimmick.
Have you ever thought about why no one has successfully been able to replicate and re-build a fully-featured clone of Org-mode outside of Emacs? There have been several attempts, and all of them are pretty lame (compared to the one written in Emacs Lisp).
Have you ever thought about why every single attempt to re-write Emacs or create "a killer of Emacs" has been unsuccessful?
I understand your skepticism. And I'm not selling this as some sort of "universal truth", I'm just telling you from my own experience. I am not one of those Lisper zealots who learned it years ago, I discovered it fairly recently. I avoided Lisp for a very long time. I've seen big projects in many PLs - ranging from Turbo Pascal/Delphi to Java, Python, etc. And I've seen big and really badly structured, extremely messy Clojure codebases.
There's something about Lisp that makes it extremely suitable to create big, malleable systems that can grow indefinitely. Something that I somehow never experienced with other languages.
The Malleable software principles seem to be more of a "No Code" / "Low Code" wish list with some similarities to the Four Freedoms of the Free Software Foundation.
>> "Software must be as easy to change as it is to use it"
Easy is relative. I find command-line interfaces remarkably clear and easy to work with, but others might not agree with my opinions or preferences. How easy is "easy"? How do you know if software is easy enough? How can we fundamentally simplify the essential complexities of software development so that it is approachable by "the man or woman on the street"? What incentives do we have to do so?
>> "All layers, from the user interface through functionality to the data within, must support arbitrary recombination and reuse in new environments"
>> "If I want to grab a UI control from one application, some processing logic from another, and run it all against a data source from somewhere else again, it should be possible to do so." source: https://malleable.systems/mission/#2-arbitrary-recombination...
If you have the software source code then this might be technically possible, but licensing restrictions may come into play. Short of the source code, the "no code" / "low code" software development pipe dream needs to become a reality and then become standardized to allow for component interoperability. It might be possible, but the question of incentives comes up again. If I were a company selling a production-quality "no code" / "low code" system, why would I make it interoperable unless I was trying to undercut a competitor?
Compare to the FSF Freedom 1: "The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this."
>> "Tools should strive to be easy to begin working with but still have lots of open-ended potential"
How do you define or measure "open-ended potential" from the user standpoint?
>> "People of all experience levels must be able to retain ownership and control"
Does this imply some kind of built-in DRM or is it merely copyright and licensing? How would ownership and control be enforced?
>> "Recombined workflows and experiences must be freely sharable with others"
This nests pretty closely with the FSF Freedoms 2 and 3:
"The freedom to redistribute copies so you can help others (freedom 2)."
"The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this."
How does this principle combine with the previous principle about retaining ownership and control? How is it different from copyright and licensing that is currently used?
>> "Modifying a system should happen in the context of use, rather than through some separate development toolchain and skill set"
Using software and building software are different activities that REQUIRE different knowledge and skill sets. This is roughly the equivalent of wanting to modify your car without understanding how the drive train or transmission work. If you do not understand the underlying natural laws that govern how something works then how can you expect to make changes that do what you want? Lack of understanding leads to cargo cult reasoning and magical thinking.
>> "Computing should be a thoughtfully crafted, fun, and empowering experience"
It depends on the person, but actually learning how computers work and how to program computers can be a fun and empowering experience.
>> "A fundamental problem of today’s software ecosystem is that we do not own or control the software that runs on our devices. In addition, much of the actual processing logic has been passed off to remote systems in the cloud, so only the inputs and outputs are observable."
>> "We must ensure that malleable software approaches allow the customisations and personal workflows you create to be owned and used as you see fit. If an approach relies on some amount of remote functionality (perhaps to assist with pulling apart an application or service), we must ensure there’s a clear path for anyone interested to keep those dependencies alive so that their workflows are not disturbed if the remote service were to shut down."
>> "This has many parallels with the ongoing movement towards data ownership, which is gaining popular awareness. Although the data ownership movement typically focuses on identity and social data, the programs and customisations that authors create are personal creative expressions. Authors must retain ownership of their data, programs, and customisations just as anyone would expect to have control over a book they wrote or art they created."
The first paragraph definitely matches up with your point that the user should be in control of their computing environment. However the third paragraph certainly seems like copyright and licensing are implied.
As you’ve surmised, the user / workflow author is meant to be the same person.
When it says “authors must retain ownership”, this was mostly meant as a reaction against platforms like IFTTT and marketplaces like browser extension galleries that may bring sharing restrictions, vendor lock-in, etc. So it’s trying to argue for more openness at least being available as an option to the hybrid user / workflow author.
For cases where the user / author wants to be restrictive and lock down who can use the work, the mission is mostly silent on this front.
Personally, I would hope most would see value in openness, and known approaches for enforcing restrictions (like DRM) are quite ... unpleasant.
I am hopeful better schemes can be created for those who want such things, but it seemed a bit beyond the core focus of the mission (which is already quite ambitious). Thanks for the feedback!
I wouldn't go as far as to say it's (or should be) "as easy to change", but making compilers that allow for modification is not novel or unusual: Clang lets you load optimizer plugins written in C++, for example; Rust lets you write macros in Rust itself that give you access to the token stream, and Lisps, of course, are homoiconic.
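As a rough analogue in a scripting language (not how Clang plugins or Rust proc macros actually work, just the same parse / transform / compile shape), Python's ast module lets you rewrite a program's syntax tree before it runs:

import ast

# Toy "optimizer plugin": double every integer literal before compiling.
class DoubleInts(ast.NodeTransformer):
    def visit_Constant(self, node):
        if isinstance(node.value, int):
            return ast.copy_location(ast.Constant(node.value * 2), node)
        return node

source = "print(1 + 2)"
tree = ast.fix_missing_locations(DoubleInts().visit(ast.parse(source)))
exec(compile(tree, "<transformed>", "exec"))    # prints 6, not 3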
Of course it can. You can do pretty much anything you like with software as long as you have the vision. A plugin/extension architecture can certainly make modifying a compiler easy. What makes you think it can't?
Except it doesn't work like that. Sensible architecture doesn't make compiler engineering trivial.
Compiler engineering is a highly technical area of software engineering. Think an average C++ programmer could take an old C++ compiler like OpenWatcom and make the necessary improvements to make it comply with the latest C++ language standard? Not a chance, that would be the work of a lifetime. They might have a shot at making small contributions to a compiler, but they'd have to set their sights quite low.
Another example might be operating systems. The abstractions offered by modern operating systems are terribly easy to use, but there's a high skills barrier to doing meaningful work on the kernel. Similarly, the average web developer stands little chance of making a contribution to V8 or WebKit.
This strikes me as analogous to discussing whether it should be harder to design and build a car, than to drive one. I don't like to be uncharitable but I stand by my earlier choice of words: this is silliness. Difficult engineering fields cannot be made easy.
I see that another commenter, 'gm', has done a good job expressing similar points.
FWIW, I consider posting a link that has a title matching the point you're trying to make, without further clarification or constructive feedback, fairly lazy.
My effort matches that of the person I was replying to. But yeah, overall a bit out of character for me. Extremely polarizing examples used to deter something that could be progress are a pet peeve of mine.
When possible, I think it's nice to aim for something better than that. While brusque, I don't think the comment was in bad faith, so I think it deserved better.
I agree with this in theory, but doesn’t this go against YAGNI, agile, etc.?
In most commercial applications I’ve worked on, supporting every type of interop or integration would never fly from an iterative development perspective. Or perhaps I’m misunderstanding the thesis here.