Picking the proper names for software constructs is also part of that category. I'm always impressed at how choosing a slightly wrong one can have a huge impact down the line. Somehow the brain doesn't map the name exactly to the current context, gets things wrong, and prevents clear thinking.
A good example is the RAILS_ENV environment variable. It's a mechanism to select a running mode for Rails applications. Now, unlike most applications, the debug mode is the default. It's called "development", and the normal mode is called "production".
Because of these naming decisions, developers routinely mistake RAILS_ENV for the deployment target. They add a "staging" running mode and start bundling configuration secrets into the code. And invariably those secrets get leaked on GitHub when they decide to make the repo public.
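The fix is mechanical once you see it. A minimal sketch (in Python for brevity, even though the thread is about Rails; the variable names are hypothetical): keep secrets out of the repo entirely and read them from the environment at boot, failing fast if one is missing.

    import os

    # Hypothetical names: read secrets from the environment at boot
    # instead of committing them to the repo.
    SECRET_KEY = os.environ["SECRET_KEY"]        # hard failure (KeyError) if unset
    DB_PASSWORD = os.environ.get("DB_PASSWORD")  # None if unset

    if DB_PASSWORD is None:
        raise RuntimeError("DB_PASSWORD must be set in the environment")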
For a long time I felt something wasn't right but couldn't put my finger on it, and it took conscious effort to disambiguate. And that's just for a single variable. Mind your naming :)
>Picking the proper names for software constructs is also part of that category. I'm always impressed at how choosing a slightly wrong one can have a huge impact down the line.
Similarly, I'm often shocked at just how critical name disambiguation is. There's a shocking number of extremely similar concepts when writing code which, when given the same name, end up causing bugs.
A good example is 'file', which can refer to a file handle, a file name, a file object of some other kind, a 'file' representing an aggregate of information on a person, and often it ends up referring to directories, devices, and links as well.
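A tiny sketch of the cure (Python; all names hypothetical): spell out which "file" you mean and the ambiguity disappears.

    from pathlib import Path

    def count_lines(file_path: Path) -> int:
        """Takes a path -- not a handle, not the contents -- and counts lines."""
        with file_path.open() as file_handle:    # the open handle, distinct from the path
            file_contents = file_handle.read()   # the data, distinct from both
        return file_contents.count("\n")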
Absolutely on naming. The Python module "optparse", which was standard for quite some time, did not have any way to specify that a flag must be passed. As soon as the newer "argparse" came out, this could be specified. All the documentation pointed out that "mandatory option" was a contradiction, not realizing that they were the ones who had labeled it an "option" in the first place.
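For anyone who hasn't made the switch: in argparse this is a single keyword argument, e.g.:

    import argparse

    parser = argparse.ArgumentParser()
    # A "required option": a contradictory name for a perfectly ordinary need.
    parser.add_argument("--output", required=True, help="where to write results")
    args = parser.parse_args()  # exits with a usage error if --output is missing
    print(args.output)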
We are in precisely the middle of this. We have documents in our product that encapsulate an essential step in a workflow, but for some unholy reason, someone decided to call them "items."
Now, day in and day out, I have to deal with classes like ItemViewer, ItemListController, etc., and half the time I have no idea what the responsibility, location, or relationship of any given class is supposed to be. So much time goes into just getting contextual background before I start working in an area of the code. Couple this with early bad coding decisions around encapsulating responsibility and a lack of test coverage, and I've got a relatively miserable 2-3 years ahead of me.
An "entry" at least implies there is a list somewhere (as would "element"). Both "item" and "entity" are more like "thing". I could see adding "object" to the list.
"entry" is a fantasticly sinister name if you're writing the sql directly in your application though. object would be hilarious in java assuming the compiler lets you do that.
Indeed. For example, I am deeply amazed at how in Flux and many of its derivatives, Actions are not executable behaviours but messages describing an action to be performed, while ActionCreators are the actual, well... "actions".
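Sketching that naming collision outside of JavaScript (a Python rendering, names hypothetical): the "Action" is inert data, and the thing that actually does something is called a "creator".

    # The "Action" is just a message describing what should happen...
    def add_todo(text):
        # ...while this "ActionCreator" is the closest thing to an actual action.
        return {"type": "ADD_TODO", "text": text}

    dispatch = print  # stand-in dispatcher for the sketch
    dispatch(add_todo("buy milk"))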
I've thought about this particular issue quite a bit as well, but I haven't been able to come up with a great solution, because RAILS_ENV/RACK_ENV is ubiquitous and auto-detected by tools. So you can have multiple deployment targets running the same mode, but then you can't differentiate them by default in tools like New Relic, Slack, Honeybadger, etc. You can put a lot of effort into building custom solutions to this, but since there is no standard it's easy to run into brick walls. In the end I've found it to be more work than just adding a new RAILS_ENV setting across all the config files. Ugly but pragmatic.
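One hedged workaround, sketched in Python for brevity (both variable names are hypothetical): keep the run mode and the deployment target in separate variables and tag your reporting tools with the latter.

    import os

    # Run mode: what the framework auto-detects (development/test/production).
    run_mode = os.environ.get("APP_ENV", "development")
    # Deployment target: where this process actually runs (e.g. staging, prod-eu).
    deploy_target = os.environ.get("DEPLOY_TARGET", "local")

    # Tools that only see the run mode can't tell staging from production,
    # so attach the target to reports explicitly.
    report_tags = {"env": run_mode, "target": deploy_target}
    print(report_tags)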
I would encounter this with graduates of those 90-day coding schools when interviewing them. There would be total shock and incomprehension when I asked them to click through their GitHub and revealed the secret key they'd checked into the public repository...
1) The OpenGL standard specifies functions that are capitalized like that, as "renderbuffer" is a noun.
2) NUL is a character, while NULL is a pointer. These APIs are using the correct name for each scenario.
3) Checking if the user is a monkey (or now a goat) is a running gag. I don't think that is a naming mistake.
4) The usage of "exception" in that name is clearly different each time, though maybe it requires knowing Objective-C for that to feel "correct" (as the developer who named that function clearly is writing Objective-Java there ;P).
The spelling mistakes and inconsistencies are "awesome", though :(.
(I wanted to add this to my other reply, but it's too late for that.)
Is there a method naming convention in Objective-C where the phrase "withExceptionXYZ" in the method declaration means "this method may throw an instance of ExceptionXYZ"?
I parsed saurik's comment as a joke based on Objective-C enjoying long names containing the word "with". Objective-C is nearly unique among programming languages in continuing the method name when specifying parameters, and "with" is a usual way of doing so. For instance, one of the constructors for NSString is named "stringWithContentsOfFile:encoding:error:", and the way you'd call that method is something like [NSString stringWithContentsOfFile:@"file.txt" encoding:NSUTF8StringEncoding error:NULL].
However, in this context it's not used that way, because you're not taking an OperationApplicationException. Instead, yes, this appears to be a variant of readExceptionFromParcel that instead sometimes throws an OperationApplicationException. That use of the word "with" would not feel correct in Objective-C -- nor would the use of an exception, probably, since they are rarely used in Objective-C.
I think the term 'Conceptual Debt' is not helpful. The reason 'Technical Debt' was coined was to help non-technical stakeholders understand that although software may work as designed from the user's perspective, there may be unseen technical trade-offs that have a real, measurable negative impact in the long run.
What this article describes is 'just' bad product design: the application works as designed, but the end users are confused by the product.
The distinction is not only helpful, but fundamental. You can have clean, beautiful code that implements poorly-named, ambiguous, or half-baked concepts -- that's conceptual debt. You can also have a well-thought-out product, broken down into simple concepts, that is implemented by inexperienced programmers as a mess of spaghetti code -- that's technical debt.
Granted, the two often go hand in hand, but not always, and even when you have both, it's helpful to separate the two, as they require different remedies.
Isn't the concept of "Conceptual Debt" antithetical to the idea of MVPs, lean startups, and the "ship fast, ship often" mantra dominating tech culture nowadays?
No. Start-ups (and anybody else for that matter) just make certain compromises in order to ship fast(er). They still create conceptual debt, just as they do technical debt. The key is to go back and address both before they become uncorrectable (or, at least document/understand those compromises, so future developers don't waste time figuring it out).
Bad product design = conceptual debt. It's a nice way to name the costs you incur from bad design choices made early on that now need to be dealt with and can be fixed.
But the concepts live not only in the product design; they are also reflected in the software system architecture, since the architecture will reflect the design.
One weird thing that happens is that developers insist that there is some sort of technical constraint, and then the product design reflects the software system architecture, rather than the software system architecture reflecting a product design well suited to users' mental models.
The debt part implies that it can be paid down, which holds for technical debt. Concepts tend to stick (they are often at the very core of the application), so I am more inclined to agree that "bad design" is more appropriate than "conceptual debt".
I'm dealing with this at work in an odd way. The users of the product are machines, as it is a predictive inference system. Humans will never directly interact with the product. So if you think about it, the input data is the user in my group's case.
Due to the failure to recognize this twist, the decision was made to have a very restricted way of representing features (metadata). The concepts in the code base are all linked by those restrictions. There have been a few painful refactorings in the past year, and I don't see an end to it unless those restrictions are removed. So this phrase extends well to systems without human users.
Although, like others, I do agree that "bad product design" is probably a better term; "conceptual debt" just sounds more sophisticated.
I had included this "conceptual debt" in the concept of "technical debt" when I thought of it -- it didn't occur to me to decompose it further, and so I agree; I'm not sure it's helpful to do so.
There's also a problem in reconciling this notion with agile product development. After all, how are you supposed to conceptualize a product around user expectations if you don't know how users are going to, well, use it?
The article, which I liked, is very end user-oriented, but I think that the ability of other programmers to interact with your conceptual models is just as important. Other people have to work with your code, and their ability to develop new features will very much depend on how easily they can work with your conceptual models. Onboarding programming hires into a poorly organized conceptual model will be a mess, and disheartening to the new person.
Good conceptual models can be extended, interacted with, and built upon, and that which an end user may regard as a "feature," including features not yet thought of, flows naturally from the best models. So there are some knock-on benefits to the end user from good conceptual models under the hood, even if the end user doesn't see them or have an opinion about how intuitive they are.
When I first read the article's title, I thought basically the same thing: this is about the concepts the programmers embrace. But it turns out to be about the concepts the end users must embrace.
It's unclear to me how those two things are related. They _can_ be related, but it doesn't seem like they have to be. Not sure.
I think they don't have to be related. The article's example of tags and folders is a good illustration. "Tag" and "folder" models might have a clear enough implementation in the code; a programmer might wonder why they're both implemented but may have no trouble supporting both in principle. But the existence of both may be very confusing to the user.
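A sketch of how innocuous that looks from the code side (Python, hypothetical model names): both concepts are trivial to implement, which is exactly why nobody implementing them flags the redundancy.

    from dataclasses import dataclass, field

    @dataclass
    class Folder:                      # "a note lives in exactly one folder"
        name: str
        note_ids: list = field(default_factory=list)

    @dataclass
    class Tag:                         # "a note can carry many tags"
        name: str
        note_ids: list = field(default_factory=list)

    # Two clean classes; one muddy user question: is "Work" a folder or a tag?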
The cost inflicted on the coders is not the coders' conceptual debt, but the users' conceptual debt - certainly costly to coders, as they have to support multiple patterns, but I think there is a difference. I guess when I think of coders' conceptual debt, I'd think of something that may be abstracted away at the UX level such that the user doesn't see it, but that inflicts pain on implementors due to counter-intuitive patterns.
>Technical debt happens when you make mistakes like choosing the wrong database or programming language for the task at hand
Technical debt mostly means duplicated code, tightly coupled code and code that isn't programmed to fail fast. Those things can be impacted by the wrong choice of programming language or database but only indirectly.
Most of these things are pretty much unavoidable the first time you write code and you can only fix them once your code is surrounded with tests.
>The main issues in the case of technical debt are that the product is running slowly, not scaling well
Again, not really. The main issues are that the code gets buggier (owing to it being harder to reason about) and development gets slower.
That's how I think of technical debt too, but I think it's worth considering database choice. When you start building an app, you'll choose Postgres or MySQL or another easy-to-start-with DB. If you're ever fortunate enough to attract enough users to push beyond what those DBs can handle (or even require you to shard them), I think you've ended up in the realm of technical debt.
You took the easy path early on because it let you focus on acquiring customers and then you had to improve it later.
Assuming the code is well factored and the database access code isn't littered all over your code base, I'm not sure I would consider this technical debt.
Also, if Twitter and Facebook started out building their product to be used by millions of users, they would have gotten nowhere. They would have 0 users right now.
You can't really say Facebook took the easy path either. They went as far as creating a virtual machine and dialect based on PHP. And who knows what they had to do to get MySQL to scale like that.
I think this article is completely stupid, and it has "conceptual debt" of its own in not fully understanding what technical debt actually is. This article describes BAD PRODUCT MANAGEMENT.
DEBT is an instrument. A good CTO/Developer will know when the codebase/product/platform is taking on technical debt, the same way you might take on debt for a house and pay it back over time so you can LIVE in it.
Shitty CTOs/Developers will take on tech debt without knowing it, and soon enough you're fucked. That's how people become homeless in the real world with real debt. (Well, one way.)
There is NEVER a time when you want to take on 'conceptual debt', AKA make crap product choices. That is just being stupid or short-sighted. The only time to take on conceptual debt is if I am a development agency being paid by rich, stupid people, and I want to make as much money as possible and don't care about my reputation with them.
We did a "team building" exercise once. The scenario was that our plane crashed 10 miles from a town in the Canadian winter. We had to prioritize 10 items from about 25 that we had on hand. Of course some groups decided to camp and wait for help, and others decided to set out on foot. People decided the usefulness of the items and made list. When it was all done, the answers were compared to those of a survival expert - who incidentally said we'd freeze if we tried to make the journey to town.
I made the point that the "correct" list of items was strongly dependent on which top level strategy was chosen, and that this was a reason for solid planning from the top. If your product is holding off on big decisions because the leaders can't make them, you'll be working toward multiple possible outcomes at the same time and burning a lot of extra effort.
I've often heard the idea that “the way the application is coded shouldn’t dictate its UI” which is true, but leads people to think that the application architecture should be divorced from the UI, and I don’t think that’s true either. You should first figure out the intended mental model of how the application works, and then from that should flow both the UI and the application architecture.
In my experience, apps that start code-first tend to end up as CRUD apps and require a lot of composition from the end user; apps that start UI-first tend to generate a lot of exceptional use cases and workarounds. Both give the user a muddy mental model of how the application should work.
Any project that starts developing UI separate from code (in whatever order) fails at the fundamentals because there are interacting constraints, and both must fit business/political constraints as well.
I used to be a heavy Flickr user and I remember clicking around one day and thinking, "I can visualize the SQL calls being made to generate these pages". That didn't feel too great.
File system storage is much easier to reason about if you can't lean on a library: you can simply use the underlying OS. HN was built with a homegrown Lisp called Arc, and so can't rely on a vast community to provide a proper ORM.
Community support is why I write my personal projects with Ruby and not something like Arc. I don't want to have to reinvent everything just to get something basic done.
I think the term he's struggling toward is something like "user experience debt" [1] or "product design debt" [2], an area that has been discussed for years.
I think "conceptual debt" is a poor choice of phrase here, as one important kind of technical debt is the sort of software design debt where your domain model ends up being a poor fit for your domain, often because the domain concepts themselves shift. (For those interested, "Domain-Driven Design" is a great book relating to this [3].)
I also find the "worse than technical debt" headline irritating. It's the sort of, "the thing I specialize in is way more important than the thing you specialize in" thinking that is poisonous in a team environment. Which one is actually worse depends a lot on your product and your business conditions.
Thanks for the recommended reads, enjoyed them and they're hitting on really similar notions. Well cited comment :)
Conceptual debt is product design debt like you're suggesting. In particular, I think of it as the subset of product design debt that has to do with domain modeling, as you suggest, versus having the right domain models but poor user flows around those core concepts.
Unlike user flows, concepts are also reflected in the API & Object Models in your codebase so those may be trickier to change than the user flows that revolve around them.
Design Debt is not a new concept. It is a common cause of product failure. Taken from the link above:
"'Design debt' explains the problem. When a team is working under pressure, they take shortcuts that compromise design quality. It's like taking out a high-interest loan. The team gets a short-term boost in speed, but from that point forward, changes are more expensive: they're paying interest on the loan. The only way to stop paying interest is to pay back the loan's principle and fix the design shortcuts."
Except that your link explicitly states that "design debt" is also called "technical debt." This piece is trying to make a distinction between technical debt and something else.
To me at least, what OP described as "conceptual debt" is really just design debt, which all seems like down-to-earth technical debt. I honestly see no reason to distinguish the three. Plus, they typically stem from the same sources - that is, entrepreneurs and managers who want an MVP without actually grokking the V bit in that acronym, and managers who cut corners to slash short-term costs.
By contrast, technical debt is very different from, say, organizational debt, which arises when your organization's staff grows faster than it's able to organize the workload around new hires.
Honestly, all the conceptual debt I've encountered has been at the hands of bad product managers. Product managers who have no grasp on how users are using the system, don't talk to users, and just blindly create features for features-sake.
This is absolutely wrong in practice, in my experience. Non-trivial conceptual issues are very often raised when trying to implement things. Only then do you start to realize you're coding the same thing twice, or that you've got many unhandled behaviors, etc.
Bad conceptual design is due to poor communication between end users, product owners, and coders.
The reason a product owner can't handle all conceptual issues in advance is that he speaks English, not logic. His terms are poorly defined compared to the need for a mathematically correct definition of his problem space. Only when you start to code things do you realize that things are much more complicated (or can be simplified).
>The reason a product owner can't handle all conceptual issues in advance is that he speaks English, not logic
Is there a reason the product owner can't speak both English and logic? It seems like establishing mental models of how the application works requires at least some level of logical thinking.
In the past I have worked on projects where the product owner was thinking only in terms of output, not how the application achieved its goals. The result was an initial, overly simplistic happy-path design, followed by developers poking holes in it, followed by the product owner adding a series of exceptions to the design to try to handle the edge cases. Neither side felt ownership of the mental model, and it turned out to be a mess.
"Is there a reason the product owner can't speak both english and logic?"
Theoretically no, but it's a big ask for one person. In reply to this entire chain, I'd suggest that it's actually something for the technical lead and the Product Manager to work out in conjunction with each other. For any team past about 4 people, it's just going to be too much to expect someone to both be the technical lead and be in contact with the users enough to be able to make those decisions. (A bit of crosstraining may be good, but trying to avoid that specialization is probably asking for trouble.)
If the PM and the lead don't respect each other enough to make that work, you've got a people problem. There's a lot of people-problem ways to muck this up, unfortunately. (Most obviously, I'd contend that making the PM be above the tech lead is a very common failure case. If you can't trust anyone on the engineering team to have a roughly equal voice in the product design process... uhhh... that's a pretty big problem on its own!)
Oh god, the happy-path fallacy. I've seen this too many times. A product manager comes up with conceptual model X; once implementation starts, it's brought to people's attention that X will only work under conditions A, B, and C, which together apply to only 5% of the users. Instead of reverting X and trying out Y, the heavy train to finish X has already gained momentum, and exceptions are added into the UI, breaking the whole flow required for X to be intuitive.
The sad part is that these unreasonable conditions are usually quite easy to spot early on. I'm just making things up now, but it can easily be things like requiring the user to hand over their bank login details so that some random startup can analyze their expenses, analytics that require write access to some big corporation's SQL database, or assuming that every user in the whole world uses Outlook 2007 and is willing to install our plugin.
By speaking logic, I meant using a rigorous formalism to express processes and behaviors. Things like state diagrams, or code.
But many specifications today are written in terms of user scenarios, or general definitions, in English. A coder would fall into the same trap should he use the same formalism, only he would probably feel the need to specify his problem using rigorous design tools.
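As a tiny example of the kind of formalism meant here (Python; the states and events are hypothetical), even a bare transition table forces questions an English spec can quietly dodge:

    # Every (state, event) pair must be accounted for; prose specs tend to
    # skip the awkward ones, like cancelling after shipping.
    TRANSITIONS = {
        ("draft",   "submit"):  "pending",
        ("pending", "approve"): "shipped",
        ("pending", "reject"):  "draft",
        # ("shipped", "cancel"): ???  <- the spec never said
    }

    def next_state(state, event):
        try:
            return TRANSITIONS[(state, event)]
        except KeyError:
            raise ValueError("unhandled transition: %s + %s" % (state, event))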
I've seen products marginally succeed with major conceptual and technical debt. But in the long-term, it's always bad news.
It's more difficult to compete because you can't easily move forward without breaking a bunch of already existing features and it ends up costing the company lots of money because of the extra time needed to complete anything.
I think the worst case I ever saw was an app that was built by overseas developers. The boss, who knew nothing about technology or software, just wanted something cheap and fast. When I came on board as a consultant, we had two parallel apps (the new one and the old one) and any new feature took 10X as long.
Customers slowly left to competing apps that could move much faster and the boss ended up letting all of the US-based developers go. She told me in my exit interview that she could hire 3 people in India for what she was paying me.
I suppose it was the best thing that happened to me because it pushed me to work on my startup full-time.
The analogy with technical debt is poorly conceived. :) Technical debt (as with financial debt) should be a conscious trade-off; borrowing time now that will be paid back later. There's never any reason to choose a poorly modeled UI.
tl;dr: "Conceptual debt" is what Ward Cunningham described as "technical debt". That thing you call "technical debt" is just a combination of bad habits, haste and apathy.
Eh, I think terminology can evolve. Technical debt can also mean incidental complexity explicitly and consciously being introduced at the code level for very good reasons. And paying down the debt means code-level refactoring, introducing abstractions, unifying interfaces etc. It can also mean throwing the code out since the experiment was a failure. (One thing often overlooked on incurring tech debt is there are cases where you can just 'write it off', when the thing you thought was going to be important turned out not to be, so you can just kill it.) The opposite of incurring technical debt can be over-engineering, for example.
Personally I find it an improvement to have two terms that separate the concepts, if for no other reason that it underscores that 'conceptual debt' is a thing that matters a lot and isn't just 'the engineers can come in and clean it up later' type of deal, which technical debt often can be. Recognizing conceptual debt as something that all team members can contribute to, not just people writing code, is a profoundly different type of conversation than the ones you'll have today around tech debt where it's assumed a) stakeholders can't understand it and b) it's nobody's problem but the engineers: they create it, they pay it down.
The technical debt referred to in the article is inadvertent (i.e. bad) technical debt. Ideally a good code base should have none of that. Intentional technical debt on the other hand (imperfectly factored code produced with the purpose of getting something out the door now rather than later) is a totally different animal and is a "finance" instrument that can be used responsibly.
That's how I've always understood technical debt as well[0]. Which database or programming language isn't quite it.
When a major customer needs a new feature, do you hack that feature into the codebase to save the account? Or do you risk the account in order to add the feature The Right Way?
This. It's kind of sad to see a new term invented for what Ward called 'Technical Debt.'
I think that we arrived here specifically because we don't have a term for the effect of careless code change. 'Technical Debt' ended up being appropriated as a term for it.
Agree that 'technical debt' could encompass conceptual debt in its broader definition. But, at the companies I've worked in people often don't use the terminology technical debt to include conceptual design issues. Technical debt discussions often focus on the programming language/database issues. I've found it useful to have a shorthand for referring to unintuitive products and codebases that stem from poor product design choices rather than poor language/tool choices.
Agree with the 'Shipped is better than perfect' and also the notion of isolating the dirty parts.
I've been adopting this rule of thumb that the first pass at a new product I can just dive in and treat it as a throwaway, since I probably don't understand the domain well enough to develop an ideal conceptual model anyways. The key here being to treat it as a throwaway and then return and do a rewrite from scratch once I have a better understanding of the problem.
I totally agree with your point re not understanding the domain; it can be really hard to make good conceptual models until you've actually tried throwing some code at the walls.
I guess I'd rephrase your point about "one to treat as a throwaway" as saying that you should realistically expect to be moving back and forth between architecting the conceptual model and implementing it. I find it useful to first think about the conceptual model, then write some code for a while, then revisit the model and see what difficulties I've hit and/or what new good ideas I've thought of in the process, etc.
Definitely what you're saying. Thanks for adding even deeper context. Accepting that back-and-forth code, learn, code process.
And to your point about thinking through the conceptual model first, I agree that you should spend at least a bit of time on it before doing anything else. Thinking through your modeling doesn't take much time and has huge payoffs in terms of avoiding pitfalls. But oftentimes it's just more fun to start writing code, so people do that.
It's always saddening to see programmers just dive in and start coding without planning / thinking about the problem first.
That's often the best decision as long as everybody recognizes that it's debt, which has to be paid off. Just adding more and more debt leads to destruction.
Honestly, that's not quite true. If you told management of a source of funding that could cover a couple of developer salaries, borrowed at zero percent interest and paid back whenever desired, they would rightly point out that the optimal time to pay that debt off is "never."
The key point about technical debt is that quality metrics need to be tied to actual costs incurred, not just 'the software fails to meet my standards.' For example, 'our customer support forum hasn't been updated in years and has been hacked' is a valid technical debt since your staff is now spending time fixing the forum, at substantially greater cost than upgrading.
But maybe removing all the trailing whitespace and replacing all the tabs with spaces has no business related expense, and rewriting your Django app in Go because 'that's where the industry is heading' is paying off the cheap debt first.
It also depends on the lifecycle of the application. Some applications only exist for 2-3 years, just because of requirements changes; at that point, it's easier to "declare bankruptcy" by deprecating the old code base.
0% interest isn't the best analogy for technical debt, however, because technical debt often carries recurring costs that are analogous to interest. Every time you have to make changes or enhancements to a technically indebted system, the cost of those changes in time, effort, complexity, and bug hazard is higher than it would be in a clean system.
> 0% interest isn't the best analogy for technical debt, because technical debt often carries recurring costs that are analogous to interest.
My argument is simply that if there is no recurring cost, it's not technical debt. It's just imperfect software that still works without demonstrated problems.
Some business situations demand haste and rushing things to market. Those situations are crappy for good engineers and extremely crappy for those of us who are on the creative side. I would quit any job where the fate of the company depends on rushed delivery. It means the company is driven by low-quality, high-quantity output rather than the opposite, which allows engineers to improve and elevate their craft.
This is why the API should ALWAYS get at least a first pass before any coding is done. Changing how things look and feel after the fact? Not an issue. Changing the underlying data structures? Holy shit, kill me now. Hell, if your API is solid (and you're using actual RESTful calls), then you should be able to completely decouple UI coding, business logic, and data management. Of course, with the modern love of AGILE (see: rapid prototyping), people wonder why so much rework is always needed, when the reason is that the API was never solid in the first place.
This doesn't actually work unless the "first pass" on the API consists of using it to build actual programs, preferably multiple ones of wildly different genres. Otherwise, you end up with an API that is a beautifully designed ice crystal, perfect for the use case that the founder envisioned and generally useless & frustrating for real-world customers. That's exactly the type of conceptual debt that the article is talking about.
Yes. 100% this. The API / object modeling is nearly the spec and the main thing to nail down. On past projects we've used Apiary as a pretty good option for getting a nailed-down API spec before any coding is done: https://apiary.io/. I think more out-of-the-box best practices around APIs will make these API-speccing tools even more effective.
I know what you mean about changing underlying data structures too. There are ways to fix the issues after the fact, e.g. wrapping existing concepts in other ones and then gracefully deprecating the old ones. But it definitely creates all sorts of weird tensions with customers that are using the old concepts.
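A rough sketch of that "wrap and deprecate" move (Python; Collection and Folder are hypothetical names):

    import warnings

    class Collection:            # the new, better-named concept
        def __init__(self, name):
            self.name = name

    class Folder(Collection):    # the old concept kept alive as a thin shim
        def __init__(self, name):
            warnings.warn("Folder is deprecated; use Collection",
                          DeprecationWarning, stacklevel=2)
            super().__init__(name)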
An interesting side issue here is that a bad conceptual model gets more entrenched the more tests you add. Fixing the flaw becomes almost impossible because it breaks all the tests.
Testing is often presented as a panacea but this is one of the cases where it hurts more than helps.
This is like saying the more developers you put on a project, the more entrenched you become...so developers are not a panacea.
If you need to write quality code, you will need to invest in writing tests. If you wrote the wrong code, yes, you will need to rewrite the tests. But that's a different problem.
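One hedged mitigation (illustrative Python; everything here is hypothetical): point most tests at externally visible behavior rather than at a concept's internals, so merging or renaming concepts breaks fewer of them.

    class App:
        """Minimal stand-in so the test runs; real wiring omitted."""
        def __init__(self):
            self.notes = []
        def create_note(self, text, labels=()):
            note = (text, tuple(labels))
            self.notes.append(note)
            return note
        def find_by_label(self, label):
            return [n for n in self.notes if label in n[1]]

    def test_note_is_findable_by_label():
        # Asserts the behavior users rely on, not whether a "label" is backed
        # by a Tag, a Folder, or something merged later.
        app = App()
        note = app.create_note("milk", labels=["groceries"])
        assert note in app.find_by_label("groceries")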
I want to be convinced, but I'm not yet. TFA would be more persuasive if it included more than a single "software has two concepts that are effectively identical" example. I'm thinking of screwed-up shit I've seen in the past, and I don't know whether it fits into this or not.
For instance, one company that employed me spent extra person-years creating new service plans because a single internal index in a vendor-supplied product had once been communicated to accounting, and thereafter they had to see it everywhere, and it had to take different values everywhere. Actually I think the accounting department would have had to lay someone off for inactivity if they couldn't spend so much time tracking this one stupid DB key through nineteen different reports. Was that "conceptual debt"? [EDIT: to be clear, "creating new service plans" was a good thing, that allowed us to acquire more customers. If the interface to G/L hadn't been fucked up in this fashion, we could have done that good thing much more quickly.]
When I first read "conceptual debt" I thought it would mean poor architectural decisions as it relates to building software. Of course that's not what he meant. What he really means is what I would call "UX debt" if I were crafting a name for it.
I don't think it's helpful to create a new term to describe something that could be easily encapsulated by a term that most everyone already knows, is industry standard, and is implicitly obvious to anyone versed in the practice of building software products. Right now "bad UX" would be the same as "UX debt". That term makes sense to me because it codifies a real concept that I've absolutely held in my mind.
With all due respect to the author, and I applaud his effort to try to improve communication tools, I believe this name has "conceptual debt" :)
"UX debt" feels rather broad - a whole range of tactical issues like color scheme consistency, button naming, inline vs modal editing, etc could be lumped in. I rather like the term "conceptual debt" - conceptual issues at the UX also usually have deep effects on the data model and deeper parts of software architecture. "There's something wrong with the UX of your app", "there's something wrong with the technology of your app", and "there's something wrong with the concept of your app" all register distinct meanings for me, even if there is significant overlap.
I see many AdWords accounts that are structured based on a best-practices boilerplate format... but fail to consider how the structure will interact with decision making in the future, forcing poor budget allocation or, worse, making proper optimization unfeasible.
Having a very clear vision from the outset for how you will need to adjust and optimize in the future and building an account structure in format that aligns with those processes is often the difference between a very successful account and a mediocre account that only shows minor incremental growth...
Avinash Kaushik wrote once, "spent 95% of the time defining the problem and 5% solving it."
Which phrase sounds clearer to you? Also, which phrase exists for the sole reason of communicating nothing while making you sound more intelligent? I truly hope "conceptual debt" doesn't become a new buzzword.
I had a mild climactic sensation when I read the title of this post. I said the exact same thing a while back to my colleagues regarding some big conceptual errors that were made on a project a year ago, and how I believe it will cost us a lot more than a year's worth of work to pay this debt.